# zebra/zebra-consensus/Cargo.toml

[package]
name = "zebra-consensus"
version = "1.0.0-alpha.19"
authors = ["Zcash Foundation <zebra@zfnd.org>"]
license = "MIT OR Apache-2.0"
edition = "2018"

[features]
default = []
proptest-impl = ["proptest", "proptest-derive", "zebra-chain/proptest-impl"]
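# The `proptest-impl` feature above enables the optional `proptest` and
# `proptest-derive` dependencies and zebra-chain's own proptest support, so
# property tests can generate random values of this crate's types.
# A minimal sketch of how a dependent crate might turn it on (the dependent
# crate and its path are illustrative, not part of this manifest):
#
#   [dev-dependencies]
#   zebra-consensus = { path = "../zebra-consensus", features = ["proptest-impl"] }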
[dependencies]
blake2b_simd = "0.5.11"
bellman = "0.10.0"
bls12_381 = "0.5.0"
chrono = "0.4.19"
displaydoc = "0.2.2"
jubjub = "0.7.0"
lazy_static = "1.4.0"
once_cell = "1.8"
rand = "0.8"
serde = { version = "1", features = ["serde_derive"] }
futures = "0.3.17"
futures-util = "0.3.17"
metrics = "0.13.0-alpha.8"
thiserror = "1.0.30"
tokio = { version = "0.3.6", features = ["time", "sync", "stream", "tracing"] }
tower = { version = "0.4", features = ["timeout", "util", "buffer"] }
tracing = "0.1.29"
tracing-futures = "0.2.5"
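# Tower middleware and Zebra crates developed in this workspace (path dependencies).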
tower-fallback = { path = "../tower-fallback/" }
tower-batch = { path = "../tower-batch/" }
zebra-chain = { path = "../zebra-chain" }
zebra-state = { path = "../zebra-state" }
zebra-script = { path = "../zebra-script" }
wagyu-zcash-parameters = "0.2.0"
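# Optional dependencies, only built when the `proptest-impl` feature is enabled.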
proptest = { version = "0.10", optional = true }
proptest-derive = { version = "0.3.0", optional = true }

[dev-dependencies]
color-eyre = "0.5.11"
proptest = "0.10"
proptest-derive = "0.3.0"
rand07 = { package = "rand", version = "0.7" }
spandoc = "0.2"
tokio = { version = "0.3.6", features = ["full"] }
tracing-error = "0.1.2"
tracing-subscriber = "0.2.25"
zebra-chain = { path = "../zebra-chain", features = ["proptest-impl"] }
zebra-state = { path = "../zebra-state", features = ["proptest-impl"] }
zebra-test = { path = "../zebra-test/" }
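# A usage sketch (assuming a standard Zebra workspace checkout; not part of this
# manifest): this crate's unit tests and proptest-based property tests can be run
# from the workspace root with:
#
#   cargo test -p zebra-consensus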