zebra/zebrad/tests/acceptance.rs

//! Acceptance test: runs zebrad as a subprocess and asserts its
//! output for given argument combinations matches what is expected.
//!
//! ## Note on port conflict
//!
//! If the test child has a cache or port conflict with another test, or a
//! running zebrad or zcashd, then it will panic. But the acceptance tests
//! expect it to run until it is killed.
//!
//! If these conflicts cause test failures:
//! - run the tests in an isolated environment,
//! - run zebrad on a custom cache path and port,
//! - run zcashd on a custom port.
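//!
//! For example, one way to isolate a manually started zebrad instance is to generate a
//! config, then point it at a custom port and cache path before starting it. This is only
//! a sketch: the `custom-zebrad.toml` file name is an illustration, and it assumes the
//! standard zebrad config layout with `network.listen_addr` and `state.cache_dir` fields.
//!
//! ```console
//! $ zebrad generate -o custom-zebrad.toml
//! $ # edit network.listen_addr and state.cache_dir in custom-zebrad.toml
//! $ zebrad -c custom-zebrad.toml start
//! ```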
//!
//! ## Failures due to Configured Network Interfaces or Network Connectivity
//!
//! If your test environment does not have any IPv6 interfaces configured, skip IPv6 tests
//! by setting the `ZEBRA_SKIP_IPV6_TESTS` environment variable.
//!
//! If it does not have any IPv4 interfaces, if IPv4 localhost is not on `127.0.0.1`,
//! or if you have poor network connectivity,
//! skip all the network tests by setting the `ZEBRA_SKIP_NETWORK_TESTS` environment variable.
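//!
//! For example, a minimal sketch for skipping both groups of tests, assuming the test
//! helpers only check that the variables are set (the `-p zebrad --test acceptance`
//! filter is just an illustration):
//!
//! ```console
//! $ export ZEBRA_SKIP_IPV6_TESTS=1
//! $ export ZEBRA_SKIP_NETWORK_TESTS=1
//! $ cargo test -p zebrad --test acceptance -- --nocapture
//! ```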
//!
//! ## Large/full sync tests
//!
//! This file has sync tests that are marked as ignored because they take too much time to run.
//! Some of them require environment variables or directories to be present:
//!
//! - `FULL_SYNC_MAINNET_TIMEOUT_MINUTES` env variable: The total number of minutes we
//! will allow this test to run before giving up. Value for the Mainnet full sync tests.
//! - `FULL_SYNC_TESTNET_TIMEOUT_MINUTES` env variable: The total number of minutes we
//! will allow this test to run before giving up. Value for the Testnet full sync tests.
//! - `/zebrad-cache` directory: For some sync tests, this needs to be created in
//! the file system, and the created directory should have write permissions.
//!
//! Here are some examples of how to run each of the tests:
//!
//! ```console
//! $ cargo test sync_large_checkpoints_mainnet -- --ignored --nocapture
//!
//! $ cargo test sync_large_checkpoints_mempool_mainnet -- --ignored --nocapture
//!
//! $ sudo mkdir /zebrad-cache
//! $ sudo chmod 777 /zebrad-cache
//! $ export FULL_SYNC_MAINNET_TIMEOUT_MINUTES=600
//! $ cargo test full_sync_mainnet -- --ignored --nocapture
//!
//! $ sudo mkdir /zebrad-cache
//! $ sudo chmod 777 /zebrad-cache
//! $ export FULL_SYNC_TESTNET_TIMEOUT_MINUTES=600
//! $ cargo test full_sync_testnet -- --ignored --nocapture
//! ```
//!
//! Please refer to the documentation of each test for more information.
//!
//! ## Lightwalletd tests
//!
//! The lightwalletd software is an interface service that uses zebrad or zcashd RPC methods to serve wallets or other applications with blockchain content in an efficient manner.
//! There are several versions of lightwalletd in the form of different forks. The original
//! repo is <https://github.com/zcash/lightwalletd>, and Zecwallet Lite uses a custom fork: <https://github.com/adityapk00/lightwalletd>.
//! Initially these tests were made with the `adityapk00/lightwalletd` fork, but changes for fast spendability support have
//! been made to `zcash/lightwalletd` only.
//!
//! We expect `adityapk00/lightwalletd` to keep working with Zebra, but for these tests we use `zcash/lightwalletd`.
//!
//! Zebra lightwalletd tests are not all marked as ignored, but none will run unless
//! at least the `ZEBRA_TEST_LIGHTWALLETD` environment variable is present:
//!
//! - `ZEBRA_TEST_LIGHTWALLETD` env variable: Needs to be present to run any of the lightwalletd tests.
//! - `ZEBRA_CACHED_STATE_DIR` env variable: The path to a Zebra blockchain database.
//! - `LIGHTWALLETD_DATA_DIR` env variable: The path to a lightwalletd database.
//! - `--features lightwalletd-grpc-tests` cargo flag: Needed to build the gRPC test code into the test binary.
//!
//! Here are some examples of running each test:
//!
//! ```console
//! $ export ZEBRA_TEST_LIGHTWALLETD=true
//! $ cargo test lightwalletd_integration -- --nocapture
//!
//! $ export ZEBRA_TEST_LIGHTWALLETD=true
//! $ export ZEBRA_CACHED_STATE_DIR="/path/to/zebra/state"
//! $ export LIGHTWALLETD_DATA_DIR="/path/to/lightwalletd/database"
//! $ cargo test lightwalletd_update_sync -- --nocapture
//!
//! $ export ZEBRA_TEST_LIGHTWALLETD=true
//! $ export ZEBRA_CACHED_STATE_DIR="/path/to/zebra/state"
//! $ cargo test lightwalletd_full_sync -- --ignored --nocapture
//!
//! $ export ZEBRA_TEST_LIGHTWALLETD=true
//! $ cargo test lightwalletd_test_suite -- --ignored --nocapture
//!
//! $ export ZEBRA_TEST_LIGHTWALLETD=true
//! $ export ZEBRA_CACHED_STATE_DIR="/path/to/zebra/state"
//! $ cargo test fully_synced_rpc_test -- --ignored --nocapture
//!
//! $ export ZEBRA_TEST_LIGHTWALLETD=true
//! $ export ZEBRA_CACHED_STATE_DIR="/path/to/zebra/state"
//! $ export LIGHTWALLETD_DATA_DIR="/path/to/lightwalletd/database"
//! $ cargo test sending_transactions_using_lightwalletd --features lightwalletd-grpc-tests -- --ignored --nocapture
//!
//! $ export ZEBRA_TEST_LIGHTWALLETD=true
//! $ export ZEBRA_CACHED_STATE_DIR="/path/to/zebra/state"
//! $ export LIGHTWALLETD_DATA_DIR="/path/to/lightwalletd/database"
//! $ cargo test lightwalletd_wallet_grpc_tests --features lightwalletd-grpc-tests -- --ignored --nocapture
//! ```
//!
//! ## Getblocktemplate tests
//!
//! Example of how to run the get_block_template test:
//!
//! ```console
//! ZEBRA_CACHED_STATE_DIR=/path/to/zebra/state cargo test get_block_template --features getblocktemplate-rpcs --release -- --ignored --nocapture
//! ```
//!
//! Example of how to run the submit_block test:
//!
//! ```console
//! ZEBRA_CACHED_STATE_DIR=/path/to/zebra/state cargo test submit_block --features getblocktemplate-rpcs --release -- --ignored --nocapture
//! ```
//!
//! Please refer to the documentation of each test for more information.
//!
//! ## Shielded scanning tests
//!
//! Example of how to run the scan_task_commands test:
//!
//! ```console
//! ZEBRA_CACHED_STATE_DIR=/path/to/zebra/state cargo test scan_task_commands --features shielded-scan --release -- --ignored --nocapture
//! ```
//!
//! ## Checkpoint Generation Tests
//!
//! Generate checkpoints on mainnet and testnet using a cached state:
//! ```console
//! GENERATE_CHECKPOINTS_MAINNET=1 ENTRYPOINT_FEATURES=zebra-checkpoints ZEBRA_CACHED_STATE_DIR=/path/to/zebra/state docker/entrypoint.sh
//! GENERATE_CHECKPOINTS_TESTNET=1 ENTRYPOINT_FEATURES=zebra-checkpoints ZEBRA_CACHED_STATE_DIR=/path/to/zebra/state docker/entrypoint.sh
//! ```
//!
//! ## Disk Space for Testing
//!
//! The full sync and lightwalletd tests with cached state expect a temporary directory with
//! at least 300 GB of disk space (2 copies of the full chain). To use another disk for the
//! temporary test files:
//!
//! ```sh
//! export TMPDIR=/path/to/disk/directory
//! ```
use std::{
    cmp::Ordering,
    collections::HashSet,
    env, fs, panic,
    path::PathBuf,
    time::{Duration, Instant},
};
use color_eyre::{
    eyre::{eyre, Result, WrapErr},
    Help,
};
use semver::Version;
use serde_json::Value;
use zebra_chain::{
    block::{self, Height},
    parameters::Network::{self, *},
};
use zebra_network::constants::PORT_IN_USE_ERROR;
use zebra_node_services::rpc_client::RpcRequestClient;
use zebra_state::{constants::LOCK_FILE_ERROR, state_database_format_version_in_code};
use zebra_test::{
    args,
    command::{to_regex::CollectRegexSet, ContextFrom},
    net::random_known_port,
    prelude::*,
};
mod common;
use common::{
    check::{is_zebrad_version, EphemeralCheck, EphemeralConfig},
    config::random_known_rpc_port_config,
    config::{
        config_file_full_path, configs_dir, default_test_config, persistent_test_config, testdir,
    },
    launch::{
        spawn_zebrad_for_rpc, spawn_zebrad_without_rpc, ZebradTestDirExt, BETWEEN_NODES_DELAY,
        EXTENDED_LAUNCH_DELAY, LAUNCH_DELAY,
    },
    lightwalletd::{can_spawn_lightwalletd_for_rpc, spawn_lightwalletd_for_rpc},
    sync::{
        create_cached_database_height, sync_until, MempoolBehavior, LARGE_CHECKPOINT_TEST_HEIGHT,
        LARGE_CHECKPOINT_TIMEOUT, MEDIUM_CHECKPOINT_TEST_HEIGHT, STOP_AT_HEIGHT_REGEX,
        STOP_ON_LOAD_TIMEOUT, SYNC_FINISHED_REGEX, TINY_CHECKPOINT_TEST_HEIGHT,
        TINY_CHECKPOINT_TIMEOUT,
    },
    test_type::TestType::{self, *},
};
use crate::common::cached_state::{
    wait_for_state_version_message, wait_for_state_version_upgrade, DATABASE_FORMAT_UPGRADE_IS_LONG,
};
/// The maximum amount of time that we allow the creation of a future to block the `tokio` executor.
///
/// This should be larger than the amount of time between thread time slices on a busy test VM.
///
/// This limit only applies to some tests.
pub const MAX_ASYNC_BLOCKING_TIME: Duration = zebra_test::mock_service::DEFAULT_MAX_REQUEST_DELAY;
/// The test config file prefix for `--feature getblocktemplate-rpcs` configs.
pub const GET_BLOCK_TEMPLATE_CONFIG_PREFIX: &str = "getblocktemplate-";
/// The test config file prefix for `--feature shielded-scan` configs.
pub const SHIELDED_SCAN_CONFIG_PREFIX: &str = "shieldedscan-";
#[test]
fn generate_no_args() -> Result<()> {
    let _init_guard = zebra_test::init();

    let child = testdir()?
        .with_config(&mut default_test_config(Mainnet)?)?
        .spawn_child(args!["generate"])?;

    let output = child.wait_with_output()?;
    let output = output.assert_success()?;

    // First line
    output.stdout_line_contains("# Default configuration for zebrad")?;

    Ok(())
}
#[test]
fn generate_args() -> Result<()> {
    let _init_guard = zebra_test::init();

    let testdir = testdir()?;
    let testdir = &testdir;

    // unexpected free argument `argument`
    let child = testdir.spawn_child(args!["generate", "argument"])?;
    let output = child.wait_with_output()?;
    output.assert_failure()?;

    // unrecognized option `-f`
    let child = testdir.spawn_child(args!["generate", "-f"])?;
    let output = child.wait_with_output()?;
    output.assert_failure()?;

    // missing argument to option `-o`
    let child = testdir.spawn_child(args!["generate", "-o"])?;
    let output = child.wait_with_output()?;
    output.assert_failure()?;

    // Add a config file name to tempdir path
    let generated_config_path = testdir.path().join("zebrad.toml");

    // Valid
    let child =
        testdir.spawn_child(args!["generate", "-o": generated_config_path.to_str().unwrap()])?;
    let output = child.wait_with_output()?;
    let output = output.assert_success()?;

    assert_with_context!(
        testdir.path().exists(),
        &output,
        "test temp directory not found"
    );
    assert_with_context!(
        generated_config_path.exists(),
        &output,
        "generated config file not found"
    );

    Ok(())
}
#[test]
fn help_no_args() -> Result<()> {
    let _init_guard = zebra_test::init();
    let testdir = testdir()?.with_config(&mut default_test_config(Mainnet)?)?;
let child = testdir.spawn_child(args!["help"])?;
let output = child.wait_with_output()?;
let output = output.assert_success()?;
// The first line should have the version
output.any_output_line(
is_zebrad_version,
&output.output.stdout,
"stdout",
"are valid zebrad semantic versions",
)?;
// Make sure this is the help output by looking for the usage string
output.stdout_line_contains("Usage:")?;
Ok(())
}
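
/// Test that `zebrad help` fails when given an unrecognized subcommand or an
/// unrecognized option.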
#[test]
fn help_args() -> Result<()> {
let _init_guard = zebra_test::init();
let testdir = testdir()?;
let testdir = &testdir;
// The subcommand "argument" wasn't recognized.
let child = testdir.spawn_child(args!["help", "argument"])?;
let output = child.wait_with_output()?;
output.assert_failure()?;
// option `-f` does not accept an argument
let child = testdir.spawn_child(args!["help", "-f"])?;
let output = child.wait_with_output()?;
output.assert_failure()?;
Ok(())
}
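
/// Test that `zebrad -v start` launches with a persistent state, passes the
/// legacy chain check, and runs until it is killed.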
#[test]
fn start_no_args() -> Result<()> {
let _init_guard = zebra_test::init();
// start caches state, so run one of the start tests with persistent state
let testdir = testdir()?.with_config(&mut persistent_test_config(Mainnet)?)?;
let mut child = testdir.spawn_child(args!["-v", "start"])?;
// Run the program and kill it after a few seconds
std::thread::sleep(LAUNCH_DELAY);
child.kill(false)?;
let output = child.wait_with_output()?;
let output = output.assert_failure()?;
output.stdout_line_contains("Starting zebrad")?;
// Make sure the command passed the legacy chain check
output.stdout_line_contains("starting legacy chain check")?;
output.stdout_line_contains("no legacy chain found")?;
// Make sure the command was killed
output.assert_was_killed()?;
Ok(())
}
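
/// Test that `zebrad start` launches with a default config, and that it fails
/// when given an unrecognized option.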
#[test]
fn start_args() -> Result<()> {
let _init_guard = zebra_test::init();
let testdir = testdir()?.with_config(&mut default_test_config(Mainnet)?)?;
let testdir = &testdir;
let mut child = testdir.spawn_child(args!["start"])?;
// Run the program and kill it after a few seconds
std::thread::sleep(LAUNCH_DELAY);
child.kill(false)?;
let output = child.wait_with_output()?;
// Make sure the command was killed
output.assert_was_killed()?;
output.assert_failure()?;
// unrecognized option `-f`
let child = testdir.spawn_child(args!["start", "-f"])?;
let output = child.wait_with_output()?;
output.assert_failure()?;
Ok(())
}
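
/// Test that initializing the state database does not block the futures
/// executor for longer than `MAX_ASYNC_BLOCKING_TIME`.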
#[tokio::test]
async fn db_init_outside_future_executor() -> Result<()> {
let _init_guard = zebra_test::init();
let config = default_test_config(Mainnet)?;
let start = Instant::now();
// This test doesn't need UTXOs to be verified efficiently, because it uses an empty state.
let db_init_handle =
zebra_state::spawn_init(config.state.clone(), config.network.network, Height::MAX, 0);
// It's faster to panic if the init takes longer than expected, because the executor
// will wait indefinitely for a blocking operation to finish once it has started
let block_duration = start.elapsed();
assert!(
block_duration <= MAX_ASYNC_BLOCKING_TIME,
"futures executor was blocked longer than expected ({block_duration:?})",
);
db_init_handle.await?;
Ok(())
}
/// Check that the block state and peer list caches are written to disk.
#[test]
fn persistent_mode() -> Result<()> {
let _init_guard = zebra_test::init();
let testdir = testdir()?.with_config(&mut persistent_test_config(Mainnet)?)?;
let testdir = &testdir;
let mut child = testdir.spawn_child(args!["-v", "start"])?;
// Run the program and kill it after a few seconds
std::thread::sleep(EXTENDED_LAUNCH_DELAY);
child.kill(false)?;
let output = child.wait_with_output()?;
// Make sure the command was killed
output.assert_was_killed()?;
let cache_dir = testdir.path().join("state");
assert_with_context!(
cache_dir.read_dir()?.count() > 0,
&output,
"state directory empty despite persistent state config"
);
let cache_dir = testdir.path().join("network");
assert_with_context!(
cache_dir.read_dir()?.count() > 0,
&output,
"network directory empty despite persistent network config"
);
Ok(())
}
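
/// Test an ephemeral state with the default config and a pre-existing cache directory.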
#[test]
fn ephemeral_existing_directory() -> Result<()> {
ephemeral(EphemeralConfig::Default, EphemeralCheck::ExistingDirectory)
}
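
/// Test an ephemeral state with the default config and a missing cache directory.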
#[test]
fn ephemeral_missing_directory() -> Result<()> {
ephemeral(EphemeralConfig::Default, EphemeralCheck::MissingDirectory)
}
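
/// Test an ephemeral state with a misconfigured `cache_dir` and a pre-existing cache directory.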
#[test]
fn misconfigured_ephemeral_existing_directory() -> Result<()> {
ephemeral(
EphemeralConfig::MisconfiguredCacheDir,
EphemeralCheck::ExistingDirectory,
)
}
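
/// Test an ephemeral state with a misconfigured `cache_dir` and a missing cache directory.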
#[test]
fn misconfigured_ephemeral_missing_directory() -> Result<()> {
ephemeral(
EphemeralConfig::MisconfiguredCacheDir,
EphemeralCheck::MissingDirectory,
)
}
/// Check that the state directory created on disk matches the state config.
///
/// TODO: do a similar test for `network.cache_dir`
#[tracing::instrument]
fn ephemeral(cache_dir_config: EphemeralConfig, cache_dir_check: EphemeralCheck) -> Result<()> {
use std::io::ErrorKind;
let _init_guard = zebra_test::init();
let mut config = default_test_config(Mainnet)?;
let run_dir = testdir()?;
let ignored_cache_dir = run_dir.path().join("state");
if cache_dir_config == EphemeralConfig::MisconfiguredCacheDir {
// Write a configuration that sets both the cache_dir and ephemeral options
config.state.cache_dir = ignored_cache_dir.clone();
}
if cache_dir_check == EphemeralCheck::ExistingDirectory {
// We set the cache_dir config to a newly created empty temp directory,
// then make sure that it is empty after the test
fs::create_dir(&ignored_cache_dir)?;
}
let mut child = run_dir
.path()
.with_config(&mut config)?
.spawn_child(args!["start"])?;
// Run the program and kill it after a few seconds
std::thread::sleep(EXTENDED_LAUNCH_DELAY);
child.kill(false)?;
let output = child.wait_with_output()?;
// Make sure the command was killed
output.assert_was_killed()?;
let expected_run_dir_file_names = match cache_dir_check {
// we created the state directory, so it should still exist
EphemeralCheck::ExistingDirectory => {
assert_with_context!(
ignored_cache_dir
.read_dir()
.expect("ignored_cache_dir should still exist")
.count()
== 0,
&output,
"ignored_cache_dir not empty for ephemeral {:?} {:?}: {:?}",
cache_dir_config,
cache_dir_check,
ignored_cache_dir.read_dir().unwrap().collect::<Vec<_>>()
);
["state", "network", "zebrad.toml"].iter()
}
// we didn't create the state directory, so it should not exist
EphemeralCheck::MissingDirectory => {
assert_with_context!(
ignored_cache_dir
.read_dir()
.expect_err("ignored_cache_dir should not exist")
.kind()
== ErrorKind::NotFound,
&output,
"unexpected creation of ignored_cache_dir for ephemeral {:?} {:?}: the cache dir exists and contains these files: {:?}",
cache_dir_config,
cache_dir_check,
ignored_cache_dir.read_dir().unwrap().collect::<Vec<_>>()
);
["network", "zebrad.toml"].iter()
}
};
let expected_run_dir_file_names = expected_run_dir_file_names.map(Into::into).collect();
let run_dir_file_names = run_dir
.path()
.read_dir()
.expect("run_dir should still exist")
.map(|dir_entry| dir_entry.expect("run_dir is readable").file_name())
// ignore directory list order, because it can vary based on the OS and filesystem
.collect::<HashSet<_>>();
assert_with_context!(
run_dir_file_names == expected_run_dir_file_names,
&output,
"run_dir not empty for ephemeral {:?} {:?}: expected {:?}, actual: {:?}",
cache_dir_config,
cache_dir_check,
expected_run_dir_file_names,
run_dir_file_names
);
Ok(())
}
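
/// Test that `zebrad --version` succeeds and prints only the version.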
#[test]
fn version_no_args() -> Result<()> {
let _init_guard = zebra_test::init();
let testdir = testdir()?.with_config(&mut default_test_config(Mainnet)?)?;
let child = testdir.spawn_child(args!["--version"])?;
let output = child.wait_with_output()?;
let output = output.assert_success()?;
// The output should only contain the version
output.output_check(
is_zebrad_version,
&output.output.stdout,
"stdout",
"a valid zebrad semantic version",
)?;
Ok(())
}
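
/// Test `--version` with extra arguments: an unrecognized option makes other
/// subcommands fail, but is ignored after `--version`.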
#[test]
fn version_args() -> Result<()> {
let _init_guard = zebra_test::init();
let testdir = testdir()?.with_config(&mut default_test_config(Mainnet)?)?;
let testdir = &testdir;
// unrecognized option `-f`
let child = testdir.spawn_child(args!["tip-height", "-f"])?;
let output = child.wait_with_output()?;
output.assert_failure()?;
// unrecognized option `-f` is ignored
let child = testdir.spawn_child(args!["--version", "-f"])?;
let output = child.wait_with_output()?;
let output = output.assert_success()?;
// The output should only contain the version
output.output_check(
is_zebrad_version,
&output.output.stdout,
"stdout",
"a valid zebrad semantic version",
)?;
Ok(())
}
/// Run config tests that use the default ports and paths.
///
/// Unlike the other tests, these tests cannot run in parallel, because they
/// use the generated config, so parallel execution can cause port and cache
/// conflicts.
#[test]
fn config_tests() -> Result<()> {
valid_generated_config("start", "Starting zebrad")?;
// Check what happens when Zebra parses an invalid config
invalid_generated_config()?;
// Check that we have a current version of the config stored
last_config_is_stored()?;
// Check that Zebra's previous configurations still work
stored_configs_work()?;
// We run the `zebrad` app test after the config tests, to avoid potential port conflicts
app_no_args()?;
Ok(())
}
/// Test that `zebrad` runs the start command with no args
#[tracing::instrument]
fn app_no_args() -> Result<()> {
let _init_guard = zebra_test::init();
// start caches state, so run one of the start tests with persistent state
let testdir = testdir()?.with_config(&mut persistent_test_config(Mainnet)?)?;
tracing::info!(?testdir, "running zebrad with no config (default settings)");
let mut child = testdir.spawn_child(args![])?;
// Run the program and kill it after a few seconds
std::thread::sleep(LAUNCH_DELAY);
child.kill(true)?;
let output = child.wait_with_output()?;
let output = output.assert_failure()?;
output.stdout_line_contains("Starting zebrad")?;
// Make sure the command passed the legacy chain check
output.stdout_line_contains("starting legacy chain check")?;
output.stdout_line_contains("no legacy chain found")?;
// Make sure the command was killed
output.assert_was_killed()?;
Ok(())
}
/// Test that `zebrad start` can parse the output from `zebrad generate`.
#[tracing::instrument]
fn valid_generated_config(command: &str, expect_stdout_line_contains: &str) -> Result<()> {
let _init_guard = zebra_test::init();
let testdir = testdir()?;
let testdir = &testdir;
// Add a config file name to tempdir path
let generated_config_path = testdir.path().join("zebrad.toml");
tracing::info!(?generated_config_path, "generating valid config");
// Generate configuration in temp dir path
let child =
testdir.spawn_child(args!["generate", "-o": generated_config_path.to_str().unwrap()])?;
let output = child.wait_with_output()?;
let output = output.assert_success()?;
assert_with_context!(
generated_config_path.exists(),
&output,
"generated config file not found"
);
tracing::info!(?generated_config_path, "testing valid config parsing");
// Run command using temp dir and kill it after a few seconds
let mut child = testdir.spawn_child(args![command])?;
std::thread::sleep(LAUNCH_DELAY);
child.kill(false)?;
let output = child.wait_with_output()?;
let output = output.assert_failure()?;
output.stdout_line_contains(expect_stdout_line_contains)?;
// [Note on port conflict](#Note on port conflict)
output.assert_was_killed().wrap_err("Possible port or cache conflict. Are there other acceptance test, zebrad, or zcashd processes running?")?;
assert_with_context!(
testdir.path().exists(),
&output,
"test temp directory not found"
);
assert_with_context!(
generated_config_path.exists(),
&output,
"generated config file not found"
);
Ok(())
}
/// Check if the config produced by current zebrad is stored.
#[tracing::instrument]
#[allow(clippy::print_stdout)]
fn last_config_is_stored() -> Result<()> {
let _init_guard = zebra_test::init();
let testdir = testdir()?;
// Add a config file name to tempdir path
let generated_config_path = testdir.path().join("zebrad.toml");
tracing::info!(?generated_config_path, "generated current config");
// Generate configuration in temp dir path
let child =
testdir.spawn_child(args!["generate", "-o": generated_config_path.to_str().unwrap()])?;
let output = child.wait_with_output()?;
let output = output.assert_success()?;
assert_with_context!(
generated_config_path.exists(),
&output,
"generated config file not found"
);
tracing::info!(
?generated_config_path,
"testing current config is in stored configs"
);
// Get the contents of the generated config file
let generated_content =
fs::read_to_string(generated_config_path).expect("Should have been able to read the file");
    // We need to replace the cache dir path, because stored configs have a dummy `cache_dir` string there.
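    //
    // For example (with a purely hypothetical default cache path), a generated line like
    //     cache_dir = "/home/user/.cache/zebra"
    // becomes the placeholder used by the stored configs:
    //     cache_dir = "cache_dir"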
let processed_generated_content = generated_content
.replace(
zebra_state::Config::default()
.cache_dir
.to_str()
.expect("a valid cache dir"),
"cache_dir",
)
.trim()
.to_string();
// Loop all the stored configs
//
// TODO: use the same filename list code in last_config_is_stored() and stored_configs_work()
for config_file in configs_dir()
.read_dir()
.expect("read_dir call failed")
.flatten()
{
let config_file_path = config_file.path();
let config_file_name = config_file_path
.file_name()
.expect("config files must have a file name")
.to_string_lossy();
if config_file_name.as_ref().starts_with('.') || config_file_name.as_ref().starts_with('#')
{
// Skip editor files and other invalid config paths
tracing::info!(
?config_file_path,
"skipping hidden/temporary config file path"
);
continue;
}
// Read stored config
let stored_content = fs::read_to_string(config_file_full_path(config_file_path))
.expect("Should have been able to read the file")
.trim()
.to_string();
    // If any stored config is equal to the generated one, then we are good.
if stored_content.eq(&processed_generated_content) {
return Ok(());
}
}
println!(
"\n\
Here is the missing config file: \n\
\n\
{processed_generated_content}\n"
);
Err(eyre!(
"latest zebrad config is not being tested for compatibility. \n\
\n\
Take the missing config file logged above, \n\
and commit it to Zebra's git repository as:\n\
zebrad/tests/common/configs/{}<next-release-tag>.toml \n\
\n\
Or run: \n\
cargo build {}--bin zebrad && \n\
zebrad generate | \n\
sed 's/cache_dir = \".*\"/cache_dir = \"cache_dir\"/' > \n\
zebrad/tests/common/configs/{}<next-release-tag>.toml",
if cfg!(feature = "shielded-scan") {
SHIELDED_SCAN_CONFIG_PREFIX
} else {
""
},
if cfg!(feature = "shielded-scan") {
"--features=shielded-scan "
} else {
""
},
if cfg!(feature = "shielded-scan") {
SHIELDED_SCAN_CONFIG_PREFIX
} else {
""
},
))
}
/// Checks that Zebra prints an informative message when it cannot parse the
/// config file.
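///
/// The test takes a valid generated config and replaces the `mempool.eviction_memory_time`
/// value with its deprecated nested-table form:
///
/// ```toml
/// [mempool.eviction_memory_time]
/// nanos = 0
/// secs = 3600
/// ```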
#[tracing::instrument]
fn invalid_generated_config() -> Result<()> {
let _init_guard = zebra_test::init();
let testdir = &testdir()?;
// Add a config file name to tempdir path.
let config_path = testdir.path().join("zebrad.toml");
tracing::info!(
?config_path,
"testing invalid config parsing: generating valid config"
);
// Generate a valid config file in the temp dir.
let child = testdir.spawn_child(args!["generate", "-o": config_path.to_str().unwrap()])?;
let output = child.wait_with_output()?;
let output = output.assert_success()?;
assert_with_context!(
config_path.exists(),
&output,
"generated config file not found"
);
// Load the valid config file that Zebra generated.
let mut config_file = fs::read_to_string(config_path.to_str().unwrap()).unwrap();
// Let's now alter the config file so that it contains a deprecated format
// of `mempool.eviction_memory_time`.
config_file = config_file
.lines()
// Remove the valid `eviction_memory_time` key/value pair from the
// config.
.filter(|line| !line.contains("eviction_memory_time"))
.map(|line| line.to_owned() + "\n")
.collect();
// Append the `eviction_memory_time` key/value pair in a deprecated format.
config_file += r"
[mempool.eviction_memory_time]
nanos = 0
secs = 3600
";
tracing::info!(?config_path, "writing invalid config");
// Write the altered config file so that Zebra can pick it up.
fs::write(config_path.to_str().unwrap(), config_file.as_bytes())
.expect("Could not write the altered config file.");
tracing::info!(?config_path, "testing invalid config parsing");
// Run Zebra in a temp dir so that it loads the config.
let mut child = testdir.spawn_child(args!["start"])?;
// Return an error if Zebra is running for more than two seconds.
//
// Since the config is invalid, Zebra should terminate instantly after its
// start. Two seconds should be sufficient for Zebra to read the config file
// and terminate.
std::thread::sleep(Duration::from_secs(2));
if child.is_running() {
// We're going to error anyway, so return an error that makes sense to the developer.
child.kill(true)?;
return Err(eyre!(
"Zebra should have exited after reading the invalid config"
));
}
let output = child.wait_with_output()?;
// Check that Zebra produced an informative message.
output.stderr_contains("Zebra could not parse the provided config file. This might mean you are using a deprecated format of the file.")?;
Ok(())
}
/// Test that all stored versions of `zebrad.toml` can be parsed by the latest `zebrad`.
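///
/// Roughly speaking, for each stored config under `zebrad/tests/common/configs/`, this test
/// spawns the equivalent of:
///
/// ```console
/// $ zebrad -c zebrad/tests/common/configs/<stored-config>.toml start
/// ```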
#[tracing::instrument]
fn stored_configs_work() -> Result<()> {
let old_configs_dir = configs_dir();
tracing::info!(?old_configs_dir, "testing older config parsing");
for config_file in old_configs_dir
.read_dir()
.expect("read_dir call failed")
.flatten()
{
let config_file_path = config_file.path();
let config_file_name = config_file_path
.file_name()
.expect("config files must have a file name")
.to_str()
.expect("config file names are valid unicode");
if config_file_name.starts_with('.') || config_file_name.starts_with('#') {
// Skip editor files and other invalid config paths
tracing::info!(
?config_file_path,
"skipping hidden/temporary config file path"
);
continue;
}
        // Ignore files starting with the getblocktemplate prefix
        // if we were not built with the getblocktemplate-rpcs feature.
#[cfg(not(feature = "getblocktemplate-rpcs"))]
if config_file_name.starts_with(GET_BLOCK_TEMPLATE_CONFIG_PREFIX) {
tracing::info!(
?config_file_path,
"skipping getblocktemplate-rpcs config file path"
);
continue;
}
        // Ignore files starting with the shielded-scan prefix
        // if we were not built with the shielded-scan feature.
#[cfg(not(feature = "shielded-scan"))]
if config_file_name.starts_with(SHIELDED_SCAN_CONFIG_PREFIX) {
tracing::info!(?config_file_path, "skipping shielded-scan config file path");
continue;
}
let run_dir = testdir()?;
let stored_config_path = config_file_full_path(config_file.path());
tracing::info!(
?stored_config_path,
"testing old config can be parsed by current zebrad"
);
// run zebra with stored config
let mut child =
run_dir.spawn_child(args!["-c", stored_config_path.to_str().unwrap(), "start"])?;
let success_regexes = [
// When logs are sent to the terminal, we see the config loading message and path.
format!(
"loaded zebrad config.*config_path.*=.*{}",
regex::escape(config_file_name)
),
// If they are sent to a file, we see a log file message on stdout,
// and a logo, welcome message, and progress bar on stderr.
"Sending logs to".to_string(),
// TODO: add expect_stdout_or_stderr_line_matches() and check for this instead:
//"Thank you for running a mainnet zebrad".to_string(),
];
tracing::info!(
?stored_config_path,
?success_regexes,
"waiting for zebrad to parse config and start logging"
);
let success_regexes = success_regexes
.iter()
.collect_regex_set()
.expect("regexes are valid");
// Zebra was able to start with the stored config.
child.expect_stdout_line_matches(success_regexes)?;
// finish
child.kill(false)?;
let output = child.wait_with_output()?;
let output = output.assert_failure()?;
// [Note on port conflict](#Note on port conflict)
output
.assert_was_killed()
.wrap_err("Possible port conflict. Are there other acceptance tests running?")?;
}
Ok(())
}
/// Test if `zebrad` can sync the first checkpoint on mainnet.
///
/// The first checkpoint contains a single genesis block.
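///
/// For example, this test can be run on its own with:
///
/// ```console
/// $ cargo test sync_one_checkpoint_mainnet -- --nocapture
/// ```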
#[test]
fn sync_one_checkpoint_mainnet() -> Result<()> {
sync_until(
TINY_CHECKPOINT_TEST_HEIGHT,
Mainnet,
STOP_AT_HEIGHT_REGEX,
TINY_CHECKPOINT_TIMEOUT,
None,
MempoolBehavior::ShouldNotActivate,
// checkpoint sync is irrelevant here - all tested checkpoints are mandatory
true,
true,
)
.map(|_tempdir| ())
}
/// Test if `zebrad` can sync the first checkpoint on testnet.
///
/// The first checkpoint contains a single genesis block.
// TODO: disabled because testnet is not currently reliable
// #[test]
#[allow(dead_code)]
fn sync_one_checkpoint_testnet() -> Result<()> {
sync_until(
TINY_CHECKPOINT_TEST_HEIGHT,
Testnet,
STOP_AT_HEIGHT_REGEX,
TINY_CHECKPOINT_TIMEOUT,
None,
MempoolBehavior::ShouldNotActivate,
// checkpoint sync is irrelevant here - all tested checkpoints are mandatory
true,
true,
)
.map(|_tempdir| ())
}
/// Test if `zebrad` can sync the first checkpoint, restart, and stop on load.
#[test]
fn restart_stop_at_height() -> Result<()> {
let _init_guard = zebra_test::init();
restart_stop_at_height_for_network(Network::Mainnet, TINY_CHECKPOINT_TEST_HEIGHT)?;
// TODO: disabled because testnet is not currently reliable
// restart_stop_at_height_for_network(Network::Testnet, TINY_CHECKPOINT_TEST_HEIGHT)?;
Ok(())
}
fn restart_stop_at_height_for_network(network: Network, height: block::Height) -> Result<()> {
let reuse_tempdir = sync_until(
height,
network,
STOP_AT_HEIGHT_REGEX,
TINY_CHECKPOINT_TIMEOUT,
None,
MempoolBehavior::ShouldNotActivate,
// checkpoint sync is irrelevant here - all tested checkpoints are mandatory
true,
true,
)?;
// if stopping corrupts the rocksdb database, zebrad might hang or crash here
// if stopping does not write the rocksdb database to disk, Zebra will
// sync, rather than stopping immediately at the configured height
sync_until(
height,
network,
"state is already at the configured height",
STOP_ON_LOAD_TIMEOUT,
reuse_tempdir,
MempoolBehavior::ShouldNotActivate,
// checkpoint sync is irrelevant here - all tested checkpoints are mandatory
true,
false,
)?;
Ok(())
}
/// Test if `zebrad` can activate the mempool on mainnet.
/// Debug activation happens after committing the genesis block.
#[test]
fn activate_mempool_mainnet() -> Result<()> {
sync_until(
block::Height(TINY_CHECKPOINT_TEST_HEIGHT.0 + 1),
Mainnet,
STOP_AT_HEIGHT_REGEX,
TINY_CHECKPOINT_TIMEOUT,
None,
MempoolBehavior::ForceActivationAt(TINY_CHECKPOINT_TEST_HEIGHT),
// checkpoint sync is irrelevant here - all tested checkpoints are mandatory
true,
true,
)
.map(|_tempdir| ())
}
/// Test if `zebrad` can sync some larger checkpoints on mainnet.
///
/// This test might fail or time out on slow or unreliable networks,
/// so we don't run it by default. It also takes a lot longer than
/// our 10 second target time for default tests.
#[test]
#[ignore]
fn sync_large_checkpoints_mainnet() -> Result<()> {
let reuse_tempdir = sync_until(
LARGE_CHECKPOINT_TEST_HEIGHT,
Mainnet,
STOP_AT_HEIGHT_REGEX,
LARGE_CHECKPOINT_TIMEOUT,
None,
MempoolBehavior::ShouldNotActivate,
// checkpoint sync is irrelevant here - all tested checkpoints are mandatory
true,
true,
)?;
// if this sync fails, see the failure notes in `restart_stop_at_height`
sync_until(
(LARGE_CHECKPOINT_TEST_HEIGHT - 1).unwrap(),
Mainnet,
"previous state height is greater than the stop height",
STOP_ON_LOAD_TIMEOUT,
reuse_tempdir,
MempoolBehavior::ShouldNotActivate,
// checkpoint sync is irrelevant here - all tested checkpoints are mandatory
true,
false,
)?;
Ok(())
}
// TODO: We had `sync_large_checkpoints_testnet` and `sync_large_checkpoints_mempool_testnet`,
// but they were removed because the testnet is unreliable (#1222).
// We should re-add them after we have more testnet instances (#1791).
/// Test if `zebrad` can run side by side with the mempool.
/// This is done by running the mempool and syncing some checkpoints.
#[test]
#[ignore]
fn sync_large_checkpoints_mempool_mainnet() -> Result<()> {
sync_until(
MEDIUM_CHECKPOINT_TEST_HEIGHT,
Mainnet,
STOP_AT_HEIGHT_REGEX,
LARGE_CHECKPOINT_TIMEOUT,
None,
MempoolBehavior::ForceActivationAt(TINY_CHECKPOINT_TEST_HEIGHT),
// checkpoint sync is irrelevant here - all tested checkpoints are mandatory
true,
true,
)
.map(|_tempdir| ())
}
#[tracing::instrument]
fn create_cached_database(network: Network) -> Result<()> {
let height = network.mandatory_checkpoint_height();
let checkpoint_stop_regex =
format!("{STOP_AT_HEIGHT_REGEX}.*commit checkpoint-verified request");
create_cached_database_height(
network,
height,
// Use checkpoints to increase sync performance while caching the database
true,
// Check that we're still using checkpoints when we finish the cached sync
&checkpoint_stop_regex,
)
}
#[tracing::instrument]
fn sync_past_mandatory_checkpoint(network: Network) -> Result<()> {
let height = network.mandatory_checkpoint_height() + 1200;
let full_validation_stop_regex =
format!("{STOP_AT_HEIGHT_REGEX}.*commit contextually-verified request");
create_cached_database_height(
network,
height.unwrap(),
// Test full validation by turning checkpoints off
false,
// Check that we're doing full validation when we finish the cached sync
&full_validation_stop_regex,
)
}
/// Sync `network` until the chain tip is reached, or a timeout elapses.
///
/// The timeout is specified using an environment variable, with the name configured by the
/// `timeout_argument_name` parameter. The value of the environment variable must be the number
/// of minutes, specified as an integer.
#[allow(clippy::print_stderr)]
#[tracing::instrument]
fn full_sync_test(network: Network, timeout_argument_name: &str) -> Result<()> {
let timeout_argument: Option<u64> = env::var(timeout_argument_name)
.ok()
.and_then(|timeout_string| timeout_string.parse().ok());
// # TODO
//
// Replace hard-coded values in create_cached_database_height with:
// - the timeout in the environmental variable
// - the path from ZEBRA_CACHED_STATE_DIR
if let Some(_timeout_minutes) = timeout_argument {
create_cached_database_height(
network,
// Just keep going until we reach the chain tip
block::Height::MAX,
// Use the checkpoints to sync quickly, then do full validation until the chain tip
true,
// Finish when we reach the chain tip
SYNC_FINISHED_REGEX,
)
} else {
eprintln!(
"Skipped full sync test for {network}, \
set the {timeout_argument_name:?} environmental variable to run the test",
);
Ok(())
}
}
// These tests are ignored because they take too long to run during our
// traditional CI, and they depend on persistent state that cannot be made
// available in GitHub Actions or Google Cloud Build. Instead we run these tests
// directly in a VM we spin up on Google Compute Engine, where we can mount
// drives populated by the sync_to_mandatory_checkpoint tests, snapshot those drives,
// and then use them to more quickly run the sync_past_mandatory_checkpoint tests.
/// Sync up to the mandatory checkpoint height on mainnet and stop.
#[allow(dead_code)]
#[cfg_attr(feature = "test_sync_to_mandatory_checkpoint_mainnet", test)]
fn sync_to_mandatory_checkpoint_mainnet() -> Result<()> {
let _init_guard = zebra_test::init();
let network = Mainnet;
create_cached_database(network)
}
/// Sync to the mandatory checkpoint height testnet and stop.
#[allow(dead_code)]
#[cfg_attr(feature = "test_sync_to_mandatory_checkpoint_testnet", test)]
fn sync_to_mandatory_checkpoint_testnet() -> Result<()> {
let _init_guard = zebra_test::init();
let network = Testnet;
create_cached_database(network)
}
/// Test syncing 1200 blocks (3 checkpoints) past the mandatory checkpoint on mainnet.
///
/// This assumes that the configured state is already synced at or near the mandatory checkpoint
/// activation on mainnet. If the state has already synced past the mandatory checkpoint
/// activation by 1200 blocks, it will fail.
#[allow(dead_code)]
#[cfg_attr(feature = "test_sync_past_mandatory_checkpoint_mainnet", test)]
fn sync_past_mandatory_checkpoint_mainnet() -> Result<()> {
let _init_guard = zebra_test::init();
let network = Mainnet;
sync_past_mandatory_checkpoint(network)
}
/// Test syncing 1200 blocks (3 checkpoints) past the mandatory checkpoint on testnet.
///
/// This assumes that the configured state is already synced at or near the mandatory checkpoint
/// activation on testnet. If the state has already synced past the mandatory checkpoint
/// activation by 1200 blocks, it will fail.
#[allow(dead_code)]
#[cfg_attr(feature = "test_sync_past_mandatory_checkpoint_testnet", test)]
fn sync_past_mandatory_checkpoint_testnet() -> Result<()> {
let _init_guard = zebra_test::init();
let network = Testnet;
sync_past_mandatory_checkpoint(network)
}
/// Test if `zebrad` can fully sync the chain on mainnet.
///
/// This test takes a long time to run, so we don't run it by default. This test is only executed
/// if there is an environment variable named `FULL_SYNC_MAINNET_TIMEOUT_MINUTES` set with the number
/// of minutes to wait for synchronization to complete before considering that the test failed.
#[test]
#[ignore]
fn full_sync_mainnet() -> Result<()> {
// TODO: add "ZEBRA" at the start of this env var, to avoid clashes
full_sync_test(Mainnet, "FULL_SYNC_MAINNET_TIMEOUT_MINUTES")
}
/// Test if `zebrad` can fully sync the chain on testnet.
///
/// This test takes a long time to run, so we don't run it by default. This test is only executed
/// if there is an environment variable named `FULL_SYNC_TESTNET_TIMEOUT_MINUTES` set with the number
/// of minutes to wait for synchronization to complete before considering that the test failed.
#[test]
#[ignore]
fn full_sync_testnet() -> Result<()> {
// TODO: add "ZEBRA" at the start of this env var, to avoid clashes
full_sync_test(Testnet, "FULL_SYNC_TESTNET_TIMEOUT_MINUTES")
}
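/// Test that the metrics endpoint responds when `metrics.endpoint_addr` is configured.
///
/// This test is only compiled with the `prometheus` feature; assuming that feature is exposed
/// by the `zebrad` crate, it can be run with something like:
///
/// ```console
/// $ cargo test metrics_endpoint --features prometheus -- --nocapture
/// ```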
#[cfg(feature = "prometheus")]
#[tokio::test]
async fn metrics_endpoint() -> Result<()> {
use hyper::Client;
let _init_guard = zebra_test::init();
// [Note on port conflict](#Note on port conflict)
let port = random_known_port();
let endpoint = format!("127.0.0.1:{port}");
let url = format!("http://{endpoint}");
// Write a configuration that has metrics endpoint_addr set
let mut config = default_test_config(Mainnet)?;
config.metrics.endpoint_addr = Some(endpoint.parse().unwrap());
let dir = testdir()?.with_config(&mut config)?;
let child = dir.spawn_child(args!["start"])?;
// Run `zebrad` for a few seconds before testing the endpoint
// Since this is an async function, we have to use a sleep future, not a thread sleep.
tokio::time::sleep(LAUNCH_DELAY).await;
// Create an http client
let client = Client::new();
// Test metrics endpoint
let res = client.get(url.try_into().expect("url is valid")).await;
let (res, child) = child.kill_on_error(res)?;
assert!(res.status().is_success());
let body = hyper::body::to_bytes(res).await;
let (body, mut child) = child.kill_on_error(body)?;
child.kill(false)?;
let output = child.wait_with_output()?;
let output = output.assert_failure()?;
output.any_output_line_contains(
"# TYPE zebrad_build_info counter",
&body,
"metrics exporter response",
"the metrics response header",
)?;
std::str::from_utf8(&body).expect("unexpected invalid UTF-8 in metrics exporter response");
// Make sure the metrics endpoint was started
output.stdout_line_contains(format!("Opened metrics endpoint at {endpoint}").as_str())?;
// [Note on port conflict](#Note on port conflict)
output
.assert_was_killed()
.wrap_err("Possible port conflict. Are there other acceptance tests running?")?;
Ok(())
}
#[cfg(feature = "filter-reload")]
#[tokio::test]
async fn tracing_endpoint() -> Result<()> {
use hyper::{Body, Client, Request};
let _init_guard = zebra_test::init();
// [Note on port conflict](#Note on port conflict)
let port = random_known_port();
let endpoint = format!("127.0.0.1:{port}");
let url_default = format!("http://{endpoint}");
let url_filter = format!("{url_default}/filter");
// Write a configuration that has tracing endpoint_addr option set
let mut config = default_test_config(Mainnet)?;
config.tracing.endpoint_addr = Some(endpoint.parse().unwrap());
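// For reference, the equivalent `zebrad.toml` section looks roughly like this
// sketch (the port here is chosen at random by the test):
//
//     [tracing]
//     endpoint_addr = "127.0.0.1:3000"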
let dir = testdir()?.with_config(&mut config)?;
let child = dir.spawn_child(args!["start"])?;
// Run `zebrad` for a few seconds before testing the endpoint
// Since this is an async function, we have to use a sleep future, not a thread sleep.
tokio::time::sleep(LAUNCH_DELAY).await;
// Create an http client
let client = Client::new();
// Test tracing endpoint
let res = client
.get(url_default.try_into().expect("url_default is valid"))
.await;
let (res, child) = child.kill_on_error(res)?;
assert!(res.status().is_success());
let body = hyper::body::to_bytes(res).await;
let (body, child) = child.kill_on_error(body)?;
// Set a filter and make sure it was changed
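// The filter body uses `tracing_subscriber` directive syntax, for example a single
// target like "zebrad=debug", or a comma-separated list such as
// "zebrad=debug,zebra_network=trace" (the second form is only an illustration).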
let request = Request::post(url_filter.clone())
.body(Body::from("zebrad=debug"))
.unwrap();
let post = client.request(request).await;
let (_post, child) = child.kill_on_error(post)?;
let tracing_res = client
.get(url_filter.try_into().expect("url_filter is valid"))
.await;
let (tracing_res, child) = child.kill_on_error(tracing_res)?;
assert!(tracing_res.status().is_success());
let tracing_body = hyper::body::to_bytes(tracing_res).await;
let (tracing_body, mut child) = child.kill_on_error(tracing_body)?;
child.kill(false)?;
let output = child.wait_with_output()?;
let output = output.assert_failure()?;
// Make sure tracing endpoint was started
output.stdout_line_contains(format!("Opened tracing endpoint at {endpoint}").as_str())?;
// TODO: Match some trace level messages from output
// Make sure the endpoint header is correct
// The header is split over two lines. But we don't want to require line
// breaks at a specific word, so we run two checks for different substrings.
output.any_output_line_contains(
"HTTP endpoint allows dynamic control of the filter",
&body,
"tracing filter endpoint response",
"the tracing response header",
)?;
output.any_output_line_contains(
"tracing events",
&body,
"tracing filter endpoint response",
"the tracing response header",
)?;
std::str::from_utf8(&body).expect("unexpected invalid UTF-8 in tracing filter response");
// Make sure endpoint requests change the filter
output.any_output_line_contains(
"zebrad=debug",
&tracing_body,
"tracing filter endpoint response",
"the modified tracing filter",
)?;
std::str::from_utf8(&tracing_body)
.expect("unexpected invalid UTF-8 in modified tracing filter response");
// [Note on port conflict](#Note on port conflict)
output
.assert_was_killed()
.wrap_err("Possible port conflict. Are there other acceptance tests running?")?;
Ok(())
}
/// Test that the JSON-RPC endpoint responds to a request,
/// when configured with a single thread.
#[tokio::test]
async fn rpc_endpoint_single_thread() -> Result<()> {
rpc_endpoint(false).await
}
/// Test that the JSON-RPC endpoint responds to a request,
/// when configured with multiple threads.
#[tokio::test]
async fn rpc_endpoint_parallel_threads() -> Result<()> {
rpc_endpoint(true).await
}
/// Test that the JSON-RPC endpoint responds to a request.
///
/// Set `parallel_cpu_threads` to true to auto-configure based on the number of CPU cores.
#[tracing::instrument]
async fn rpc_endpoint(parallel_cpu_threads: bool) -> Result<()> {
let _init_guard = zebra_test::init();
if zebra_test::net::zebra_skip_network_tests() {
return Ok(());
}
// Write a configuration that has RPC listen_addr set
// [Note on port conflict](#Note on port conflict)
let mut config = random_known_rpc_port_config(parallel_cpu_threads, Mainnet)?;
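// For reference, the generated config contains an RPC section roughly like this
// sketch (the actual port is chosen at random):
//
//     [rpc]
//     listen_addr = "127.0.0.1:28232"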
let dir = testdir()?.with_config(&mut config)?;
let mut child = dir.spawn_child(args!["start"])?;
// Wait until port is open.
child.expect_stdout_line_matches(
format!("Opened RPC endpoint at {}", config.rpc.listen_addr.unwrap()).as_str(),
)?;
// Create an http client
let client = RpcRequestClient::new(config.rpc.listen_addr.unwrap());
// Make the call to the `getinfo` RPC method
let res = client.call("getinfo", "[]".to_string()).await?;
// Test rpc endpoint response
assert!(res.status().is_success());
let body = res.bytes().await;
let (body, mut child) = child.kill_on_error(body)?;
let parsed: Value = serde_json::from_slice(&body)?;
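// The parsed response is a JSON-RPC object, roughly of this shape (illustrative values):
//     {"result": {"build": "1.x.y", "subversion": "/Zebra:1.x.y/"}, "id": "..."}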
// Check that the `build` field has more than 4 characters.
let build = parsed["result"]["build"].as_str().unwrap();
assert!(build.len() > 4, "Got {build}");
// Check that the `subversion` field has "Zebra" in it.
let subversion = parsed["result"]["subversion"].as_str().unwrap();
assert!(subversion.contains("Zebra"), "Got {subversion}");
child.kill(false)?;
let output = child.wait_with_output()?;
let output = output.assert_failure()?;
// [Note on port conflict](#Note on port conflict)
output
.assert_was_killed()
.wrap_err("Possible port conflict. Are there other acceptance tests running?")?;
Ok(())
}
/// Test that the JSON-RPC endpoint responds to requests with different content types.
///
/// This test ensures that the curl examples in the zcashd RPC docs also work with Zebra.
///
/// <https://zcash.github.io/rpc/getblockchaininfo.html>
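///
/// For example, the zcashd docs use curl invocations roughly like this sketch
/// (illustrative only; this test picks a random RPC port rather than 8232):
///
/// ```console
/// $ curl --data-binary '{"jsonrpc": "1.0", "id": "curltest", "method": "getinfo", "params": []}' \
///     -H 'content-type: text/plain;' http://127.0.0.1:8232/
/// ```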
#[tokio::test]
async fn rpc_endpoint_client_content_type() -> Result<()> {
let _init_guard = zebra_test::init();
if zebra_test::net::zebra_skip_network_tests() {
return Ok(());
}
// Write a configuration that has RPC listen_addr set
// [Note on port conflict](#Note on port conflict)
let mut config = random_known_rpc_port_config(true, Mainnet)?;
let dir = testdir()?.with_config(&mut config)?;
let mut child = dir.spawn_child(args!["start"])?;
// Wait until port is open.
child.expect_stdout_line_matches(
format!("Opened RPC endpoint at {}", config.rpc.listen_addr.unwrap()).as_str(),
)?;
// Create an http client
let client = RpcRequestClient::new(config.rpc.listen_addr.unwrap());
// Call the `getinfo` RPC method with no content type.
let res = client
.call_with_no_content_type("getinfo", "[]".to_string())
.await?;
// Zebra will insert a valid `application/json` content type, and the call will succeed.
assert!(res.status().is_success());
// Call the `getinfo` RPC method with a `text/plain` content type.
let res = client
.call_with_content_type("getinfo", "[]".to_string(), "text/plain".to_string())
.await?;
// Zebra will replace it with the valid `application/json` content type, and the call will succeed.
assert!(res.status().is_success());
// Call the `getinfo` RPC method with a `text/plain;` content type, as in the zcashd RPC docs.
let res = client
.call_with_content_type("getinfo", "[]".to_string(), "text/plain;".to_string())
.await?;
// Zebra will replace it with the valid `application/json` content type, and the call will succeed.
assert!(res.status().is_success());
// Call the `getinfo` RPC method with a `text/plain; other string` content type.
let res = client
.call_with_content_type(
"getinfo",
"[]".to_string(),
"text/plain; other string".to_string(),
)
.await?;
// Zebra will replace it with the valid `application/json` content type, and the call will succeed.
assert!(res.status().is_success());
// Call the `getinfo` RPC method with a valid `application/json` content type.
let res = client
.call_with_content_type("getinfo", "[]".to_string(), "application/json".to_string())
.await?;
// Zebra will not replace the valid content type, and the call will succeed.
assert!(res.status().is_success());
// Call the `getinfo` RPC method with an invalid string as the content type.
let res = client
.call_with_content_type("getinfo", "[]".to_string(), "whatever".to_string())
.await?;
// Zebra will not replace the unrecognized content type, and the call will fail.
assert!(res.status().is_client_error());
Ok(())
}
/// Test that Zebra's non-blocking logger works, by creating lots of debug output, but not reading the logs.
/// Then make sure Zebra drops excess log lines. (Previously, it would block waiting for logs to be read.)
///
/// This test is unreliable and sometimes hangs on macOS.
#[test]
#[cfg(not(target_os = "macos"))]
fn non_blocking_logger() -> Result<()> {
use futures::FutureExt;
use std::{sync::mpsc, time::Duration};
let rt = tokio::runtime::Runtime::new().unwrap();
let (done_tx, done_rx) = mpsc::channel();
let test_task_handle: tokio::task::JoinHandle<Result<()>> = rt.spawn(async move {
let _init_guard = zebra_test::init();
// Write a configuration that has RPC listen_addr set
// [Note on port conflict](#Note on port conflict)
let mut config = random_known_rpc_port_config(false, Mainnet)?;
config.tracing.filter = Some("trace".to_string());
config.tracing.buffer_limit = 100;
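// For reference, the equivalent `zebrad.toml` tracing section looks roughly like
// this sketch (the buffer is deliberately tiny so excess log lines get dropped):
//
//     [tracing]
//     filter = "trace"
//     buffer_limit = 100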
let zebra_rpc_address = config.rpc.listen_addr.unwrap();
let dir = testdir()?.with_config(&mut config)?;
let mut child = dir
.spawn_child(args!["start"])?
.with_timeout(TINY_CHECKPOINT_TIMEOUT);
// Wait until port is open.
child.expect_stdout_line_matches(
format!("Opened RPC endpoint at {}", config.rpc.listen_addr.unwrap()).as_str(),
)?;
// Create an http client
let client = RpcRequestClient::new(zebra_rpc_address);
// Most of Zebra's lines are 100-200 characters long, so 500 requests should print enough to fill the unix pipe,
// fill the channel that tracing logs are queued onto, and drop logs rather than block execution.
for _ in 0..500 {
let res = client.call("getinfo", "[]".to_string()).await?;
// Test that zebrad rpc endpoint is still responding to requests
assert!(res.status().is_success());
}
child.kill(false)?;
let output = child.wait_with_output()?;
let output = output.assert_failure()?;
// [Note on port conflict](#Note on port conflict)
output
.assert_was_killed()
.wrap_err("Possible port conflict. Are there other acceptance tests running?")?;
done_tx.send(())?;
Ok(())
});
// Wait up to 45 seconds for the spawned task to finish before shutting down the tokio runtime
if done_rx.recv_timeout(Duration::from_secs(45)).is_ok() {
rt.shutdown_timeout(Duration::from_secs(3));
}
match test_task_handle.now_or_never() {
Some(Ok(result)) => result,
Some(Err(error)) => Err(eyre!("join error: {:?}", error)),
None => Err(eyre!("unexpected test task hang")),
}
}
/// Make sure `lightwalletd` works with Zebra, when both their states are empty.
///
/// This test only runs when the `ZEBRA_TEST_LIGHTWALLETD` env var is set.
///
/// This test doesn't work on Windows, so it is always skipped on that platform.
#[test]
#[cfg(not(target_os = "windows"))]
fn lightwalletd_integration() -> Result<()> {
lightwalletd_integration_test(LaunchWithEmptyState {
launches_lightwalletd: true,
})
}
/// Make sure `zebrad` can sync from peers, but don't actually launch `lightwalletd`.
///
/// This test only runs when the `ZEBRA_CACHED_STATE_DIR` env var is set.
///
/// This test might work on Windows.
#[test]
fn zebrad_update_sync() -> Result<()> {
lightwalletd_integration_test(UpdateZebraCachedStateNoRpc)
}
/// Make sure `lightwalletd` can sync from Zebra, in update sync mode.
///
/// This test only runs when:
/// - the `ZEBRA_TEST_LIGHTWALLETD`, `ZEBRA_CACHED_STATE_DIR`, and
/// `LIGHTWALLETD_DATA_DIR` env vars are set, and
/// - Zebra is compiled with `--features=lightwalletd-grpc-tests`.
///
/// This test doesn't work on Windows, so it is always skipped on that platform.
#[test]
#[cfg(not(target_os = "windows"))]
#[cfg(feature = "lightwalletd-grpc-tests")]
fn lightwalletd_update_sync() -> Result<()> {
lightwalletd_integration_test(UpdateCachedState)
}
/// Make sure `lightwalletd` can fully sync from genesis using Zebra.
///
/// This test only runs when:
/// - the `ZEBRA_TEST_LIGHTWALLETD` and `ZEBRA_CACHED_STATE_DIR` env vars are set, and
/// - Zebra is compiled with `--features=lightwalletd-grpc-tests`.
///
/// This test doesn't work on Windows, so it is always skipped on that platform.
#[test]
#[ignore]
#[cfg(not(target_os = "windows"))]
#[cfg(feature = "lightwalletd-grpc-tests")]
fn lightwalletd_full_sync() -> Result<()> {
lightwalletd_integration_test(FullSyncFromGenesis {
allow_lightwalletd_cached_state: false,
})
}
/// Make sure `lightwalletd` can sync from Zebra, in all available modes.
///
/// Runs the tests in this order:
/// - launch lightwalletd with empty states,
/// - if `ZEBRA_CACHED_STATE_DIR` is set:
/// - run a full sync
/// - if `ZEBRA_CACHED_STATE_DIR` and `LIGHTWALLETD_DATA_DIR` are set:
/// - run a quick update sync,
/// - run a send transaction gRPC test,
/// - run read-only gRPC tests.
///
/// The lightwalletd full, update, and gRPC tests only run with `--features=lightwalletd-grpc-tests`.
///
/// These tests don't work on Windows, so they are always skipped on that platform.
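///
/// A manual run with all the prerequisites might look roughly like this sketch
/// (illustrative paths; adjust them to your environment):
///
/// ```console
/// $ export ZEBRA_TEST_LIGHTWALLETD=1
/// $ export ZEBRA_CACHED_STATE_DIR="/path/to/zebra-cache"
/// $ export LIGHTWALLETD_DATA_DIR="/path/to/lightwalletd-cache"
/// $ cargo test --features=lightwalletd-grpc-tests lightwalletd_test_suite -- --ignored --nocapture
/// ```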
#[tokio::test]
#[ignore]
#[cfg(not(target_os = "windows"))]
async fn lightwalletd_test_suite() -> Result<()> {
lightwalletd_integration_test(LaunchWithEmptyState {
launches_lightwalletd: true,
})?;
// Only runs when ZEBRA_CACHED_STATE_DIR is set.
lightwalletd_integration_test(UpdateZebraCachedStateNoRpc)?;
// These tests need the compile-time gRPC feature
#[cfg(feature = "lightwalletd-grpc-tests")]
{
// Do the quick tests first
// Only runs when LIGHTWALLETD_DATA_DIR and ZEBRA_CACHED_STATE_DIR are set
lightwalletd_integration_test(UpdateCachedState)?;
// Only runs when LIGHTWALLETD_DATA_DIR and ZEBRA_CACHED_STATE_DIR are set
common::lightwalletd::wallet_grpc_test::run().await?;
// Then do the slow tests
// Only runs when ZEBRA_CACHED_STATE_DIR is set.
// When manually running the test suite, allow cached state in the full sync test.
lightwalletd_integration_test(FullSyncFromGenesis {
allow_lightwalletd_cached_state: true,
})?;
// Only runs when LIGHTWALLETD_DATA_DIR and ZEBRA_CACHED_STATE_DIR are set
common::lightwalletd::send_transaction_test::run().await?;
}
Ok(())
}
/// Run a lightwalletd integration test with a configuration for `test_type`.
///
/// Tests that sync `lightwalletd` to the chain tip require the `lightwalletd-grpc-tests` feature:
/// - [`FullSyncFromGenesis`]
/// - [`UpdateCachedState`]
///
/// Set `FullSyncFromGenesis { allow_lightwalletd_cached_state: true }` to speed up manual full sync tests.
///
/// # Reliability
///
/// The random ports in this test can cause [rare port conflicts](#Note on port conflict).
///
/// # Panics
///
/// If the `test_type` requires `--features=lightwalletd-grpc-tests`,
/// but Zebra was not compiled with that feature.
#[tracing::instrument]
fn lightwalletd_integration_test(test_type: TestType) -> Result<()> {
let _init_guard = zebra_test::init();
// We run these sync tests with a network connection, for better test coverage.
let use_internet_connection = true;
let network = Mainnet;
let test_name = "lightwalletd_integration_test";
if test_type.launches_lightwalletd() && !can_spawn_lightwalletd_for_rpc(test_name, test_type) {
tracing::info!("skipping test due to missing lightwalletd network or cached state");
return Ok(());
}
// Launch zebrad with peers, using a predefined zebrad state path.
let (mut zebrad, zebra_rpc_address) = if let Some(zebrad_and_address) =
spawn_zebrad_for_rpc(network, test_name, test_type, use_internet_connection)?
{
tracing::info!(
?test_type,
"running lightwalletd & zebrad integration test, launching zebrad...",
);
zebrad_and_address
} else {
// Skip the test, we don't have the required cached state
return Ok(());
};
// Store the state version message so we can wait for the upgrade later if needed.
let state_version_message = wait_for_state_version_message(&mut zebrad)?;
if test_type.needs_zebra_cached_state() {
zebrad
.expect_stdout_line_matches(r"loaded Zebra state cache .*tip.*=.*Height\([0-9]{7}\)")?;
} else {
// Timeout the test if we're somehow accidentally using a cached state
zebrad.expect_stdout_line_matches("loaded Zebra state cache .*tip.*=.*None")?;
}
// Wait for the state to upgrade and the RPC port, if the upgrade is short.
//
// If incompletely upgraded states get written to the CI cache,
// change DATABASE_FORMAT_UPGRADE_IS_LONG to true.
if test_type.launches_lightwalletd() && !DATABASE_FORMAT_UPGRADE_IS_LONG {
tracing::info!(
?test_type,
?zebra_rpc_address,
"waiting for zebrad to open its RPC port..."
);
wait_for_state_version_upgrade(
&mut zebrad,
&state_version_message,
state_database_format_version_in_code(),
[format!(
"Opened RPC endpoint at {}",
zebra_rpc_address.expect("lightwalletd test must have RPC port")
)],
)?;
} else {
wait_for_state_version_upgrade(
&mut zebrad,
&state_version_message,
state_database_format_version_in_code(),
None,
)?;
}
// Launch lightwalletd, if needed
let lightwalletd_and_port = if test_type.launches_lightwalletd() {
tracing::info!(
?zebra_rpc_address,
"launching lightwalletd connected to zebrad",
);
// Launch lightwalletd
let (mut lightwalletd, lightwalletd_rpc_port) = spawn_lightwalletd_for_rpc(
network,
test_name,
test_type,
zebra_rpc_address.expect("lightwalletd test must have RPC port"),
)?
.expect("already checked for lightwalletd cached state and network");
tracing::info!(
?lightwalletd_rpc_port,
"spawned lightwalletd connected to zebrad",
);
// Check that `lightwalletd` is calling the expected Zebra RPCs
// getblockchaininfo
if test_type.needs_zebra_cached_state() {
lightwalletd.expect_stdout_line_matches(
"Got sapling height 419200 block height [0-9]{7} chain main branchID [0-9a-f]{8}",
)?;
} else {
// Timeout the test if we're somehow accidentally using a cached state in our temp dir
lightwalletd.expect_stdout_line_matches(
"Got sapling height 419200 block height [0-9]{1,6} chain main branchID 00000000",
)?;
}
if test_type.needs_lightwalletd_cached_state() {
lightwalletd
.expect_stdout_line_matches("Done reading [0-9]{7} blocks from disk cache")?;
} else if !test_type.allow_lightwalletd_cached_state() {
// Timeout the test if we're somehow accidentally using a cached state in our temp dir
lightwalletd.expect_stdout_line_matches("Done reading 0 blocks from disk cache")?;
}
// getblock with the first Sapling block in Zebra's state
//
// zcash/lightwalletd calls getbestblockhash here, but
// adityapk00/lightwalletd calls getblock
//
// The log also depends on what is in Zebra's state:
//
// # Cached Zebra State
//
// lightwalletd ingests blocks into its cache.
//
// # Empty Zebra State
//
// lightwalletd tries to download the Sapling activation block, but it's not in the state.
//
// Until the Sapling activation block has been downloaded,
// lightwalletd will keep retrying getblock.
if !test_type.allow_lightwalletd_cached_state() {
if test_type.needs_zebra_cached_state() {
lightwalletd.expect_stdout_line_matches(
"([Aa]dding block to cache)|([Ww]aiting for block)",
)?;
} else {
lightwalletd.expect_stdout_line_matches(regex::escape(
"Waiting for zcashd height to reach Sapling activation height (419200)",
))?;
}
}
Some((lightwalletd, lightwalletd_rpc_port))
} else {
None
};
// Wait for zebrad and lightwalletd to sync, if needed.
let (mut zebrad, lightwalletd) = if test_type.needs_zebra_cached_state() {
if let Some((lightwalletd, lightwalletd_rpc_port)) = lightwalletd_and_port {
#[cfg(feature = "lightwalletd-grpc-tests")]
{
use common::lightwalletd::sync::wait_for_zebrad_and_lightwalletd_sync;
tracing::info!(
?lightwalletd_rpc_port,
"waiting for zebrad and lightwalletd to sync...",
);
let (lightwalletd, mut zebrad) = wait_for_zebrad_and_lightwalletd_sync(
lightwalletd,
lightwalletd_rpc_port,
zebrad,
zebra_rpc_address.expect("lightwalletd test must have RPC port"),
test_type,
// We want to wait for the mempool and network for better coverage
true,
use_internet_connection,
)?;
// Wait for the state to upgrade, if the upgrade is long.
// If this line hangs, change DATABASE_FORMAT_UPGRADE_IS_LONG to false,
// or combine "wait for sync" with "wait for state version upgrade".
if DATABASE_FORMAT_UPGRADE_IS_LONG {
wait_for_state_version_upgrade(
&mut zebrad,
&state_version_message,
state_database_format_version_in_code(),
None,
)?;
}
(zebrad, Some(lightwalletd))
}
#[cfg(not(feature = "lightwalletd-grpc-tests"))]
panic!(
"the {test_type:?} test requires `cargo test --feature lightwalletd-grpc-tests`\n\
zebrad: {zebrad:?}\n\
lightwalletd: {lightwalletd:?}\n\
lightwalletd_rpc_port: {lightwalletd_rpc_port:?}"
);
} else {
// We're just syncing Zebra, so there's no lightwalletd to check
tracing::info!(?test_type, "waiting for zebrad to sync to the tip");
zebrad.expect_stdout_line_matches(SYNC_FINISHED_REGEX)?;
// Wait for the state to upgrade, if the upgrade is long.
// If this line hangs, change DATABASE_FORMAT_UPGRADE_IS_LONG to false.
if DATABASE_FORMAT_UPGRADE_IS_LONG {
wait_for_state_version_upgrade(
&mut zebrad,
&state_version_message,
state_database_format_version_in_code(),
None,
)?;
}
(zebrad, None)
}
} else {
let lightwalletd = lightwalletd_and_port.map(|(lightwalletd, _port)| lightwalletd);
// We don't have a cached state, so we don't do any tip checks for Zebra or lightwalletd
(zebrad, lightwalletd)
};
tracing::info!(
?test_type,
"cleaning up child processes and checking for errors",
);
// Cleanup both processes
//
// If the test fails here, see the [note on port conflict](#Note on port conflict)
//
// zcash/lightwalletd exits by itself, but
// adityapk00/lightwalletd keeps on going, so it gets killed by the test harness.
zebrad.kill(false)?;
if let Some(mut lightwalletd) = lightwalletd {
lightwalletd.kill(false)?;
let lightwalletd_output = lightwalletd.wait_with_output()?.assert_failure()?;
lightwalletd_output
.assert_was_killed()
.wrap_err("Possible port conflict. Are there other acceptance tests running?")?;
}
let zebrad_output = zebrad.wait_with_output()?.assert_failure()?;
zebrad_output
.assert_was_killed()
.wrap_err("Possible port conflict. Are there other acceptance tests running?")?;
Ok(())
}
/// Start 2 zebrad nodes one after the other using the same Zcash listener port.
/// The first node spawned should get exclusive use of the port.
/// The second node will panic with the Zcash listener conflict hint added in #1535.
#[test]
fn zebra_zcash_listener_conflict() -> Result<()> {
let _init_guard = zebra_test::init();
// [Note on port conflict](#Note on port conflict)
let port = random_known_port();
let listen_addr = format!("127.0.0.1:{port}");
// Write a configuration that has our created network listen_addr
let mut config = default_test_config(Mainnet)?;
config.network.listen_addr = listen_addr.parse().unwrap();
let dir1 = testdir()?.with_config(&mut config)?;
let regex1 = regex::escape(&format!("Opened Zcash protocol endpoint at {listen_addr}"));
// From another folder create a configuration with the same listener.
// `network.listen_addr` will be the same in the 2 nodes.
// (But since the config is ephemeral, they will have different state paths.)
let dir2 = testdir()?.with_config(&mut config)?;
check_config_conflict(dir1, regex1.as_str(), dir2, PORT_IN_USE_ERROR.as_str())?;
Ok(())
}
/// Start 2 zebrad nodes using the same metrics listener port, but different
/// state directories and Zcash listener ports. The first node should get
/// exclusive use of the port. The second node will panic with the Zcash metrics
/// conflict hint added in #1535.
#[test]
#[cfg(feature = "prometheus")]
fn zebra_metrics_conflict() -> Result<()> {
let _init_guard = zebra_test::init();
// [Note on port conflict](#Note on port conflict)
let port = random_known_port();
let listen_addr = format!("127.0.0.1:{port}");
// Write a configuration that has our created metrics endpoint_addr
let mut config = default_test_config(Mainnet)?;
config.metrics.endpoint_addr = Some(listen_addr.parse().unwrap());
let dir1 = testdir()?.with_config(&mut config)?;
let regex1 = regex::escape(&format!(r"Opened metrics endpoint at {listen_addr}"));
// From another folder create a configuration with the same endpoint.
// `metrics.endpoint_addr` will be the same in the 2 nodes.
// But they will have different Zcash listeners (auto port) and states (ephemeral)
let dir2 = testdir()?.with_config(&mut config)?;
check_config_conflict(dir1, regex1.as_str(), dir2, PORT_IN_USE_ERROR.as_str())?;
Ok(())
}
/// Start 2 zebrad nodes using the same tracing listener port, but different
/// state directories and Zcash listener ports. The first node should get
/// exclusive use of the port. The second node will panic with the Zcash tracing
/// conflict hint added in #1535.
#[test]
#[cfg(feature = "filter-reload")]
fn zebra_tracing_conflict() -> Result<()> {
let _init_guard = zebra_test::init();
// [Note on port conflict](#Note on port conflict)
let port = random_known_port();
let listen_addr = format!("127.0.0.1:{port}");
// Write a configuration that has our created tracing endpoint_addr
let mut config = default_test_config(Mainnet)?;
config.tracing.endpoint_addr = Some(listen_addr.parse().unwrap());
let dir1 = testdir()?.with_config(&mut config)?;
let regex1 = regex::escape(&format!(r"Opened tracing endpoint at {listen_addr}"));
// From another folder create a configuration with the same endpoint.
// `tracing.endpoint_addr` will be the same in the 2 nodes.
// But they will have different Zcash listeners (auto port) and states (ephemeral)
let dir2 = testdir()?.with_config(&mut config)?;
check_config_conflict(dir1, regex1.as_str(), dir2, PORT_IN_USE_ERROR.as_str())?;
Ok(())
}
/// Start 2 zebrad nodes using the same RPC listener port, but different
/// state directories and Zcash listener ports. The first node should get
/// exclusive use of the port. The second node will panic.
///
/// This test is sometimes unreliable on Windows, and hangs on macOS.
/// We believe this is a CI infrastructure issue, not a platform-specific issue.
#[test]
#[cfg(not(any(target_os = "windows", target_os = "macos")))]
fn zebra_rpc_conflict() -> Result<()> {
let _init_guard = zebra_test::init();
if zebra_test::net::zebra_skip_network_tests() {
return Ok(());
}
// Write a configuration that has RPC listen_addr set
// [Note on port conflict](#Note on port conflict)
//
// This is the required setting to detect port conflicts.
let mut config = random_known_rpc_port_config(false, Mainnet)?;
let dir1 = testdir()?.with_config(&mut config)?;
let regex1 = regex::escape(&format!(
r"Opened RPC endpoint at {}",
config.rpc.listen_addr.unwrap(),
));
// From another folder create a configuration with the same endpoint.
// `rpc.listen_addr` will be the same in the 2 nodes.
// But they will have different Zcash listeners (auto port) and states (ephemeral)
let dir2 = testdir()?.with_config(&mut config)?;
check_config_conflict(dir1, regex1.as_str(), dir2, "Unable to start RPC server")?;
Ok(())
}
/// Start 2 zebrad nodes using the same state directory, but different Zcash
/// listener ports. The first node should get exclusive access to the database.
/// The second node will panic with the Zcash state conflict hint added in #1535.
#[test]
fn zebra_state_conflict() -> Result<()> {
let _init_guard = zebra_test::init();
// A persistent config has a fixed temp state directory, but asks the OS to
// automatically choose an unused port
let mut config = persistent_test_config(Mainnet)?;
let dir_conflict = testdir()?.with_config(&mut config)?;
// Windows problems with this match will be addressed in #1654.
// For now, we only match the whole opened path on Unix.
let contains = if cfg!(unix) {
let mut dir_conflict_full = PathBuf::new();
dir_conflict_full.push(dir_conflict.path());
dir_conflict_full.push("state");
dir_conflict_full.push(format!(
"v{}",
zebra_state::state_database_format_version_in_code().major,
));
dir_conflict_full.push(config.network.network.to_string().to_lowercase());
format!(
"Opened Zebra state cache at {}",
dir_conflict_full.display()
)
} else {
String::from("Opened Zebra state cache at ")
};
check_config_conflict(
dir_conflict.path(),
regex::escape(&contains).as_str(),
dir_conflict.path(),
LOCK_FILE_ERROR.as_str(),
)?;
Ok(())
}
/// Launch a node in `first_dir`, wait a few seconds, then launch a node in
/// `second_dir`. Check that the first node's stdout contains
/// `first_stdout_regex`, and the second node's stderr contains
/// `second_stderr_regex`.
#[tracing::instrument]
fn check_config_conflict<T, U>(
first_dir: T,
first_stdout_regex: &str,
second_dir: U,
second_stderr_regex: &str,
) -> Result<()>
where
T: ZebradTestDirExt + std::fmt::Debug,
U: ZebradTestDirExt + std::fmt::Debug,
{
// Start the first node
let mut node1 = first_dir.spawn_child(args!["start"])?;
// Wait until node1 has used the conflicting resource.
node1.expect_stdout_line_matches(first_stdout_regex)?;
// Wait a bit before launching the second node.
std::thread::sleep(BETWEEN_NODES_DELAY);
// Spawn the second node
let node2 = second_dir.spawn_child(args!["start"]);
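// If spawning the second node failed, kill the first node before returning the error.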
let (node2, mut node1) = node1.kill_on_error(node2)?;
// Wait a few seconds and kill first node.
// Second node is terminated by panic, no need to kill.
std::thread::sleep(LAUNCH_DELAY);
let node1_kill_res = node1.kill(false);
let (_, mut node2) = node2.kill_on_error(node1_kill_res)?;
// node2 should have panicked due to a conflict. Kill it here anyway, so it
// doesn't outlive the test on error.
//
// This code doesn't work on Windows or macOS. It's cleanup code that only
// runs when node2 doesn't panic as expected. So it's ok to skip it.
// See #1781.
#[cfg(target_os = "linux")]
if node2.is_running() {
return node2
.kill_on_error::<(), _>(Err(eyre!(
"conflicted node2 was still running, but the test expected a panic"
)))
.context_from(&mut node1)
.map(|_| ());
}
// Now we're sure both nodes are dead, and we have both their outputs
let output1 = node1.wait_with_output().context_from(&mut node2)?;
let output2 = node2.wait_with_output().context_from(&output1)?;
// Make sure the first node was killed, rather than exiting with an error.
output1
.assert_was_killed()
.warning("Possible port conflict. Are there other acceptance tests running?")
.context_from(&output2)?;
// Make sure node2 has the expected resource conflict.
output2
.stderr_line_matches(second_stderr_regex)
.context_from(&output1)?;
output2
.assert_was_not_killed()
.warning("Possible port conflict. Are there other acceptance tests running?")
.context_from(&output1)?;
Ok(())
}
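/// Test that a fully synced zebrad answers a `getblock` RPC request for a high block
/// that is only present in a mostly synced Mainnet state.
///
/// This test is ignored by default: it only runs when a suitable cached Zebra state
/// is available, and exits early without failing otherwise.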
#[tokio::test]
#[ignore]
async fn fully_synced_rpc_test() -> Result<()> {
let _init_guard = zebra_test::init();
// We're only using cached Zebra state here, so this test type is the most similar
let test_type = TestType::UpdateCachedState;
let network = Network::Mainnet;
let (mut zebrad, zebra_rpc_address) = if let Some(zebrad_and_address) =
spawn_zebrad_for_rpc(network, "fully_synced_rpc_test", test_type, false)?
{
tracing::info!("running fully synced zebrad RPC test");
zebrad_and_address
} else {
// Skip the test, we don't have the required cached state
return Ok(());
};
let zebra_rpc_address = zebra_rpc_address.expect("lightwalletd test must have RPC port");
zebrad.expect_stdout_line_matches(format!("Opened RPC endpoint at {zebra_rpc_address}"))?;
let client = RpcRequestClient::new(zebra_rpc_address);
// Make a getblock request that only works on a synced node (high block number).
// The block is before the mandatory checkpoint, so the checkpoint cached state can be used
// if desired.
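// Verbosity 0 returns the block as raw hex, which we compare against the serialized test vector below.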
let res = client
.text_from_call("getblock", r#"["1180900", 0]"#.to_string())
.await?;
// Do a simple textual check instead of fully parsing the response.
let expected_bytes = zebra_test::vectors::MAINNET_BLOCKS
.get(&1_180_900)
.expect("test block must exist");
let expected_hex = hex::encode(expected_bytes);
assert!(
res.contains(&expected_hex),
"response did not contain the desired block: {res}"
);
Ok(())
}
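/// Check that Zebra's old database cleanup task deletes outdated state version
/// directories inside the configured cache directory, but leaves matching
/// directories outside the cache directory untouched.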
#[test]
fn delete_old_databases() -> Result<()> {
use std::fs::{canonicalize, create_dir};
let _init_guard = zebra_test::init();
// Skip this test if network tests are disabled, because it can be very slow without a network.
//
// The delete databases task is launched last during startup, after network setup.
// If there is no network, network setup can take a long time to time out,
// so the task takes a long time to launch, slowing down this test.
if zebra_test::net::zebra_skip_network_tests() {
return Ok(());
}
let mut config = default_test_config(Mainnet)?;
let run_dir = testdir()?;
let cache_dir = run_dir.path().join("state");
// create cache dir
create_dir(cache_dir.clone())?;
// create a v1 dir outside cache dir that should not be deleted
let outside_dir = run_dir.path().join("v1");
create_dir(&outside_dir)?;
assert!(outside_dir.as_path().exists());
// create a `v1` dir inside cache dir that should be deleted
let inside_dir = cache_dir.join("v1");
create_dir(&inside_dir)?;
let canonicalized_inside_dir = canonicalize(inside_dir.clone()).ok().unwrap();
assert!(inside_dir.as_path().exists());
// modify config with our cache dir and not ephemeral configuration
// (delete old databases function will not run when ephemeral = true)
config.state.cache_dir = cache_dir;
config.state.ephemeral = false;
// run zebra with our config
let mut child = run_dir
.with_config(&mut config)?
.spawn_child(args!["start"])?;
// delete checker running
child.expect_stdout_line_matches("checking for old database versions".to_string())?;
// inside dir was deleted
child.expect_stdout_line_matches(format!(
"deleted outdated state database directory.*deleted_db.*=.*{canonicalized_inside_dir:?}"
))?;
assert!(!inside_dir.as_path().exists());
// deleting old databases task ended
child.expect_stdout_line_matches("finished old database version cleanup task".to_string())?;
// outside dir was not deleted
assert!(outside_dir.as_path().exists());
// finish
child.kill(false)?;
let output = child.wait_with_output()?;
let output = output.assert_failure()?;
// [Note on port conflict](#Note on port conflict)
output
.assert_was_killed()
.wrap_err("Possible port conflict. Are there other acceptance tests running?")?;
Ok(())
}
/// Test sending transactions using a lightwalletd instance connected to a zebrad instance.
///
/// See [`common::lightwalletd::send_transaction_test`] for more information.
///
/// This test doesn't work on Windows, so it is always skipped on that platform.
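///
/// As an illustrative sketch (the exact environment depends on your setup), with a
/// cached Zebra state and `lightwalletd` installed, an invocation could look like:
///
/// ```console
/// $ export ZEBRA_TEST_LIGHTWALLETD=1
/// $ export ZEBRA_CACHED_STATE_DIR="/path/to/zebra/state"
/// $ cargo test --features lightwalletd-grpc-tests sending_transactions_using_lightwalletd -- --ignored --nocapture
/// ```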
#[tokio::test]
#[ignore]
#[cfg(feature = "lightwalletd-grpc-tests")]
#[cfg(not(target_os = "windows"))]
async fn sending_transactions_using_lightwalletd() -> Result<()> {
common::lightwalletd::send_transaction_test::run().await
}
/// Test all the rpc methods a wallet connected to lightwalletd can call.
///
/// See [`common::lightwalletd::wallet_grpc_test`] for more information.
///
/// This test doesn't work on Windows, so it is always skipped on that platform.
#[tokio::test]
#[ignore]
#[cfg(feature = "lightwalletd-grpc-tests")]
#[cfg(not(target_os = "windows"))]
async fn lightwalletd_wallet_grpc_tests() -> Result<()> {
common::lightwalletd::wallet_grpc_test::run().await
}
/// Test successful getpeerinfo rpc call
///
/// See [`common::get_block_template_rpcs::get_peer_info`] for more information.
#[tokio::test]
#[cfg(feature = "getblocktemplate-rpcs")]
async fn get_peer_info() -> Result<()> {
common::get_block_template_rpcs::get_peer_info::run().await
}
/// Test successful getblocktemplate rpc call
///
/// See [`common::get_block_template_rpcs::get_block_template`] for more information.
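///
/// For example, assuming any cached state and environment variables it needs are
/// already set up, the test can be run individually with something like:
///
/// ```console
/// $ cargo test --features getblocktemplate-rpcs get_block_template -- --ignored --nocapture
/// ```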
#[tokio::test]
#[ignore]
#[cfg(feature = "getblocktemplate-rpcs")]
async fn get_block_template() -> Result<()> {
common::get_block_template_rpcs::get_block_template::run().await
}
/// Test successful submitblock rpc call
///
/// See [`common::get_block_template_rpcs::submit_block`] for more information.
#[tokio::test]
#[ignore]
#[cfg(feature = "getblocktemplate-rpcs")]
async fn submit_block() -> Result<()> {
common::get_block_template_rpcs::submit_block::run().await
}
/// Check that the end of support code is called at least once.
#[test]
fn end_of_support_is_checked_at_start() -> Result<()> {
let _init_guard = zebra_test::init();
let testdir = testdir()?.with_config(&mut default_test_config(Mainnet)?)?;
let mut child = testdir.spawn_child(args!["start"])?;
// Give the end of support (EoS) task enough time to start up.
std::thread::sleep(Duration::from_secs(30));
child.kill(false)?;
let output = child.wait_with_output()?;
let output = output.assert_failure()?;
// Zebra started
output.stdout_line_contains("Starting zebrad")?;
// End of support task started.
output.stdout_line_contains("Starting end of support task")?;
// Make sure the command was killed
output.assert_was_killed()?;
Ok(())
}
/// Test `zebra-checkpoints` on mainnet.
///
/// If you want to run this test individually, see the module documentation.
/// See [`common::checkpoints`] for more information.
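///
/// A possible way to run it locally (a sketch, assuming it needs a synced cached state
/// in `ZEBRA_CACHED_STATE_DIR`; see the module docs for the exact requirements) is:
///
/// export ZEBRA_CACHED_STATE_DIR="/path/to/zebra/state"
/// cargo test generate_checkpoints_mainnet --features zebra-checkpoints -- --ignored --nocapture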
#[tokio::test]
#[ignore]
#[cfg(feature = "zebra-checkpoints")]
async fn generate_checkpoints_mainnet() -> Result<()> {
common::checkpoints::run(Mainnet).await
}
/// Test `zebra-checkpoints` on testnet.
/// This test might fail if testnet is unstable.
///
/// If you want to run this test individually, see the module documentation.
/// See [`common::checkpoints`] for more information.
#[tokio::test]
#[ignore]
#[cfg(feature = "zebra-checkpoints")]
async fn generate_checkpoints_testnet() -> Result<()> {
common::checkpoints::run(Testnet).await
}
/// Check that new states are created with the current state format version,
/// and that restarting `zebrad` doesn't change the format version.
#[tokio::test]
async fn new_state_format() -> Result<()> {
for network in [Mainnet, Testnet] {
state_format_test("new_state_format_test", network, 2, None).await?;
}
Ok(())
}
/// Check that outdated states are updated to the current state format version,
/// and that restarting `zebrad` doesn't change the updated format version.
///
/// TODO: test partial updates, once we have some updates that take a while.
/// (or just add a delay during tests)
#[tokio::test]
async fn update_state_format() -> Result<()> {
let mut fake_version = state_database_format_version_in_code();
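// Zeroing the minor and patch fields makes the on-disk format version compare as older
// than the running version, so the reopened state should be upgraded.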
fake_version.minor = 0;
fake_version.patch = 0;
for network in [Mainnet, Testnet] {
state_format_test("update_state_format_test", network, 3, Some(&fake_version)).await?;
}
Ok(())
}
/// Check that newer state formats are downgraded to the current state format version,
/// and that restarting `zebrad` doesn't change the format version.
///
/// Future version compatibility is a best-effort attempt, so this test can be disabled if it fails.
#[tokio::test]
async fn downgrade_state_format() -> Result<()> {
let mut fake_version = state_database_format_version_in_code();
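// Setting the minor field to its maximum makes the on-disk format version compare as newer
// than the running version, so the reopened state should be downgraded (best effort).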
fake_version.minor = u16::MAX.into();
fake_version.patch = 0;
for network in [Mainnet, Testnet] {
state_format_test(
"downgrade_state_format_test",
network,
3,
Some(&fake_version),
)
.await?;
}
Ok(())
}
/// Test state format changes, see calling tests for details.
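///
/// Creates a new state for `base_test_name`, optionally overwrites its on-disk format
/// version with `fake_version`, then reopens it `reopen_count` times and checks that the
/// format is upgraded, downgraded, or left unchanged, as the calling test expects.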
async fn state_format_test(
base_test_name: &str,
network: Network,
reopen_count: usize,
fake_version: Option<&Version>,
) -> Result<()> {
let _init_guard = zebra_test::init();
let test_name = &format!("{base_test_name}/new");
// # Create a new state and check it has the current version
let zebrad = spawn_zebrad_without_rpc(network, test_name, false, false, None, false)?;
// Skip the test unless it has the required state and environmental variables.
let Some(mut zebrad) = zebrad else {
return Ok(());
};
tracing::info!(?network, "running {test_name} using zebrad");
zebrad.expect_stdout_line_matches("creating new database with the current format")?;
zebrad.expect_stdout_line_matches("loaded Zebra state cache")?;
// Give Zebra enough time to actually write the database to disk.
tokio::time::sleep(Duration::from_secs(1)).await;
let logs = zebrad.kill_and_return_output(false)?;
assert!(
!logs.contains("marked database format as upgraded"),
"unexpected format upgrade in logs:\n\
{logs}"
);
assert!(
!logs.contains("marked database format as downgraded"),
"unexpected format downgrade in logs:\n\
{logs}"
);
let output = zebrad.wait_with_output()?;
let mut output = output.assert_failure()?;
let mut dir = output
.take_dir()
.expect("dir should not already have been taken");
// [Note on port conflict](#Note on port conflict)
output
.assert_was_killed()
.wrap_err("Possible port conflict. Are there other acceptance tests running?")?;
// # Apply the fake version if needed
let mut expect_older_version = false;
let mut expect_newer_version = false;
if let Some(fake_version) = fake_version {
let test_name = &format!("{base_test_name}/apply_fake_version/{fake_version}");
tracing::info!(?network, "running {test_name} using zebra-state");
let config = UseAnyState
.zebrad_config(test_name, false, Some(dir.path()), network)
.expect("already checked config")?;
zebra_state::write_state_database_format_version_to_disk(
&config.state,
fake_version,
network,
)
.expect("can't write fake database version to disk");
// Give zebra_state enough time to actually write the database version to disk.
tokio::time::sleep(Duration::from_secs(1)).await;
let running_version = state_database_format_version_in_code();
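// Record whether the fake on-disk version is older or newer than the running version,
// so the reopen loop below knows whether to expect an upgrade or a downgrade.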
match fake_version.cmp(&running_version) {
Ordering::Less => expect_older_version = true,
Ordering::Equal => {}
Ordering::Greater => expect_newer_version = true,
}
}
// # Reopen that state and check the version hasn't changed
for reopened in 0..reopen_count {
let test_name = &format!("{base_test_name}/reopen/{reopened}");
if reopened > 0 {
expect_older_version = false;
expect_newer_version = false;
}
let mut zebrad = spawn_zebrad_without_rpc(network, test_name, false, false, dir, false)?
.expect("unexpectedly missing required state or env vars");
tracing::info!(?network, "running {test_name} using zebrad");
if expect_older_version {
zebrad.expect_stdout_line_matches("trying to open older database format")?;
zebrad.expect_stdout_line_matches("marked database format as upgraded")?;
zebrad.expect_stdout_line_matches("database is fully upgraded")?;
} else if expect_newer_version {
zebrad.expect_stdout_line_matches("trying to open newer database format")?;
zebrad.expect_stdout_line_matches("marked database format as downgraded")?;
} else {
zebrad.expect_stdout_line_matches("trying to open current database format")?;
zebrad.expect_stdout_line_matches("loaded Zebra state cache")?;
}
// Give Zebra enough time to actually write the database to disk.
tokio::time::sleep(Duration::from_secs(1)).await;
let logs = zebrad.kill_and_return_output(false)?;
if !expect_older_version {
assert!(
!logs.contains("marked database format as upgraded"),
"unexpected format upgrade in logs:\n\
{logs}"
);
}
if !expect_newer_version {
assert!(
!logs.contains("marked database format as downgraded"),
"unexpected format downgrade in logs:\n\
{logs}"
);
}
let output = zebrad.wait_with_output()?;
let mut output = output.assert_failure()?;
dir = output
.take_dir()
.expect("dir should not already have been taken");
// [Note on port conflict](#Note on port conflict)
output
.assert_was_killed()
.wrap_err("Possible port conflict. Are there other acceptance tests running?")?;
}
Ok(())
}
/// Snapshot the `z_getsubtreesbyindex` method in a synchronized chain.
///
/// This test name must have the same prefix as the `fully_synced_rpc_test`, so they can be run in the same test job.
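///
/// A possible way to run it locally (a sketch, assuming it needs a fully synced cached
/// state in `ZEBRA_CACHED_STATE_DIR`) is:
///
/// export ZEBRA_CACHED_STATE_DIR="/path/to/zebra/state"
/// cargo test fully_synced_rpc_z_getsubtreesbyindex_snapshot_test -- --ignored --nocapture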
#[tokio::test]
#[ignore]
async fn fully_synced_rpc_z_getsubtreesbyindex_snapshot_test() -> Result<()> {
let _init_guard = zebra_test::init();
// We're only using a cached Zebra state here, so this is the most similar test type.
let test_type = TestType::UpdateZebraCachedStateWithRpc;
let network = Network::Mainnet;
let (mut zebrad, zebra_rpc_address) = if let Some(zebrad_and_address) = spawn_zebrad_for_rpc(
network,
"rpc_z_getsubtreesbyindex_sync_snapshots",
test_type,
true,
)? {
tracing::info!("running fully synced zebrad z_getsubtreesbyindex RPC test");
zebrad_and_address
} else {
// Skip the test, we don't have the required cached state
return Ok(());
};
// Store the state version message so we can wait for the upgrade later if needed.
let state_version_message = wait_for_state_version_message(&mut zebrad)?;
// It doesn't matter how long the state version upgrade takes,
// because the sync finished regex is repeated every minute.
wait_for_state_version_upgrade(
&mut zebrad,
&state_version_message,
state_database_format_version_in_code(),
None,
)?;
// Wait for zebrad to load the full cached blockchain.
zebrad.expect_stdout_line_matches(SYNC_FINISHED_REGEX)?;
// Create an http client
let client =
RpcRequestClient::new(zebra_rpc_address.expect("already checked that address is valid"));
// Create test vector matrix
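// Each entry is (snapshot name, JSON-RPC params); the params are [pool, start_index, limit].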
let zcashd_test_vectors = vec![
(
"z_getsubtreesbyindex_mainnet_sapling_0_1".to_string(),
r#"["sapling", 0, 1]"#.to_string(),
),
(
"z_getsubtreesbyindex_mainnet_sapling_0_11".to_string(),
r#"["sapling", 0, 11]"#.to_string(),
),
(
"z_getsubtreesbyindex_mainnet_sapling_17_1".to_string(),
r#"["sapling", 17, 1]"#.to_string(),
),
(
"z_getsubtreesbyindex_mainnet_sapling_1090_6".to_string(),
r#"["sapling", 1090, 6]"#.to_string(),
),
(
"z_getsubtreesbyindex_mainnet_orchard_0_1".to_string(),
r#"["orchard", 0, 1]"#.to_string(),
),
(
"z_getsubtreesbyindex_mainnet_orchard_338_1".to_string(),
r#"["orchard", 338, 1]"#.to_string(),
),
(
"z_getsubtreesbyindex_mainnet_orchard_585_1".to_string(),
r#"["orchard", 585, 1]"#.to_string(),
),
];
for i in zcashd_test_vectors {
let res = client.call("z_getsubtreesbyindex", i.1).await?;
let body = res.bytes().await;
let parsed: Value = serde_json::from_slice(&body.expect("Response is valid json"))?;
insta::assert_json_snapshot!(i.0, parsed);
}
zebrad.kill(false)?;
let output = zebrad.wait_with_output()?;
let output = output.assert_failure()?;
// [Note on port conflict](#Note on port conflict)
output
.assert_was_killed()
.wrap_err("Possible port conflict. Are there other acceptance tests running?")?;
Ok(())
}
/// Test that the scanner task gets started when the node starts.
#[test]
#[cfg(feature = "shielded-scan")]
fn scan_task_starts() -> Result<()> {
use indexmap::IndexMap;
use zebra_scan::tests::ZECPAGES_SAPLING_VIEWING_KEY;
let _init_guard = zebra_test::init();
let test_type = TestType::LaunchWithEmptyState {
launches_lightwalletd: false,
};
let mut config = default_test_config(Mainnet)?;
let mut keys = IndexMap::new();
keys.insert(ZECPAGES_SAPLING_VIEWING_KEY.to_string(), 1);
config.shielded_scan.sapling_keys_to_scan = keys;
// Start zebra with the config.
let mut zebrad = testdir()?
.with_exact_config(&config)?
.spawn_child(args!["start"])?
.with_timeout(test_type.zebrad_timeout());
// Check scanner was started.
zebrad.expect_stdout_line_matches("loaded Zebra scanner cache")?;
// Look for 2 scanner notices indicating we are below sapling activation.
zebrad.expect_stdout_line_matches("scanner is waiting for Sapling activation. Current tip: [0-9]{1,4}, Sapling activation: 419200")?;
zebrad.expect_stdout_line_matches("scanner is waiting for Sapling activation. Current tip: [0-9]{1,4}, Sapling activation: 419200")?;
// Kill the node.
zebrad.kill(false)?;
// Wait for the child process to exit and collect its output.
let output = zebrad.wait_with_output()?;
// Make sure the command was killed
output.assert_was_killed()?;
output.assert_failure()?;
Ok(())
}
/// Test that the scanner gRPC server starts when the node starts.
#[tokio::test]
#[cfg(feature = "shielded-scan")]
async fn scan_rpc_server_starts() -> Result<()> {
use zebra_grpc::scanner::{scanner_client::ScannerClient, Empty};
let _init_guard = zebra_test::init();
let test_type = TestType::LaunchWithEmptyState {
launches_lightwalletd: false,
};
let port = random_known_port();
let listen_addr = format!("127.0.0.1:{port}");
let mut config = default_test_config(Mainnet)?;
config.shielded_scan.listen_addr = Some(listen_addr.parse()?);
// Start zebra with the config.
let mut zebrad = testdir()?
.with_exact_config(&config)?
.spawn_child(args!["start"])?
.with_timeout(test_type.zebrad_timeout());
// Wait until gRPC server is starting.
tokio::time::sleep(LAUNCH_DELAY).await;
zebrad.expect_stdout_line_matches("starting scan gRPC server")?;
tokio::time::sleep(Duration::from_secs(1)).await;
let mut client = ScannerClient::connect(format!("http://{listen_addr}")).await?;
let request = tonic::Request::new(Empty {});
client.get_info(request).await?;
// Kill the node.
zebrad.kill(false)?;
// Wait for the child process to exit and collect its output.
let output = zebrad.wait_with_output()?;
// Make sure the command was killed
output.assert_was_killed()?;
output.assert_failure()?;
Ok(())
}
/// Test that the scanner can continue scanning from where it left off when zebrad restarts.
///
/// Needs a cached state close to the tip. A possible way to run it locally is:
///
/// export ZEBRA_CACHED_STATE_DIR="/path/to/zebra/state"
/// cargo test scan_start_where_left --features="shielded-scan" -- --ignored --nocapture
///
/// The test runs zebrad with a key to scan, scans the first few blocks after Sapling activation, and then stops.
/// Then it restarts zebrad and checks that it resumes scanning where it left off.
///
/// Note: This test removes any existing contents of the ZEBRA_CACHED_STATE_DIR/private-scan directory
/// so it can start with an empty scanning state.
#[ignore]
#[test]
#[cfg(feature = "shielded-scan")]
fn scan_start_where_left() -> Result<()> {
use indexmap::IndexMap;
use zebra_scan::{storage::db::SCANNER_DATABASE_KIND, tests::ZECPAGES_SAPLING_VIEWING_KEY};
let _init_guard = zebra_test::init();
// Use `UpdateZebraCachedStateNoRpc` as the test type to make sure a zebrad cached state is available.
let test_type = TestType::UpdateZebraCachedStateNoRpc;
if let Some(cache_dir) = test_type.zebrad_state_path("scan test") {
// Add a key to the config
let mut config = default_test_config(Mainnet)?;
let mut keys = IndexMap::new();
keys.insert(ZECPAGES_SAPLING_VIEWING_KEY.to_string(), 1);
config.shielded_scan.sapling_keys_to_scan = keys;
// Add the cache dir to shielded scan, make it the same as the zebrad cache state.
config.shielded_scan.db_config_mut().cache_dir = cache_dir.clone();
config.shielded_scan.db_config_mut().ephemeral = false;
// Add the cache dir to state.
config.state.cache_dir = cache_dir.clone();
config.state.ephemeral = false;
// Remove the scan directory before starting.
let scan_db_path = cache_dir.join(SCANNER_DATABASE_KIND);
fs::remove_dir_all(std::path::Path::new(&scan_db_path)).ok();
// Start zebra with the config.
let mut zebrad = testdir()?
.with_exact_config(&config)?
.spawn_child(args!["start"])?
.with_timeout(test_type.zebrad_timeout());
// Check scanner was started.
zebrad.expect_stdout_line_matches("loaded Zebra scanner cache")?;
// The first scan starts at the Sapling activation height (block 419200).
zebrad.expect_stdout_line_matches(
r"Scanning the blockchain for key 0, started at block 419200, now at block 420000",
)?;
// Make sure scanner scans a few blocks.
zebrad.expect_stdout_line_matches(
r"Scanning the blockchain for key 0, started at block 419200, now at block 430000",
)?;
zebrad.expect_stdout_line_matches(
r"Scanning the blockchain for key 0, started at block 419200, now at block 440000",
)?;
// Kill the node.
zebrad.kill(false)?;
let output = zebrad.wait_with_output()?;
// Make sure the command was killed
output.assert_was_killed()?;
output.assert_failure()?;
// Start the node again.
let mut zebrad = testdir()?
.with_exact_config(&config)?
.spawn_child(args!["start"])?
.with_timeout(test_type.zebrad_timeout());
// Resuming message.
zebrad.expect_stdout_line_matches(
"Last scanned height for key number 0 is 439000, resuming at 439001",
)?;
zebrad.expect_stdout_line_matches("loaded Zebra scanner cache")?;
// Check that scanning resumes where it left off.
zebrad.expect_stdout_line_matches(
r"Scanning the blockchain for key 0, started at block 439001, now at block 440000",
)?;
zebrad.expect_stdout_line_matches(
r"Scanning the blockchain for key 0, started at block 439001, now at block 450000",
)?;
}
Ok(())
}
// TODO: Add this test to CI (#8236)
/// Tests that the scan task successfully handles:
/// - Registration of a new key,
/// - Subscription to scan results of the new key, and
/// - Deletion of keys.
/// See [`common::shielded_scan::scan_task_commands`] for more information.
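///
/// A possible way to run it locally (a sketch, assuming it needs a cached state
/// in `ZEBRA_CACHED_STATE_DIR`; see the common module for the exact requirements) is:
///
/// export ZEBRA_CACHED_STATE_DIR="/path/to/zebra/state"
/// cargo test scan_task_commands --features="shielded-scan" -- --ignored --nocapture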
#[tokio::test]
#[ignore]
#[cfg(feature = "shielded-scan")]
async fn scan_task_commands() -> Result<()> {
common::shielded_scan::scan_task_commands::run().await
}