zebra/docker/Dockerfile

# These steps implement cargo-chef for Docker layer caching.
# We are using five stages:
# - chef: installs cargo-chef
# - planner: computes the recipe file
# - builder: caches our dependencies and builds the binary
# - tester: builds and runs the tests
# - runtime: our runtime environment
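#
# Example build (illustrative invocation; the image tag and build args are up to the caller):
#   docker build -f docker/Dockerfile --target runtime --build-arg NETWORK=Testnet -t zebrad:local .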
FROM rust:bullseye AS chef
RUN cargo install cargo-chef --locked
WORKDIR /app
FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
FROM chef AS builder
SHELL ["/bin/bash", "-xo", "pipefail", "-c"]
COPY --from=planner /app/recipe.json recipe.json
# Install zebra build deps
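# llvm, clang and libclang are needed by bindgen-based dependencies such as the RocksDB bindings;
# cmake and protobuf-compiler are needed by the gRPC (tonic/prost) code used in the lightwalletd tests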
RUN apt-get -qq update && \
apt-get -qq install -y --no-install-recommends \
llvm \
libclang-dev \
clang \
ca-certificates \
cmake \
protobuf-compiler \
; \
rm -rf /var/lib/apt/lists/* /tmp/*
# Install google OS Config agent
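# (skipped on arm64, where the agent package is not expected to be available)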
RUN if [ "$(uname -m)" != "aarch64" ]; then \
apt-get -qq update && \
apt-get -qq install -y --no-install-recommends \
curl \
lsb-release \
&& \
echo "deb http://packages.cloud.google.com/apt google-compute-engine-$(lsb_release -cs)-stable main" > /etc/apt/sources.list.d/google-compute-engine.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get -qq update && \
apt-get -qq install -y --no-install-recommends google-osconfig-agent; \
fi \
&& \
rm -rf /var/lib/apt/lists/* /tmp/*
ENV CARGO_HOME /app/.cargo/
# Build dependencies - this is the caching Docker layer!
RUN cargo chef cook --release --features enable-sentry --recipe-path recipe.json
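# Optional backtrace and colour-backtrace settings, all disabled (0) by default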
ARG RUST_BACKTRACE
ENV RUST_BACKTRACE ${RUST_BACKTRACE:-0}
ARG RUST_LIB_BACKTRACE
ENV RUST_LIB_BACKTRACE ${RUST_LIB_BACKTRACE:-0}
ARG COLORBT_SHOW_HIDDEN
ENV COLORBT_SHOW_HIDDEN ${COLORBT_SHOW_HIDDEN:-0}
# Skip IPv6 tests by default, as some CI environments don't have IPv6 available
ARG ZEBRA_SKIP_IPV6_TESTS
ENV ZEBRA_SKIP_IPV6_TESTS ${ZEBRA_SKIP_IPV6_TESTS:-1}
# Use the default checkpoint sync and network values if none are provided
ARG CHECKPOINT_SYNC
ENV CHECKPOINT_SYNC ${CHECKPOINT_SYNC:-true}
ARG NETWORK
ENV NETWORK ${NETWORK:-Mainnet}
COPY . .
# Build zebra
RUN cargo build --locked --release --features enable-sentry --bin zebrad
FROM builder AS tester
# Pre-download Zcash Sprout and Sapling parameters
# TODO: do not hardcode the user /root/ even though it is a safe assumption
COPY --from=us-docker.pkg.dev/zealous-zebra/zebra/zcash-params /root/.zcash-params /root/.zcash-params
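# Pre-built lightwalletd binary for the lightwalletd integration tests
# (assumed to be published to the project registry by a separate build)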
COPY --from=us-docker.pkg.dev/zealous-zebra/zebra/lightwalletd /lightwalletd /usr/local/bin
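# Compile the workspace test binaries without running them, so they are cached in this stage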
RUN cargo test --locked --release --features enable-sentry --workspace --no-run
COPY ./docker/entrypoint.sh /
RUN chmod u+x /entrypoint.sh
ARG CHECKPOINT_SYNC=true
ARG NETWORK=Mainnet
ARG TEST_FULL_SYNC
ENV TEST_FULL_SYNC ${TEST_FULL_SYNC:-0}
ARG RUN_ALL_TESTS
ENV RUN_ALL_TESTS ${RUN_ALL_TESTS:-0}
ENTRYPOINT ["/entrypoint.sh"]
CMD [ "cargo"]
# Runner image
FROM debian:bullseye-slim AS runtime
COPY --from=builder /app/target/release/zebrad /usr/local/bin
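# Reuse the Zcash Sprout and Sapling parameters already downloaded in the params image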
COPY --from=us-docker.pkg.dev/zealous-zebra/zebra/zcash-params /root/.zcash-params /root/.zcash-params
RUN apt-get update && \
apt-get install -y --no-install-recommends \
ca-certificates
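# ARG values do not cross stage boundaries, so re-declare the defaults used by the config below
ARG CHECKPOINT_SYNC=true
ARG NETWORK=Mainnet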
RUN set -ex; \
{ \
echo "[consensus]"; \
echo "checkpoint_sync = ${CHECKPOINT_SYNC}"; \
echo "[metrics]"; \
echo "endpoint_addr = '0.0.0.0:9999'"; \
echo "[network]"; \
echo "network = '${NETWORK}'"; \
echo "[state]"; \
echo "cache_dir = '/zebrad-cache'"; \
echo "[tracing]"; \
echo "endpoint_addr = '0.0.0.0:3000'"; \
} > "zebrad.toml"
EXPOSE 3000 8233 18233
ARG SHORT_SHA
ENV SHORT_SHA $SHORT_SHA
ARG SENTRY_DSN
ENV SENTRY_DSN ${SENTRY_DSN}
CMD [ "zebrad", "-c", "zebrad.toml", "start" ]