# zebra/docker/Dockerfile

# These steps implement cargo-chef for Docker layer caching
# We are using five stages:
# - chef: installs cargo-chef
# - planner: computes the recipe file
# - builder: caches our dependencies and builds the binary
# - tester: builds and runs tests
# - runtime: our runtime environment
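#
# For illustration only (the image tag is an example, not part of the repo
# tooling), a local build of the final image might look like:
#
#   docker build --file docker/Dockerfile --target runtime --tag zebra:local .
#
# Passing a different --target (e.g. tester) stops the build at that stage.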
FROM rust:bullseye AS chef
RUN cargo install cargo-chef --locked
WORKDIR /app
FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
FROM chef AS builder
SHELL ["/bin/bash", "-xo", "pipefail", "-c"]
COPY --from=planner /app/recipe.json recipe.json
# Install zebra build deps
RUN apt-get -qq update && \
    apt-get -qq install -y --no-install-recommends \
    llvm \
    libclang-dev \
    clang \
    ca-certificates \
    ; \
    rm -rf /var/lib/apt/lists/* /tmp/*
# Install google OS Config agent
RUN if [ "$(uname -m)" != "aarch64" ]; then \
    apt-get -qq update && \
    apt-get -qq install -y --no-install-recommends \
    curl \
    lsb-release \
    && \
    echo "deb http://packages.cloud.google.com/apt google-compute-engine-$(lsb_release -cs)-stable main" > /etc/apt/sources.list.d/google-compute-engine.list && \
    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
    apt-get -qq update && \
    apt-get -qq install -y --no-install-recommends google-osconfig-agent; \
    fi \
    && \
    rm -rf /var/lib/apt/lists/* /tmp/*
ENV CARGO_HOME /app/.cargo/
# Build dependencies - this is the caching Docker layer!
RUN cargo chef cook --release --features enable-sentry --recipe-path recipe.json
ARG RUST_BACKTRACE
ENV RUST_BACKTRACE ${RUST_BACKTRACE:-0}
ARG RUST_LIB_BACKTRACE
ENV RUST_LIB_BACKTRACE ${RUST_LIB_BACKTRACE:-0}
ARG COLORBT_SHOW_HIDDEN
ENV COLORBT_SHOW_HIDDEN ${COLORBT_SHOW_HIDDEN:-0}
# Skip IPv6 tests by default, as some CI environments don't have IPv6 available
ARG ZEBRA_SKIP_IPV6_TESTS
ENV ZEBRA_SKIP_IPV6_TESTS ${ZEBRA_SKIP_IPV6_TESTS:-1}
# Use default checkpoint sync and network values if none are provided
ARG CHECKPOINT_SYNC
ENV CHECKPOINT_SYNC ${CHECKPOINT_SYNC:-true}
ARG NETWORK
ENV NETWORK ${NETWORK:-Mainnet}
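# These defaults can be overridden at build time; for example, a hypothetical
# Testnet build without checkpoint sync could be requested with:
#   docker build --build-arg NETWORK=Testnet --build-arg CHECKPOINT_SYNC=false \
#     --file docker/Dockerfile --target runtime --tag zebra:testnet .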
COPY . .
# Build zebra
RUN cargo build --locked --release --features enable-sentry --bin zebrad
FROM builder AS tester
# Pre-download Zcash Sprout and Sapling parameters
# TODO: do not hardcode the user /root/ even though it is a safe assumption
COPY --from=us-docker.pkg.dev/zealous-zebra/zebra/zcash-params /root/.zcash-params /root/.zcash-params
COPY --from=us-docker.pkg.dev/zealous-zebra/zebra/lightwalletd /lightwalletd /usr/local/bin
RUN cargo test --locked --release --features enable-sentry --workspace --no-run
COPY ./docker/entrypoint.sh /
RUN chmod u+x /entrypoint.sh
ARG CHECKPOINT_SYNC=true
ARG NETWORK=Mainnet
ARG TEST_FULL_SYNC
ENV TEST_FULL_SYNC ${TEST_FULL_SYNC:-0}
ARG RUN_ALL_TESTS
ENV RUN_ALL_TESTS ${RUN_ALL_TESTS:-0}
ENTRYPOINT ["/entrypoint.sh"]
CMD [ "cargo" ]
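# Rough usage sketch (the image tag is hypothetical): the test-selection
# variables above are passed at run time, and the entrypoint script is expected
# to pick the matching cargo test invocation, e.g.:
#   docker run --rm -e RUN_ALL_TESTS=1 zebra-tests:local
#   docker run --rm -e TEST_FULL_SYNC=1 zebra-tests:local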
# Runner image
FROM debian:bullseye-slim AS runtime
COPY --from=builder /app/target/release/zebrad /usr/local/bin
COPY --from=us-docker.pkg.dev/zealous-zebra/zebra/zcash-params /root/.zcash-params /root/.zcash-params
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    ca-certificates
# Re-declare the build args in this stage: ARG values from earlier stages are
# not inherited, so without these the generated config would contain empty values
ARG CHECKPOINT_SYNC=true
ARG NETWORK=Mainnet
RUN set -ex; \
    { \
        echo "[consensus]"; \
        echo "checkpoint_sync = ${CHECKPOINT_SYNC}"; \
        echo "[metrics]"; \
        echo "endpoint_addr = '0.0.0.0:9999'"; \
        echo "[network]"; \
        echo "network = '${NETWORK}'"; \
        echo "[state]"; \
        echo "cache_dir = '/zebrad-cache'"; \
        echo "[tracing]"; \
        echo "endpoint_addr = '0.0.0.0:3000'"; \
    } > "zebrad.toml"
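# With the default build args above, the generated zebrad.toml should look
# roughly like this:
#   [consensus]
#   checkpoint_sync = true
#   [metrics]
#   endpoint_addr = '0.0.0.0:9999'
#   [network]
#   network = 'Mainnet'
#   [state]
#   cache_dir = '/zebrad-cache'
#   [tracing]
#   endpoint_addr = '0.0.0.0:3000'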
EXPOSE 3000 8233 18233
ARG SHORT_SHA
ENV SHORT_SHA $SHORT_SHA
ARG SENTRY_DSN
ENV SENTRY_DSN ${SENTRY_DSN}
CMD [ "zebrad", "-c", "zebrad.toml", "start" ]
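# Minimal run sketch (tag and volume names are examples only): publish the
# Mainnet peer port and the tracing endpoint, and persist the state cache
# directory configured in zebrad.toml above:
#   docker run -d --name zebra \
#     -p 8233:8233 -p 3000:3000 \
#     -v zebrad-cache:/zebrad-cache \
#     zebra:local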