zebra/.github/workflows/zcash-params.yml


name: zcash-params
# Ensures that only one instance of this workflow runs at a time. Runs that are
# already in progress won't get cancelled. Instead, we let the first one complete,
# then queue the latest pending run, cancelling any runs in between.
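# The concurrency group combines the workflow name and the git ref, so runs for
# different branches or tags queue independently of each other.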
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: false
on:
  workflow_dispatch:
    inputs:
      no_cache:
        description: 'Disable the Docker cache for this build'
        required: false
        type: boolean
        default: false
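      # This manual-run flag is forwarded to the reusable build workflow as its
      # `no_cache` input (see the `jobs` section below).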
  push:
    branches:
      - 'main'
    paths:
      # parameter download code
      - 'zebra-consensus/src/primitives/groth16/params.rs'
      - 'zebra-consensus/src/chain.rs'
      - 'zebrad/src/commands/start.rs'
      # workflow definitions
      - 'docker/zcash-params/Dockerfile'
      - '.github/workflows/zcash-params.yml'
      - '.github/workflows/build-docker-image.yml'
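# Build the zcash-params Docker image by calling the reusable build-docker-image
# workflow with the zcash-params Dockerfile and its 'release' target.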
jobs:
  build:
    name: Build Zcash Params Docker
    uses: ./.github/workflows/build-docker-image.yml
    with:
      dockerfile_path: ./docker/zcash-params/Dockerfile
      dockerfile_target: release
      image_name: zcash-params
      no_cache: ${{ inputs.no_cache || false }}
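      # Rust debugging and logging settings. The reusable workflow is expected to
      # pass these through to the build, presumably as the RUST_BACKTRACE,
      # RUST_LIB_BACKTRACE, COLORBT_SHOW_HIDDEN and RUST_LOG variables; the exact
      # mapping is defined in build-docker-image.yml.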
      rust_backtrace: full
      rust_lib_backtrace: full
      colorbt_show_hidden: '1'
      rust_log: info