README.md

Our CI infrastructure is built around Buildkite, with some additional GitHub integration provided by https://github.com/mvines/ci-gate.

Agent Queues

We define two Agent Queues: queue=default and queue=cuda. The default queue should be favored; it runs on lower-cost CPU instances. The cuda queue is only necessary for running tests that depend on GPU (via CUDA) access. CUDA builds may still be run on the default queue, with the Buildkite artifact system used to transfer build products over to a GPU instance for testing.
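
For example, a minimal sketch of such a hand-off as it might appear across two pipeline steps (the binary name and path are hypothetical):

  # On a queue=default agent: build, then publish the product as a Buildkite artifact
  $ cargo build --release
  $ buildkite-agent artifact upload target/release/gpu-test

  # In a later step on a queue=cuda agent: fetch the artifact and run the GPU tests
  $ buildkite-agent artifact download "target/release/gpu-test" .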

Buildkite Agent Management

Buildkite Azure Setup

Create a new Azure-based "queue=default" agent by running the following command:

$ az vm create \
   --resource-group ci \
   --name XXX \
   --image boilerplate \
   --admin-username $(whoami) \
   --ssh-key-value ~/.ssh/id_rsa.pub

The "boilerplate" image contains all the required packages pre-installed so the new machine should immediately show up in the Buildkite agent list once it has been provisioned and be ready for service.

Creating a "queue=cuda" agent follows the same process but additionally:

  1. Resize the image from the Azure port to include a GPU
  2. Edit the tags field in /etc/buildkite-agent/buildkite-agent.cfg to tags="queue=cuda,queue=default" and decrease the value of the priority field by one
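
For reference, the relevant lines of /etc/buildkite-agent/buildkite-agent.cfg would then look something like the following; the priority value shown is illustrative:

  tags="queue=cuda,queue=default"
  # illustrative: one less than the priority used by queue=default agents
  priority=2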

Updating the CI Disk Image

  1. Create a new VM Instance as described above
  2. Modify it as required
  3. When ready, ssh into the instance and start a root shell with sudo -i. Then prepare it for deallocation by running: waagent -deprovision+user; cd /etc; ln -s ../run/systemd/resolve/stub-resolv.conf resolv.conf
  4. Run az vm deallocate --resource-group ci --name XXX
  5. Run az vm generalize --resource-group ci --name XXX
  6. Run az image create --resource-group ci --source XXX --name boilerplate
  7. Go to the ci resource group in the Azure portal and remove all resources with the XXX name in them (a CLI alternative follows below)
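
The leftover resources from step 7 can also be located from the CLI rather than the portal; a sketch using JMESPath filtering (XXX as above):

  $ az resource list --resource-group ci --query "[?contains(name, 'XXX')].{name:name, type:type}" --output table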

Reference

This section contains details regarding previous CI setups that have been used, and that we may return to one day.

Buildkite AWS CloudFormation Setup

AWS CloudFormation is currently inactive, although it may be restored in the future.

AWS CloudFormation can be used to scale machines up and down based on the current CI load. If no machine is currently running, it can take up to 60 seconds to spin up a new instance; please remain calm during this time.

AMI

We use a custom AWS AMI built via https://github.com/solana-labs/elastic-ci-stack-for-aws/tree/solana/cuda.

Use the following process to update this AMI as dependencies change:

$ export AWS_ACCESS_KEY_ID=my_access_key
$ export AWS_SECRET_ACCESS_KEY=my_secret_access_key
$ git clone https://github.com/solana-labs/elastic-ci-stack-for-aws.git -b solana/cuda
$ cd elastic-ci-stack-for-aws/
$ make build
$ make build-ami

Watch for the "amazon-ebs: AMI:" log message to extract the name of the new AMI. For example:

amazon-ebs: AMI: ami-07118545e8b4ce6dc
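
If that log message is missed, the ID of the most recently created AMI can also be recovered from the CLI; a sketch:

  $ aws ec2 describe-images --owners self --query "sort_by(Images, &CreationDate)[-1].ImageId" --output text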

The new AMI should also now be visible in your EC2 Dashboard. Go to the desired AWS CloudFormation stack, update the ImageId field to the new AMI ID, and apply the stack changes.
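
Applying that change from the CLI instead of the console might look like the following sketch; the stack name is hypothetical, and any other stack parameters would each need a ParameterKey=<name>,UsePreviousValue=true entry:

  $ aws cloudformation update-stack \
     --stack-name buildkite-ci \
     --use-previous-template \
     --parameters ParameterKey=ImageId,ParameterValue=ami-07118545e8b4ce6dc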

Buildkite GCP Setup

CI runs on Google Cloud Platform via two Compute Engine Instance groups: ci-default and ci-cuda. Autoscaling is currently disabled and the number of VM Instances in each group is manually adjusted.
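
With autoscaling disabled, adjusting capacity is a manual resize of the applicable group; a sketch, assuming managed instance groups in us-east1-b (the zone used elsewhere in this document) and an illustrative target size:

  $ gcloud compute instance-groups managed resize ci-default --size 4 --zone us-east1-b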

Updating a CI Disk Image

Each Instance group has its own disk image, ci-default-vX and ci-cuda-vY, where X and Y are incremented each time the image is changed.

The process to update a disk image is as follows (TODO: make this less manual):

  1. Create a new VM Instance using the disk image to modify.
  2. Once the VM boots, ssh to it and modify the disk as desired.
  3. Stop the VM Instance running the modified disk. Remember the name of the VM disk.
  4. From another machine, gcloud auth login, then create a new Disk Image based off the modified VM Instance:

  $ gcloud compute images create ci-default-$(date +%Y%m%d%H%M) --source-disk xxx --source-disk-zone us-east1-b --family ci-default

or

  $ gcloud compute images create ci-cuda-$(date +%Y%m%d%H%M) --source-disk xxx --source-disk-zone us-east1-b --family ci-cuda

  5. Delete the new VM instance.
  6. Go to the Instance templates tab, find the existing template named ci-default-vX or ci-cuda-vY and select it. Use the "Copy" button to create a new Instance template called ci-default-vX+1 or ci-cuda-vY+1 with the newly created Disk image.
  7. Go to the Instance Groups tab and find the applicable group, ci-default or ci-cuda. Edit the Instance Group in two steps: (a) set the number of instances to 0 and wait for them all to terminate, (b) update the Instance template and restore the number of instances to the original value (see the sketch after this list).
  8. Clean up the previous version by deleting it from Instance Templates and Images.
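
The two-step edit in step 7 can also be driven from the CLI; a sketch, assuming managed instance groups in us-east1-b, a hypothetical new template name, and an illustrative original instance count:

  # (a) drain the group
  $ gcloud compute instance-groups managed resize ci-default --size 0 --zone us-east1-b
  # (b) swap in the new template, then restore the original instance count
  $ gcloud compute instance-groups managed set-instance-template ci-default --template ci-default-v2 --zone us-east1-b
  $ gcloud compute instance-groups managed resize ci-default --size 4 --zone us-east1-b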