README.md

Our CI infrastructure is built around Buildkite, with additional GitHub integration provided by https://github.com/mvines/ci-gate

Buildkite AWS CloudFormation Setup

We use AWS CloudFormation to scale machines up and down based on the current CI load. If no machine is currently running, it can take up to 60 seconds to spin up a new instance; please remain calm during this time.

Agent Queues

We define two agent queues: queue=default and queue=cuda. The default queue should be favored and runs on lower-cost CPU instances. The cuda queue is required only for tests that need GPU (via CUDA) access -- CUDA builds may still run on the default queue, with the Buildkite artifact system used to transfer build products over to a GPU instance for testing.
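For example, a build product produced on a default-queue agent can be handed off to a cuda-queue agent with the Buildkite artifact commands. This is only a sketch: the artifact path below is hypothetical, and the commands must run inside Buildkite jobs (the agent injects the credentials they need).

```shell
# On the default-queue (CPU) agent: build, then upload the product.
# "target/release/test-bins.tar.gz" is a hypothetical artifact path.
buildkite-agent artifact upload "target/release/test-bins.tar.gz"

# On the cuda-queue (GPU) agent, in a later pipeline step: fetch the
# artifact uploaded above and unpack it for testing.
buildkite-agent artifact download "target/release/test-bins.tar.gz" .
tar zxf target/release/test-bins.tar.gz
```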

AMI

We use a custom AWS AMI built via https://github.com/solana-labs/elastic-ci-stack-for-aws/tree/solana/cuda.

Use the following process to update this AMI as dependencies change:

$ export AWS_ACCESS_KEY_ID=my_access_key
$ export AWS_SECRET_ACCESS_KEY=my_secret_access_key
$ git clone https://github.com/solana-labs/elastic-ci-stack-for-aws.git -b solana/cuda
$ cd elastic-ci-stack-for-aws/
$ make build
$ make build-ami

Watch for the "amazon-ebs: AMI:" log message to obtain the id of the new AMI. For example:

amazon-ebs: AMI: ami-07118545e8b4ce6dc
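If you'd rather not eyeball the build output, the AMI id can be pulled out mechanically. A minimal sketch, assuming the `make build-ami` output was captured to a file (packer.log is a hypothetical name):

```shell
# Extract the most recently reported AMI id from the captured build log.
grep 'amazon-ebs: AMI:' packer.log | grep -oE 'ami-[0-9a-f]+' | tail -1
```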

The new AMI should also now be visible in your EC2 Dashboard. Go to the desired AWS CloudFormation stack, update the ImageId field to the new AMI id, and apply the stack changes.
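The stack update can also be applied from the command line. A hedged sketch using the AWS CLI -- the stack name is an assumption, and every parameter other than ImageId must be carried over explicitly with UsePreviousValue (only ImageId is shown here):

```shell
# Hypothetical stack name; substitute your actual CloudFormation stack.
STACK=buildkite-default
NEW_AMI=ami-07118545e8b4ce6dc   # the AMI id reported by `make build-ami`

# Update only the ImageId parameter, keeping the existing template.
# Add one ParameterKey=...,UsePreviousValue=true entry per remaining
# stack parameter before running this for real.
aws cloudformation update-stack \
  --stack-name "$STACK" \
  --use-previous-template \
  --parameters ParameterKey=ImageId,ParameterValue="$NEW_AMI" \
  --capabilities CAPABILITY_NAMED_IAM
```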