Merge pull request #90 from ArseniiPetrovich/master

Code refactor: Switch from BASH to Ansible playbooks as a deployment hypervisor
This commit is contained in:
Victor Baranov 2019-03-13 12:40:47 +03:00 committed by GitHub
commit 6fddecf521
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
55 changed files with 1018 additions and 1162 deletions

16
.gitignore vendored

@ -4,8 +4,13 @@
/ignore.tfvars
# Terraform State
/.terraform
/terraform.tfstate.d
*.terraform*
*terraform.tfstate.d*
*tfplan*
roles/main_infra/files/backend.tfvars
roles/main_infra/files/remote-backend-selector.tf
roles/main_infra/files/terraform.tfvars
*.backup
# Sensitive information
/*.privkey
@ -13,4 +18,9 @@
# Stack-specific information
/PREFIX
/plans/*.planfile
group_vars/*.yml
*.retry
*.temp
*.swp
.*.swp

380
README.md

@ -1,216 +1,227 @@
# Usage
# About
This repo contains Ansible playbooks designed to automate the deployment of [Blockscout](https://github.com/poanetwork/blockscout). Currently it supports only [AWS](#AWS) as a cloud provider. The playbooks create all the necessary infrastructure along with the cloud storage required for saving configuration and state files.
## Prerequisites
The bootstrap script included in this project expects the AWS CLI, jq, and Terraform to be installed and on the PATH.
The playbooks rely on Terraform under the hood, a stateful infrastructure-as-code tool. It lets you keep a handle on your infrastructure, modifying and recreating single or multiple resources depending on your needs.
On macOS, with Homebrew installed, just run: `brew install --with-default-names awscli gnu-sed jq terraform`
For other platforms, or if you don't have Homebrew installed, please see the following links:
- [jq](https://stedolan.github.io/jq/download/)
- [awscli](https://docs.aws.amazon.com/cli/latest/userguide/installing.html)
- [terraform](https://www.terraform.io/intro/getting-started/install.html)
You will also need the following information for the installer:
- A unique prefix to use for provisioned resources (5 alphanumeric chars or less)
- A password to use for the RDS database (at least 8 characters long)
- The name of an IAM key pair to use for EC2 instances; if you provide a name which already exists it will be used, otherwise it will be generated for you.
| Dependency name | Installation method |
| -------------------------------------- | ------------------------------------------------------------ |
| Ansible >= 2.6 | [Installation guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) |
| Terraform 0.11 | [Installation guide](https://learn.hashicorp.com/terraform/getting-started/install.html) |
| Python >=2.6.0 | `apt install python` |
| Python-pip | `apt install python-pip` |
| boto & boto3 & botocore python modules | `pip install boto boto3 botocore` |
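For a fresh Debian/Ubuntu machine, the sequence below is a minimal sketch of how these dependencies could be installed; the package names and the Terraform 0.11 release shown are assumptions, so adjust them to your platform and the guides linked above.

```bash
# Minimal install sketch for Debian/Ubuntu (assumed platform)
sudo apt update
sudo apt install -y python python-pip unzip wget

# Ansible and the AWS modules used by the playbooks
pip install ansible boto boto3 botocore

# Terraform 0.11.x (0.11.14 is an example release)
wget https://releases.hashicorp.com/terraform/0.11.14/terraform_0.11.14_linux_amd64.zip
unzip terraform_0.11.14_linux_amd64.zip
sudo mv terraform /usr/local/bin/
terraform version
```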
## AWS
You will need to set up a new AWS account (or subaccount), and then either login
to that account using the AWS CLI (via `aws configure`) or create a user account
that you will use for provisioning, and login to that account. Set output format
to `json` for the AWS CLI. The account used
requires full access to all AWS services, as a wide variety of services are used; a mostly complete list is as follows:
During deployment you will have to provide credentials for your AWS account. The deployment process requires a wide set of permissions to do its job, so it works best if you specify administrator account credentials.
- VPCs and associated networking resources (subnets, routing tables, etc.)
- Security Groups
- EC2
- S3
- SSM
- DynamoDB
- Route53
- RDS
- ElastiCache
- CodeDeploy
However, if you want to restrict the permissions as much as possible, here is the list of resources which are created during the deployment process:
Given the large number of services involved, and the unpredictability of which
specific API calls will be needed during provisioning, it is recommended that
you provide a user account with full access. You do not need to keep this user
around (or enabled) except during the initial provisioning, and any subsequent
runs to update the infrastructure. How you choose to handle this user is up to you.
- An S3 bucket to keep Terraform state files;
- DynamoDB table to manage Terraform state files leases;
- An SSH keypair (or you can choose to use one which was already created), this is used with any EC2 hosts;
- A VPC containing all of the resources provisioned;
- A public subnet for the app servers, and a private subnet for the database (and Redis for now);
- An internet gateway to provide internet access for the VPC;
- An ALB which exposes the app server HTTPS endpoints to the world;
- A security group to lock down ingress to the app servers to 80/443 + SSH;
- A security group to allow the ALB to talk to the app servers;
- A security group to allow the app servers access to the database;
- An internal DNS zone;
- A DNS record for the database;
- An autoscaling group and launch configuration for each chain;
- A CodeDeploy application and deployment group targeting the corresponding autoscaling groups.
## Usage
Each configured chain will receive its own ASG (autoscaling group) and deployment group. When application updates are pushed to CodeDeploy, all autoscaling groups deploy the new version using a blue/green strategy. Currently, only one EC2 host is run, and the ASG is configured to allow scaling up, but no triggers are set up to actually perform the scaling yet. This is something that may come in the future.
Once the prerequisites are out of the way, you are ready to spin up your new infrastructure!
The deployment process goes in two stages. First, Ansible creates the S3 bucket and DynamoDB table required for Terraform state management. This is needed to ensure that Terraform's state is stored in a centralized location, so that multiple people can use Terraform on the same infra without stepping on each other's toes. Terraform prevents this from happening by holding locks (via DynamoDB) against the state data (stored in S3).
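After this first stage you can verify with the plain AWS CLI that the state bucket and lock table exist. The names below combine the `prefix` with the default `bucket`/`dynamodb_table` values from the example config and are only an assumption about your configuration.

```bash
# Check that the Terraform state bucket and lock table were created
# (assumes prefix "sokol" and the default bucket/table names)
aws s3 ls | grep sokol-poa-terraform-state
aws dynamodb list-tables --query 'TableNames' | grep sokol-poa-terraform-lock
```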
From the root of the project:
## Configuration
```
$ bin/infra help
```
The single point of configuration in this script is the `group_vars/all.yml` file. First, copy it from the `group_vars/all.yml.example` template by executing `cp group_vars/all.yml.example group_vars/all.yml`, and then modify it with any text editor you want (for example, `vim group_vars/all.yml`). Here is an example configuration file (scroll down for the variable descriptions):
```yaml
aws_access_key: ""
aws_secret_key: ""
backend: true
upload_config_to_s3: true
bucket: "poa-terraform-state"
dynamodb_table: "poa-terraform-lock"
ec2_ssh_key_name: "sokol-test"
ec2_ssh_key_content: ""
instance_type: "m5.xlarge"
vpc_cidr: "10.0.0.0/16"
public_subnet_cidr: "10.0.0.0/24"
db_subnet_cidr: "10.0.1.0/24"
dns_zone_name: "poa.internal"
prefix: "sokol"
use_ssl: "false"
alb_ssl_policy: "ELBSecurityPolicy-2016-08"
alb_certificate_arn: "arn:aws:acm:us-east-1:290379793816:certificate/6d1bab74-fb46-4244-aab2-832bf519ab24"
root_block_size: 120
pool_size: 30
secret_key_base: "TPGMvGK0iIwlXBQuQDA5KRqk77VETbEBlG4gAWeb93TvBsYAjvoAvdODMd6ZeguPwf2YTRY3n7uvxXzQP4WayQ=="
new_relic_app_name: ""
new_relic_license_key: ""
networks: >
chains:
mychain: "url/to/endpoint"
chain_trace_endpoint:
mychain: "url/to/debug/endpoint/or/the/main/chain/endpoint"
chain_ws_endpoint:
mychain: "url/to/websocket/endpoint"
chain_jsonrpc_variant:
mychain: "parity"
chain_logo:
mychain: "url/to/logo"
chain_coin:
mychain: "coin"
chain_network:
mychain: "network name"
chain_subnetwork:
mychain: "subnetwork name"
chain_network_path:
mychain: "path/to/root"
chain_network_icon:
mychain: "_test_network_icon.html"
chain_graphiql_transaction:
mychain: "0xbc426b4792c48d8ca31ec9786e403866e14e7f3e4d39c7f2852e518fae529ab4"
chain_block_transformer:
mychain: "base"
chain_heart_beat_timeout:
mychain: 30
chain_heart_command:
mychain: "systemctl restart explorer.service"
chain_blockscout_version:
mychain: "v1.3.0-beta"
chain_db_id:
mychain: "myid"
chain_db_name:
mychain: "myname"
chain_db_username:
mychain: "myusername"
chain_db_password:
mychain: "mypassword"
chain_db_instance_class:
mychain: "db.m4.xlarge"
chain_db_storage:
mychain: "200"
chain_db_storage_type:
mychain: "gp2"
chain_db_version:
mychain: "10.5"
```
This will show you the tasks and options available to you with this script.
- `aws_access_key` and `aws_secret_key` are the credentials pair that provides the deployer with access to AWS;
- the `backend` variable defines whether Terraform should keep state files remotely or locally. Set `backend` to `true` if you want to save the state file to the remote S3 bucket;
- `upload_config_to_s3` - set to `true` if you want the `all.yml` config file to be uploaded to the S3 bucket automatically during deployment. Will not work if `backend` is set to `false`;
- `bucket` and `dynamodb_table` represent the names of the AWS resources that will be used for Terraform state management;
- if the `ec2_ssh_key_content` variable is not empty, Terraform will try to create an EC2 SSH key named `ec2_ssh_key_name`. Otherwise, the existing key named `ec2_ssh_key_name` will be used;
- `instance_type` defines the size of the Blockscout instance that will be launched during the deployment process;
- `vpc_cidr`, `public_subnet_cidr` and `db_subnet_cidr` represent the network configuration for the deployment. Usually you want to leave them as is. However, if you do want to modify them, note that `db_subnet_cidr` represents not a single network, but a group of networks united under one CIDR block that will be subdivided during the deployment. See [subnets.tf](https://github.com/ArseniiPetrovich/blockscout-terraform/blob/master/roles/main_infra/files/subnets.tf#L35) for details;
- an internal DNS zone named `dns_zone_name` will be created to take care of BlockScout internal communications;
- `prefix` - a unique tag to use for provisioned resources (5 alphanumeric chars or less);
- the name of an IAM key pair to use for EC2 instances; if you provide a name which already exists it will be used, otherwise it will be generated for you;
The infra script will request any information it needs to proceed, and then call Terraform to bootstrap the necessary infrastructure for its own state management. This state management infra is needed to ensure that Terraform's state is stored in a centralized location, so that multiple people can use Terraform on the same infra without stepping on each other's toes. Terraform prevents this from happening by holding locks (via DynamoDB) against the state data (stored in S3). Generating the S3 bucket and DynamoDB table has to be done using local state the first time, but once provisioned, the local state is migrated to S3, and all further invocations of `terraform` will use the state stored in S3.
- If `use_ssl` is set to `false`, SSL will be forced on Blockscout. To configure SSL, use the `alb_ssl_policy` and `alb_certificate_arn` variables;
The infra created, at a high level, is as follows:
- The region should be left at `us-east-1` as some of the other regions fail for different reasons;
- The `root_block_size` is the amount of storage on your EC2 instance. This value can be adjusted by how frequently logs are rotated. Logs are located in `/opt/app/logs` of your EC2 instance;
- The `pool_size` defines the number of connections allowed by the RDS instance;
- `secret_key_base` is a random password used internally by BlockScout. It is highly recommended to generate your own `secret_key_base` before the deployment. For instance, you can do it via the `openssl rand -base64 64 | tr -d '\n'` command;
- `new_relic_app_name` and `new_relic_license_key` should usually stay empty unless you want and know how to configure New Relic integration;
- Chain configuration is made via `chain_*` variables. For details of chain configuration see the [appropriate section](#Chain-Configuration) of this ReadMe. For examples, see the `group_vars/all.yml.example` file.
- An SSH keypair (or you can choose to use one which was already created), this is used with any EC2 hosts
- A VPC containing all of the resources provisioned
- A public subnet for the app servers, and a private subnet for the database (and Redis for now)
- An internet gateway to provide internet access for the VPC
- An ALB which exposes the app server HTTPS endpoints to the world
- A security group to lock down ingress to the app servers to 80/443 + SSH
- A security group to allow the ALB to talk to the app servers
- A security group to allow the app servers access to the database
- An internal DNS zone
- A DNS record for the database
- An autoscaling group and launch configuration for each chain
- A CodeDeploy application and deployment group targeting the corresponding autoscaling groups
## Chain Configuration
- `chains` - maps chains to the URLs of HTTP RPC endpoints; an ordinary blockchain node can be used;
- `chain_trace_endpoint` - maps chains to the URLs of HTTP RPC endpoints of nodes where state pruning is disabled (archive nodes) and tracing is enabled. If you don't have a trace endpoint, you can simply copy the values from the `chains` variable;
- `chain_ws_endpoint` - maps chains to the URLs of HTTP RPC endpoints that support websockets. This is required to get real-time updates. Can be the same as `chains` if websockets are enabled there (but make sure to use the `ws(s)` protocol instead of `http(s)`);
- `chain_jsonrpc_variant` - a client used to connect to the network. Can be `parity`, `geth`, etc.;
- `chain_logo` - maps chains to their logos. Place your own logo at `apps/block_scout_web/assets/static` and specify a relative path in the `chain_logo` variable;
- `chain_coin` - the name of the coin used in each particular chain;
- `chain_network` - usually the name of the organization keeping a group of networks, but it can represent any logical network grouping you want;
- `chain_subnetwork` - the name of the network to be shown in BlockScout;
- `chain_network_path` - a relative URL path which will be used as an endpoint for the defined chain. For example, if BlockScout is hosted at the `blockscout.com` domain and the `core` network is placed at `/poa/core`, the resulting endpoint for this network will be `blockscout.com/poa/core`;
- `chain_network_icon` - maps the chain name to the network navigation icon at `apps/block_scout_web/lib/block_scout_web/templates/icons` without the `.eex` extension;
- `chain_graphiql_transaction` - maps chains to a random transaction hash on that chain. This hash will be used to provide a sample query in the GraphiQL Playground;
- `chain_block_transformer` - will be `clique` for clique networks like Rinkeby and Goerli, and `base` for the rest;
- `chain_heart_beat_timeout`, `chain_heart_command` - configs for the integrated heartbeat. The first describes the timeout after which the command from the second variable will be executed;
- Each of the `chain_db_*` variables configures the database for each chain. Each chain will have a separate RDS instance.
**IMPORTANT**: This repository's `.gitignore` prevents the storage of several files generated during provisioning, but it is important that you keep them around in your own fork, so that subsequent runs of the `infra` script use the same configuration and state. These files are `backend.tfvars`, `main.tfvars`, and the Terraform state directories. If you generated a private key for EC2 (the default), then you will also have a `*.privkey` file in your project root; you need to store it securely out of band once created, but it does not need to be in the repository.
Chain configuration will be stored in the Systems Manager Parameter Store; each chain has its own set of config values. If you modify one of these values, you will need to terminate the instances for that chain so that they are reprovisioned with the new configuration.
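To inspect the values currently stored for a chain, you can query the Parameter Store directly. The path layout below is only an assumption; check the AWS console for the exact prefix used in your deployment.

```bash
# Hypothetical parameter path; substitute the prefix and chain name from your deployment
aws ssm get-parameters-by-path --path "/sokol/core/" --recursive \
  --query 'Parameters[].[Name,Value]' --output table
```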
## Migration Prompt
The installer will prompt during its initial run to ask if you want to migrate the Terraform state to S3. This is a necessary step, and is only prompted due to a bug in the Terraform CLI; in a future release this shouldn't occur, but in the meantime you will need to answer yes to this prompt.
## Configuring Installer
The `infra` script generates config files for storing the values provided for
future runs. You can provide overrides to this configuration in
`terraform.tfvars` or any file with the `.tfvars` extension.
An example `terraform.tfvars` configuration file looks like:
```
region = "us-east-1"
bucket = "poa-terraform-state"
dynamodb_table = "poa-terraform-lock"
key_name = "sokol-test"
prefix = "sokol"
db_password = "qwerty12345"
db_instance_class = "db.m4.xlarge"
db_storage = "120"
alb_ssl_policy = "ELBSecurityPolicy-2016-08"
alb_certificate_arn = "arn:aws:acm:us-east-1:290379793816:certificate/6d1bab74-fb46-4244-aab2-832bf519ab24"
root_block_size = 120
pool_size = 30
```
- The region should be left at `us-east-1` as some of the other regions fail for different reasons.
- The `bucket` and `dynamodb_table` can be edited but should have an identical prefix.
- The `key_name` should start with the `prefix` and can only contain 5 characters and must start with a letter.
- The `db_password` can be changed to any alphanumeric value.
- The `db_instance_class` and `db_storage` are not required but are defaulted to `db.m4.large` and `100`GB respectively.
- If you don't plan to use SSL, set variable `use_ssl = "false"`
- The `alb_ssl_policy` and `alb_certificate_arn` are required in order to force SSL usage.
- The `root_block_size` is the amount of storage on your EC2 instance. This value can be adjusted by how frequently logs are rotated. Logs are located in `/opt/app/logs` of your EC2 instance.
- The `pool_size` defines the number of connections allowed by the RDS instance.
You will need to make sure to import the changes into the Terraform state though, or you run the risk of getting out of sync.
## Database Storage Required
The configuration variable `db_storage` can be used to define the amount of storage allocated to your RDS instance. The chart below shows an estimated amount of storage that is required to index individual chains. The `db_storage` can only be adjusted 1 time in a 24 hour period on AWS.
| Chain | Storage (GiB) |
| ---------------- | ------------- |
| POA Core | 200 |
| POA Sokol | 400 |
| Ethereum Classic | 1000 |
| Ethereum Mainnet | 4000 |
| Kovan Testnet | 800 |
| Ropsten Testnet | 1500 |
## Defining Chains/Adding Chains
## Deploying the Infrastructure
The default of this repo is to build infra for the `sokol` chain, but you may not want that, or want a different set, so you need to create/edit `terraform.tfvars` and add the following configuration:
1. Ensure all the [prerequisites](#Prerequisites) are installed and have the right version numbers;
```terraform
chains = {
"mychain" = "url/to/endpoint"
}
chain_trace_endpoint = {
"mychain" = "url/to/debug/endpoint/or/the/main/chain/endpoint"
}
chain_ws_endpoint = {
"mychain" = "url/to/websocket/endpoint"
}
chain_jsonrpc_variant = {
"mychain" = "parity"
}
chain_logo = {
"mychain" = "url/to/logo"
}
chain_coin = {
"mychain" = "coin"
}
chain_network = {
"mychain" = "network name"
}
chain_subnetwork = {
"mychain" = "subnetwork name"
}
chain_network_path = {
"mychain" = path/to/root"
}
chain_network_icon = {
"mychain" = "_test_network_icon.html"
}
chain_graphiql_transaction = {
"mychain" = "0xbc426b4792c48d8ca31ec9786e403866e14e7f3e4d39c7f2852e518fae529ab4"
}
```
2. Create an AWS access key and secret access key for a user with [sufficient permissions](#AWS);
This will ensure that those chains are used when provisioning the infrastructure.
3. Set up the configuration file as described in the [corresponding section](#Configuration);
4. Run `ansible-playbook deploy.yml`;
**Note:** during the deployment the ["diffs didn't match"](#error-applying-plan-diffs-didnt-match) error may occur; it will be ignored automatically. If the Ansible play recap shows 0 failed plays, the deployment was successful despite the error.
5. Save the output and proceed to the [next part of the instructions](#Deploying-BlockScout). A condensed command sketch of these steps follows below.
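Putting the steps above together, a typical run (assuming the prerequisites are already installed and your AWS keys are at hand) looks roughly like this:

```bash
# Step 3: create and fill in the configuration file
cp group_vars/all.yml.example group_vars/all.yml
vim group_vars/all.yml        # set AWS credentials, prefix, chains, DB settings, etc.

# Step 4: deploy the infrastructure
ansible-playbook deploy.yml

# Step 5: keep the output printed at the end of the run for the BlockScout deployment step
```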
## Deploying BlockScout
Once the infrastructure is deployed, read [this](https://forum.poa.network/t/deploying-blockscout-with-terraform/1952#preparing-blockscout) and [this](https://forum.poa.network/t/deploying-blockscout-with-terraform/1952#deploying-blockscout) part of the Blockscout deployment instructions, along with the infrastructure deployment output, to continue the Blockscout deployment.
## Destroying Provisioned Infrastructure
You can use `bin/infra destroy` to remove any generated infrastructure. It is
important to note though that if you run this script on partially generated
infrastructure, or if an error occurs during the destroy process, that you may
need to manually check for, and remove, any resources that were not able to be
deleted for you. You can use the `bin/infra resources` command to list all ARNs
that are tagged with the unique prefix you supplied to the installer, but not
all AWS resources support tags, and so will not be listed. Here's a list of such
resources I am aware of:
You can use `ansible-playbook destroy.yml` to remove any generated infrastructure. Note that you first have to manually remove the resources deployed via CodeDeploy (this includes a virtual machine and the associated autoscaling group). It is also important to note that if you run this playbook on partially generated infrastructure, or if an error occurs during the destroy process, you may need to manually check for, and remove, any resources that could not be deleted for you (see the tag-based lookup sketch after the list below).
- Route53 hosted zone and records
- ElastiCache/RDS subnet groups
- CodeDeploy applications
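One way to hunt for such leftovers, adapted from the `bin/infra resources` helper that these playbooks replace, is to query resources tagged with your prefix (not every AWS resource supports tags, so some may still be missed):

```bash
# List ARNs of resources tagged with your prefix; replace <prefix> with your own value
aws resourcegroupstaggingapi get-resources \
  --no-paginate \
  --tag-filters="Key=prefix,Values=<prefix>" \
  | jq -r '.ResourceTagMappingList[].ResourceARN'
```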
**Note!** While Terraform is stateful, Ansible is stateless, so if you modify the `bucket` or `dynamodb_table` variables and run the `destroy.yml` or `deploy.yml` playbooks, this will not rename the current S3/DynamoDB resources but will create new ones. Moreover, altering the `bucket` variable will make Terraform forget about the existing infrastructure and, as a consequence, redeploy it. If it is absolutely necessary to alter the S3 or DynamoDB names, you can do it manually and then change the appropriate variable accordingly.
If the `destroy` command succeeds, then everything has been removed, and you do
not have to worry about leftover resources hanging around.
Also note that changing the `backend` variable will force Terraform to forget about the created infrastructure as well, since it will start looking for state files locally instead of remotely.
## Migrating deployer to another machine
You can easily manipulate your deployment from any machine with sufficient prerequisites. If the `upload_config_to_s3` variable is set to `true`, the deployer will automatically upload your `all.yml` file to the S3 bucket, so you can easily download it to any other machine. Simply download this file into your `group_vars` folder and the new deployer will pick up the current deployment instead of creating a new one.
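For example, on the new machine you could pull the stored config back down with the AWS CLI; the bucket name and object key depend on your deployment, so treat the paths below as placeholders.

```bash
# Find where the playbook stored the config (bucket name and key are deployment-specific)
aws s3 ls s3://<your-terraform-state-bucket>/ --recursive | grep all.yml

# Copy it into place for the new deployer
aws s3 cp s3://<your-terraform-state-bucket>/<path-to>/all.yml group_vars/all.yml
```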
## Attaching the existing RDS instance to the current deployment
In some cases you may not want to create a new database, but to use an existing one within the deployment. To do that, configure all the proper values in `group_vars/all.yml`, including your DB ID and name, and execute the `ansible-playbook attach_existing_rds.yml` command. This will add the existing DB instance to the Terraform-managed resource group. After that, run `ansible-playbook deploy.yml` as usual.
**Note 1**: while executing `ansible-playbook attach_existing_rds.yml`, the S3 bucket and DynamoDB table will be automatically created (if the `backend` variable is set to `true`) to store Terraform state files.
**Note 2**: the actual name of your resource must include the prefix that you will use in this deployment.
Example:
- Real resource: `tf-poa`
- `prefix` variable: `tf`
- `chain_db_id` variable: `poa`
**Note 3**: make sure MultiAZ is disabled on your database.
**Note 4**: make sure that all the variables in `group_vars/all.yml` are exactly the same as for your existing DB.
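With `group_vars/all.yml` configured to match the existing database, the whole flow boils down to two playbook runs:

```bash
# Import the existing RDS instance into the Terraform-managed resources
ansible-playbook attach_existing_rds.yml

# Then run the main deployment as usual
ansible-playbook deploy.yml
```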
## Common Errors and Questions
### S3: 403 error during provisioning
This usually appears if the S3 bucket already exists. Remember, S3 bucket names are globally unique, so if you don't have a bucket with that name it doesn't mean that it doesn't exist at all. Log in to your AWS console and try to create an S3 bucket with the same name you specified in the `bucket` variable to check.
### Error Applying Plan (diffs didn't match)
If you see something like the following:
@ -224,30 +235,9 @@
```
Error: Error applying plan:
Please include the following information in your report:
Terraform Version: 0.11.7
Terraform Version: 0.11.11
Resource ID: aws_autoscaling_group.explorer
Mismatch reason: attribute mismatch: availability_zones.1252502072
```
This is due to a bug in Terraform; however, the fix is to just rerun `bin/infra provision` again, and Terraform will pick up where it left off. This does not always happen, but this is the current workaround if you see it.
### Error inspecting states in the "s3" backend
If you see the following:
```
Error inspecting states in the "s3" backend:
NoSuchBucket: The specified bucket does not exist
status code: 404, request id: xxxxxxxx, host id: xxxxxxxx
Prior to changing backends, Terraform inspects the source and destination
states to determine what kind of migration steps need to be taken, if any.
Terraform failed to load the states. The data in both the source and the
destination remain unmodified. Please resolve the above error and try again.
```
This is due to mismatched variables in the `terraform.tfvars` and `main.tfvars` files. Update the `terraform.tfvars` file to match the `main.tfvars` file. Delete the `.terraform` and `terraform.tfstate.d` folders, run `bin/infra destroy_setup`, and restart provisioning by running `bin/infra provision`.
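For that legacy `bin/infra` workflow, the recovery described above amounts to:

```bash
# Remove local Terraform state, tear down the bootstrap resources, then re-provision
rm -rf .terraform terraform.tfstate.d
bin/infra destroy_setup
bin/infra provision
```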
This is due to a bug in Terraform; however, the fix is to just rerun `ansible-playbook deploy.yml` again, and Terraform will pick up where it left off. This does not always happen, but this is the current workaround if you see it.

15
attach_existing_rds.yml Normal file

@ -0,0 +1,15 @@
- name: Attach existing RDS instance
hosts: localhost
roles:
- { role: check }
- { role: s3, when: "backend|bool == true" }
- { role: dynamodb, when: "backend|bool == true" }
- { role: attach_existing_rds }
vars_prompt:
- name: "confirmation"
prompt: "Are you sure you want to attach the existing RDS? If backend variable is set to True, this action includes creating the S3 and DynamoDB table for storing Terraform state files."
default: False
environment:
AWS_ACCESS_KEY_ID: "{{ aws_access_key }}"
AWS_SECRET_ACCESS_KEY: "{{ aws_secret_key }}"
AWS_REGION: "{{ region }}"


@ -1 +0,0 @@
../common/backend.tf


@ -1 +0,0 @@
../setup/main.tf


@ -1 +0,0 @@
../common/provider.tf


@ -1 +0,0 @@
../common/variables.tf

539
bin/infra

@ -1,539 +0,0 @@
#!/usr/bin/env bash
set -e
# Color support
function disable_color() {
IS_TTY=false
txtrst=
txtbld=
bldred=
bldgrn=
bldylw=
bldblu=
bldmag=
bldcyn=
}
IS_TTY=false
if [ -t 1 ]; then
if command -v tput >/dev/null; then
IS_TTY=true
fi
fi
if [ "$IS_TTY" = "true" ]; then
txtrst=$(tput sgr0 || echo '\e[0m') # Reset
txtbld=$(tput bold || echo '\e[1m') # Bold
bldred=${txtbld}$(tput setaf 1 || echo '\e[31m') # Red
bldgrn=${txtbld}$(tput setaf 2 || echo '\e[32m') # Green
bldylw=${txtbld}$(tput setaf 3 || echo '\e[33m') # Yellow
bldblu=${txtbld}$(tput setaf 4 || echo '\e[34m') # Blue
bldmag=${txtbld}$(tput setaf 5 || echo '\e[35m') # Magenta
bldcyn=${txtbld}$(tput setaf 8 || echo '\e[38m') # Cyan
else
disable_color
fi
# Logging
# Print the given message in cyan, but only when --verbose was passed
function debug() {
if [ ! -z "$VERBOSE" ]; then
printf '%s%s%s\n' "$bldcyn" "$1" "$txtrst"
fi
}
# Print the given message in blue
function info() {
printf '%s%s%s\n' "$bldblu" "$1" "$txtrst"
}
# Print the given message in magenta
function action() {
printf '%s%s%s\n' "$bldmag" "$1" "$txtrst"
}
# Print the given message in yellow
function warn() {
printf '%s%s%s\n' "$bldylw" "$1" "$txtrst"
}
# Like warn, but expects the message via redirect
function warnb() {
printf '%s' "$bldylw"
while read -r data; do
printf '%s\n' "$data"
done
printf '%s\n' "$txtrst"
}
# Print the given message in red
function error() {
printf '%s%s%s\n' "$bldred" "$1" "$txtrst"
exit 1
}
# Like error, but expects the message via redirect
function errorb() {
printf '%s' "$bldred"
while read -r data; do
printf '%s\n' "$data"
done
printf '%s\n' "$txtrst"
exit 1
}
# Print the given message in green
function success() {
printf '%s%s%s\n' "$bldgrn" "$1" "$txtrst"
}
# Print help if requested
function help() {
cat << EOF
POA Infrastructure Management Tool
Usage:
./infra [global options] <task> [task args]
This script will bootstrap required AWS resources, then generate infrastructure via Terraform.
Tasks:
help Show help
provision Run the provisioner to generate or modify POA infrastructure
destroy Tear down any provisioned resources and local state
resources List ARNs of any generated resources (* see docs for caveats)
Global Options:
-v | --verbose This will print out verbose execution information for debugging
-h | --help Print this help message
--dry-run Perform as many actions as possible without performing side-effects
--no-color Turn off color
--skip-approval Automatically accept any prompts for confirmation
--profile=<name> Use a specific AWS profile rather than the default
EOF
exit 2
}
# Verify tools
function check_prereqs() {
if ! which jq >/dev/null; then
warnb << EOF
This script requires that the 'jq' utility has been installed and can be found in $PATH
On macOS, with Homebrew, this is as simple as 'brew install jq'.
For installs on other platforms, see https://stedolan.github.io/jq/download/
EOF
exit 2
fi
if ! which aws >/dev/null; then
warnb << EOF
This script requires that the AWS CLI tool has been installed and can be found in $PATH
On macOS, with Homebrew, this is as simple as 'brew install awscli'.
For installs on other platforms, see https://docs.aws.amazon.com/cli/latest/userguide/installing.html
EOF
exit 2
fi
if ! which terraform >/dev/null; then
warnb << EOF
This script requires that the Terraform CLI be installed and available in PATH!
On macOS, with Homebrew, this is as simple as 'brew install terraform'.
For other platforms, see https://www.terraform.io/intro/getting-started/install.html
EOF
exit 2
fi
}
# Load a value which is present in one of the Terraform config
# files in the current directory, with precedence such that user-provided
# .tfvars are loaded after main.tfvars, allowing one to override those values
function get_config() {
EXTRA_VARS="$(find . -name '*.tfvars' -and \! \( -name 'backend.tfvars' \))"
if [ ! -z "$EXTRA_VARS" ]; then
# shellcheck disable=SC2086 disable=2002
cat $EXTRA_VARS | \
grep -E "^$1 " | \
tail -n 1 | \
sed -r -e 's/^[^=]*= //' -e 's/"//g'
fi
}
function destroy_bucket() {
bucket="$(grep 'bucket' backend.tfvars | sed -e 's/bucket = //' -e 's/"//g')"
read -r -p "Are you super sure you want to delete the Terraform state bucket and all versions? (y/n) "
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
exit 2
fi
# Delete all versions and delete markers first
info "Disabling bucket versioning for S3 bucket '$bucket'.."
aws s3api put-bucket-versioning --bucket="$bucket" --versioning-configuration="Status=Suspended"
info "Deleting old versions of S3 bucket '$bucket'.."
# shellcheck disable=SC1004
aws s3api list-object-versions --bucket="$bucket" |\
jq '.Versions[], .DeleteMarkers[] | "\"\(.Key)\" \"\(.VersionId)\""' --raw-output |\
awk -v bucket="$bucket" '{ \
print "aws s3api delete-object", \
"--bucket=\"" bucket "\"", \
"--key=\"" $1 "\"", \
"--version-id=\"" $2 "\"" \
| "/bin/sh >/dev/null"; \
print "Deleted version " $2 "of " $1 " successfully"; \
}'
# Finally, delete the bucket and all its contents
aws s3 rb --force "s3://$bucket"
}
function destroy_dynamo_table() {
table="$(grep 'dynamodb_table' backend.tfvars | sed -e 's/dynamodb_table = //' -e 's/"//g')"
aws dynamodb delete-table --table-name="$table"
}
function destroy_generated_files() {
rm -f ./backend.tfvars
rm -f ./main.tfvars
}
# Tear down all provisioned infra
function destroy() {
# shellcheck disable=SC2086
terraform plan -destroy -var-file=main.tfvars -out plans/destroy.planfile main
read -r -p "Are you sure you want to run this plan? (y/n) "
if [[ $REPLY =~ ^[yY]$ ]]; then
terraform apply plans/destroy.planfile
rm -f plans/destroy.planfile
else
exit 0
fi
read -r -p "Do you wish to destroy the Terraform state? (y/n) "
if [[ $REPLY =~ ^[yY]$ ]]; then
destroy_bucket
destroy_dynamo_table
rm -rf terraform.tfstate.d
rm -rf .terraform
else
exit 0
fi
read -r -p "Do you want to delete the generated config files? (y/n) "
if [[ $REPLY =~ ^[yY]$ ]]; then
destroy_generated_files
fi
success "All generated infrastructure successfully removed!"
}
# Provision infrastructure
function provision() {
# If INFRA_PREFIX has not been set yet, request it from user
if [ -z "$INFRA_PREFIX" ]; then
DEFAULT_INFRA_PREFIX=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | fold -w 5 | head -n 1)
warnb << EOF
# Infrastructure Prefix
In order to ensure that provisioned resources are unique, this script uses a
unique prefix for all resource names and ids.
By default, a random 5 character alphanumeric string is generated for you, but
if you wish to provide your own, now is your chance. This value will be stored
in 'main.tfvars' so that you only need provide it once, but make sure you source
control the file.
EOF
read -r -p "What prefix should be used? (default is $DEFAULT_INFRA_PREFIX): "
INFRA_PREFIX="$REPLY"
if [ -z "$INFRA_PREFIX" ]; then
INFRA_PREFIX="$DEFAULT_INFRA_PREFIX"
fi
fi
if ! echo "$INFRA_PREFIX" | grep -E '^[a-z0-9]{3,5}$'; then
errorb << EOF
The prefix '$INFRA_PREFIX' is invalid!
It must consist only of the lowercase characters a-z and digits 0-9,
and must be between 3 and 5 characters long.
EOF
fi
# EC2 key pairs
if [ -z "$KEY_PAIR" ]; then
KEY_PAIR="$(get_config 'key_name')"
if [ -z "$KEY_PAIR" ]; then
read -r -p "Please provide the name of the key pair to use with EC2 hosts: "
KEY_PAIR="$REPLY"
if [ -z "$KEY_PAIR" ]; then
error "You must provide a valid key pair name!"
exit 2
fi
fi
fi
if ! aws ec2 describe-key-pairs --key-names="$KEY_PAIR" 2>/dev/null; then
if [ "$DRY_RUN" == "true" ]; then
action "DRY RUN: Would have created an EC2 key pair"
else
info "The key pair '$KEY_PAIR' does not exist, creating..."
if ! output=$(aws ec2 create-key-pair --key-name="$KEY_PAIR"); then
error "$output\\nFailed to generate key pair!"
fi
echo "$output" | jq '.KeyMaterial' --raw-output > "$KEY_PAIR.privkey"
success "Created keypair successfully! Private key has been saved to ./$KEY_PAIR.privkey"
fi
fi
if [ -z "$SECRET_KEY_BASE" ]; then
SECRET_KEY_BASE="$(get_config 'secret_key_base')"
if [ -z "$SECRET_KEY_BASE" ]; then
SECRET_KEY_BASE="$(openssl rand -base64 64 | tr -d '\n')"
fi
fi
# Save variables used by Terraform modules
if [ ! -f ./backend.tfvars ] && [ ! -f ./main.tfvars ]; then
# shellcheck disable=SC2154
region="$TF_VAR_region"
if [ -z "$region" ]; then
# Try to pull region from local config
if [ -f "$HOME/.aws/config" ]; then
if [ "$AWS_PROFILE" == "default" ]; then
region=$(awk '/\[default\]/{a=1;next}; /\[/{a=0}a' ~/.aws/config | grep 'region' | sed -e 's/region = //')
else
#shellcheck disable=SC1117
region=$(awk "/\[profile $AWS_PROFILE\]/{a=1;next}; /\[/{a=0}a" ~/.aws/config | grep 'region' | sed -e 's/region = //')
fi
fi
fi
if [ -z "$region" ]; then
read -r -p "What region should infrastructure be created in (us-east-2): "
if [ -z "$REPLY" ]; then
region='us-east-2'
else
region="$REPLY"
fi
fi
bucket="$(get_config 'bucket')"
if [ -z "$bucket" ]; then
bucket="poa-terraform-state"
fi
dynamo_table="$(get_config 'dynamodb_table')"
if [ -z "$dynamo_table" ]; then
dynamo_table="poa-terraform-locks"
fi
# Backend config only!
{
echo "region = \"$region\""
echo "bucket = \"${INFRA_PREFIX}-$bucket\""
echo "dynamodb_table = \"${INFRA_PREFIX}-$dynamo_table\""
echo "key = \"terraform.tfstate\""
} > ./backend.tfvars
# Other configuration needs to go in main.tfvars or init will break
{
echo "region = \"$region\""
echo "bucket = \"$bucket\""
echo "dynamodb_table = \"$dynamo_table\""
echo "key_name = \"$KEY_PAIR\""
echo "prefix = \"$INFRA_PREFIX\""
echo "secret_key_base = \"$SECRET_KEY_BASE\""
} > ./main.tfvars
fi
# No Terraform state yet, so this is a fresh run
if [ ! -d .terraform ]; then
terraform workspace new base setup
terraform workspace select base setup
# shellcheck disable=SC2086
terraform init -backend-config=backend.tfvars setup
# shellcheck disable=SC2086
terraform plan -out plans/setup.planfile setup
if [ "$DRY_RUN" == "false" ]; then
# No need to show the plan, it has already been displayed
SKIP_SETUP_PLAN="true"
fi
fi
workspace="$(terraform workspace show)"
# Setup hasn't completed yet, perhaps due to a dry run
if [ -f plans/setup.planfile ]; then
if [ -z "$SKIP_SETUP_PLAN" ]; then
# Regenerate setup plan if not fresh
# shellcheck disable=SC2086
terraform plan -out plans/setup.planfile setup
fi
# Wait for user approval if we're going to proceed
if [ "$SKIP_APPROVAL" == "false" ]; then
read -r -p "Take a moment to review the generated plan, and press ENTER to continue"
fi
if [ "$DRY_RUN" == "true" ]; then
action "DRY RUN: Would have executed Terraform plan for S3 backend as just shown"
warn "Unable to dry run further steps until S3 backend has been created!"
exit 0
fi
terraform apply plans/setup.planfile
rm plans/setup.planfile
# Migrate state to S3
# shellcheck disable=SC2086
terraform init -force-copy -backend-config=backend.tfvars base
fi
if [ "$workspace" == "base" ]; then
# Switch to main workspace
terraform workspace new main main
terraform workspace select main main
fi
# shellcheck disable=SC2086
terraform init -backend-config=backend.tfvars -var-file=main.tfvars main
# Generate the plan for the remaining infra
# shellcheck disable=SC2086
terraform plan -var-file=main.tfvars -out plans/main.planfile main
if [ "$SKIP_APPROVAL" == "false" ]; then
read -r -p "Take a moment to review the generated plan, and press ENTER to continue"
fi
if [ "$DRY_RUN" == "true" ]; then
action "DRY RUN: Would have executed the Terraform plan just shown"
fi
# Apply the plan to provision the remaining infra
terraform apply plans/main.planfile
rm plans/main.planfile
success "Infrastructure has been successfully provisioned!"
}
# Print all resource ARNs tagged with prefix=INFRA_PREFIX
function resources() {
if [ -z "$INFRA_PREFIX" ]; then
error "No prefix set, unable to locate tagged resources"
exit 1
fi
# Yes, stagging, blame Amazon
aws resourcegroupstaggingapi get-resources \
--no-paginate \
--tag-filters="Key=prefix,Values=$INFRA_PREFIX" | \
jq '.ResourceTagMappingList[].ResourceARN' --raw-output
}
# Provide test data for validation
function precheck() {
# Save variables used by Terraform modules
if [ ! -f ./ignore.tfvars ]; then
{
echo "bucket = \"poa-terraform-state\""
echo "dynamodb_table = \"poa-terraform-locks\""
echo "key = \"terraform.tfstate\""
echo "key_name = \"poa\""
echo "prefix = \"prefix\""
} > ./ignore.tfvars
fi
}
# Parse options for this script
VERBOSE=false
HELP=false
DRY_RUN=false
# Environment variables for Terraform
AWS_PROFILE="${AWS_PROFILE:-default}"
COMMAND=
while [ "$1" != "" ]; do
param=$(echo "$1" | sed -re 's/^([^=]*)=/\1/')
val=$(echo "$1" | sed -re 's/^([^=]*)=//')
case $param in
-h | --help)
HELP=true
;;
-v | --verbose)
VERBOSE=true
;;
--dry-run)
DRY_RUN=true
;;
--no-color)
disable_color
;;
--profile)
AWS_PROFILE="$val"
;;
--skip-approval)
SKIP_APPROVAL="true"
;;
--)
shift
break
;;
*)
COMMAND="$param"
shift
break
;;
esac
shift
done
# Turn on debug mode if --verbose was set
if [ "$VERBOSE" == "true" ]; then
set -x
fi
# Set working directory to the project root
cd "$(dirname "${BASH_SOURCE[0]}")/.."
# Export AWS_PROFILE if a non-default profile was chosen
if [ ! "$AWS_PROFILE" == "default" ]; then
export AWS_PROFILE
fi
# If cached prefix is in PREFIX file, then use it
if [ -z "$INFRA_PREFIX" ]; then
if ls ./*.tfvars >/dev/null; then
INFRA_PREFIX="$(get_config 'prefix')"
fi
fi
# Override command if --help or -h was passed
if [ "$HELP" == "true" ]; then
# If we ever want to show help for a specific command we'll need this
# HELP_COMMAND="$COMMAND"
COMMAND=help
fi
check_prereqs
case $COMMAND in
help)
help
;;
provision)
provision
;;
destroy)
destroy
;;
resources)
resources
;;
precheck)
precheck
;;
destroy_setup)
destroy_bucket
destroy_dynamo_table
;;
*)
error "Unknown task '$COMMAND'. Try 'help' to see valid tasks"
exit 1
esac
exit 0


@ -1,3 +0,0 @@
terraform {
backend "s3" {}
}


@ -1,16 +0,0 @@
variable "bucket" {
description = "The name of the S3 bucket which will hold Terraform state"
}
variable "dynamodb_table" {
description = "The name of the DynamoDB table which will hold Terraform locks"
}
variable "region" {
description = "The AWS region to use"
default = "us-east-2"
}
variable "prefix" {
description = "The prefix used to identify all resources generated with this plan"
}

11
deploy.yml Normal file

@ -0,0 +1,11 @@
- name: Prepare infrastructure
hosts: localhost
roles:
- { role: check }
- { role: s3, when: "backend|bool == true" }
- { role: dynamodb, when: "backend|bool == true" }
- { role: main_infra }
environment:
AWS_ACCESS_KEY_ID: "{{ aws_access_key }}"
AWS_SECRET_ACCESS_KEY: "{{ aws_secret_key }}"
AWS_REGION: "{{ region }}"

12
destroy.yml Normal file

@ -0,0 +1,12 @@
- name: Destroy infrastructure
hosts: localhost
roles:
- { role: destroy, when: "confirmation|bool == True" }
vars_prompt:
- name: "confirmation"
prompt: "Are you sure you want to destroy all the infra?"
default: False
environment:
AWS_ACCESS_KEY_ID: "{{ aws_access_key }}"
AWS_SECRET_ACCESS_KEY: "{{ aws_secret_key }}"
AWS_REGION: "{{ region }}"

169
group_vars/all.yml.example Normal file

@ -0,0 +1,169 @@
# Credentials to connect to AWS
aws_access_key: ""
aws_secret_key: ""
# Deployment-related variables
## If set to true, the backend will be uploaded and stored in an S3 bucket, so you can easily manage your deployment from any machine. It is highly recommended not to change this variable
backend: true
## If this is set to true along with the backend variable, this config file will be saved to the S3 bucket. Please make sure to name it all.yml. Otherwise, no upload will be performed
upload_config_to_s3: true
### The bucket and dynamodb_table variables will be used only when backend variable is set to true
### Name of the bucket where TF state files will be stored
bucket: "poa-terraform-state"
### Name of the DynamoDB table where current lease of TF state file will be stored
dynamodb_table: "poa-terraform-lock"
## If ec2_ssh_key_content is empty all the virtual machines will be created with ec2_ssh_key_name key. Otherwise, playbooks will upload ec2_ssh_key_content with the name of ec2_ssh_key_name and launch virtual machines with that key
ec2_ssh_key_name: "sokol-test"
ec2_ssh_key_content: ""
## EC2 Instance will have the following size:
instance_type: "m5.large"
## VPC containing Blockscout resources will be created as following:
vpc_cidr: "10.0.0.0/16"
public_subnet_cidr: "10.0.0.0/24"
db_subnet_cidr: "10.0.1.0/16"
## Internal DNS zone will look like:
dns_zone_name: "poa.internal"
## All resources will be prefixed with this one
prefix: "sokol"
## The following settings are related to SSL of the Application Load Balancer that will be deployed to AWS. If use_ssl is set to false, alb_* variables can be omitted
use_ssl: "false"
alb_ssl_policy: "ELBSecurityPolicy-2016-08"
alb_certificate_arn: "arn:aws:acm:us-east-1:290379793816:certificate/6d1bab74-fb46-4244-aab2-832bf519ab24"
# Region. It is recommended to deploy to us-east-1, as some of the other regions fail for various reasons
region: "us-east-1"
## Size of the EC2 instance EBS root volume
root_block_size: 120
## Number of connections allowed by EC2 instance
pool_size: 30
## Secret key of Explorer. Please, generate your own key here. For example, you can use the following command: openssl rand -base64 64 | tr -d '\n'
secret_key_base: "TPGMvGK0iIwlXBQuQDA5KRqk77VETbEBlG4gAWeb93TvBsYAjvoAvdODMd6ZeguPwf2YTRY3n7uvxXzQP4WayQ=="
## New Relic related configs. Usually you want this empty
new_relic_app_name: ""
new_relic_license_key: ""
# Network related variables
## This variable represents network RPC endpoint:
chains:
core: "http://10.10.10.10:8545"
sokol: "https://192.168.0.1:8545"
## This variable represents network RPC endpoint in trace mode. Can be the same as the previous variable:
chain_trace_endpoint:
core: "http://10.10.10.11:8545"
sokol: "http://192.168.0.1:8546"
## This variable represents network RPC endpoint in websocket mode:
chain_ws_endpoint:
core: "ws://10.10.10.10/ws"
sokol: "ws://192.168.0.1/ws"
## Next variable represents the client that is used to connect to the chain.
chain_jsonrpc_variant:
core: "parity"
sokol: "geth"
## Place your own logo at apps/block_scout_web/assets/static folder of blockscout repo and specify a relative path here
chain_logo:
core: "/images/core.svg"
sokol: "/images/sokol.svg"
## The following variable represents the name of the coin that will be shown in the blockchain explorer
chain_coin:
core: "POA"
sokol: "POA"
## Next variable usually represents the name of the organization/community that hosts the chain
chain_network:
core: "POA Network"
sokol: "POA Network"
## Next variable represents the actual name of the particular network
chain_subnetwork:
core: "POA Core Network"
sokol: "POA Sokol test network"
## The next variable represent a relative URL path which will be used as an endpoint for defined chain. For example, if we will have our blockscout at blockscout.com domain and place "core" network at "/poa/core", then the resulting endpoint will be blockscout.com/poa/core for this network.
chain_network_path:
core: "/poa/core"
sokol: "/poa/sokol"
## The following variable maps the chain name to the network navigation icon at apps/block_scout_web/lib/block_scout_web/templates/icons without .eex extension
chain_network_icon:
core: "_test_network_icon.html"
sokol: "_test_network_icon.html"
## The following variable maps the chain names to random transaction hash on that chain. "chain_graphiql_transaction" is a variable that takes a transaction hash from a network to provide a sample query in the GraphIQL Playground.
chain_graphiql_transaction:
core: "0xbc426b4792c48d8ca31ec9786e403866e14e7f3e4d39c7f2852e518fae529ab4"
sokol: "0xbc426b4792c48d8ca31ec9786e403866e14e7f3e4d39c7f2852e518fae529ab5"
## A variable required in indexer configuration files. Can be either base or clique. Usually you don't want to change this value unless you know what you are doing.
chain_block_transformer:
core: "base"
sokol: "base"
## Heartbeat is an Erlang monitoring service that will restart BlockScout if it becomes unresponsive. The following two variables configure the timeout before Blockscout will be restarted and the command used to restart it. Usually you don't want to change these values.
chain_heart_beat_timeout:
core: 30
sokol: 30
chain_heart_command:
core: "systemctl restart explorer.service"
sokol: "systemctl restart explorer.service"
## This value describes the version of Blockscout that will be shown in the footer. You can write any text you want to see in the footer.
chain_blockscout_version:
core: "v1.3.4-beta"
sokol: "v1.3.4-beta"
## This value represents the name of the DB that will be created/attached. Must be unique. Will be prefixed with `prefix` variable.
chain_db_id:
core: "core"
sokol: "sokol"
## Each network should have its own DB. This variable maps chains to DB names. Not to be confused with the db_id variable, which represents the RDS instance ID.
chain_db_name:
core: "core"
sokol: "sokol"
## The following variables describe the DB configuration for each network, including usernames, passwords, instance class, etc.
chain_db_username:
core: "core"
sokol: "sokol"
chain_db_password:
core: "fkowfjpoi309021"
sokol: "kopsdOPpa9213K"
chain_db_instance_class:
core: "db.m4.xlarge"
sokol: "db.m4.large"
# Size of storage in GiB.
chain_db_storage:
core: "200"
sokol: "100"
# Type of disk to be used for the DB.
chain_db_storage_type:
core: "io1"
sokol: "gp2"
# Blockscout uses Postgres as the DB engine. This variable describes the Postgres version used in each particular chain.
chain_db_version:
core: "10.5"
sokol: "10.6"


@ -1 +0,0 @@
../common/backend.tf


@ -1 +0,0 @@
../common/variables.tf


@ -1,51 +0,0 @@
module "backend" {
source = "../modules/backend"
bootstrap = "0"
bucket = "${var.bucket}"
dynamodb_table = "${var.dynamodb_table}"
prefix = "${var.prefix}"
}
module "stack" {
source = "../modules/stack"
prefix = "${var.prefix}"
region = "${var.region}"
key_name = "${var.key_name}"
chain_jsonrpc_variant = "${var.chain_jsonrpc_variant}"
chains = "${var.chains}"
chain_trace_endpoint = "${var.chain_trace_endpoint}"
chain_ws_endpoint = "${var.chain_ws_endpoint}"
chain_logo = "${var.chain_logo}"
chain_coin = "${var.chain_coin}"
chain_network = "${var.chain_network}"
chain_subnetwork = "${var.chain_subnetwork}"
chain_network_path = "${var.chain_network_path}"
chain_network_icon = "${var.chain_network_icon}"
chain_graphiql_transaction = "${var.chain_graphiql_transaction}"
vpc_cidr = "${var.vpc_cidr}"
public_subnet_cidr = "${var.public_subnet_cidr}"
instance_type = "${var.instance_type}"
root_block_size = "${var.root_block_size}"
pool_size = "${var.pool_size}"
db_subnet_cidr = "${var.db_subnet_cidr}"
dns_zone_name = "${var.dns_zone_name}"
db_id = "${var.db_id}"
db_name = "${var.db_name}"
db_username = "${var.db_username}"
db_password = "${var.db_password}"
db_storage = "${var.db_storage}"
db_storage_type = "${var.db_storage_type}"
db_instance_class = "${var.db_instance_class}"
secret_key_base = "${var.secret_key_base}"
new_relic_app_name = "${var.new_relic_app_name}"
new_relic_license_key = "${var.new_relic_license_key}"
alb_ssl_policy = "${var.alb_ssl_policy}"
alb_certificate_arn = "${var.alb_certificate_arn}"
use_ssl = "${var.use_ssl}"
}


@ -1 +0,0 @@
../common/provider.tf


@ -1,181 +0,0 @@
variable "key_name" {
description = "The name of the SSH key to use with EC2 hosts"
default = "poa"
}
variable "vpc_cidr" {
description = "Virtual Private Cloud CIDR block"
default = "10.0.0.0/16"
}
variable "public_subnet_cidr" {
description = "The CIDR block for the public subnet"
default = "10.0.0.0/24"
}
variable "db_subnet_cidr" {
description = "The CIDR block for the database subnet"
default = "10.0.1.0/16"
}
variable "dns_zone_name" {
description = "The internal DNS name"
default = "poa.internal"
}
variable "instance_type" {
description = "The EC2 instance type to use for app servers"
default = "m5.xlarge"
}
variable "root_block_size" {
description = "The EC2 instance root block size in GB"
default = 8
}
variable "pool_size" {
description = "The number of connections available to the RDS instance"
default = 30
}
variable "chains" {
description = "A map of chain names to urls"
default = {
"sokol" = "https://sokol-trace.poa.network"
}
}
variable "chain_trace_endpoint" {
description = "A map of chain names to RPC tracing endpoint"
default = {
"sokol" = "https://sokol-trace.poa.network"
}
}
variable "chain_ws_endpoint" {
description = "A map of chain names to Websocket RPC Endpoint"
default = {
"sokol" = "wss://sokol-ws.poa.network/ws"
}
}
variable "chain_jsonrpc_variant" {
description = "A map of chain names to JSON RPC variant"
default = {
"sokol" = "parity"
}
}
variable "chain_logo" {
description = "A map of chain names to logo url"
default = {
"sokol" = "/images/sokol_logo.svg"
}
}
variable "chain_coin" {
description = "A map of chain name to coin symbol"
default = {
"sokol" = "POA"
}
}
variable "chain_network" {
description = "A map of chain names to network name"
default = {
"sokol" = "POA Network"
}
}
variable "chain_subnetwork" {
description = "A map of chain names to subnetwork name"
default = {
"sokol" = "Sokol Testnet"
}
}
variable "chain_network_path" {
description = "A map of chain names to network name path"
default = {
"sokol" = "/poa/sokol"
}
}
variable "chain_network_icon" {
description = "A map of chain names to network navigation icon"
default = {
"sokol" = "_test_network_icon.html"
}
}
variable "chain_graphiql_transaction" {
description = "A map of chain names to random transaction hash on that chain"
default = {
"sokol" = "0xbc426b4792c48d8ca31ec9786e403866e14e7f3e4d39c7f2852e518fae529ab4"
}
}
# RDS/Database configuration
variable "db_id" {
description = "The identifier for the RDS database"
default = "poa"
}
variable "db_name" {
description = "The name of the database associated with the application"
default = "poa"
}
variable "db_username" {
description = "The name of the user which will be used to connect to the database"
default = "poa"
}
variable "db_password" {
description = "The password associated with the database user"
}
variable "db_storage" {
description = "The database storage size in GB"
default = "100"
}
variable "db_storage_type" {
description = "The type of database storage to use: magnetic, gp2, io1"
default = "gp2"
}
variable "db_instance_class" {
description = "The instance class of the database"
default = "db.m4.large"
}
variable "secret_key_base" {
description = "The secret key base to use for Explorer"
}
variable "new_relic_app_name" {
description = "The name of the application in New Relic"
default = ""
}
variable "new_relic_license_key" {
description = "The license key for talking to New Relic"
default = ""
}
# SSL Certificate configuration
variable "alb_ssl_policy" {
description = "The SSL Policy for the Application Load Balancer"
default = "ELBSecurityPolicy-2016-08"
}
variable "alb_certificate_arn" {
description = "The Certificate ARN for the Applicationn Load Balancer Policy"
default = "arn:aws:acm:us-east-1:008312654217:certificate/ce6ec2cb-eba4-4b02-af1d-e77ce8813497"
}
variable "use_ssl" {
description = "Enable SSL"
default = "true"
}


@ -1,45 +0,0 @@
# S3 bucket
resource "aws_s3_bucket" "terraform_state" {
count = "${var.bootstrap}"
bucket = "${var.prefix}-${var.bucket}"
acl = "private"
versioning {
enabled = true
}
lifecycle_rule {
id = "expire"
enabled = true
noncurrent_version_expiration {
days = 90
}
}
tags {
origin = "terraform"
prefix = "${var.prefix}"
}
}
# DynamoDB table
resource "aws_dynamodb_table" "terraform_statelock" {
count = "${var.bootstrap}"
name = "${var.prefix}-${var.dynamodb_table}"
read_capacity = 1
write_capacity = 1
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
tags {
origin = "terraform"
prefix = "${var.prefix}"
}
}


@ -1,8 +0,0 @@
variable "bootstrap" {
description = "Whether we are bootstrapping the required infra or not"
default = 0
}
variable "bucket" {}
variable "dynamodb_table" {}
variable "prefix" {}


@ -1,29 +0,0 @@
output "codedeploy_app" {
description = "The name of the CodeDeploy application"
value = "${aws_codedeploy_app.explorer.name}"
}
output "codedeploy_deployment_group_names" {
description = "The names of all the CodeDeploy deployment groups"
value = "${aws_codedeploy_deployment_group.explorer.*.deployment_group_name}"
}
output "codedeploy_bucket" {
description = "The name of the CodeDeploy S3 bucket for applciation revisions"
value = "${aws_s3_bucket.explorer_releases.id}"
}
output "codedeploy_bucket_path" {
description = "The path for releases in the CodeDeploy S3 bucket"
value = "/"
}
output "explorer_urls" {
description = "A map of each chain to the DNS name of its corresponding Explorer instance"
value = "${zipmap(keys(var.chains), aws_lb.explorer.*.dns_name)}"
}
output "db_instance_address" {
description = "The IP address of the RDS instance"
value = "${aws_db_instance.default.address}"
}

View File

@ -1,21 +0,0 @@
resource "aws_db_instance" "default" {
identifier = "${var.prefix}-${var.db_id}"
engine = "postgres"
engine_version = "10.5"
instance_class = "${var.db_instance_class}"
storage_type = "${var.db_storage_type}"
allocated_storage = "${var.db_storage}"
copy_tags_to_snapshot = true
skip_final_snapshot = true
username = "${var.db_username}"
password = "${var.db_password}"
vpc_security_group_ids = ["${aws_security_group.database.id}"]
db_subnet_group_name = "${aws_db_subnet_group.database.id}"
depends_on = ["aws_security_group.database"]
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}

View File

View File

@ -0,0 +1 @@
../../main_infra/defaults/main.yml

View File

@ -0,0 +1,38 @@
- name: Local or remote backend selector (remote)
template:
src: roles/main_infra/templates/remote-backend-selector.tf.j2
dest: roles/main_infra/files/remote-backend-selector.tf
when:
- backend|bool == true
- name: Local or remote backend selector (local)
file:
state: absent
dest: roles/main_infra/files/remote-backend-selector.tf
when:
- backend | default ('false') | bool != true
- name: Generating variables file
template:
src: roles/main_infra/templates/terraform.tfvars.j2
dest: roles/main_infra/files/terraform.tfvars
- name: Generating backend file
template:
src: roles/main_infra/templates/backend.tfvars.j2
dest: roles/main_infra/files/backend.tfvars
when: backend|bool == true
# Workaround: the Ansible terraform module returns an unexpected error, so the Terraform CLI is invoked via shell instead.
- name: Initialize Terraform
shell: "echo yes | {{ terraform_location }} init{{ ' -backend-config=backend.tfvars' if backend|bool == true else '' }}"
args:
chdir: "roles/main_infra/files"
- name: Attach existing DB instances
shell: "echo yes | {{ terraform_location }} import aws_db_instance.default[{{ index }}] {{ prefix }}-{{ item.value }}"
args:
chdir: "roles/main_infra/files"
loop: "{{ chain_db_id|dict2items }}"
loop_control:
index_var: index
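
The import loop above iterates over `chain_db_id` as key/value pairs, so each pre-existing RDS instance must be listed there under its chain name. A minimal sketch of what that variable might look like in `group_vars/all.yml` (the identifiers are illustrative assumptions, not values from this repo):

```yaml
# Hypothetical group_vars/all.yml fragment; identifiers are examples only.
chain_db_id:
  sokol: "sokol-explorer-db"   # imported as aws_db_instance.default[0] -> "<prefix>-sokol-explorer-db"
  core: "core-explorer-db"     # imported as aws_db_instance.default[1] -> "<prefix>-core-explorer-db"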

View File

@ -0,0 +1,34 @@
- name: Check prefix
fail:
msg: "The prefix '{{ prefix }}' is invalid. It must consist only of the lowercase characters a-z and digits 0-9, and must be between 3 and 5 characters long."
when: prefix|length < 3 or prefix|length > 5 or prefix is not match("^[a-z0-9]+$")
- name: Check if terraform is installed
command: which terraform
register: terraform_status
changed_when: false
- name: Terraform check result
fail:
msg: "Terraform is not installed"
when: terraform_status.stdout == ""
- name: Check if python is installed
command: which python
register: python_status
changed_when: false
- name: Python check result
fail:
msg: "Python either is not installed or is too old. Please install python version 2.6 or higher"
when: python_status.stdout == "" or python_int_version|int < 260
vars:
python_int_version: "{{ ansible_python_version.split('.')[0]|int * 100 + ansible_python_version.split('.')[1]|int * 10 + ansible_python_version.split('.')[2]|int }}"
- name: Check if all required modules are installed
command: "{{ ansible_python_interpreter }} -c 'import {{ item }}'"
with_items:
- boto
- boto3
- botocore
changed_when: false
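
These checks rely on a handful of variables expected to be defined elsewhere in the inventory; a minimal sketch of where they could live (paths and values are assumptions for illustration):

```yaml
# Hypothetical group_vars/all.yml fragment; adjust paths and values for your environment.
prefix: "poa"                                    # must be 3-5 lowercase alphanumeric characters
terraform_location: "/usr/local/bin/terraform"   # later tasks shell out to this binary
ansible_python_interpreter: "/usr/bin/python"    # the interpreter whose version and modules are checked
```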

View File

@ -0,0 +1 @@
../../main_infra/defaults/main.yml

View File

@ -0,0 +1,66 @@
- name: Local or remote backend selector (remote)
template:
src: roles/main_infra/templates/remote-backend-selector.tf.j2
dest: roles/main_infra/files/remote-backend-selector.tf
when:
- backend|bool == true
- name: Local or remote backend selector (local)
file:
state: absent
dest: roles/main_infra/files/remote-backend-selector.tf
when:
- backend | default ('false') | bool != true
- name: Generating variables file
template:
src: roles/main_infra/templates/terraform.tfvars.j2
dest: roles/main_infra/files/terraform.tfvars
- name: Generating backend file
template:
src: roles/main_infra/templates/backend.tfvars.j2
dest: roles/main_infra/files/backend.tfvars
when: backend|bool == true
# This works around a Terraform 0.11 bug that prevents resources from being completely destroyed in some edge cases when interpolation syntax is used in outputs.tf
- name: Check if outputs.tf exists
stat: path=roles/main_infra/files/outputs.tf
register: outputs_stat
- name: Temporarily remove outputs.tf file
command: mv roles/main_infra/files/outputs.tf roles/main_infra/files/outputs.tf.backup
when: outputs_stat.stat.exists
- name: Terraform destroy main infra
shell: "echo yes | {{ terraform_location }} {{ item }}"
args:
chdir: "roles/main_infra/files"
with_items:
- "init {{ '-backend-config=backend.tfvars' if backend|bool == true else '' }}"
- destroy
- name: Check if outputs.tf.backup exists
stat: path=roles/main_infra/files/outputs.tf.backup
register: outputs_backup_stat
- name: Get back outputs.tf file
command: mv roles/main_infra/files/outputs.tf.backup roles/main_infra/files/outputs.tf
when: outputs_backup_stat.stat.exists
- name: User prompt
pause:
prompt: "Do you want to delete S3 bucket with state file and DynamoDB attached to it also? [Yes/No] Default: No"
register: user_answer
- name: Destroy S3 bucket
s3_bucket:
name: "{{ prefix }}-{{ bucket }}"
state: absent
force: yes
when: user_answer.user_input|bool == True
- name: Destroy DynamoDB table
dynamodb_table:
name: "{{ prefix }}-{{ dynamodb_table }}"
state: absent
when: user_answer.user_input|bool == True

View File

@ -0,0 +1,2 @@
prefix: "sokol"
dynamodb_table: "dynamo"

View File

@ -0,0 +1,10 @@
- name: Create DynamoDB table
dynamodb_table:
name: "{{ prefix }}-{{ dynamodb_table }}"
hash_key_name: LockID
hash_key_type: STRING
read_capacity: 1
write_capacity: 1
tags:
origin: terraform
prefix: "{{ prefix }}"

View File

@ -0,0 +1,16 @@
dynamodb_table: "poa-terraform-lock"
bucket: "poa-terraform-state"
terraform_location: "/usr/local/bin/terraform"
region: "us-east-1"
prefix: "test"
vpc_cidr: "10.0.0.0/16"
public_subnet_cidr: "10.0.0.0/24"
db_subnet_cidr: "10.0.2.0/16"
dns_zone_name: "poa.internal"
instance_type: "m5.large"
root_block_size: 8
pool_size: 30
alb_ssl_policy: "ELBSecurityPolicy-2016-08"
new_relic_app_name: ""
new_relic_license_key: ""
use_ssl: false

View File

@ -1,3 +1,10 @@
resource "aws_ssm_parameter" "block_transformer" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/block_transformer"
value = "${lookup(var.chain_block_transformer,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "new_relic_app_name" {
count = "${var.new_relic_app_name == "" ? 0 : length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/new_relic_app_name"
@ -29,7 +36,7 @@ resource "aws_ssm_parameter" "ecto_use_ssl" {
resource "aws_ssm_parameter" "ethereum_jsonrpc_variant" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/ethereum_jsonrpc_variant"
value = "${element(values(var.chain_jsonrpc_variant),count.index)}"
value = "${lookup(var.chain_jsonrpc_variant,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "ethereum_url" {
@ -42,62 +49,62 @@ resource "aws_ssm_parameter" "ethereum_url" {
resource "aws_ssm_parameter" "trace_url" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/ethereum_jsonrpc_trace_url"
value = "${element(values(var.chain_trace_endpoint),count.index)}"
value = "${lookup(var.chain_trace_endpoint,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "ws_url" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/ethereum_jsonrpc_ws_url"
value = "${element(values(var.chain_ws_endpoint),count.index)}"
value = "${lookup(var.chain_ws_endpoint,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "logo" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/logo"
value = "${element(values(var.chain_logo),count.index)}"
value = "${lookup(var.chain_logo,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "coin" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/coin"
value = "${element(values(var.chain_coin),count.index)}"
value = "${lookup(var.chain_coin,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "network" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/network"
value = "${element(values(var.chain_network),count.index)}"
value = "${lookup(var.chain_network,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "subnetwork" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/subnetwork"
value = "${element(values(var.chain_subnetwork),count.index)}"
value = "${lookup(var.chain_subnetwork,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "network_path" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/network_path"
value = "${element(values(var.chain_network_path),count.index)}"
value = "${lookup(var.chain_network_path,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "network_icon" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/network_icon"
value = "${element(values(var.chain_network_icon),count.index)}"
value = "${lookup(var.chain_network_icon,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "graphiql_transaction" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/graphiql_transaction"
value = "${element(values(var.chain_graphiql_transaction),count.index)}"
value = "${lookup(var.chain_graphiql_transaction,element(keys(var.chains),count.index))}"
type = "String"
}
@ -153,28 +160,28 @@ resource "aws_ssm_parameter" "port" {
resource "aws_ssm_parameter" "db_username" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/db_username"
value = "${var.db_username}"
value = "${lookup(var.chain_db_username,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "db_password" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/db_password"
value = "${var.db_password}"
value = "${lookup(var.chain_db_password,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "db_host" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/db_host"
value = "${aws_route53_record.db.fqdn}"
value = "${aws_route53_record.db.*.fqdn[count.index]}"
type = "String"
}
resource "aws_ssm_parameter" "db_port" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/db_port"
value = "${aws_db_instance.default.port}"
value = "${aws_db_instance.default.*.port[count.index]}"
type = "String"
}
resource "aws_ssm_parameter" "alb_ssl_policy" {
@ -189,3 +196,30 @@ resource "aws_ssm_parameter" "alb_certificate_arn" {
value = "${var.alb_certificate_arn}"
type = "String"
}
resource "aws_ssm_parameter" "heart_beat_timeout" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/heart_beat_timeout"
value = "${lookup(var.chain_heart_beat_timeout,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "heart_command" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/heart_command"
value = "${lookup(var.chain_heart_command,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "blockscout_version" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/blockscout_version"
value = "${lookup(var.chain_blockscout_version,element(keys(var.chains),count.index))}"
type = "String"
}
resource "aws_ssm_parameter" "db_name" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/db_name"
value = "${lookup(var.chain_db_name,element(keys(var.chains),count.index))}"
type = "String"
}

View File

@ -1,6 +1,7 @@
resource "aws_s3_bucket" "explorer_releases" {
bucket = "${var.prefix}-explorer-codedeploy-releases"
acl = "private"
force_destroy = "true"
versioning {
enabled = true

View File

@ -12,12 +12,13 @@ resource "aws_route53_zone" "main" {
# Private DNS records
resource "aws_route53_record" "db" {
zone_id = "${aws_route53_zone.main.zone_id}"
name = "db"
name = "db${count.index}"
type = "A"
count = "${length(var.chains)}"
alias {
name = "${aws_db_instance.default.address}"
zone_id = "${aws_db_instance.default.hosted_zone_id}"
name = "${aws_db_instance.default.*.address[count.index]}"
zone_id = "${aws_db_instance.default.*.hosted_zone_id[count.index]}"
evaluate_target_health = false
}
}

View File

@ -157,7 +157,7 @@ DB_USER="$(get_param 'db_username')"
DB_PASS="$(get_param 'db_password')"
DB_HOST="$(get_param 'db_host')"
DB_PORT="$(get_param 'db_port')"
DB_NAME="$CHAIN"
DB_NAME="$(get_param 'db_name')"
DATABASE_URL="postgresql://$DB_USER:$DB_PASS@$DB_HOST:$DB_PORT"
# Need to map the Parameter Store response to a set of NAME="<value>" entries,

View File

@ -6,12 +6,12 @@ To deploy a new version of the application manually:
1) Run the following command to upload the application to S3.
aws deploy push --application-name=${module.stack.codedeploy_app} --s3-location s3://${module.stack.codedeploy_bucket}/path/to/release.zip --source=path/to/repo
aws deploy push --application-name=${aws_codedeploy_app.explorer.name} --s3-location s3://${aws_s3_bucket.explorer_releases.id}/path/to/release.zip --source=path/to/repo
2) Follow the instructions in the output from the `aws deploy push` command
to deploy the uploaded application. Use the deployment group names shown below:
- ${join("\n - ", formatlist("%s", module.stack.codedeploy_deployment_group_names))}
- ${join("\n - ", formatlist("%s", aws_codedeploy_deployment_group.explorer.*.deployment_group_name))}
You will also need to specify a deployment config name. Example:
@ -25,11 +25,7 @@ To deploy a new version of the application manually:
4) Once the deployment is complete, you can access each chain explorer from its respective url:
- ${join("\n - ", formatlist("%s: %s", keys(module.stack.explorer_urls), values(module.stack.explorer_urls)))}
- ${join("\n - ", formatlist("%s: %s", keys(zipmap(keys(var.chains), aws_lb.explorer.*.dns_name)), values(zipmap(keys(var.chains), aws_lb.explorer.*.dns_name))))}
OUTPUT
}
output "db_instance_address" {
description = "The internal IP address of the RDS instance"
value = "${module.stack.db_instance_address}"
}

View File

@ -0,0 +1,24 @@
resource "aws_db_instance" "default" {
count = "${length(var.chains)}"
name = "${lookup(var.chain_db_name,element(keys(var.chains),count.index))}"
identifier = "${var.prefix}-${lookup(var.chain_db_id,element(keys(var.chains),count.index))}"
engine = "postgres"
engine_version = "${lookup(var.chain_db_version,element(keys(var.chains),count.index))}"
instance_class = "${lookup(var.chain_db_instance_class,element(keys(var.chains),count.index))}"
storage_type = "${lookup(var.chain_db_storage_type,element(keys(var.chains),count.index))}"
allocated_storage = "${lookup(var.chain_db_storage,element(keys(var.chains),count.index))}"
copy_tags_to_snapshot = true
skip_final_snapshot = true
username = "${lookup(var.chain_db_username,element(keys(var.chains),count.index))}"
password = "${lookup(var.chain_db_password,element(keys(var.chains),count.index))}"
vpc_security_group_ids = ["${aws_security_group.database.id}"]
db_subnet_group_name = "${aws_db_subnet_group.database.id}"
apply_immediately = true
depends_on = ["aws_security_group.database"]
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}

View File

@ -60,21 +60,21 @@ resource "aws_lb_target_group" "explorer" {
# The Listener for the ALB (HTTP protocol)
resource "aws_alb_listener" "alb_listener_http" {
count = "${var.use_ssl == "true" ? 0 : 1}"
load_balancer_arn = "${aws_lb.explorer.arn}"
count = "${var.use_ssl == "true" ? 0 : length(var.chains)}"
load_balancer_arn = "${aws_lb.explorer.*.arn[count.index]}"
port = 80
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = "${aws_lb_target_group.explorer.arn}"
target_group_arn = "${aws_lb_target_group.explorer.*.arn[count.index]}"
}
}
# The Listener for the ALB (HTTPS protocol)
resource "aws_alb_listener" "alb_listener_https" {
count = "${var.use_ssl == "true" ? 1 : 0}"
load_balancer_arn = "${aws_lb.explorer.arn}"
count = "${var.use_ssl == "true" ? length(var.chains) : 0}"
load_balancer_arn = "${aws_lb.explorer.*.arn[count.index]}"
port = 443
protocol = "HTTPS"
ssl_policy = "${var.alb_ssl_policy}"
@ -82,6 +82,6 @@ resource "aws_alb_listener" "alb_listener_https" {
default_action {
type = "forward"
target_group_arn = "${aws_lb_target_group.explorer.arn}"
target_group_arn = "${aws_lb_target_group.explorer.*.arn[count.index]}"
}
}

View File

@ -0,0 +1,5 @@
resource "aws_key_pair" "blockscout" {
count = "${var.key_content == "" ? 0 : 1}"
key_name = "${var.key_name}"
public_key = "${var.key_content}"
}

View File

@ -7,7 +7,7 @@ resource "aws_subnet" "default" {
map_public_ip_on_launch = true
tags {
name = "${var.prefix}-default-subnet"
Name = "${var.prefix}-default-subnet"
prefix = "${var.prefix}"
origin = "terraform"
}
@ -22,7 +22,7 @@ resource "aws_subnet" "alb" {
map_public_ip_on_launch = true
tags {
name = "${var.prefix}-default-subnet"
Name = "${var.prefix}-default-subnet"
prefix = "${var.prefix}"
origin = "terraform"
}
@ -37,7 +37,7 @@ resource "aws_subnet" "database" {
map_public_ip_on_launch = false
tags {
name = "${var.prefix}-database-subnet${count.index}"
Name = "${var.prefix}-database-subnet${count.index}"
prefix = "${var.prefix}"
origin = "terraform"
}

View File

@ -9,6 +9,10 @@ variable "instance_type" {}
variable "root_block_size" {}
variable "pool_size" {}
variable "key_content" {
default = ""
}
variable "chain_jsonrpc_variant" {
default = {}
}
@ -43,17 +47,57 @@ variable "chain_graphiql_transaction" {
default = {}
}
variable "db_id" {}
variable "db_name" {}
variable "db_username" {}
variable "db_password" {}
variable "db_storage" {}
variable "db_storage_type" {}
variable "db_instance_class" {}
variable "chain_db_id" {
default = {}
}
variable "chain_db_name" {
default = {}
}
variable "chain_db_username" {
default = {}
}
variable "chain_db_password" {
default = {}
}
variable "chain_db_storage" {
default = {}
}
variable "chain_db_storage_type" {
default = {}
}
variable "chain_db_instance_class" {
default = {}
}
variable "chain_db_version" {
default = {}
}
variable "new_relic_app_name" {}
variable "new_relic_license_key" {}
variable "secret_key_base" {}
variable "alb_ssl_policy" {}
variable "alb_certificate_arn" {}
variable "use_ssl" {}
variable "use_ssl" {}
variable "chain_block_transformer" {
default = {}
}
variable "chain_heart_beat_timeout" {
default = {}
}
variable "chain_heart_command" {
default = {}
}
variable "chain_blockscout_version" {
default = {}
}

View File

@ -14,6 +14,7 @@ resource "aws_vpc" "vpc" {
enable_dns_support = true
tags {
Name = "${var.prefix}"
prefix = "${var.prefix}"
origin = "terraform"
}

View File

@ -0,0 +1,79 @@
- name: Local or remote backend selector (remote)
template:
src: remote-backend-selector.tf.j2
dest: roles/main_infra/files/remote-backend-selector.tf
when:
- backend|bool == true
- name: Local or remote backend selector (local)
file:
state: absent
dest: roles/main_infra/files/remote-backend-selector.tf
when:
- backend | default ('false') | bool != true
- name: Generating variables file
template:
src: terraform.tfvars.j2
dest: roles/main_infra/files/terraform.tfvars
- name: Generating backend file
template:
src: backend.tfvars.j2
dest: roles/main_infra/files/backend.tfvars
when: backend|bool == true
- name: Check if .terraform folder exists
stat:
path: "roles/main_infra/files/.terraform/"
register: stat_result
- name: Remove .terraform folder
file:
path: roles/main_infra/files/.terraform/
state: absent
when: stat_result.stat.exists == True
# Workaround: the Ansible terraform module returns an unexpected error, so the Terraform CLI is invoked via shell instead.
- name: Terraform plan construct
shell: "echo yes | {{ terraform_location }} {{ item }}"
register: tf_plan
args:
chdir: "roles/main_infra/files"
with_items:
- "init{{ ' -backend-config=backend.tfvars' if backend|bool == true else '' }}"
- plan
- name: Show Terraform plan
debug:
var: tf_plan.results[1].stdout_lines
- name: User prompt
pause:
prompt: "Are you absolutely sure you want to execute the deployment plan shown above? [False]"
register: user_answer
- name: Terraform provisioning
shell: "echo yes | {{ terraform_location }} apply"
args:
chdir: "roles/main_infra/files"
when: user_answer.user_input|bool == True
ignore_errors: True
- name: Ensure Terraform resources are provisioned
shell: "echo yes | {{ terraform_location }} apply"
args:
chdir: "roles/main_infra/files"
when: user_answer.user_input|bool == True
- name: Terraform output info into variable
shell: "{{ terraform_location }} output"
register: output
args:
chdir: "roles/main_infra/files"
when: user_answer.user_input|bool == True
- name: Output info from Terraform
debug:
var: output.stdout_lines
when: user_answer.user_input|bool == True
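
As a usage sketch, this provisioning flow could be driven from a small playbook run against localhost; the playbook name below is an assumption for illustration, while the `main_infra` role name follows the paths used throughout this change:

```yaml
# Hypothetical deploy.yml; run with: ansible-playbook deploy.yml
- hosts: localhost
  connection: local
  gather_facts: true   # the templates read ansible_env.AWS_REGION from the local environment
  roles:
    - main_infra
```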

View File

@ -0,0 +1,4 @@
region = "{{ ansible_env.AWS_REGION }}"
bucket = "{{ prefix }}-{{ bucket }}"
dynamodb_table = "{{ prefix }}-{{ dynamodb_table }}"
key = "terraform.tfstate"

View File

@ -0,0 +1,4 @@
terraform {
backend "s3" {
}
}

View File

@ -0,0 +1,156 @@
region = "{{ ansible_env.AWS_REGION }}"
prefix = "{{ prefix }}"
key_name = "{{ ec2_ssh_key_name }}"
key_content = "{{ ec2_ssh_key_content }}"
vpc_cidr = "{{ vpc_cidr }}"
public_subnet_cidr = "{{ public_subnet_cidr }}"
db_subnet_cidr = "{{ db_subnet_cidr }}"
dns_zone_name = "{{ dns_zone_name }}"
instance_type = "{{ instance_type }}"
root_block_size = "{{ root_block_size }}"
pool_size = "{{ pool_size }}"
alb_ssl_policy = "{{ alb_ssl_policy }}"
alb_certificate_arn = "{{ alb_certificate_arn }}"
use_ssl = "{{ use_ssl }}"
new_relic_app_name = "{{ new_relic_app_name }}"
new_relic_license_key = "{{ new_relic_license_key }}"
secret_key_base = "{{ secret_key_base }}"
chains = {
{% for key, value in chains.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_trace_endpoint = {
{% for key, value in chain_trace_endpoint.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_ws_endpoint = {
{% for key, value in chain_ws_endpoint.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_jsonrpc_variant = {
{% for key, value in chain_jsonrpc_variant.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_logo = {
{% for key, value in chain_logo.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_coin = {
{% for key, value in chain_coin.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_network = {
{% for key, value in chain_network.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_subnetwork = {
{% for key, value in chain_subnetwork.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_network_path = {
{% for key, value in chain_network_path.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_network_icon = {
{% for key, value in chain_network_icon.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_graphiql_transaction = {
{% for key, value in chain_graphiql_transaction.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_block_transformer = {
{% for key, value in chain_block_transformer.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_heart_beat_timeout = {
{% for key, value in chain_heart_beat_timeout.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_heart_command = {
{% for key, value in chain_heart_command.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_blockscout_version = {
{% for key, value in chain_blockscout_version.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_id = {
{% for key, value in chain_db_id.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_name = {
{% for key, value in chain_db_name.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_username = {
{% for key, value in chain_db_username.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_password = {
{% for key, value in chain_db_password.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_instance_class = {
{% for key, value in chain_db_instance_class.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_storage = {
{% for key, value in chain_db_storage.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_storage_type = {
{% for key, value in chain_db_storage_type.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_version = {
{% for key, value in chain_db_version.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
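
Each of the loops above renders one `key = "value"` line per chain, so every `chain_*` dictionary in the group variables is expected to share the same set of chain names. A minimal sketch for a single `sokol` chain follows; the endpoint URLs, passwords, and sizes are illustrative assumptions only:

```yaml
# Hypothetical group_vars/all.yml fragment for a single chain; all values are examples only.
chains:
  sokol: "https://sokol.example.org"             # JSON-RPC endpoint (assumed format)
chain_trace_endpoint:
  sokol: "https://sokol-trace.example.org"
chain_ws_endpoint:
  sokol: "wss://sokol-ws.example.org/ws"
chain_jsonrpc_variant:
  sokol: "parity"
chain_db_name:
  sokol: "sokol"
chain_db_username:
  sokol: "sokol"
chain_db_password:
  sokol: "changeme12345"
chain_db_instance_class:
  sokol: "db.m4.large"
chain_db_storage:
  sokol: "100"
chain_db_version:
  sokol: "10.5"
```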

View File

@ -0,0 +1,2 @@
prefix: "sokol"
bucket: "bucket"

52
roles/s3/tasks/main.yml Normal file
View File

@ -0,0 +1,52 @@
- name: Create S3 bucket
aws_s3:
bucket: "{{ prefix }}-{{ bucket }}"
mode: create
permission: private
- name: Apply tags and versioning to created S3 bucket
s3_bucket:
name: "{{ prefix }}-{{ bucket }}"
versioning: yes
tags:
origin: terraform
prefix: "{{ prefix }}"
- name: Add lifecycle management policy to created S3 bucket
s3_lifecycle:
name: "{{ prefix }}-{{ bucket }}"
rule_id: "expire"
noncurrent_version_expiration_days: 90
status: enabled
state: present
- name: Check if config file exists
stat:
path: "{{ playbook_dir }}/group_vars/all.yml"
register: stat_result
when: upload_config_to_s3|bool == True
- name: Copy temporary file to be uploaded
command: "cp {{ playbook_dir }}/group_vars/all.yml {{ playbook_dir }}/group_vars/all.yml.temp"
when: upload_config_to_s3|bool == True
- name: Remove insecure variables
replace:
path: "{{ playbook_dir }}/group_vars/all.yml.temp"
regexp: 'aws_.*'
replace: '<There was an AWS-related variable that is not safe to keep in S3. Removed>'
when: upload_config_to_s3|bool == True
- name: Upload config to S3 bucket
aws_s3:
bucket: "{{ prefix }}-{{ bucket }}"
object: all.yml
src: "{{ playbook_dir }}/group_vars/all.yml.temp"
mode: put
when: stat_result.stat.exists == True and upload_config_to_s3|bool == True
- name: Remove temp file
file:
path: "{{ playbook_dir }}/group_vars/all.yml.temp"
state: absent
when: upload_config_to_s3|bool == True
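
A minimal sketch of the variables this role consumes, again assuming they live in `group_vars/all.yml` (the bucket name and flag value are illustrative):

```yaml
# Hypothetical group_vars/all.yml fragment; names and values are examples only.
prefix: "poa"
bucket: "terraform-state"     # the bucket is created as "<prefix>-<bucket>"
upload_config_to_s3: true     # when true, a sanitized copy of all.yml is uploaded to the bucket
```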

View File

@ -1,8 +0,0 @@
module "backend" {
source = "../modules/backend"
bootstrap = "${terraform.workspace == "base" ? 1 : 0}"
bucket = "${var.bucket}"
dynamodb_table = "${var.dynamodb_table}"
prefix = "${var.prefix}"
}

View File

@ -1 +0,0 @@
../common/provider.tf

View File

@ -1 +0,0 @@
../common/variables.tf

View File

@ -1,12 +0,0 @@
region = "us-east-1"
bucket = "poa-terraform-state"
dynamodb_table = "poa-terraform-lock"
key_name = "sokol-test"
prefix = "sokol"
db_password = "qwerty12345"
db_instance_class = "db.m4.xlarge"
db_storage = "120"
alb_ssl_policy = "ELBSecurityPolicy-2016-08"
alb_certificate_arn = "arn:aws:acm:us-east-1:290379793816:certificate/6d1bab74-fb46-4244-aab2-832bf519ab24"
root_block_size = 120
pool_size = 30