Merge pull request #126 from ArseniiPetrovich/multi-deploy

New feature: Allow parallel deployment of multiple hosts
Victor Baranov 2019-08-17 09:46:36 +03:00 committed by GitHub
commit 1f554cff67
49 changed files with 1079 additions and 815 deletions

.gitignore

@ -2,14 +2,11 @@ log.txt
# Terraform State
*.terraform*
*.tfstate
*terraform.tfstate.d*
*tfplan*
roles/main_infra/files/backend.tfvars
roles/main_infra/files/remote-backend-selector.tf
roles/main_infra/files/terraform.tfvars
roles/main_infra/files/hosts.tf
roles/main_infra/files/routing.tf
roles/main_infra/files/provider.tf
*.backup
# Sensitive information
@ -20,6 +17,10 @@ roles/main_infra/files/provider.tf
/PREFIX
group_vars/*
host_vars/*
!host_vars/all.yml.example
!host_vars/blockscout.yml.example
!host_vars/infrastructure.yml.example
!group_vars/all.yml.example
!group_vars/blockscout.yml.example
!group_vars/infrastructure.yml.example
@ -29,6 +30,9 @@ group_vars/*
.*.swp
blockscout-*/
roles/main_infra/files-*
hosts
# osx
.DS_Store

README.md

@ -14,11 +14,12 @@ Also you may want to refer to the `lambda` folder which contains a set of script
The playbooks rely on Terraform under the hood, which is a stateful infrastructure-as-code tool. It allows you to keep a hand on your infrastructure: modify and recreate single or multiple resources depending on your needs.
This version of the playbooks supports multi-host deployment, which means that test BlockScout instances can be built on remote machines. In that case, you will need to have Ansible installed on the jumpbox (controller) and all the prerequisites described below installed on the runners.
## Prerequisites for deploying infrastructure
| Dependency name | Installation method |
| -------------------------------------- | ------------------------------------------------------------ |
| Ansible >= 2.6 | [Installation guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) |
| Terraform >=0.11.11 | [Installation guide](https://learn.hashicorp.com/terraform/getting-started/install.html) |
| Python >=2.6.0 | `apt install python` |
| Python-pip | `apt install python-pip` |
@ -28,7 +29,6 @@ Playbooks relies on Terraform under the hood, which is the stateful infrastructu
| Dependency name | Installation method |
| -------------------------------------- | ------------------------------------------------------------ |
| Ansible >= 2.7.3 | [Installation guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) |
| Terraform >=0.11.11 | [Installation guide](https://learn.hashicorp.com/terraform/getting-started/install.html) |
| Python >=2.6.0 | `apt install python` |
| Python-pip | `apt install python-pip` |
@ -63,24 +63,37 @@ Each configured chain will receive its own ASG (autoscaling group) and deploymen
The deployment process goes in two stages. First, Ansible creates an S3 bucket and a DynamoDB table that are required for Terraform state management. This ensures that the Terraform state is stored in a centralized location, so that multiple people can use Terraform on the same infra without stepping on each other's toes. Terraform prevents this from happening by holding locks (via DynamoDB) against the state data (stored in S3).
# Configuration
There are three groups of variables required to build BlockScout. The first is required to create infrastructure, the second is required to build BlockScout instances, and the third is required both for the infra and for BlockScout itself.
For your convenience, we have divided the variable templates into three files accordingly - `infrastructure.yml.example`, `blockscout.yml.example` and `all.yml.example`. We have also split those files between the `group_vars` and `host_vars` folders, so you will not have to repeat some of the variables for each host/group.
The single point of configuration in this script is a `group_vars/all.yml` file. First, copy it from the `group_vars/all.yml.example` template by executing `cp group_vars/all.yml.example group_vars/all.yml` and then modify it with any text editor you want (vim example - `vim group_vars/all.yml`). The subsections describe the variables you may want to adjust.
In order to deploy BlockScout, you will have to set up the following set of files for each instance:
```
/
| - group_vars
| | - group.yml (combination of [blockscout+infrastructure+all].yml.example)
| | - all.yml (optional, one for all instances)
| - host_vars
| | - host.yml (combination of [blockscout+infrastructure+all].yml.example)
| - hosts (one for all instances)
```
## Common variables
- `aws_access_key` and `aws_secret_key` are a credentials pair that provides access to AWS for the deployer;
- `ansible_host` - the address of the machine where BlockScout will be built. If this variable is set to localhost, also set `ansible_connection` to `local` for better performance.
- The `chain` variable sets the name of the network (Kovan, Core, xDAI, etc.). It will be used as part of the infrastructure resource names.
- `env_vars` represents a set of environment variables used by BlockScout. You can see a description of these variables at the [POA Forum](https://forum.poa.network/t/faq-blockscout-environment-variables/1814).
- One can also define a `BUILD_*` set of variables, where the asterisk stands for any environment variable name. All variables defined with the `BUILD_` prefix will override the default variables while building the dev server (see the sketch after this list).
- `aws_access_key` and `aws_secret_key` are a credentials pair that provides access to AWS for the deployer. You can use `aws_profile` instead; in that case, the AWS CLI profile will be used. If neither an access key nor a profile is provided, the `default` AWS profile will be used. The `aws_region` should be left at `us-east-1` as some of the other regions fail for different reasons;
- The `backend` variable defines whether the deployer should keep state files remotely or locally. Set `backend` to `true` if you want to save the state file to a remote S3 bucket;
- `upload_config_to_s3` - set to `true` if you want to upload the config `all.yml` file to the S3 bucket automatically after the deployment. Will not work if `backend` is set to `false`;
- `upload_debug_info_to_s3` - set to `true` if you want to upload the full log output to the S3 bucket automatically after the deployment. Will not work if `backend` is set to `false`. *IMPORTANT*: locally, logs are stored in `log.txt`, which is not cleaned automatically. Please do not forget to clean it manually or via the `clean.yml` playbook;
- `bucket` represents a globally unique name of the bucket where your configs and state will be stored. It will be created automatically during the deployment;
- `prefix` - a unique tag to use for provisioned resources (5 alphanumeric chars or less);
- `chains` - maps chains to the URLs of HTTP RPC endpoints; an ordinary blockchain node can be used;
- The `region` should be left at `us-east-1` as some of the other regions fail for different reasons;
*Note*: a chain name shouldn't be longer than 5 characters. Otherwise it causes an error, because the AWS load balancer name must not exceed 32 characters.
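As an illustration, a minimal vars file combining the common variables above might look like the following sketch. All values are placeholders, and `host_vars/host.yml` is just an example file name:
```yaml
# Hypothetical host_vars/host.yml fragment; every value below is a placeholder.
aws_access_key: "AKIA..."       # or use aws_profile instead
aws_secret_key: "..."
ansible_host: localhost
ansible_connection: local       # only when ansible_host is localhost
chain: core                     # 5 characters or less
backend: true                   # keep Terraform state in a remote S3 bucket
upload_config_to_s3: true       # requires backend: true
bucket: "my-terraform-state"    # globally unique bucket name
env_vars:
  ETHEREUM_JSONRPC_HTTP_URL: "http://localhost:8545"
  BUILD_PORT: 4000              # BUILD_* overrides PORT only while building the dev server
```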
## Infrastructure related variables
- `terraform_location` is the path to the Terraform binary on the builder;
- `dynamodb_table` represents the name of the table that will be used for Terraform state lock management;
- If the `ec2_ssh_key_content` variable is not empty, Terraform will try to create an EC2 SSH key with the `ec2_ssh_key_name` name. Otherwise, the existing key with the `ec2_ssh_key_name` name will be used;
- `instance_type` defines the size of the BlockScout instance that will be launched during the deployment process;
@ -90,41 +103,19 @@ The single point of configuration in this script is a `group_vars/all.yml` file.
`db_subnet_cidr`: "10.0.1.0/16"
Real networks: 10.0.1.0/24 and 10.0.2.0/24
- An internal DNS zone with the `dns_zone_name` name will be created to take care of BlockScout internal communications;
- The name of an IAM key pair to use for EC2 instances; if you provide a name which already exists it will be used, otherwise it will be generated for you;
- The `use_ssl` variable controls whether SSL is forced on BlockScout. To configure SSL, use the `alb_ssl_policy` and `alb_certificate_arn` variables;
- The `root_block_size` is the amount of storage on your EC2 instance. This value can be adjusted based on how frequently logs are rotated. Logs are located in `/opt/app/logs` on your EC2 instance;
- The `pool_size` defines the number of connections allowed by the RDS instance;
- `secret_key_base` is a random password used by BlockScout internally. It is highly recommended to generate your own `secret_key_base` before the deployment. For instance, you can do it with the `openssl rand -base64 64 | tr -d '\n'` command;
- `new_relic_app_name` and `new_relic_license_key` should usually stay empty unless you want and know how to configure New Relic integration;
- `elixir_version` - the Elixir version used in the BlockScout release;
- `chain_trace_endpoint` - maps chains to the URLs of HTTP RPC endpoints which represent nodes where state pruning is disabled (archive nodes) and tracing is enabled. If you don't have a trace endpoint, you can simply copy the values from the `chains` variable;
- `chain_ws_endpoint` - maps chains to the URLs of HTTP RPCs that support websockets. This is required to get real-time updates. Can be the same as `chains` if websockets are enabled there (but make sure to use the `ws(s)` instead of the `http(s)` protocol). See the sketch after this list;
- `chain_jsonrpc_variant` - the client used to connect to the network. Can be `parity`, `geth`, etc;
- `chain_logo` - maps chains to their logos. Place your own logo at `apps/block_scout_web/assets/static` and specify the relative path in the `chain_logo` variable;
- `chain_coin` - the name of the coin used in each particular chain;
- `chain_network` - usually the name of the organization keeping a group of networks, but it can represent the name of any logical network grouping you want;
- `chain_subnetwork` - the name of the network to be shown in BlockScout;
- `chain_network_path` - a relative URL path which will be used as an endpoint for the defined chain. For example, if we host our BlockScout at the `blockscout.com` domain and place the `core` network at `/poa/core`, the resulting endpoint for this network will be `blockscout.com/poa/core`.
- `chain_network_icon` - maps the chain name to the network navigation icon at `apps/block_scout_web/lib/block_scout_web/templates/icons` (without the `.eex` extension);
- `chain_graphiql_transaction` - maps a chain to a random transaction hash on that chain. This hash will be used to provide a sample query in the GraphiQL Playground;
- `chain_block_transformer` - will be `clique` for Clique networks like Rinkeby and Goerli, and `base` for the rest;
- `chain_heart_beat_timeout`, `chain_heart_command` - configs for the integrated heartbeat. The first describes the timeout after which the command described in the second variable will be executed;
- Each of the `chain_db_*` variables configures the database for each chain. Each chain will have a separate RDS instance.
- `chain_blockscout_version` - the text shown at the footer of the BlockScout instance. Usually represents the current BlockScout version.
- Each of the `db_*` variables configures the database for each chain. Each chain will have a separate RDS instance;
- `instance_type` represents the size of the EC2 instance to be deployed in production;
- `use_placement_group` determines whether or not to launch BlockScout in a placement group.
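For instance, the per-chain maps above might be filled in like the following sketch (endpoint URLs and chain names are placeholders):
```yaml
# Hypothetical fragment of the per-chain maps described above.
chain_trace_endpoint:
  core: "http://localhost:8545"   # archive node with tracing enabled
  sokol: "http://localhost:8545"
chain_ws_endpoint:
  core: "ws://localhost:8546"     # note ws(s) rather than http(s)
  sokol: "ws://localhost:8546"
chain_jsonrpc_variant:
  core: "parity"
  sokol: "parity"
```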
## Blockscout related variables
- `blockscout_repo` - a direct link to the BlockScout repo;
- `chain_branch` - maps a branch at `blockscout_repo` to each chain;
- Specify the `chain_merge_commit` variable if you want to merge any of the specified `chains` with a commit from another branch. Usually it may be used to update production branches with releases from the master branch;
- `branch` - maps a branch at `blockscout_repo` to each chain;
- Specify the `merge_commit` variable if you want to merge any of the specified `chains` with a commit from another branch. Usually it may be used to update production branches with releases from the master branch;
- `skip_fetch` - if this variable is set to `true`, the BlockScout repo will not be cloned and the process will start from building the dependencies. Use this variable to prevent the playbooks from overriding manual changes in the cloned repo;
- `ps_*` variables represent connection details to the test Postgres database. It will not be installed automatically, so make sure the `ps_*` credentials are valid before starting the deployment;
- `chain_custom_environment` - a map of variables that should be overridden when deploying a new version of BlockScout. Can be omitted (see the sketch after this list).
*Note*: `chain_custom_environment` variables will not be propagated to the Parameter Store at production servers and need to be set there manually.
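A minimal sketch of such a map, following the shape used in `group_vars/blockscout.yml.example` (chain names and values are illustrative only):
```yaml
# Hypothetical per-chain overrides applied on deployment.
chain_custom_environment:
  core:
    NETWORK: "(POA)"
    SUBNETWORK: "Core Network"
    PORT: 4000
  sokol:
    SUBNETWORK: "Sokol Testnet"
    PORT: 4000
```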
## Database Storage Required
@ -142,37 +133,42 @@ The configuration variable `db_storage` can be used to define the amount of stor
# Deploying the Infrastructure
1. Ensure all the [infrastructure prerequisites](#Prerequisites-for-deploying-infrastructure) are installed and have the right version numbers;
2. Create the AWS access key and secret access key for user with [sufficient permissions](#AWS);
3. Create the `hosts` file from `hosts.example` (`mv hosts.example hosts`) and adjust it to your needs. Each host should represent one BlockScout instance you want to deploy. Note that each host name should belong to exactly one group. Also, as per Ansible requirements, host and group names should be unique.
3. Merge the `infrastructure` and `all` config template files into a single config file:
```bash
cat group_vars/infrastructure.yml.example group_vars/all.yml.example > group_vars/all.yml
```
The simplest `hosts` file with one BlockScout instance will look like:
```ini
[group]
host
```
4. Set the variables in the `group_vars/all.yml` config template file as described in the [corresponding part of the instruction](#Configuration);
Where `[group]` is a group name, which will be interpreted as the `prefix` for all created resources, and `host` is the name of the BlockScout instance.
5. Run `ansible-playbook deploy_infra.yml`;
4. For each host, merge the `infrastructure.yml.example` and `all.yml.example` config templates in the `host_vars` folder into a single config file with the same name as in the `hosts` file:
- During the deployment the ["diffs didn't match"](#error-applying-plan-diffs-didnt-match) error may occur; it will be ignored automatically. If the Ansible play recap shows 0 failed plays, then the deployment was successful despite the error.
```bash
cat host_vars/infrastructure.yml.example host_vars/all.yml.example > host_vars/host.yml
```
- Optionally, you may want to check the variables that were uploaded to the [Parameter Store](https://console.aws.amazon.com/systems-manager/parameters) in the AWS Console.
5. For each group, merge the `infrastructure.yml.example` and `all.yml.example` config templates in the `group_vars` folder into a single config file with the same name as the group name in the `hosts` file:
```bash
cat group_vars/infrastructure.yml.example group_vars/all.yml.example > group_vars/group.yml
```
6. Adjust the variables in `group_vars` and `host_vars`. Note: you can move variables between host and group vars depending on whether a variable should be applied to a single host or to the entire group. You can find the list of variables in the [corresponding part of the instruction](#Configuration);
Also, if you need to **distribute variables across all the hosts/groups**, you can add these variables to the `group_vars/all.yml` file (see the sketch after this list). See the note about variable precedence in the [Official Ansible Docs](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable).
7. Run `ansible-playbook deploy_infra.yml`;
- During the deployment the ["diffs didn't match"](#error-applying-plan-diffs-didnt-match) error may occur; it will be ignored automatically. If the Ansible play recap shows 0 failed plays, then the deployment was successful despite the error.
- Optionally, you may want to check the variables that were uploaded to the [Parameter Store](https://console.aws.amazon.com/systems-manager/parameters) in the AWS Console.
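For example, credentials shared by every host and group could live in `group_vars/all.yml` (a sketch; all values are placeholders):
```yaml
# Hypothetical group_vars/all.yml - applies to every host in the inventory.
aws_access_key: "AKIA..."
aws_secret_key: "..."
backend: true
bucket: "my-terraform-state"
```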
# Deploying BlockScout
1. Ensure all the [BlockScout prerequisites](#Prerequisites-for-deploying-blockscout) are installed and have the right version numbers;
2. Merge the `blockscout` and `all` config template files into a single config file:
```bash
cat group_vars/blockscout.yml.example group_vars/all.yml.example > group_vars/all.yml
```
**Note!** All three configuration files are compatible with each other, so you can simply `cat group_vars/blockscout.yml.example >> group_vars/all.yml` if you already have the `all.yml` file after deploying the infrastructure.
3. Set the variables in the `group_vars/all.yml` config template file as described in the [corresponding part of the instruction](#Configuration);
**Note!** Use `chain_custom_environment` to update the variables on each deployment. Map each deployed chain to variables as they should appear in the Parameter Store. Check the example in the `group_vars/blockscout.yml.example` config file. `chain_*` variables will be ignored during BlockScout software deployment.
0. (optional) This step is for macOS users. Please skip it if this is not your case.
To avoid the error
@ -188,11 +184,45 @@ error and crashing of Python follow the next steps:
(source: https://stackoverflow.com/questions/50168647/multiprocessing-causes-python-to-crash-and-gives-an-error-may-have-been-in-progr);
5. Run `ansible-playbook deploy_software.yml`;
6. When the prompt appears, check that the server is running and there are no visual artifacts. The server will be launched at port 4000 on the same machine where you run the Ansible playbooks. If you face any errors you can either fix them or cancel the deployment by pressing **Ctrl+C** and then pressing **A** when additionally prompted.
7. When the server is ready to be deployed, simply press Enter and the deployer will upload BlockScout to the appropriate S3 bucket.
8. Two other prompts will appear to confirm your will on updating the Parameter Store variables and deploying BlockScout through CodeDeploy. Both **yes** and **true** will be interpreted as confirmation.
9. Monitor and manage your deployment at the [CodeDeploy](https://console.aws.amazon.com/codesuite/codedeploy/applications) service page in the AWS Console.
1. Ensure all the [BlockScout prerequisites](#Prerequisites-for-deploying-blockscout) are installed and have the right version numbers;
2. Create the AWS access key and secret access key for a user with [sufficient permissions](#AWS);
3. Create the `hosts` file from `hosts.example` (`mv hosts.example hosts`) and adjust it to your needs. Each host should represent one BlockScout instance you want to deploy. Note that each host name should belong to exactly one group. Also, as per Ansible requirements, host and group names should be unique.
The simplest `hosts` file with one BlockScout instance will look like:
```ini
[group]
host
```
Where `[group]` is a group name, which will be interpreted as the `prefix` for all created resources, and `host` is the name of the BlockScout instance.
4. For each host, merge the `blockscout.yml.example` and `all.yml.example` config templates in the `host_vars` folder into a single config file with the same name as in the `hosts` file:
```bash
cat host_vars/blockscout.yml.example host_vars/all.yml.example > host_vars/host.yml
```
If you have already merged `infrastructure.yml.example` and `all.yml.example` while deploying the BlockScout infrastructure, you can simply append `blockscout.yml.example` to the merged file: `cat host_vars/blockscout.yml.example >> host_vars/host.yml`
5. For each group, merge the `blockscout.yml.example` and `all.yml.example` config templates in the `group_vars` folder into a single config file with the same name as the group name in the `hosts` file:
```bash
cat group_vars/blockscout.yml.example group_vars/all.yml.example > group_vars/group.yml
```
If you have already merged `infrastructure.yml.example` and `all.yml.example` while deploying the BlockScout infrastructure, you can simply append `blockscout.yml.example` to the merged file: `cat group_vars/blockscout.yml.example >> group_vars/group.yml`
6. Adjust the variables in `group_vars` and `host_vars`. Note: you can move variables between host and group vars depending on whether a variable should be applied to a single host or to the entire group. You can find the list of variables in the [corresponding part of the instruction](#Configuration);
Also, if you need to **distribute variables across all the hosts/groups**, you can add these variables to the `group_vars/all.yml` file. See the note about variable precedence in the [Official Ansible Docs](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable).
7. Run `ansible-playbook deploy_software.yml`;
8. When the prompt appears, check that the server is running and there are no visual artifacts. The server will be launched at port 4000 on the same machine where you run the Ansible playbooks. If you face any errors you can either fix them or cancel the deployment by pressing **Ctrl+C** and then pressing **A** when additionally prompted.
9. When the server is ready to be deployed, simply press Enter and the deployer will upload BlockScout to the appropriate S3 bucket.
10. Two other prompts will appear to confirm your will on updating the Parameter Store variables and deploying BlockScout through CodeDeploy. Both **yes** and **true** will be interpreted as confirmation.
11. (optional) If the deployment fails, you can use the following tags to repeat particular steps of the deployment (e.g. `ansible-playbook deploy_software.yml --tags build`):
- build
- update_vars
- deploy
12. Monitor and manage your deployment at the [CodeDeploy](https://console.aws.amazon.com/codesuite/codedeploy/applications) service page in the AWS Console.
# Destroying Provisioned Infrastructure
@ -227,7 +257,7 @@ Example:
`prefix` variable: tf
`chain_db_id` variable: poa
`db_id` variable: poa
**Note 3**: make sure MultiAZ is disabled on your database.


@ -3,5 +3,7 @@ force_handlers = True
pipelining = True
inventory = hosts
deprecation_warnings = False
host_key_checking=false
log_path=log.txt
host_key_checking = false
log_path = log.txt
hash_behaviour = merge
display_skipped_hosts = false
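`hash_behaviour = merge` makes Ansible deep-merge dictionary variables from different scopes instead of replacing them, so a host can extend its group's `env_vars` rather than wipe them out. A sketch of the effect (file and variable names follow the examples in this repo):
```yaml
# group_vars/group.yml
env_vars:
  NETWORK: "(POA)"
  PORT: 4000

# host_vars/host.yml
env_vars:
  SUBNETWORK: "Core Network"

# With hash_behaviour = merge, the host effectively gets:
#   env_vars: { NETWORK: "(POA)", PORT: 4000, SUBNETWORK: "Core Network" }
# With the default "replace" behaviour, only SUBNETWORK would survive.
```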


@ -9,7 +9,7 @@
with_items:
- s3
- dynamodb
when: backend|bool == true
when: backend | bool
- include_role:
name: attach_existing_rds
always:


@ -1,14 +1,16 @@
- name: Clean TF cache
hosts: localhost
hosts: localhost,all
tasks:
- name: Clean TF cache
file:
state: absent
path: "{{ item }}"
with_items:
- roles/main_infra/files/.terraform
- roles/main_infra/files/terraform.tfstate.d
- roles/main_infra/files/main.tfvars
- roles/main_infra/files/backend.tfvars
- roles/main_infra/files/terraform.tfplan
- log.txt
with_fileglob:
- "roles/main_infra/files/.terraform"
- "roles/main_infra/files/terraform.tfstate.d"
- "roles/main_infra/files/main.tfvars"
- "roles/main_infra/files/backend.tfvars"
- "roles/main_infra/files/terraform.tfplan"
- "log.txt"
- "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}"
- "/tmp/files-{{ group_names[0] }}"


@ -1,5 +1,5 @@
- name: Prepare infrastructure
hosts: localhost
hosts: all
tasks:
- block:
- include_role:
@ -9,9 +9,10 @@
with_items:
- s3
- dynamodb
when: backend|bool == true
when: backend | bool
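# The main_infra role runs only on the first host of each group (see the condition below), so shared infrastructure is created once per group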
- include_role:
name: main_infra
when: inventory_hostname == groups[group_names[0]][0]
always:
- include_role:
name: s3_config


@ -1,21 +1,21 @@
- name: Deploy BlockScout
hosts: localhost
hosts: all
tasks:
- block:
- name: Use role in loop
- name: Deploy
include_role:
name: main_software
loop: "{{ chain_custom_environment.keys() }}"
loop_control:
loop_var: chain
index_var: index
tags:
- update_vars
- build
- deploy
always:
- include_role:
name: s3
when: backend|bool == true and (upload_debug_info_to_s3|bool == true or upload_config_to_s3|bool ==true)
when: backend|bool and (upload_debug_info_to_s3|bool or upload_config_to_s3|bool)
- include_role:
name: s3_config
when: backend|bool == true and upload_config_to_s3|bool == true
when: backend|bool and upload_config_to_s3|bool
- include_role:
name: s3_debug
when: backend|bool == true and upload_debug_info_to_s3|bool == true
when: backend|bool and upload_debug_info_to_s3|bool


@ -1,7 +1,7 @@
- name: Destroy infrastructure
hosts: localhost
hosts: all
roles:
- { role: destroy, when: "confirmation|bool == True" }
- { role: destroy, when: "confirmation|bool == True and inventory_hostname == groups[group_names[0]][0]" }
vars_prompt:
- name: "confirmation"
prompt: "Are you sure you want to destroy all the infra?"


@ -19,50 +19,3 @@ upload_debug_info_to_s3: true
## Name of the bucket where TF state files will be stored
bucket: "poa-terraform-state"
## All resources will be prefixed with this one
prefix: "poa"
## This dictionary represents a set of environment variables required for each chain. Variables that are commented out are optional.
chain_custom_environment:
core:
NETWORK: "(POA)" # Name of the organization/community that hosts the chain
SUBNETWORK: "Core Network" # Actual name of the particular network
NETWORK_ICON: "_network_icon.html" # Either _test_network_icon.html or _network_icon.html, depending on the type of the network (prod/test).
LOGO: "/images/blockscout_logo.svg" # Chain logo
ETHEREUM_JSONRPC_VARIANT: "parity" # Chain client installed at ETHEREUM_JSONRPC_HTTP_URL
ETHEREUM_JSONRPC_HTTP_URL: "http://localhost:8545" # Network RPC endpoint
ETHEREUM_JSONRPC_TRACE_URL: "http://localhost:8545" # Network RPC endpoint in trace mode. Can be the same as the previous variable
ETHEREUM_JSONRPC_WS_URL: "ws://localhost:8546" # Network RPC endpoint in websocket mode
NETWORK_PATH: "/poa/core" # relative URL path, for example: blockscout.com/$NETWORK_PATH
SECRET_KEY_BASE: "TPGMvGK0iIwlXBQuQDA5KRqk77VETbEBlG4gAWeb93TvBsYAjvoAvdODMd6ZeguPwf2YTRY3n7uvxXzQP4WayQ==" # Secret key for production assets protection. Use `mix phx.gen.secret` or `openssl rand -base64 64 | tr -d '\n'` to generate
PORT: 4000 # Port the application runs on
COIN: "POA" # Coin name at the Coinmarketcap, used to display current exchange rate
POOL_SIZE: 20 # Defines the number of database connections allowed
ECTO_USE_SSL: "false" # Specifies whether or not to use SSL on Ecto queries
ALB_SSL_POLICY: "ELBSecurityPolicy-2016-08" #SSL policy for Load Balancer. Required if ECTO_USE_SSL is set to true
ALB_CERTIFICATE_ARN: "arn:aws:acm:us-east-1:290379793816:certificate/6d1bab74-fb46-4244-aab2-832bf519ab24" #ARN of the certificate to attach to the LB. Required if ECTO_USE_SSL is set to true
HEART_BEAT_TIMEOUT: 30 # Heartbeat is an Erlang monitoring service that will restart BlockScout if it becomes unresponsive. This variable configures the timeout before BlockScout will be restarted.
HEART_COMMAND: "sudo systemctl restart explorer.service" # This variable represents a command that is used to restart the service
BLOCKSCOUT_VERSION: "v1.3.11-beta" # Added to the footer to signify the current BlockScout version
RELEASE_LINK: "https://github.com/poanetwork/blockscout/releases/tag/v1.3.9-beta" # The link to Blockscout release notes in the footer.
ELIXIR_VERSION: "v1.8.1" # Elixir version to install on the node before Blockscout deploy
BLOCK_TRANSFORMER: "base" # Transformer for blocks: base or clique.
GRAPHIQL_TRANSACTION: "0xbc426b4792c48d8ca31ec9786e403866e14e7f3e4d39c7f2852e518fae529ab4" # Random tx hash on the network, used as default for graphiql tx.
TXS_COUNT_CACHE_PERIOD: 7200 # Interval in seconds to restart the task, which calculates the total txs count.
ADDRESS_WITH_BALANCES_UPDATE_INTERVAL: 1800 #Interval in seconds to restart the task, which calculates addresses with balances
LINK_TO_OTHER_EXPLORERS: "false" # If true, links to other explorers are added in the footer
USE_PLACEMENT_GROUP: "false" # If true, BlockScout instance will be created in the placement group
#The following variables are optional
#FIRST_BLOCK: 0 # The block number, where indexing begins from.
#COINMARKETCAP_PAGES: 10 # Sets the number of pages at Coinmarketcap to search coin at. Defaults to 10
#METADATA_CONTRACT: # Address of metadata smart contract. Used by POA Network to obtain Validators information to display in the UI
#VALIDATORS_CONTRACT: # Address of the Emission Fund smart contract
#SUPPLY_MODULE: "false" # Used by the xDai Chain to calculate the total supply of the chain
#SOURCE_MODULE: "false" # Used to calculate the total supply
#DATABASE_URL: # Database URL. Usually generated automatically, but this variable can be used to modify the URL of the databases during the updates.
#CHECK_ORIGIN: "false" # Used to check the origin of requests when the origin header is present
#DATADOG_HOST: # Host configuration variable for Datadog integration
#DATADOG_PORT: # Port configuration variable for Datadog integration
#SPANDEX_BATCH_SIZE: # Spandex and Datadog configuration setting.
#SPANDEX_SYNC_THRESHOLD: # Spandex and Datadog configuration setting.
#BLOCK_COUNT_CACHE_TTL: #Time to live of block count cache in milliseconds


@ -3,21 +3,8 @@
## An address of BlockScout repo to download
blockscout_repo: https://github.com/poanetwork/blockscout
## A branch at `blockscout_repo` with ready-to-deploy version of BlockScout
chain_branch:
core: "production-core"
sokol: "production-sokol"
## Usually you don't want to merge branches, so it is commented out by default
#chain_merge_commit:
# core: "2cdead1"
# sokol: "2cdead1"
## If you want, you can download and configure the repo on your own. It should have the following name - blockscout-{{ chain_name }} - and exist inside the root playbook folder. Use the following variable to prevent playbooks from overriding
skip_fetch: false
## Login data for the test database. Please, use postgres database with the version specified at BlockScout repo prerequisites
# Please, specify the credentials for the test Postgres installation
ps_host: localhost
ps_user: myuser
ps_password: mypass
ps_db: mydb


@ -1,4 +1,4 @@
# Infrastructure related variables
# Infrastructure related group variables
## Exact path to the TF binary on your local machine
terraform_location: "/usr/local/bin/terraform"
@ -10,9 +10,6 @@ dynamodb_table: "poa-terraform-lock"
ec2_ssh_key_name: "sokol-test"
ec2_ssh_key_content: ""
## EC2 Instance will have the following size:
instance_type: "m5.large"
## VPC containing Blockscout resources will be created as following:
vpc_cidr: "10.0.0.0/16"
public_subnet_cidr: "10.0.0.0/24"
@ -27,47 +24,3 @@ dns_zone_name: "poa.internal"
## Size of the EC2 instance EBS root volume
root_block_size: 120
# DB related variables
## This value represents the name of the DB that will be created/attached. Must be unique. Will be prefixed with `prefix` variable.
chain_db_id:
core: "core"
sokol: "sokol"
## Each network should have its own DB. This variable maps chain to DB name. Should not be confused with the db_id variable, which represents the RDS instance ID.
chain_db_name:
core: "core"
sokol: "sokol"
## The following variables describes the DB configurations for each network including usernames, password, instance class, etc.
chain_db_username:
core: "core"
sokol: "sokol"
chain_db_password:
core: "fkowfjpoi309021"
sokol: "kopsdOPpa9213K"
chain_db_instance_class:
core: "db.m4.xlarge"
sokol: "db.m4.large"
## Size of storage in GiB.
chain_db_storage:
core: "200"
sokol: "100"
## Type of disk to be used for the DB.
chain_db_storage_type:
core: "io1"
sokol: "gp2"
## This should be set only if chain_db_storage is set to io1
#chain_db_iops:
# core: "1000"
# sokol: "1500"
## Blockscout uses Postgres as the DB engine. This variable describes the Postgres version used in each particular chain.
chain_db_version:
core: "10.5"
sokol: "10.6"

host_vars/all.yml.example

@ -0,0 +1,50 @@
ansible_host: localhost # The address of the machine where the BlockScout staging will be built
ansible_connection: local # Comment out if your ansible_host is not localhost
chain: poa # Does not have to be unique. Represents the chain name.
env_vars:
#NETWORK: "(POA)" # Name of the organization/community that hosts the chain
#SUBNETWORK: "Core Network" # Actual name of the particular network
#NETWORK_ICON: "_network_icon.html" # Either _test_network_icon.html or _network_icon.html, depending on the type of the network (prod/test).
#LOGO: "/images/blockscout_logo.svg" # Chain logo
#ETHEREUM_JSONRPC_VARIANT: "parity" # Chain client installed at ETHEREUM_JSONRPC_HTTP_URL
#ETHEREUM_JSONRPC_HTTP_URL: "http://localhost:8545" # Network RPC endpoint
#ETHEREUM_JSONRPC_TRACE_URL: "http://localhost:8545" # Network RPC endpoint in trace mode. Can be the same as the previous variable
#ETHEREUM_JSONRPC_WS_URL: "ws://localhost:8546" # Network RPC endpoint in websocket mode
#NETWORK_PATH: "/poa/core" # relative URL path, for example: blockscout.com/$NETWORK_PATH
#SECRET_KEY_BASE: "TPGMvGK0iIwlXBQuQDA5KRqk77VETbEBlG4gAWeb93TvBsYAjvoAvdODMd6ZeguPwf2YTRY3n7uvxXzQP4WayQ==" # Secret key for production assets protection. Use `mix phx.gen.secret` or `openssl rand -base64 64 | tr -d '\n'` to generate
#PORT: 4000 # Port the application runs on
#COIN: "POA" # Coin name at the Coinmarketcap, used to display current exchange rate
#POOL_SIZE: 20 # Defines the number of database connections allowed
#ECTO_USE_SSL: "false" # Specifies whether or not to use SSL on Ecto queries
#ALB_SSL_POLICY: "ELBSecurityPolicy-2016-08" #SSL policy for Load Balancer. Required if ECTO_USE_SSL is set to true
#ALB_CERTIFICATE_ARN: "arn:aws:acm:us-east-1:290379793816:certificate/6d1bab74-fb46-4244-aab2-832bf519ab24" #ARN of the certificate to attach to the LB. Required if ECTO_USE_SSL is set to true
#HEART_BEAT_TIMEOUT: 30 # Heartbeat is an Erlang monitoring service that will restart BlockScout if it becomes unresponsive. This variable configures the timeout before BlockScout will be restarted.
#HEART_COMMAND: "sudo systemctl restart explorer.service" # This variable represents a command that is used to restart the service
#BLOCKSCOUT_VERSION: "v2.0.0-beta" # Added to the footer to signify the current BlockScout version
#ELIXIR_VERSION: "v1.8.1" # Elixir version to install on the node before Blockscout deploy
#BLOCK_TRANSFORMER: "base" # Transformer for blocks: base or clique.
#GRAPHIQL_TRANSACTION: "0xbc426b4792c48d8ca31ec9786e403866e14e7f3e4d39c7f2852e518fae529ab4" # Random tx hash on the network, used as default for graphiql tx.
#TXS_COUNT_CACHE_PERIOD: 7200 # Interval in seconds to restart the task, which calculates the total txs count.
#ADDRESS_WITH_BALANCES_UPDATE_INTERVAL: 1800 #Interval in seconds to restart the task, which calculates addresses with balances
#LINK_TO_OTHER_EXPLORERS: "false" # If true, links to other explorers are added in the footer
#USE_PLACEMENT_GROUP: "false" # If true, BlockScout instance will be created in the placement group
##The following variables are optional
## The SUPPORTED_CHAINS variable should have a space before the main content. This is due to an Ansible variable interpretation bug
#SUPPORTED_CHAINS: ' [{ "title": "POA Core", "url": "https://blockscout.com/poa/core" }]' # JSON array with links to other explorers
#FIRST_BLOCK: 0 # The block number, where indexing begins from.
#COINMARKETCAP_PAGES: 10 # Sets the number of pages at Coinmarketcap to search coin at. Defaults to 10
#METADATA_CONTRACT: # Address of metadata smart contract. Used by POA Network to obtain Validators information to display in the UI
#VALIDATORS_CONTRACT: # Address of the Emission Fund smart contract
#SUPPLY_MODULE: "false" # Used by the xDai Chain to calculate the total supply of the chain
#SOURCE_MODULE: "false" # Used to calculate the total supply
#DATABASE_URL: # Database URL. Usually generated automatically, but this variable can be used to modify the URL of the databases during the updates.
#CHECK_ORIGIN: "false" # Used to check the origin of requests when the origin header is present
#DATADOG_HOST: # Host configuration variable for Datadog integration
#DATADOG_PORT: # Port configuration variable for Datadog integration
#SPANDEX_BATCH_SIZE: # Spandex and Datadog configuration setting.
#SPANDEX_SYNC_THRESHOLD: # Spandex and Datadog configuration setting.
#BLOCK_COUNT_CACHE_PERIOD: 600 #Time to live of block count cache in milliseconds
#ALLOWED_EVM_VERSIONS: "homestead, tangerineWhistle, spuriousDragon, byzantium, constantinople, petersburg" # the comma-separated list of allowed EVM versions for contracts verification
#BUILD_* - redefine variables with BUILD_ prefix to override parameters used for building the dev server


@ -0,0 +1,7 @@
skip_fetch: false
blockscout_repo: https://github.com/poanetwork/blockscout
branch: "production-core"
#merge_commit: "2cdead1"
ps_db: mydb # The name of the test DB to store data in;


@ -0,0 +1,17 @@
terraform_location: "/usr/local/bin/terraform"
db_id: "core" # This value represents the name of the DB that will be created/attached. Must be unique. Will be prefixed with `prefix` variable.
db_name: "core" # Each network should have its own DB. This variable maps chain to DB name. Should not be confused with the db_id variable, which represents the RDS instance ID.
## The following variables describe the DB configuration for each network, including usernames, passwords, instance class, etc.
db_username: "core"
db_password: "fkowfjpoi309021"
db_instance_class: "db.t3.medium"
db_storage: "100" # in GiB
db_storage_type: "gp2" # see https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html for details
#db_iops: "1000" # This should be set only if db_storage_type is set to `io1`
db_version: "10.6" #Blockscout uses Postgres as the DB engine. This variable describes the Postgres version used in each particular chain.
instance_type: "m5.large" # EC2 BlockScout Instance will have this type
use_placement_group: false # Choose whether or not to group BlockScout instances into a placement group

hosts

@ -1 +0,0 @@
localhost ansible_connection=local

hosts.example

@ -0,0 +1,11 @@
# Each group and host name must be unique
[poa]
sokol
core
[eth]
kovan
main
ropst
rink
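# Note: host (chain) names are kept to 5 characters or less (hence e.g. ropst and rink), since AWS load balancer names must not exceed 32 characters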


@ -16,8 +16,6 @@
template:
src: roles/main_infra/templates/terraform.tfvars.j2
dest: roles/main_infra/files/terraform.tfvars
vars:
db_iops: "{{ chain_db_iops | default({}) }}"
- name: Generating backend file
template:


@ -1,33 +1,27 @@
- name: Check prefix
fail:
msg: "The prefix '{{ prefix }}' is invalid. It must consist only of the lowercase characters a-z and digits 0-9, and must be between 3 and 5 characters long."
when: prefix|length < 3 or prefix|length > 5 or prefix is not match("^[a-z0-9]+$")
msg: "The prefix '{{ group_names[0] }}' is invalid. It must consist only of the lowercase characters a-z and digits 0-9, and must be between 3 and 5 characters long."
when: group_names[0] | length < 3 or group_names[0] | length > 5 or group_names[0] is not match("^[a-z0-9]+$")
- name: Check chain names
fail:
msg: "The prefix '{{ item }}' is invalid. It must consist only of the lowercase characters a-z and digits 0-9, and must not more than 5 characters long."
when: item.key|length > 5 or item.key is not match("^[a-z0-9]+$")
with_dict: "{{ chain_custom_environment }}"
msg: "The chain '{{ item }}' is invalid. It must consist only of the lowercase characters a-z and digits 0-9, and must not more than 5 characters long."
when: (item.key | length > 5 or item.key is not match("^[a-z0-9]+$")) and item.key != "all" and item.key != "ungrouped"
with_dict: "{{ groups }}"
- name: Check if terraform is installed
command: which terraform
command: "{{ terraform_location }} --version"
register: terraform_status
changed_when: false
- name: Terraform check result
fail:
msg: "Terraform is not installed"
when: terraform_status.stdout == ""
- name: Check if python is installed
command: which python
register: python_status
command: "{{ ansible_python_interpreter }} --version"
changed_when: false
- name: Python check result
fail:
msg: "Python either is not installed or is too old. Please install python version 2.6 or higher"
when: python_status.stdout == "" or python_int_version|int < 260
msg: "Python is too old. Please install python version 2.6 or higher"
when: python_int_version | int < 260
vars:
python_int_version: "{{ ansible_python_version.split('.')[0]|int * 100 + ansible_python_version.split('.')[1]|int * 10 + ansible_python_version.split('.')[2]|int }}"


@ -1,83 +1,120 @@
- name: Ansible delete file glob
find:
paths: /tmp/
file_type: directory
patterns: "files-{{ group_names[0] }}"
register: files_to_delete
- name: Ansible remove file glob
file:
path: "{{ item.path }}"
state: absent
with_items: "{{ files_to_delete.files }}"
- name: Copy files
copy:
src: "roles/main_infra/files/"
dest: "/tmp/files-{{ group_names[0] }}/"
- name: Local or remote backend selector (remote)
template:
src: roles/main_infra/templates/remote-backend-selector.tf.j2
dest: roles/main_infra/files/remote-backend-selector.tf
dest: "/tmp/files-{{ group_names[0] }}/remote-backend-selector.tf"
when:
- backend|bool == true
- name: Local or remote backend selector (local)
file:
state: absent
dest: roles/main_infra/files/remote-backend-selector.tf
dest: "/tmp/files-{{ group_names[0] }}/"
when:
- backend | default ('false') | bool != true
- not backend | default ('false') | bool
- name: Generating variables file
template:
src: roles/main_infra/templates/terraform.tfvars.j2
dest: roles/main_infra/files/terraform.tfvars
vars:
db_iops: "{{ chain_db_iops | default({}) }}"
dest: "/tmp/files-{{ group_names[0] }}/terraform.tfvars"
- name: Generating backend file
template:
src: roles/main_infra/templates/backend.tfvars.j2
dest: roles/main_infra/files/backend.tfvars
when: backend|bool == true
dest: "/tmp/files-{{ group_names[0] }}/backend.tfvars"
when: backend | bool
- name: Generate Terraform files
template:
src: "{{ item.key }}"
dest: "{{ item.value }}"
with_dict: {roles/main_infra/templates/hosts.tf.j2: roles/main_infra/files/hosts.tf,roles/main_infra/templates/routing.tf.j2: roles/main_infra/files/routing.tf,roles/main_infra/templates/provider.tf.j2: roles/main_infra/files/provider.tf}
# This is due to the TF0.11 bug which does not allow resources to be completely destroyed if interpolation syntax is used in outputs.tf in edge cases
# This is due to the TF0.11-12 bug which does not allow resources to be completely destroyed if interpolation syntax is used in outputs.tf in edge cases
- name: Check if outputs.tf exists
stat: path=roles/main_infra/files/outputs.tf
stat:
path: "/tmp/files-{{ group_names[0] }}/outputs.tf"
register: outputs_stat
- name: Temporarily remove outputs.tf file
command: mv roles/main_infra/files/outputs.tf roles/main_infra/files/outputs.tf.backup
command: "mv /tmp/files-{{ group_names[0] }}/outputs.tf /tmp/files-{{ group_names[0] }}/outputs.tf.backup"
when: outputs_stat.stat.exists
- name: Check if .terraform folder exists
stat:
path: "roles/main_infra/files/.terraform/"
path: "/tmp/files-{{ group_names[0] }}/.terraform/"
register: stat_result
- name: Remove .terraform folder
file:
path: roles/main_infra/files/.terraform/
path: "/tmp/files-{{ group_names[0] }}/.terraform/"
state: absent
when: stat_result.stat.exists == True
when: stat_result.stat.exists
- name: Terraform destroy main infra
- name: Terraform plan to destroy main infra
shell: "echo yes | {{ terraform_location }} {{ item }}"
args:
chdir: "roles/main_infra/files"
chdir: "/tmp/files-{{ group_names[0] }}/"
with_items:
- "init {{ '-backend-config=backend.tfvars' if backend|bool == true else '' }}"
- destroy
- "init {{ '-backend-config=backend.tfvars' if backend|bool else '' }}"
- plan -destroy -out terraform.tfplan
- show -no-color terraform.tfplan
register: tf_plan
- name: Terraform show destroy plan
debug:
var: tf_plan.results[2].stdout_lines
- name: User prompt
pause:
prompt: "Are you absolutely sure you want to execute the destruction plan shown above? [False]"
register: user_answer
until: user_answer.user_input | lower in conditional
retries: 10000
delay: 1
vars:
conditional: ['yes','no','true','false']
when: inventory_hostname == groups['all'][0]
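# The prompt above runs only on the first host; the tasks below read its answer through hostvars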
- name: Terraform destroy
shell: "{{ terraform_location }} destroy -auto-approve"
args:
chdir: "/tmp/files-{{ group_names[0] }}"
when: hostvars[groups['all'][0]].user_answer.user_input | bool
- name: Delete vars from parameter store
include: parameter_store.yml
loop: "{{ chain_custom_environment.keys() }}"
loop_control:
loop_var: chain
index_var: index
- name: Check if outputs.tf.backup exists
stat: path=roles/main_infra/files/outputs.tf.backup
stat:
path: "/tmp/files-{{ group_names[0] }}/outputs.tf.backup"
register: outputs_backup_stat
- name: Get back outputs.tf file
command: mv roles/main_infra/files/outputs.tf.backup roles/main_infra/files/outputs.tf
command: "mv /tmp/files-{{ group_names[0] }}/outputs.tf.backup /tmp/files-{{ group_names[0] }}/outputs.tf"
when: outputs_backup_stat.stat.exists
- name: User prompt
pause:
prompt: "Do you want to delete S3 bucket with state file and DynamoDB attached to it also? [Yes/No] Default: No"
register: user_answer
until: user_answer.user_input | lower in conditional
retries: 10000
delay: 1
vars:
conditional: ['yes','no','true','false']
when: inventory_hostname == groups['all'][0]
- name: Destroy S3 bucket
s3_bucket:
@ -93,7 +130,7 @@
secret_key: "{{ aws_secret_key|default(omit) }}"
profile: "{{ aws_profile|default(omit) }}"
region: "{{ aws_region|default(omit) }}"
when: user_answer.user_input|bool == True
when: hostvars[groups['all'][0]].user_answer.user_input | bool
- dynamodb_table:
name: "{{ prefix }}-{{ dynamodb_table }}"
@ -107,4 +144,4 @@
secret_key: "{{ aws_secret_key|default(omit) }}"
profile: "{{ aws_profile|default(omit) }}"
region: "{{ aws_region|default(omit) }}"
when: user_answer.user_input|bool == True
when: hostvars[groups['all'][0]].user_answer.user_input | bool


@ -2,19 +2,19 @@
set_fact:
chain_env: "{{ lookup('aws_ssm', path, aws_access_key=aws_access_key, aws_secret_key=aws_secret_key, region=region, shortnames=true, bypath=true, recursive=true ) }}"
vars:
path: "/{{ prefix }}/{{ chain }}"
path: "/{{ group_names[0] }}/{{ chain }}"
when: aws_access_key is defined
- name: Fetch environment variables (via profile)
set_fact:
chain_env: "{{ lookup('aws_ssm', path, aws_profile=aws_profile, shortnames=true, bypath=true, recursive=true ) }}"
vars:
path: "/{{ prefix }}/{{ chain }}"
path: "/{{ group_names[0] }}/{{ chain }}"
when: aws_profile is defined
- name: Remove chain variables
aws_ssm_parameter_store:
name: "/{{ prefix }}/{{ chain }}/{{ item.key }}"
name: "/{{ group_names[0] }}/{{ chain }}/{{ item.key }}"
value: "{{ item.value }}"
state: absent
profile: "{{ profile }}"


@ -1,13 +1,13 @@
- name: Create DynamoDB table
dynamodb_table:
name: "{{ prefix }}-{{ dynamodb_table }}"
name: "{{ group_names[0] }}-{{ dynamodb_table }}"
hash_key_name: LockID
hash_key_type: STRING
read_capacity: 1
write_capacity: 1
tags:
origin: terraform
prefix: "{{ prefix }}"
prefix: "{{ group_names[0] }}"
profile: "{{ profile }}"
aws_access_key: "{{ access_key }}"
aws_secret_key: "{{ secret_key }}"


@ -9,3 +9,4 @@ db_subnet_cidr: "10.0.2.0/16"
dns_zone_name: "poa.internal"
instance_type: "m5.large"
root_block_size: 8
db_iops: {}


@ -1,6 +1,6 @@
resource "aws_s3_bucket" "explorer_releases" {
bucket = "${var.prefix}-explorer-codedeploy-releases"
acl = "private"
bucket = "${var.prefix}-explorer-codedeploy-releases"
acl = "private"
force_destroy = "true"
versioning {
@ -13,11 +13,11 @@ resource "aws_codedeploy_app" "explorer" {
}
resource "aws_codedeploy_deployment_group" "explorer" {
count = "${length(var.chains)}"
app_name = "${aws_codedeploy_app.explorer.name}"
count = length(var.chains)
app_name = aws_codedeploy_app.explorer.name
deployment_group_name = "${var.prefix}-explorer-dg${count.index}"
service_role_arn = "${aws_iam_role.deployer.arn}"
autoscaling_groups = ["${aws_launch_configuration.explorer.name}-asg-${element(var.chains,count.index)}"]
service_role_arn = aws_iam_role.deployer.arn
autoscaling_groups = [aws_autoscaling_group.explorer[count.index].name]
deployment_style {
deployment_option = "WITH_TRAFFIC_CONTROL"
@ -26,7 +26,7 @@ resource "aws_codedeploy_deployment_group" "explorer" {
load_balancer_info {
target_group_info {
name = "${aws_lb_target_group.explorer.*.name[count.index]}"
name = aws_lb_target_group.explorer[count.index].name
}
}
@ -46,3 +46,4 @@ resource "aws_codedeploy_deployment_group" "explorer" {
}
}
}


@ -1,24 +1,27 @@
# Internal DNS Zone
resource "aws_route53_zone" "main" {
name = "${var.prefix}.${var.dns_zone_name}"
vpc_id = "${aws_vpc.vpc.id}"
vpc {
vpc_id = aws_vpc.vpc.id
}
tags {
prefix = "${var.prefix}"
tags = {
prefix = var.prefix
origin = "terraform"
}
}
# Private DNS records
resource "aws_route53_record" "db" {
zone_id = "${aws_route53_zone.main.zone_id}"
zone_id = aws_route53_zone.main.zone_id
name = "db${count.index}"
type = "A"
count = "${length(var.chains)}"
count = length(var.chains)
alias {
name = "${aws_db_instance.default.*.address[count.index]}"
zone_id = "${aws_db_instance.default.*.hosted_zone_id[count.index]}"
name = aws_db_instance.default[count.index].address
zone_id = aws_db_instance.default[count.index].hosted_zone_id
evaluate_target_health = false
}
}


@ -0,0 +1,117 @@
data "aws_ami" "explorer" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-*-x86_64-gp2"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
}
resource "aws_launch_configuration" "explorer" {
name_prefix = "${var.prefix}-explorer-launchconfig"
image_id = data.aws_ami.explorer.id
instance_type = var.instance_type
security_groups = [aws_security_group.app.id]
key_name = var.key_name
iam_instance_profile = aws_iam_instance_profile.explorer.id
associate_public_ip_address = false
depends_on = [aws_db_instance.default]
user_data = file("${path.module}/libexec/init.sh")
root_block_device {
volume_size = var.root_block_size
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_placement_group" "explorer" {
count = length(matchkeys(keys(var.use_placement_group),values(var.use_placement_group),["True"]))
name = "${var.prefix}-${var.chains[count.index]}-explorer-pg"
strategy = "cluster"
}
resource "aws_autoscaling_group" "explorer" {
count = length(var.chains)
name = "${var.prefix}-${var.chains[count.index]}-asg"
max_size = "4"
min_size = "1"
desired_capacity = "1"
launch_configuration = aws_launch_configuration.explorer.name
vpc_zone_identifier = [aws_subnet.default.id]
availability_zones = data.aws_availability_zones.available.names
target_group_arns = [aws_lb_target_group.explorer[0].arn]
placement_group = var.use_placement_group[var.chains[count.index]] == "True" ? "${var.prefix}-${var.chains[count.index]}-explorer-pg" : null
# Health checks are performed by CodeDeploy hooks
health_check_type = "EC2"
enabled_metrics = [
"GroupMinSize",
"GroupMaxSize",
"GroupDesiredCapacity",
"GroupInServiceInstances",
"GroupTotalInstances",
]
depends_on = [
aws_ssm_parameter.db_host,
aws_ssm_parameter.db_name,
aws_ssm_parameter.db_port,
aws_ssm_parameter.db_username,
aws_ssm_parameter.db_password,
aws_placement_group.explorer
]
lifecycle {
create_before_destroy = true
}
tag {
key = "prefix"
value = var.prefix
propagate_at_launch = true
}
tag {
key = "chain"
value = var.chains[count.index]
propagate_at_launch = true
}
tag {
key = "Name"
value = "${var.chains[count.index]} Application"
propagate_at_launch = true
}
}
# TODO: These autoscaling policies are not currently wired up to any triggers
resource "aws_autoscaling_policy" "explorer-up" {
count = length(var.chains)
name = "${var.prefix}-${var.chains[count.index]}-explorer-autoscaling-policy-up"
autoscaling_group_name = aws_autoscaling_group.explorer[count.index].name
adjustment_type = "ChangeInCapacity"
scaling_adjustment = 1
cooldown = 300
}
resource "aws_autoscaling_policy" "explorer-down" {
count = length(var.chains)
name = "${var.prefix}-${var.chains[count.index]}-explorer-autoscaling-policy-down"
autoscaling_group_name = aws_autoscaling_group.explorer[count.index].name
adjustment_type = "ChangeInCapacity"
scaling_adjustment = -1
cooldown = 300
}


@ -103,6 +103,21 @@ log "Setting up application environment.."
mkdir -p /opt/app
chown -R ec2-user /opt/app
log "Creating logrotate config"
cat <<EOF > /etc/logrotate.d/blockscout
/var/log/messages* {
rotate 5
size 1G
compress
missingok
delaycompress
copytruncate
}
EOF
log "Creating explorer systemd service.."
cat <<EOF > /lib/systemd/system/explorer.service
@ -170,8 +185,7 @@ old_env="$(cat /etc/environment)"
# shellcheck disable=SC2016
echo 'PATH=/opt/elixir/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:$PATH'
# shellcheck disable=SC1117
echo "$parameters_json" | \
jq ".Parameters[] as \$ps | \"\(\$ps[\"Name\"] | gsub(\"-\"; \"_\") | ltrimstr(\"/$PREFIX/$CHAIN/\") | ascii_upcase)=\\\"\(\$ps[\"Value\"])\\\"\"" --raw-output
echo "$parameters_json" | echo "$parameters_json" | jq ".Parameters[] as \$ps | \"\(\$ps[\"Name\"] | gsub(\"-\"; \"_\") | ltrimstr(\"/$PREFIX/$CHAIN/\") | ascii_upcase)='\(\$ps[\"Value\"])'\"" --raw-output
echo "DYNO=\"$HOSTNAME\""
echo "HOSTNAME=\"$HOSTNAME\""
echo "DATABASE_URL=\"$DATABASE_URL/$DB_NAME\""


@ -11,7 +11,13 @@ To deploy a new version of the application manually:
2) Follow the instructions in the output from the `aws deploy push` command
to deploy the uploaded application. Use the deployment group names shown below:
- ${join("\n - ", formatlist("%s", aws_codedeploy_deployment_group.explorer.*.deployment_group_name))}
- ${join(
"\n - ",
formatlist(
"%s",
aws_codedeploy_deployment_group.explorer.*.deployment_group_name,
),
)}
You will also need to specify a deployment config name. Example:
@ -25,7 +31,15 @@ To deploy a new version of the application manually:
4) Once the deployment is complete, you can access each chain explorer from its respective url:
- ${join("\n - ", formatlist("%s: %s", keys(zipmap(var.chains, aws_lb.explorer.*.dns_name)), values(zipmap(var.chains, aws_lb.explorer.*.dns_name))))}
- ${join(
"\n - ",
formatlist(
"%s: %s",
keys(zipmap(var.chains, aws_lb.explorer.*.dns_name)),
values(zipmap(var.chains, aws_lb.explorer.*.dns_name)),
),
)}
OUTPUT
}
}


@ -0,0 +1,8 @@
provider "aws" {
version = "~> 2.17"
profile = var.aws_profile
access_key = var.aws_access_key
secret_key = var.aws_secret_key
region = var.aws_region
}


@ -1,61 +1,61 @@
resource "aws_ssm_parameter" "db_host" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(var.chains,count.index)}/db_host"
value = "${aws_route53_record.db.*.fqdn[count.index]}"
count = length(var.chains)
name = "/${var.prefix}/${element(var.chains, count.index)}/db_host"
value = aws_route53_record.db[count.index].fqdn
type = "String"
}
resource "aws_ssm_parameter" "db_port" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(var.chains,count.index)}/db_port"
value = "${aws_db_instance.default.*.port[count.index]}"
count = length(var.chains)
name = "/${var.prefix}/${element(var.chains, count.index)}/db_port"
value = aws_db_instance.default[count.index].port
type = "String"
}
resource "aws_ssm_parameter" "db_name" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(var.chains,count.index)}/db_name"
value = "${lookup(var.chain_db_name,element(var.chains,count.index))}"
count = length(var.chains)
name = "/${var.prefix}/${element(var.chains, count.index)}/db_name"
value = var.chain_db_name[element(var.chains, count.index)]
type = "String"
}
resource "aws_ssm_parameter" "db_username" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(var.chains,count.index)}/db_username"
value = "${lookup(var.chain_db_username,element(var.chains,count.index))}"
count = length(var.chains)
name = "/${var.prefix}/${element(var.chains, count.index)}/db_username"
value = var.chain_db_username[element(var.chains, count.index)]
type = "String"
}
resource "aws_ssm_parameter" "db_password" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(var.chains,count.index)}/db_password"
value = "${lookup(var.chain_db_password,element(var.chains,count.index))}"
count = length(var.chains)
name = "/${var.prefix}/${element(var.chains, count.index)}/db_password"
value = var.chain_db_password[element(var.chains, count.index)]
type = "String"
}
resource "aws_db_instance" "default" {
count = "${length(var.chains)}"
name = "${lookup(var.chain_db_name,element(var.chains,count.index))}"
identifier = "${var.prefix}-${lookup(var.chain_db_id,element(var.chains,count.index))}"
count = length(var.chains)
name = var.chain_db_name[element(var.chains, count.index)]
identifier = "${var.prefix}-${var.chain_db_id[element(var.chains, count.index)]}"
engine = "postgres"
engine_version = "${lookup(var.chain_db_version,element(var.chains,count.index))}"
instance_class = "${lookup(var.chain_db_instance_class,element(var.chains,count.index))}"
storage_type = "${lookup(var.chain_db_storage_type,element(var.chains,count.index))}"
allocated_storage = "${lookup(var.chain_db_storage,element(var.chains,count.index))}"
engine_version = var.chain_db_version[element(var.chains, count.index)]
instance_class = var.chain_db_instance_class[element(var.chains, count.index)]
storage_type = var.chain_db_storage_type[element(var.chains, count.index)]
allocated_storage = var.chain_db_storage[element(var.chains, count.index)]
copy_tags_to_snapshot = true
skip_final_snapshot = true
username = "${lookup(var.chain_db_username,element(var.chains,count.index))}"
password = "${lookup(var.chain_db_password,element(var.chains,count.index))}"
vpc_security_group_ids = ["${aws_security_group.database.id}"]
db_subnet_group_name = "${aws_db_subnet_group.database.id}"
username = var.chain_db_username[element(var.chains, count.index)]
password = var.chain_db_password[element(var.chains, count.index)]
vpc_security_group_ids = [aws_security_group.database.id]
db_subnet_group_name = aws_db_subnet_group.database.id
apply_immediately = true
iops = "${lookup(var.chain_db_iops,element(var.chains,count.index),"0")}"
iops = lookup(var.chain_db_iops, element(var.chains, count.index), "0")
depends_on = [aws_security_group.database]
depends_on = ["aws_security_group.database"]
tags {
prefix = "${var.prefix}"
tags = {
prefix = var.prefix
origin = "terraform"
}
}
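One way to sanity-check the parameters written above after an apply, using hypothetical prefix/chain names:

```bash
# db_host, db_port, db_name, db_username and db_password all live under one path per chain.
aws ssm get-parameters-by-path \
  --path /prefix/poa \
  --recursive \
  --query 'Parameters[].[Name,Value]' \
  --output table
```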

View File

@ -0,0 +1,73 @@
# Create a gateway to provide access to the outside world
resource "aws_internet_gateway" "default" {
vpc_id = aws_vpc.vpc.id
tags = {
prefix = var.prefix
origin = "terraform"
}
}
# Grant the VPC internet access in its main route table
resource "aws_route" "internet_access" {
route_table_id = aws_vpc.vpc.main_route_table_id
destination_cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.default.id
}
# The ALB for the app server
resource "aws_lb" "explorer" {
count = length(var.chains)
name = "${var.prefix}-explorer-${element(var.chains, count.index)}-alb"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.alb.id]
subnets = [aws_subnet.default.id, aws_subnet.alb.id]
enable_deletion_protection = false
tags = {
prefix = var.prefix
origin = "terraform"
}
}
# The Target Group for the ALB
resource "aws_lb_target_group" "explorer" {
count = length(var.chains)
name = "${var.prefix}-explorer-${element(var.chains, count.index)}-alb-target"
port = 4000
protocol = "HTTP"
vpc_id = aws_vpc.vpc.id
tags = {
prefix = var.prefix
origin = "terraform"
}
stickiness {
type = "lb_cookie"
cookie_duration = 600
enabled = true
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 15
interval = 30
path = "/blocks"
port = 4000
}
}
resource "aws_alb_listener" "alb_listener" {
count = length(var.chains)
load_balancer_arn = aws_lb.explorer[count.index].arn
port = var.use_ssl[element(var.chains, count.index)] ? "443" : "80"
protocol = var.use_ssl[element(var.chains, count.index)] ? "HTTPS" : "HTTP"
ssl_policy = var.use_ssl[element(var.chains, count.index)] ? var.alb_ssl_policy[element(var.chains, count.index)] : null
certificate_arn = var.use_ssl[element(var.chains, count.index)] ? var.alb_certificate_arn[element(var.chains, count.index)] : null
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.explorer[count.index].arn
}
}
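The target group health check hits `/blocks` on port 4000, so the same endpoint can be probed by hand once a target registers (DNS name hypothetical):

```bash
ALB_DNS=prefix-explorer-poa-alb-1234567890.us-east-1.elb.amazonaws.com
# A 2xx from /blocks is what marks a target healthy.
curl -sSf "http://$ALB_DNS/blocks" -o /dev/null && echo healthy
```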

View File

@ -63,7 +63,7 @@ data "aws_iam_policy_document" "codedeploy-policy" {
actions = ["s3:Get*", "s3:List*"]
resources = [
"${aws_s3_bucket.explorer_releases.arn}",
aws_s3_bucket.explorer_releases.arn,
"${aws_s3_bucket.explorer_releases.arn}/*",
"arn:aws:s3:::aws-codedeploy-us-east-1/*",
"arn:aws:s3:::aws-codedeploy-us-east-2/*",
@ -90,38 +90,38 @@ data "aws_iam_policy" "AmazonEC2RoleForSSM" {
}
resource "aws_iam_role_policy_attachment" "ec2-codedeploy-policy-attachment" {
role = "${aws_iam_role.role.name}"
policy_arn = "${data.aws_iam_policy.AmazonEC2RoleForAWSCodeDeploy.arn}"
role = aws_iam_role.role.name
policy_arn = data.aws_iam_policy.AmazonEC2RoleForAWSCodeDeploy.arn
}
resource "aws_iam_role_policy_attachment" "ec2-ssm-policy-attachment" {
role = "${aws_iam_role.role.name}"
policy_arn = "${data.aws_iam_policy.AmazonEC2RoleForSSM.arn}"
role = aws_iam_role.role.name
policy_arn = data.aws_iam_policy.AmazonEC2RoleForSSM.arn
}
resource "aws_iam_instance_profile" "explorer" {
name = "${var.prefix}-explorer-profile"
role = "${aws_iam_role.role.name}"
role = aws_iam_role.role.name
path = "/${var.prefix}/"
}
resource "aws_iam_role_policy" "config" {
name = "${var.prefix}-config-policy"
role = "${aws_iam_role.role.id}"
policy = "${data.aws_iam_policy_document.config-policy.json}"
role = aws_iam_role.role.id
policy = data.aws_iam_policy_document.config-policy.json
}
resource "aws_iam_role" "role" {
name = "${var.prefix}-explorer-role"
description = "The IAM role given to each Explorer instance"
path = "/${var.prefix}/"
assume_role_policy = "${data.aws_iam_policy_document.instance-assume-role-policy.json}"
assume_role_policy = data.aws_iam_policy_document.instance-assume-role-policy.json
}
resource "aws_iam_role_policy" "deployer" {
name = "${var.prefix}-codedeploy-policy"
role = "${aws_iam_role.deployer.id}"
policy = "${data.aws_iam_policy_document.codedeploy-policy.json}"
role = aws_iam_role.deployer.id
policy = data.aws_iam_policy_document.codedeploy-policy.json
}
data "aws_iam_policy" "AWSCodeDeployRole" {
@ -129,21 +129,21 @@ data "aws_iam_policy" "AWSCodeDeployRole" {
}
resource "aws_iam_role_policy_attachment" "codedeploy-policy-attachment" {
role = "${aws_iam_role.deployer.name}"
policy_arn = "${data.aws_iam_policy.AWSCodeDeployRole.arn}"
role = aws_iam_role.deployer.name
policy_arn = data.aws_iam_policy.AWSCodeDeployRole.arn
}
resource "aws_iam_role" "deployer" {
name = "${var.prefix}-deployer-role"
description = "The IAM role given to the CodeDeploy service"
assume_role_policy = "${data.aws_iam_policy_document.deployer-assume-role-policy.json}"
assume_role_policy = data.aws_iam_policy_document.deployer-assume-role-policy.json
}
# A security group for the ALB so it is accessible via the web
resource "aws_security_group" "alb" {
name = "${var.prefix}-poa-alb"
description = "A security group for the app server ALB, so it is accessible via the web"
vpc_id = "${aws_vpc.vpc.id}"
vpc_id = aws_vpc.vpc.id
# HTTP from anywhere
ingress {
@ -152,11 +152,11 @@ resource "aws_security_group" "alb" {
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 4000
to_port = 4000
protocol = "tcp"
from_port = 4000
to_port = 4000
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
@ -176,8 +176,8 @@ resource "aws_security_group" "alb" {
cidr_blocks = ["0.0.0.0/0"]
}
tags {
prefix = "${var.prefix}"
tags = {
prefix = var.prefix
origin = "terraform"
}
}
@ -185,21 +185,21 @@ resource "aws_security_group" "alb" {
resource "aws_security_group" "app" {
name = "${var.prefix}-poa-app"
description = "A security group for the app server, allowing SSH and HTTP(S)"
vpc_id = "${aws_vpc.vpc.id}"
vpc_id = aws_vpc.vpc.id
# HTTP from the VPC
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["${var.vpc_cidr}"]
cidr_blocks = [var.vpc_cidr]
}
ingress {
from_port = 4000
to_port = 4000
protocol = "tcp"
cidr_blocks = ["${var.vpc_cidr}"]
from_port = 4000
to_port = 4000
protocol = "tcp"
cidr_blocks = [var.vpc_cidr]
}
# HTTPS from the VPC
@ -207,7 +207,7 @@ resource "aws_security_group" "app" {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["${var.vpc_cidr}"]
cidr_blocks = [var.vpc_cidr]
}
# SSH from anywhere
@ -226,8 +226,8 @@ resource "aws_security_group" "app" {
cidr_blocks = ["0.0.0.0/0"]
}
tags {
prefix = "${var.prefix}"
tags = {
prefix = var.prefix
origin = "terraform"
}
}
@ -235,14 +235,14 @@ resource "aws_security_group" "app" {
resource "aws_security_group" "database" {
name = "${var.prefix}-poa-database"
description = "Allow any inbound traffic from public/private subnet"
vpc_id = "${aws_vpc.vpc.id}"
vpc_id = aws_vpc.vpc.id
# Allow anything from within the app server subnet
ingress {
from_port = 0
to_port = 65535
protocol = "tcp"
cidr_blocks = ["${var.public_subnet_cidr}"]
cidr_blocks = [var.public_subnet_cidr]
}
# Unrestricted outbound
@ -253,8 +253,9 @@ resource "aws_security_group" "database" {
cidr_blocks = ["0.0.0.0/0"]
}
tags {
prefix = "${var.prefix}"
tags = {
prefix = var.prefix
origin = "terraform"
}
}

View File

@ -1,5 +1,6 @@
resource "aws_key_pair" "blockscout" {
count = "${var.key_content == "" ? 0 : 1}"
key_name = "${var.key_name}"
public_key = "${var.key_content}"
count = var.key_content == "" ? 0 : 1
key_name = var.key_name
public_key = var.key_content
}

View File

@ -1,44 +1,44 @@
## Public subnet
resource "aws_subnet" "default" {
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "${var.public_subnet_cidr}"
availability_zone = "${data.aws_availability_zones.available.names[0]}"
vpc_id = aws_vpc.vpc.id
cidr_block = var.public_subnet_cidr
availability_zone = data.aws_availability_zones.available.names[0]
map_public_ip_on_launch = true
tags {
tags = {
Name = "${var.prefix}-default-subnet"
prefix = "${var.prefix}"
prefix = var.prefix
origin = "terraform"
}
}
## ALB subnet
resource "aws_subnet" "alb" {
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "${var.public_subnet_cidr}"
cidr_block = "${cidrsubnet(var.db_subnet_cidr, 5, 1)}"
availability_zone = "${data.aws_availability_zones.available.names[1]}"
vpc_id = aws_vpc.vpc.id
#cidr_block = var.public_subnet_cidr
cidr_block = cidrsubnet(var.db_subnet_cidr, 5, 1)
availability_zone = data.aws_availability_zones.available.names[1]
map_public_ip_on_launch = true
tags {
tags = {
Name = "${var.prefix}-default-subnet"
prefix = "${var.prefix}"
prefix = var.prefix
origin = "terraform"
}
}
## Database subnet
resource "aws_subnet" "database" {
count = "${length(data.aws_availability_zones.available.names)}"
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "${cidrsubnet(var.db_subnet_cidr, 8, 1 + count.index)}"
availability_zone = "${data.aws_availability_zones.available.names[count.index]}"
count = length(data.aws_availability_zones.available.names)
vpc_id = aws_vpc.vpc.id
cidr_block = cidrsubnet(var.db_subnet_cidr, 8, 1 + count.index)
availability_zone = data.aws_availability_zones.available.names[count.index]
map_public_ip_on_launch = false
tags {
tags = {
Name = "${var.prefix}-database-subnet${count.index}"
prefix = "${var.prefix}"
prefix = var.prefix
origin = "terraform"
}
}
@ -46,10 +46,11 @@ resource "aws_subnet" "database" {
resource "aws_db_subnet_group" "database" {
name = "${var.prefix}-database"
description = "The group of database subnets"
subnet_ids = ["${aws_subnet.database.*.id}"]
subnet_ids = aws_subnet.database.*.id
tags {
prefix = "${var.prefix}"
tags = {
prefix = var.prefix
origin = "terraform"
}
}

View File

@ -1,14 +1,45 @@
variable "prefix" {}
variable "key_name" {}
variable "vpc_cidr" {}
variable "public_subnet_cidr" {}
variable "db_subnet_cidr" {}
variable "dns_zone_name" {}
variable "instance_type" {}
variable "root_block_size" {}
variable "aws_profile" {
default = null
}
variable "aws_region" {
default = null
}
variable "aws_access_key" {
default = null
}
variable "aws_secret_key" {
default = null
}
variable "prefix" {
}
variable "key_name" {
}
variable "vpc_cidr" {
}
variable "public_subnet_cidr" {
}
variable "db_subnet_cidr" {
}
variable "dns_zone_name" {
}
variable "instance_type" {
}
variable "root_block_size" {
}
variable "pool_size" {
default = {}
default = {}
}
variable "use_placement_group" {
@ -60,7 +91,7 @@ variable "chain_db_version" {
}
variable "secret_key_base" {
default = {}
default = {}
}
variable "alb_ssl_policy" {
@ -72,5 +103,6 @@ variable "alb_certificate_arn" {
}
variable "use_ssl" {
default = {}
default = {}
}

View File

@ -0,0 +1,4 @@
terraform {
required_version = ">= 0.12"
}
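With the minimum version pinned, a pre-0.12 binary will refuse to initialize this configuration; a quick local check:

```bash
terraform version   # must report v0.12.0 or newer for these files
```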

View File

@ -6,31 +6,33 @@
# - A private subnet
# - NAT to give the private subnet access to internet
data "aws_availability_zones" "available" {}
data "aws_availability_zones" "available" {
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
cidr_block = var.vpc_cidr
enable_dns_hostnames = true
enable_dns_support = true
tags {
Name = "${var.prefix}"
prefix = "${var.prefix}"
tags = {
Name = var.prefix
prefix = var.prefix
origin = "terraform"
}
}
resource "aws_vpc_dhcp_options" "poa_dhcp" {
domain_name = "${var.dns_zone_name}"
domain_name = var.dns_zone_name
domain_name_servers = ["AmazonProvidedDNS"]
tags {
prefix = "${var.prefix}"
tags = {
prefix = var.prefix
origin = "terraform"
}
}
resource "aws_vpc_dhcp_options_association" "poa_dhcp" {
vpc_id = "${aws_vpc.vpc.id}"
dhcp_options_id = "${aws_vpc_dhcp_options.poa_dhcp.id}"
vpc_id = aws_vpc.vpc.id
dhcp_options_id = aws_vpc_dhcp_options.poa_dhcp.id
}

View File

@ -1,57 +1,66 @@
- name: Ansible delete file glob
find:
paths: /tmp/
file_type: directory
patterns: "files-{{ group_names[0] }}"
register: files_to_delete
- name: Ansible remove file glob
file:
path: "{{ item.path }}"
state: absent
with_items: "{{ files_to_delete.files }}"
- name: Copy files
copy:
src: "roles/main_infra/files/"
dest: "/tmp/files-{{ group_names[0] }}/"
- name: Local or remote backend selector (remote)
template:
src: remote-backend-selector.tf.j2
dest: roles/main_infra/files/remote-backend-selector.tf
dest: "/tmp/files-{{ group_names[0] }}/remote-backend-selector.tf"
when:
- backend|bool == true
- backend | bool
- name: Local or remote backend selector (local)
file:
state: absent
dest: roles/main_infra/files/remote-backend-selector.tf
dest: "/tmp/files-{{ group_names[0] }}/remote-backend-selector.tf"
when:
- backend | default ('false') | bool != true
- not backend | default('false') | bool
- name: Generating variables file
template:
src: terraform.tfvars.j2
dest: roles/main_infra/files/terraform.tfvars
vars:
db_iops: "{{ chain_db_iops | default({}) }}"
dest: "/tmp/files-{{ group_names[0] }}/terraform.tfvars"
- name: Generating backend file
template:
src: backend.tfvars.j2
dest: roles/main_infra/files/backend.tfvars
when: backend|bool == true
dest: "/tmp/files-{{ group_names[0] }}/backend.tfvars"
when: backend | default('false') | bool
- name: Check if .terraform folder exists
stat:
path: "roles/main_infra/files/.terraform/"
register: stat_result
- name: Remove .terraform folder
- name: Remove Terraform state
file:
path: roles/main_infra/files/.terraform/
path: "{{ item }}"
state: absent
when: stat_result.stat.exists == True
with_items:
- "/tmp/files-{{ group_names[0] }}/.terraform/"
- "/tmp/files-{{ group_names[0] }}/terraform.tfstate"
- "/tmp/files-{{ group_names[0] }}/terraform.tfstate.backup"
- "/tmp/files-{{ group_names[0] }}/terraform.tfplan"
- name: Generate Terraform files
template:
src: "{{ item.key }}"
dest: "{{ item.value }}"
with_dict: {hosts.tf.j2: roles/main_infra/files/hosts.tf,routing.tf.j2: roles/main_infra/files/routing.tf,provider.tf.j2: roles/main_infra/files/provider.tf}
# Workaround, since the terraform module returns an unexpected error.
- name: Terraform plan construct
shell: "echo yes | {{ terraform_location }} {{ item }}"
register: tf_plan
args:
chdir: "roles/main_infra/files"
chdir: "/tmp/files-{{ group_names[0] }}"
with_items:
- "init{{ ' -backend-config=backend.tfvars' if backend|bool == true else '' }}"
- "init{{ ' -backend-config=backend.tfvars' if backend|bool else '' }}"
- plan -out terraform.tfplan
- show terraform.tfplan -no-color
- show -no-color terraform.tfplan
- name: Show Terraform plan
debug:
@ -61,36 +70,48 @@
pause:
prompt: "Are you absolutely sure you want to execute the deployment plan shown above? [False]"
register: user_answer
until: user_answer.user_input | lower in conditional
retries: 10000
delay: 1
vars:
conditional: ['yes','no','true','false']
when: inventory_hostname == groups['all'][0]
- name: Insert vars into parameter store
include: parameter_store.yml
loop: "{{ chain_custom_environment.keys() }}"
loop_control:
loop_var: chain
index_var: index
when: user_answer.user_input|bool == True
when: hostvars[groups['all'][0]].user_answer.user_input | bool
- name: Terraform provisioning
shell: "echo yes | {{ terraform_location }} apply terraform.tfplan"
args:
chdir: "roles/main_infra/files"
when: user_answer.user_input|bool == True
ignore_errors: True
- name: Ensure Terraform resources has been provisioned
shell: "echo yes | {{ terraform_location }} apply"
args:
chdir: "roles/main_infra/files"
when: user_answer.user_input|bool == True
chdir: "/tmp/files-{{ group_names[0] }}"
when: hostvars[groups['all'][0]].user_answer.user_input | bool
retries: 1
delay: 3
register: result
until: result.rc == 0
- name: Terraform output info into variable
shell: "{{ terraform_location }} output -json"
register: output
args:
chdir: "roles/main_infra/files"
when: user_answer.user_input|bool == True
chdir: "/tmp/files-{{ group_names[0] }}"
when: hostvars[groups['all'][0]].user_answer.user_input | bool
- name: Output info from Terraform
debug:
var: output.stdout_lines
when: user_answer.user_input|bool == True
var: (output.stdout|from_json).instructions.value
when: hostvars[groups['all'][0]].user_answer.user_input | bool
- name: Ansible delete file glob
find:
paths: /tmp/
file_type: directory
patterns: "files-{{ group_names[0] }}"
register: files_to_delete
- name: Ansible remove file glob
file:
path: "{{ item.path }}"
state: absent
with_items: "{{ files_to_delete.files }}"

View File

@ -1,13 +1,13 @@
- name: Prepare variables for Parameter Store
set_fact:
chain_ps_env: "{{ chain_ps_env | combine ({item.key|lower : item.value}) }}"
with_dict: "{{ chain_custom_environment[chain] }}"
with_dict: "{{ hostvars[inventory_hostname]['env_vars'] }}"
vars:
chain_ps_env: {}
- name: Insert variables in PS
aws_ssm_parameter_store:
name: "/{{ prefix }}/{{ chain }}/{{ item.key }}"
name: "/{{ group_names[0] }}/{{ chain }}/{{ item.key }}"
value: "{{ item.value }}"
profile: "{{ profile }}"
aws_access_key: "{{ access_key }}"

View File

@ -1,3 +1,3 @@
bucket = "{{ prefix }}-{{ bucket }}"
dynamodb_table = "{{ prefix }}-{{ dynamodb_table }}"
bucket = "{{ group_names[0] }}-{{ bucket }}"
dynamodb_table = "{{ group_names[0] }}-{{ dynamodb_table }}"
key = "terraform.tfstate"
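This rendered file is what the `init -backend-config=backend.tfvars` step in main.yml consumes; by hand, with hypothetical rendered values:

```bash
# backend.tfvars might render as:
#   bucket         = "blockscout-terraform-state"
#   dynamodb_table = "blockscout-terraform-locks"
#   key            = "terraform.tfstate"
terraform init -backend-config=backend.tfvars
```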

View File

@ -1,122 +0,0 @@
data "aws_ami" "explorer" {
most_recent = true
filter {
name = "name"
values = ["amzn2-ami-*-x86_64-gp2"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "owner-alias"
values = ["amazon"]
}
}
resource "aws_launch_configuration" "explorer" {
name_prefix = "${var.prefix}-explorer-launchconfig"
image_id = "${data.aws_ami.explorer.id}"
instance_type = "${var.instance_type}"
security_groups = ["${aws_security_group.app.id}"]
key_name = "${var.key_name}"
iam_instance_profile = "${aws_iam_instance_profile.explorer.id}"
associate_public_ip_address = false
depends_on = ["aws_db_instance.default"]
user_data = "${file("${path.module}/libexec/init.sh")}"
root_block_device {
volume_size = "${var.root_block_size}"
}
lifecycle {
create_before_destroy = true
}
}
{% for key, value in chain_custom_environment.iteritems() %}
{% if value['USE_PLACEMENT_GROUP']|default('true') == "true" %}
resource "aws_placement_group" "explorer-{{key}}" {
name = "${var.prefix}-{{key}}-explorer-pg"
strategy = "cluster"
}
{% endif %}
{% endfor %}
{% for key, value in chain_custom_environment.iteritems() %}
resource "aws_autoscaling_group" "explorer-{{key}}" {
name = "${aws_launch_configuration.explorer.name}-asg-{{key}}"
max_size = "4"
min_size = "1"
desired_capacity = "1"
{% if value['USE_PLACEMENT_GROUP']|default('true') == "true" %} placement_group = "${var.prefix}-{{key}}-explorer-pg"
{% endif %}
launch_configuration = "${aws_launch_configuration.explorer.name}"
vpc_zone_identifier = ["${aws_subnet.default.id}"]
availability_zones = ["${data.aws_availability_zones.available.names}"]
target_group_arns = ["${aws_lb_target_group.explorer.*.arn[{{loop.index-1}}]}"]
# Health checks are performed by CodeDeploy hooks
health_check_type = "EC2"
enabled_metrics = [
"GroupMinSize",
"GroupMaxSize",
"GroupDesiredCapacity",
"GroupInServiceInstances",
"GroupTotalInstances",
]
depends_on = [
"aws_ssm_parameter.db_host",
"aws_ssm_parameter.db_name",
"aws_ssm_parameter.db_port",
"aws_ssm_parameter.db_username",
"aws_ssm_parameter.db_password"
]
lifecycle {
create_before_destroy = true
}
tag {
key = "prefix"
value = "${var.prefix}"
propagate_at_launch = true
}
tag {
key = "chain"
value = "{{ key }}"
propagate_at_launch = true
}
tag {
key = "Name"
value = "{{ key }} Application"
propagate_at_launch = true
}
}
# TODO: These autoscaling policies are not currently wired up to any triggers
resource "aws_autoscaling_policy" "explorer-up" {
name = "${var.prefix}-{{key}}-explorer-autoscaling-policy-up"
autoscaling_group_name = "${aws_autoscaling_group.explorer-{{key}}.name}"
adjustment_type = "ChangeInCapacity"
scaling_adjustment = 1
cooldown = 300
}
resource "aws_autoscaling_policy" "explorer-down" {
name = "${var.prefix}-{{key}}-explorer-autoscaling-policy-down"
autoscaling_group_name = "${aws_autoscaling_group.explorer-{{key}}.name}"
adjustment_type = "ChangeInCapacity"
scaling_adjustment = -1
cooldown = 300
}
{% endfor %}

View File

@ -1,7 +0,0 @@
provider "aws" {
version = "~> 1.15"
{% if aws_access_key is undefined %}
profile = "{{ aws_profile|default("default") }}"
{% endif %}
region = "{{ aws_region|default("us-east-1") }}"
}

View File

@ -1,6 +1,6 @@
terraform {
backend "s3" {
{% if aws_access_key is undefined %}
{% if aws_access_key is undefined or aws_access_key == '' %}
profile = "{{ aws_profile|default("default") }}"
{% else %}
access_key = "{{ aws_access_key }}"

View File

@ -1,76 +0,0 @@
# Create a gateway to provide access to the outside world
resource "aws_internet_gateway" "default" {
vpc_id = "${aws_vpc.vpc.id}"
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}
# Grant the VPC internet access in its main route table
resource "aws_route" "internet_access" {
route_table_id = "${aws_vpc.vpc.main_route_table_id}"
destination_cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.default.id}"
}
# The ALB for the app server
resource "aws_lb" "explorer" {
count = "${length(var.chains)}"
name = "${var.prefix}-explorer-${element(var.chains,count.index)}-alb"
internal = false
load_balancer_type = "application"
security_groups = ["${aws_security_group.alb.id}"]
subnets = ["${aws_subnet.default.id}", "${aws_subnet.alb.id}"]
enable_deletion_protection = false
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}
# The Target Group for the ALB
resource "aws_lb_target_group" "explorer" {
count = "${length(var.chains)}"
name = "${var.prefix}-explorer-${element(var.chains,count.index)}-alb-target"
port = 4000
protocol = "HTTP"
vpc_id = "${aws_vpc.vpc.id}"
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
stickiness {
type = "lb_cookie"
cookie_duration = 600
enabled = true
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 15
interval = 30
path = "/blocks"
port = 4000
}
}
{% for key, value in chain_custom_environment.iteritems() %}
resource "aws_alb_listener" "alb_listener{{loop.index-1}}" {
load_balancer_arn = "${aws_lb.explorer.*.arn[{{loop.index-1}}]}"
port = "${lookup(var.use_ssl,element(var.chains,{{loop.index-1}})) ? "443" : "80" }"
protocol = "${lookup(var.use_ssl,element(var.chains,{{loop.index-1}})) ? "HTTPS" : "HTTP" }"
{% if value['ECTO_USE_SSL']|default('false') == "true" %}
ssl_policy = "${lookup(var.alb_ssl_policy,element(var.chains,{{loop.index-1}}))}"
certificate_arn = "${lookup(var.alb_certificate_arn,element(var.chains,{{loop.index-1}}))}"
{% endif %}
default_action {
type = "forward"
target_group_arn = "${aws_lb_target_group.explorer.*.arn[{{loop.index-1}}]}"
}
}
{% endfor %}

View File

@ -1,4 +1,12 @@
prefix = "{{ prefix }}"
{% if aws_access_key is undefined or aws_access_key == '' %}
aws_profile = "{{ aws_profile|default('default') }}"
{% else %}
aws_access_key = "{{ aws_access_key | default('null') }}"
aws_secret_key = "{{ aws_secret_key | default('null') }}"
{% endif %}
aws_region = "{{ aws_region | default('us-east-1') }}"
prefix = "{{ group_names[0] }}"
key_name = "{{ ec2_ssh_key_name }}"
key_content = "{{ ec2_ssh_key_content }}"
vpc_cidr = "{{ vpc_cidr }}"
@ -8,93 +16,99 @@ dns_zone_name = "{{ dns_zone_name }}"
instance_type = "{{ instance_type }}"
root_block_size = "{{ root_block_size }}"
use_placement_group = {
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }}="{{ hostvars[host]['use_placement_group'] | default('false') }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
pool_size = {
{% for key, value in chain_custom_environment.iteritems() %}
{{ key }}="{{ value['POOL_SIZE']|default('30') }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }}="{{ hostvars[host]['env_vars']['POOL_SIZE'] | default('30') }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
secret_key_base = {
{% for key, value in chain_custom_environment.iteritems() %}
{{ key }}="{{ value['SECRET_KEY_BASE']|default('TPGMvGK0iIwlXBQuQDA5KRqk77VETbEBlG4gAWeb93TvBsYAjvoAvdODMd6ZeguPwf2YTRY3n7uvxXzQP4WayQ==') }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }}="{{ hostvars[host]['env_vars']['SECRET_KEY_BASE']|default('TPGMvGK0iIwlXBQuQDA5KRqk77VETbEBlG4gAWeb93TvBsYAjvoAvdODMd6ZeguPwf2YTRY3n7uvxXzQP4WayQ==') }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
use_ssl = {
{% for key, value in chain_custom_environment.iteritems() %}
{{ key }}="{{ value['ECTO_USE_SSL']|default('false') }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }}="{{ hostvars[host]['env_vars']['ECTO_USE_SSL']|default('false') }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
alb_ssl_policy = {
{% for key, value in chain_custom_environment.iteritems() %}
{{ key }}="{{ value['ALB_SSL_POLICY']|default('') }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }}="{{ hostvars[host]['env_vars']['ALB_SSL_POLICY']|default('') }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
alb_certificate_arn = {
{% for key, value in chain_custom_environment.iteritems() %}
{{ key }}="{{ value['ALB_CERTIFICATE_ARN']|default('') }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }}="{{ hostvars[host]['env_vars']['ALB_CERTIFICATE_ARN']|default('') }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chains = [
{% for key,value in chain_custom_environment.iteritems() %}
"{{ key }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
"{{ hostvars[host]['chain'] }}"{% if not loop.last %},{% endif %}
{% endfor %}
]
chain_db_id = {
{% for key, value in chain_db_id.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }} = "{{ hostvars[host]['db_id'] }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_name = {
{% for key, value in chain_db_name.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }} = "{{ hostvars[host]['db_name'] }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_username = {
{% for key, value in chain_db_username.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }} = "{{ hostvars[host]['db_username'] }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_password = {
{% for key, value in chain_db_password.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }} = "{{ hostvars[host]['db_password'] }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_instance_class = {
{% for key, value in chain_db_instance_class.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }} = "{{ hostvars[host]['db_instance_class'] }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_storage = {
{% for key, value in chain_db_storage.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }} = "{{ hostvars[host]['db_storage'] }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_storage_type = {
{% for key, value in chain_db_storage_type.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }} = "{{ hostvars[host]['db_storage_type'] }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_iops = {
{% for key, value in db_iops.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }} = "{{ hostvars[host]['db_iops']|default('0') }}"{% if not loop.last %},{% endif %}
{% endfor %}
}
chain_db_version = {
{% for key, value in chain_db_version.iteritems() %}
{{ key }} = "{{ value }}"{% if not loop.last %},{% endif %}
{% for host in groups[group_names[0]] %}
{{ hostvars[host]['chain'] }} = "{{ hostvars[host]['db_version'] }}"{% if not loop.last %},{% endif %}
{% endfor %}
}

View File

@ -1,52 +1,136 @@
- name: Clone BlockScout
git:
repo: "{{ blockscout_repo }}"
dest: "blockscout-{{ chain }}"
version: "{{ chain_branch[chain] }}"
dest: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}"
version: "{{ branch }}"
force: true
when: skip_fetch | bool != true
tags:
- build
- name: Git clean
command: "git clean -fdx"
args:
chdir: "blockscout-{{ chain }}"
chdir: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}"
when: skip_fetch | bool != true
tags:
- build
- name: Merge branches
command: "git merge {{ chain_merge_commit[chain] }}"
command: "git merge {{ merge_commit_item }}"
args:
chdir: "blockscout-{{ chain }}"
when: skip_fetch | bool != true and chain_merge_commit_item != 'false'
chdir: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}"
when: merge_commit_item and not skip_fetch | bool
vars:
chain_mc: "{{ chain_merge_commit | default({}) }}"
chain_merge_commit_item: "{{ chain_mc[chain] | default('false') }}"
merge_commit_item: "{{ merge_commit | default(false) }}"
tags:
- build
- name: Copy web config files
copy:
src: "blockscout-{{ chain }}/apps/block_scout_web/config/dev.secret.exs.example"
dest: "blockscout-{{ chain }}/apps/block_scout_web/config/dev.secret.exs"
src: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/apps/block_scout_web/config/dev.secret.exs.example"
dest: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/apps/block_scout_web/config/dev.secret.exs"
tags:
- build
- name: Template explorer config files
- name: Template explorer config files
template:
src: dev.secret.exs.j2
dest: "blockscout-{{ chain }}/apps/explorer/config/dev.secret.exs"
when: ps_db is defined
dest: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/apps/explorer/config/dev.secret.exs"
when: ps_user is defined
tags:
- build
- name: Copy default explorer config files
copy:
src: "blockscout-{{ chain }}/apps/explorer/config/dev.secret.exs.example"
dest: "blockscout-{{ chain }}/apps/explorer/config/dev.secret.exs"
when: ps_db is undefined or ps_db == ""
src: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/apps/explorer/config/dev.secret.exs.example"
dest: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/apps/explorer/config/dev.secret.exs"
when: ps_user is undefined or ps_user == ""
tags:
- build
- name: Remove static assets from previous deployment, if any
file:
path: "blockscout-{{ chain }}/apps/block_scout_web/priv/static"
path: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/apps/block_scout_web/priv/static"
state: absent
tags:
- build
- name: Fetch environment variables (via access key)
set_fact:
env_compiled: "{{ lookup('aws_ssm', path, aws_access_key=aws_access_key, aws_secret_key=aws_secret_key, region=aws_region|default('us-east-1'), shortnames=true, bypath=true, recursive=true ) }}"
vars:
path: "/{{ group_names[0] }}/{{ chain }}"
when: aws_access_key is defined
tags:
- update_vars
- build
- name: Fetch environment variables (via profile)
set_fact:
env_compiled: "{{ lookup('aws_ssm', path, region=aws_region|default('us-east-1'), aws_profile=aws_profile, shortnames=true, bypath=true, recursive=true ) }}"
vars:
path: "/{{ group_names[0] }}/{{ chain }}"
when: aws_access_key is undefined
tags:
- update_vars
- build
- name: Make config variables lowercase
set_fact:
lower_env: "{{ lower_env | combine ({item.key|lower : item.value}) }}"
with_dict: "{{ env_vars }}"
when: env_vars is defined
vars:
lower_env: {}
tags:
- update_vars
- build
- name: Override env variables
set_fact:
env_compiled: "{{ env_compiled | combine(lower_env) }}"
when: lower_env is defined
tags:
- build
- name: Uppercase chain
set_fact:
upper_env: "{{ upper_env | combine ({item.key|upper : item.value}) }}"
with_dict: "{{ env_compiled }}"
vars:
upper_env: {}
tags:
- build
- name: Add server port
set_fact:
server_port: "{{ 65535|random(seed=inventory_hostname,start=1024) }}"
tags:
- build
- name: Combine server env
set_fact:
server_env: "{{ upper_env | combine({'NETWORK_PATH':'/','PORT':server_port}) }}"
tags:
- build
- name: Override build variables
set_fact:
server_env: "{{ server_env | combine({item.key|regex_replace('BUILD_'):item.value}) if item.key | search('BUILD_') else server_env }}"
with_dict: "{{ server_env }}"
tags:
- build
- name: Show Server environment variables
debug:
var: server_env
- name: Compile BlockScout
command: "mix do {{ item }}"
args:
chdir: "blockscout-{{ chain }}"
chdir: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}"
environment: "{{ server_env }}"
with_items:
- deps.get
- local.rebar --force
@ -55,134 +139,139 @@
- ecto.drop
- ecto.create
- ecto.migrate
tags:
- build
- name: Install Node modules at apps/block_scout_web/assets
environment: "{{ server_env }}"
command: npm install
args:
chdir: "blockscout-{{ chain }}/apps/block_scout_web/assets"
chdir: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/apps/block_scout_web/assets"
tags:
- build
- name: Execute webpack.js at apps/block_scout_web/assets/node_modules/webpack/bin
environment: "{{ server_env }}"
command: node_modules/webpack/bin/webpack.js --mode production
args:
chdir: "blockscout-{{ chain }}/apps/block_scout_web/assets"
chdir: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/apps/block_scout_web/assets"
tags:
- build
- name: Install Node modules at apps/explorer
environment: "{{ server_env }}"
command: npm install
args:
chdir: "blockscout-{{ chain }}/apps/explorer"
chdir: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/apps/explorer"
tags:
- build
- name: Install SSL certificates
environment: "{{ server_env }}"
command: mix phx.gen.cert blockscout blockscout.local
args:
chdir: "blockscout-{{ chain }}/apps/block_scout_web"
- name: Fetch environment variables (via access key)
set_fact:
chain_env: "{{ lookup('aws_ssm', path, aws_access_key=aws_access_key, aws_secret_key=aws_secret_key, region=aws_region|default('us-east-1'), shortnames=true, bypath=true, recursive=true ) }}"
vars:
path: "/{{ prefix }}/{{ chain }}"
when: aws_access_key is defined
- name: Fetch environment variables (via profile)
set_fact:
chain_env: "{{ lookup('aws_ssm', path, aws_profile=aws_profile, shortnames=true, bypath=true, recursive=true ) }}"
vars:
path: "/{{ prefix }}/{{ chain }}"
when: aws_access_key is undefined
- name: Make config variables lowercase
set_fact:
chain_lower_env: "{{ chain_lower_env | combine ({item.key|lower : item.value}) }}"
with_dict: "{{ chain_custom_environment_chain }}"
when: chain_custom_environment_chain|length > 0
vars:
chain_lower_env: {}
chain_custom_environment_chain: "{{ chain_cec[chain] | default({}) if chain_cec[chain]>0 else {} }}"
chain_cec: "{{ chain_custom_environment | default ({}) }}"
- name: Override env variables
set_fact:
chain_env: "{{ chain_env | combine(chain_lower_env) }}"
when: chain_lower_env is defined
- name: Uppercase chain
set_fact:
chain_upper_env: "{{ chain_upper_env | combine ({item.key|upper : item.value}) }}"
with_dict: "{{ chain_env }}"
vars:
chain_upper_env: {}
chdir: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/apps/block_scout_web"
tags:
- build
- name: Start server
tags:
- build
block:
- name: Start server
command: "mix phx.server"
environment: "{{ chain_upper_env | combine({'NETWORK_PATH':'/'}) }}"
ignore_errors: true
environment: "{{ server_env }}"
args:
chdir: "blockscout-{{ chain }}"
chdir: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}"
async: 10000
poll: 0
- debug:
msg: "Please, open your browser at following addresses:"
run_once: true
- debug:
msg: "{{ ansible_host }}:{{ server_port }}"
- name: User prompt
pause:
prompt: "Please, open your browser and open 4000 port at the machine were Ansible is currently run. BlockScout should appear. Ensure that there is no visual artifacts and then press Enter to continue. Press Ctrl+C and then A if you face any issues to cancel the deployment."
rescue:
- name: 'Stop execution'
fail:
msg: "Execution aborted."
prompt: "BlockScout should appear. Ensure that there is no visual artifacts and then press Enter to continue. Press Ctrl+C and then A if you face any issues to cancel the deployment. Note: Localhost stands for the machine were Ansible is currently run."
run_once: true
register: prompt
always:
- name: kill server
command: "pkill -f {{ item }}"
with_items:
- beam.smp
- webpack.js
failed_when: false
when:
failed_when: false
- name: Check for execution interrupt
fail:
msg: "Execution aborted"
when: prompt is failed
tags:
- build
- name: Build static assets
environment: "{{ server_env }}"
command: mix phx.digest
args:
chdir: "blockscout-{{ chain }}"
chdir: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}"
tags:
- build
- name: User prompt
pause:
prompt: "Would you like to remove staging dependencies? [Yes/No] Default: Yes"
prompt: "Would you like to remove staging dependencies? [Yes/No]"
register: user_answer
until: user_answer.user_input | lower in conditional
retries: 10000
delay: 1
vars:
conditional: ['yes','no','true','false']
when: inventory_hostname == groups['all'][0]
tags:
- build
- name: Remove dev dependencies
file:
state: absent
path: "{{ item }}"
with_items:
- "blockscout-{{ chain }}/_build/"
- "blockscout-{{ chain }}/deps/"
- "blockscout-{{ chain }}/apps/block_scout_web/assets/node_modules/"
- "blockscout-{{ chain }}/apps/explorer/node_modules/"
- "blockscout-{{ chain }}/logs/dev/"
when: user_answer.user_input|lower != "false" and user_answer.user_input|lower != "no"
- "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/_build/"
- "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/deps/"
- "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/apps/block_scout_web/assets/node_modules/"
- "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/apps/explorer/node_modules/"
- "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/logs/dev/"
when: hostvars[groups['all'][0]].user_answer.user_input == "" or hostvars[groups['all'][0]].user_answer.user_input | lower | bool
tags:
- build
- name: Fix bug with favicon
replace:
regexp: '\"favicon\.ico\"\:\"favicon-[a-z0-9]+?\.ico\"'
replace: '"images/favicon.ico":"favicon.ico"'
path: "blockscout-{{ chain }}/apps/block_scout_web/priv/static/cache_manifest.json"
- name: Upload Blockscout to S3
command: "{{ 'AWS_ACCESS_KEY='~aws_access_key~' AWS_SECRET_ACCESS_KEY='~aws_secret_key~' AWS_DEFAULT_REGION='~aws_region if aws_profile is undefined else '' }} aws deploy push --application-name={{ prefix }}-explorer --s3-location s3://{{ prefix }}-explorer-codedeploy-releases/blockscout-{{ chain }}.zip --source=blockscout-{{ chain }} {{ '--profile='~aws_profile if aws_profile is defined else '' }}"
register: push_output
- name: Upload output
debug:
msg: "If deployment will fail, you can try to deploy blockscout manually using the following commands: {{ 'AWS_ACCESS_KEY=XXXXXXXXXXXXXX AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXX AWS_DEFAULT_REGION='~aws_region if aws_profile is undefined else '' }} {{ push_output.stdout_lines }} {{ '--profile='~aws_profile if aws_profile is defined else '' }}"
path: "/tmp/blockscout-{{ group_names[0] }}-{{ chain }}/apps/block_scout_web/priv/static/cache_manifest.json"
tags:
- build
- name: User prompt
pause:
prompt: "Do you want to update the Parameter Store variables? [Yes/No] Default: Yes"
prompt: "Do you want to update the Parameter Store variables? [Yes/No]"
register: user_answer
until: user_answer.user_input | lower in conditional
retries: 10000
delay: 1
vars:
conditional: ["",'yes','no','true','false']
when: inventory_hostname == groups['all'][0]
tags:
- update_vars
- name: Update chain variables
aws_ssm_parameter_store:
name: "/{{ prefix }}/{{ chain }}/{{ item.key }}"
name: "/{{ group_names[0] }}/{{ chain }}/{{ item.key }}"
value: "{{ item.value }}"
profile: "{{ profile }}"
aws_access_key: "{{ access_key }}"
@ -193,15 +282,42 @@
secret_key: "{{ aws_secret_key|default(omit) }}"
profile: "{{ aws_profile|default(omit) }}"
region: "{{ aws_region|default(omit) }}"
with_dict: "{{ chain_lower_env }}"
when: user_answer.user_input|lower != "false" and user_answer.user_input|lower != "no"
with_dict: "{{ lower_env }}"
when: hostvars[groups['all'][0]].user_answer.user_input == "" or hostvars[groups['all'][0]].user_answer.user_input | lower | bool
tags:
- update_vars
- name: User prompt
pause:
prompt: "Do you want to deploy BlockScout? [Yes/No] Default: Yes"
prompt: "Do you want to deploy BlockScout? [Yes/No]"
register: user_answer
until: user_answer.user_input | lower in conditional
retries: 10000
delay: 1
vars:
conditional: ["",'yes','no','true','false']
when: inventory_hostname == groups['all'][0]
tags:
- deploy
- name: Upload Blockscout to S3
command: "{{ 'AWS_ACCESS_KEY='~aws_access_key~' AWS_SECRET_ACCESS_KEY='~aws_secret_key~' AWS_DEFAULT_REGION='~aws_region if aws_profile is undefined else '' }} aws deploy push --application-name={{ group_names[0] }}-explorer --s3-location s3://{{ group_names[0] }}-explorer-codedeploy-releases/blockscout-{{ group_names[0] }}-{{ chain }}.zip --source=/tmp/blockscout-{{ group_names[0] }}-{{ chain }} {{ '--profile='~aws_profile~' --region='~aws_region if aws_profile is defined else '' }}"
register: push_output
when: hostvars[groups['all'][0]].user_answer.user_input == "" or hostvars[groups['all'][0]].user_answer.user_input | lower | bool
tags:
- deploy
- name: Upload output
debug:
msg: "If deployment will fail, you can try to deploy blockscout manually using the following commands: {{ 'AWS_ACCESS_KEY=XXXXXXXXXXXXXX AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXX AWS_DEFAULT_REGION='~aws_region if aws_profile is undefined else '' }} {{ push_output.stdout_lines }} {{ '--profile='~aws_profile~' --region'~aws_region if aws_profile is defined else '' }}"
when: hostvars[groups['all'][0]].user_answer.user_input == "" or hostvars[groups['all'][0]].user_answer.user_input | lower | bool
tags:
- deploy
- name: Deploy Blockscout
command: "{{ 'AWS_ACCESS_KEY='~aws_access_key~' AWS_SECRET_ACCESS_KEY='~aws_secret_key~' AWS_DEFAULT_REGION='~aws_region if aws_profile is undefined else '' }} {{ push_output.stdout_lines[1] }} --deployment-group-name {{ prefix }}-explorer-dg{{ index }} --deployment-config-name CodeDeployDefault.OneAtATime --description '{{ chain_upper_env['BLOCKSCOUT_VERSION'] }}' {{ '--profile='~aws_profile if aws_profile is defined else '' }}"
when: user_answer.user_input|lower != "false" and user_answer.user_input|lower != "no"
command: "{{ 'AWS_ACCESS_KEY='~aws_access_key~' AWS_SECRET_ACCESS_KEY='~aws_secret_key~' AWS_DEFAULT_REGION='~aws_region if aws_profile is undefined else '' }} {{ push_output.stdout_lines[1] }} --deployment-group-name {{ group_names[0] }}-explorer-dg{{ groups[group_names[0]].index(inventory_hostname) }} --deployment-config-name CodeDeployDefault.OneAtATime {{ '--profile='~aws_profile~' --region='~aws_region if aws_profile is defined else '' }}"
when: hostvars[groups['all'][0]].user_answer.user_input == "" or hostvars[groups['all'][0]].user_answer.user_input | lower | bool
tags:
- deploy

View File

@ -1,6 +1,6 @@
- name: Create S3 bucket
aws_s3:
bucket: "{{ prefix }}-{{ bucket }}"
bucket: "{{ group_names[0] }}-{{ bucket }}"
mode: create
permission: private
profile: "{{ profile }}"
@ -15,11 +15,11 @@
- name: Apply tags and versioning to create S3 bucket
s3_bucket:
name: "{{ prefix }}-{{ bucket }}"
name: "{{ group_names[0] }}-{{ bucket }}"
versioning: yes
tags:
origin: terraform
prefix: "{{ prefix }}"
prefix: "{{ inventory_hostname }}"
profile: "{{ profile }}"
aws_access_key: "{{ access_key }}"
aws_secret_key: "{{ secret_key }}"
@ -32,7 +32,7 @@
- name: Add lifecycle management policy to created S3 bucket
s3_lifecycle:
name: "{{ prefix }}-{{ bucket }}"
name: "{{ group_names[0] }}-{{ bucket }}"
rule_id: "expire"
noncurrent_version_expiration_days: 90
status: enabled
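To confirm the bucket came out as intended (private, versioned, with the 90-day noncurrent-version expiry), something like the following works (bucket name hypothetical):

```bash
aws s3api get-bucket-versioning --bucket blockscout-terraform-releases
aws s3api get-bucket-lifecycle-configuration --bucket blockscout-terraform-releases
```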

View File

@ -0,0 +1,38 @@
- name: Check if config file exists
stat:
path: "{{ playbook_dir }}/{{ file }}"
register: stat_result
- name: Copy temporary file to be uploaded
command: "cp {{ playbook_dir }}/{{ file }} {{ playbook_dir }}/{{ file }}.temp"
when: stat_result.stat.exists
- name: Remove insecure AWS variables
replace:
path: "{{ playbook_dir }}/{{ file }}.temp"
regexp: 'aws_.*'
replace: '<There was an insecure variable to keep at S3. Removed>'
when: stat_result.stat.exists
- name: Upload config to S3 bucket
aws_s3:
bucket: "{{ group_names[0] }}-{{ bucket }}"
object: all.yml
src: "{{ playbook_dir }}/{{ file }}.temp"
mode: put
profile: "{{ profile }}"
aws_access_key: "{{ access_key }}"
aws_secret_key: "{{ secret_key }}"
region: "{{ region }}"
vars:
access_key: "{{ aws_access_key|default(omit) }}"
secret_key: "{{ aws_secret_key|default(omit) }}"
profile: "{{ aws_profile|default(omit) }}"
region: "{{ aws_region|default(omit) }}"
when: stat_result.stat.exists
- name: Remove temp file
file:
path: "{{ playbook_dir }}/{{ file }}.temp"
state: absent
when: stat_result.stat.exists

View File

@ -1,45 +1,8 @@
- name: Check if config file exists
stat:
path: "{{ playbook_dir }}/group_vars/all.yml"
register: stat_result
- name: Copy temporary file to be uploaded
command: "cp {{ playbook_dir }}/group_vars/all.yml {{ playbook_dir }}/group_vars/all.yml.temp"
when: stat_result.stat.exists == True
- name: Remove insecure AWS variables
replace:
path: "{{ playbook_dir }}/group_vars/all.yml.temp"
regexp: 'aws_.*'
replace: '<There was an aws-related insecure variable to keep at S3. Removed>'
when: stat_result.stat.exists == True
- name: Remove other insecure variables
replace:
path: "{{ playbook_dir }}/group_vars/all.yml.temp"
regexp: 'secret_.*'
replace: '<There was an insecure variable to keep at S3. Removed>'
when: stat_result.stat.exists == True
- name: Upload config to S3 bucket
aws_s3:
bucket: "{{ prefix }}-{{ bucket }}"
object: all.yml
src: "{{ playbook_dir }}/group_vars/all.yml.temp"
mode: put
profile: "{{ profile }}"
aws_access_key: "{{ access_key }}"
aws_secret_key: "{{ secret_key }}"
region: "{{ region }}"
vars:
access_key: "{{ aws_access_key|default(omit) }}"
secret_key: "{{ aws_secret_key|default(omit) }}"
profile: "{{ aws_profile|default(omit) }}"
region: "{{ aws_region|default(omit) }}"
when: stat_result.stat.exists == True
- name: Remove temp file
file:
path: "{{ playbook_dir }}/group_vars/all.yml.temp"
state: absent
when: stat_result.stat.exists == True
- name: "Loop over config files"
include: config.yml file={{item}}
with_items:
- "group_vars/all.yml"
- "group_vars/{{ group_names[0] }}"
- "group_vars/{{ group_names[0] }}.yml"
- "host_vars/{{ inventory_hostname }}.yml"
- "host_vars/{{ inventory_hostname }}"

View File

@ -5,7 +5,7 @@
- name: Upload logs to s3
aws_s3:
bucket: "{{ prefix }}-{{ bucket }}"
bucket: "{{ group_names[0] }}-{{ bucket }}"
object: log.txt
src: "{{ playbook_dir }}/log.txt"
mode: put
@ -18,4 +18,4 @@
secret_key: "{{ aws_secret_key|default(omit) }}"
profile: "{{ aws_profile|default(omit) }}"
region: "{{ aws_region|default(omit) }}"
when: stat_result.stat.exists == true