Version 1.0

This commit provides the first version of POA's automated infrastructure
based on AWS. It uses Terraform for the infrastructure automation
itself, with some Bash thrown in to cover the rest.

Run `bin/infra help` to get started.
Paul Schoenfelder 2018-04-26 12:41:22 -04:00
commit b861da7809
37 changed files with 1868 additions and 0 deletions

16
.gitignore vendored Normal file
@@ -0,0 +1,16 @@
.DS_Store
# Testing
/ignore.tfvars
# Terraform State
/.terraform
/terraform.tfstate.d
# Sensitive information
/*.privkey
/*.tfvars
# Stack-specific information
/PREFIX
/plans/*.planfile

33
Makefile Normal file
@@ -0,0 +1,33 @@
.PHONY: help check format lint check-format shellcheck
IMAGE_NAME ?= poa-aws
INFRA_PREFIX ?= poa-example
KEY_PAIR ?= poa
help:
@echo "$(IMAGE_NAME)"
@perl -nle'print $& if m{^[a-zA-Z_-]+:.*?## .*$$}' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
check: lint ## Run linters and validation
@bin/infra precheck
@terraform validate -var-file=ignore.tfvars base
@if [ -f main.tfvars ]; then \
terraform validate \
-var='db_password=foo' \
-var='new_relic_app_name=foo' \
-var='new_relic_license_key=foo' \
-var-file=main.tfvars main; \
fi
@rm ignore.tfvars
format: ## Apply canonical formatting to Terraform files
@terraform fmt
lint: shellcheck check-format ## Lint scripts and config files
check-format:
@terraform fmt -check=true
shellcheck:
@shellcheck --shell=bash bin/infra
@shellcheck --shell=bash modules/stack/libexec/init.sh

97
README Normal file
@@ -0,0 +1,97 @@
# Usage
## Prerequisites
The bootstrap script included in this project expects the AWS CLI, jq, and Terraform to be installed and on the PATH.
On macOS, with Homebrew installed, just run: `brew install awscli jq terraform`
For other platforms, or if you don't have Homebrew installed, please see the following links:
- [jq](https://stedolan.github.io/jq/download/)
- [awscli](https://docs.aws.amazon.com/cli/latest/userguide/installing.html)
- [terraform](https://www.terraform.io/intro/getting-started/install.html)
## AWS
You will need to set up a new AWS account, and then log in to that account using the AWS CLI (via `aws configure`).
It is critical that this account have full permissions to the following AWS resources/services:
- VPCs and associated networking resources (subnets, routing tables, etc.)
- Security Groups
- EC2
- S3
- SSM
- DynamoDB
- Route53
- RDS
These are required to provision the various AWS resources used by this project. If you are lacking permissions,
Terraform will fail when applying its plan, and you will have to make sure those permissions are provided.
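For reference, `aws configure` will interactively prompt you for credentials and a default region, e.g.:
```
$ aws configure
AWS Access Key ID [None]: <your access key id>
AWS Secret Access Key [None]: <your secret access key>
Default region name [None]: us-east-2
Default output format [None]: json
```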
## Usage
Once the prerequisites are out of the way, you are ready to spin up your new infrastructure!
From the root of the project:
```
$ bin/infra help
```
This will show you the tasks and options available to you with this script.
The infra script will request any information it needs to proceed, and then call Terraform to bootstrap the necessary infrastructure
for its own state management. This state-management infra is needed to ensure that Terraform's state is stored in a centralized location,
so that multiple people can use Terraform on the same infra without stepping on each other's toes. Terraform prevents that from happening by
holding locks (via DynamoDB) against the state data (stored in S3). Generating the S3 bucket and DynamoDB table has to be done using local state
the first time, but once provisioned, the local state is migrated to S3, and all further invocations of `terraform` will use the state stored in S3.
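For reference, the backend configuration the script writes to `backend.tfvars` looks roughly like this (the bucket and table names are derived from your prefix; the values shown are illustrative):
```terraform
region         = "us-east-2"
bucket         = "abc123ef-poa-terraform-state"
dynamodb_table = "abc123ef-poa-terraform-locks"
key            = "terraform.tfstate"
```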
The infra created, at a high level, is as follows:
- An SSH key pair (or you can choose to use one which was already created); this is used with any EC2 hosts
- A VPC containing all of the resources provisioned
- A public subnet for the app servers, and a private subnet for the database (and Redis for now)
- An internet gateway to provide internet access for the VPC
- An ELB which exposes the app server HTTP endpoints to the world
- A security group to lock down ingress to the app servers to 80/443 + SSH
- A security group to allow the ELB to talk to the app servers
- A security group to allow the app servers access to the database
- An internal DNS zone
- A DNS record for the database
- An autoscaling group and launch configuration for each chain
- A CodeDeploy application and deployment group targeting the corresponding autoscaling groups
Each configured chain will receive its own ASG (autoscaling group) and deployment group. When application updates
are pushed to CodeDeploy, all autoscaling groups will deploy the new version using a blue/green strategy. Currently,
each ASG runs a single EC2 host, and is configured to allow scaling up, but no triggers are set up to actually perform the
scaling yet. This is something that may come in the future.
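For example, once a release has been built, pushing it through CodeDeploy looks roughly like the following (the application and bucket names below are illustrative; the real names, derived from your prefix, are printed in the Terraform outputs):
```
$ aws deploy push --application-name=myprefix-explorer \
    --s3-location s3://myprefix-explorer-codedeploy-releases/release.zip \
    --source=path/to/your/app
```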
**IMPORTANT**: This repository's `.gitignore` prevents the storage of several files generated during provisioning, but it is important
that you keep them around in your own fork, so that subsequent runs of the `infra` script use the same configuration and state.
These files are `PREFIX`, `backend.tfvars`, `main.tfvars`, the contents of `plans`, and the Terraform state directories. If you generated
a private key for EC2 (the default), then you will also have a `*.privkey` file in your project root; store it securely out of
band once created, as it does not need to be in the repository.
## Defining Chains/Adding Chains
By default, this repo builds infra for the `sokol` chain. If you do not want that, or want a different set of chains, you need to
create/edit `user.tfvars` and add the following configuration:
```terraform
chains = {
"mychain" = "url/to/endpoint"
}
chain_trace_endpoints = {
"mychain" = "url/to/debug/endpoint/or/the/main/chain/endpoint"
}
```
This will ensure that those chains are used when provisioning the infrastructure; re-run `bin/infra provision` to apply the change.
## Configuration
Config is stored in the Systems Manager Parameter Store; each chain has its own set of config values. If you modify one of these values,
you will need to terminate the instances for that chain so that they are reprovisioned with the new configuration.
Make sure to import the changes into the Terraform state as well (see the example below), or you run the risk of the state getting out of sync.
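For example, to change a chain's `pool_size` out of band and keep Terraform's view of it consistent (parameter names follow the `/<prefix>/<chain>/<name>` scheme used by this stack; the `terraform import` line is a sketch, and the exact resource address may vary, assuming the provider version in use supports importing `aws_ssm_parameter` resources):
```
$ aws ssm put-parameter --name "/myprefix/sokol/pool_size" \
    --value "20" --type String --overwrite
$ terraform import 'module.stack.aws_ssm_parameter.pool_size[0]' "/myprefix/sokol/pool_size"
```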

19
appspec.yml.example Normal file
@@ -0,0 +1,19 @@
version: 0.0
os: linux
files:
- source: .
destination: /opt/app
hooks:
ApplicationStop:
- location: bin/stop.sh
timeout: 300
AfterInstall:
- location: bin/build.sh
ApplicationStart:
- location: bin/migrate.sh
timeout: 300
- location: bin/start.sh
timeout: 3600
ValidateService:
- location: bin/health_check.sh
timeout: 3600

1
base/backend.tf Symbolic link
@@ -0,0 +1 @@
../common/backend.tf

1
base/main.tf Symbolic link
@@ -0,0 +1 @@
../setup/main.tf

1
base/provider.tf Symbolic link
@@ -0,0 +1 @@
../common/provider.tf

1
base/variables.tf Symbolic link
@@ -0,0 +1 @@
../common/variables.tf

427
bin/infra Executable file
@@ -0,0 +1,427 @@
#!/usr/bin/env bash
set -e
# Color support
function disable_color() {
IS_TTY=false
txtrst=
txtbld=
bldred=
bldgrn=
bldylw=
bldblu=
bldmag=
bldcyn=
}
IS_TTY=false
if [ -t 1 ]; then
if command -v tput >/dev/null; then
IS_TTY=true
fi
fi
if [ "$IS_TTY" = "true" ]; then
txtrst=$(tput sgr0 || echo '\e[0m') # Reset
txtbld=$(tput bold || echo '\e[1m') # Bold
bldred=${txtbld}$(tput setaf 1 || echo '\e[31m') # Red
bldgrn=${txtbld}$(tput setaf 2 || echo '\e[32m') # Green
bldylw=${txtbld}$(tput setaf 3 || echo '\e[33m') # Yellow
bldblu=${txtbld}$(tput setaf 4 || echo '\e[34m') # Blue
bldmag=${txtbld}$(tput setaf 5 || echo '\e[35m') # Magenta
bldcyn=${txtbld}$(tput setaf 6 || echo '\e[36m') # Cyan
else
disable_color
fi
# Logging
# Print the given message in cyan, but only when --verbose was passed
function debug() {
if [ "$VERBOSE" == "true" ]; then
printf '%s%s%s\n' "$bldcyn" "$1" "$txtrst"
fi
}
# Print the given message in blue
function info() {
printf '%s%s%s\n' "$bldblu" "$1" "$txtrst"
}
# Print the given message in magenta
function action() {
printf '%s%s%s\n' "$bldmag" "$1" "$txtrst"
}
# Print the given message in yellow
function warn() {
printf '%s%s%s\n' "$bldylw" "$1" "$txtrst"
}
# Like warn, but expects the message via redirect
function warnb() {
printf '%s' "$bldylw"
while read -r data; do
printf '%s\n' "$data"
done
printf '%s\n' "$txtrst"
}
# Print the given message in red
function error() {
printf '%s%s%s\n' "$bldred" "$1" "$txtrst"
}
# Like error, but expects the message via redirect
function errorb() {
printf '%s' "$bldred"
while read -r data; do
printf '%s\n' "$data"
done
printf '%s\n' "$txtrst"
exit 1
}
# Print the given message in green
function success() {
printf '%s%s%s\n' "$bldgrn" "$1" "$txtrst"
}
# Print help if requested
function help() {
cat << EOF
POA Infrastructure Management Tool
Usage:
./infra [global options] <task> [task args]
This script will bootstrap required AWS resources, then generate infrastructure via Terraform.
Tasks:
help Show help
provision Run the provisioner to generate or modify POA infrastructure
destroy Tear down any provisioned resources and local state
Global Options:
-v | --verbose This will print out verbose execution information for debugging
-h | --help Print this help message
--dry-run Perform as many actions as possible without performing side-effects
--no-color Turn off color
EOF
exit 2
}
# Verify tools
function check_prereqs() {
if ! which jq >/dev/null; then
warnb << EOF
This script requires that the 'jq' utility has been installed and can be found in $PATH
On macOS, with Homebrew, this is as simple as 'brew install jq'.
For installs on other platforms, see https://stedolan.github.io/jq/download/
EOF
exit 2
fi
if ! which aws >/dev/null; then
warnb << EOF
This script requires that the AWS CLI tool has been installed and can be found in $PATH
On macOS, with Homebrew, this is as simple as 'brew install awscli'.
For installs on other platforms, see https://docs.aws.amazon.com/cli/latest/userguide/installing.html
EOF
exit 2
fi
if ! which terraform >/dev/null; then
warnb << EOF
This script requires that the Terraform CLI be installed and available in PATH!
On macOS, with Homebrew, this is as simple as 'brew install terraform'.
For other platforms, see https://www.terraform.io/intro/getting-started/install.html
EOF
exit 2
fi
}
# Tear down all provisioned infra
function destroy() {
terraform destroy -var-file=backend.tfvars -var-file=main.tfvars base
rm -f ./PREFIX
rm -f ./backend.tfvars
rm -f ./main.tfvars
success "All generated infrastructure successfully removed!"
}
# Provision infrastructure
function provision() {
# If INFRA_PREFIX has not been set yet, request it from user
if [ -z "$INFRA_PREFIX" ]; then
DEFAULT_INFRA_PREFIX=$(tr -dc 'a-z0-9' < /dev/urandom | fold -w 8 | head -n 1)
warnb << EOF
# Infrastructure Prefix
In order to ensure that provisioned resources are unique, this script uses a
unique prefix for all resource names and ids.
By default, a random 8-character lowercase alphanumeric string is generated for you, but
if you wish to provide your own, now is your chance. This value will be stored
in './PREFIX' so that you only need provide it once, but make sure you source
control the file.
EOF
read -r -p "What prefix should be used? (default is $DEFAULT_INFRA_PREFIX): "
INFRA_PREFIX="$REPLY"
if [ -z "$INFRA_PREFIX" ]; then
INFRA_PREFIX="$DEFAULT_INFRA_PREFIX"
fi
if [ "$DRY_RUN" == "false" ]; then
echo "$INFRA_PREFIX" > ./PREFIX
fi
fi
# EC2 key pairs
if [ -z "$KEY_PAIR" ]; then
read -r -p "Please provide the name of the key pair to use with EC2 hosts: "
KEY_PAIR="$REPLY"
if [ -z "$KEY_PAIR" ]; then
error "You must provide a valid key pair name!"
exit 2
fi
fi
if [ -z "$SECRET_KEY_BASE" ]; then
SECRET_KEY_BASE="$(openssl rand -base64 32)"
fi
if ! aws ec2 describe-key-pairs --key-names="$KEY_PAIR" 2>/dev/null; then
if [ "$DRY_RUN" == "true" ]; then
action "DRY RUN: Would have created an EC2 key pair"
else
info "The key pair '$KEY_PAIR' does not exist, creating..."
if ! output=$(aws ec2 create-key-pair --key-name="$KEY_PAIR"); then
error "$output\\nFailed to generate key pair!"
fi
echo "$output" | jq '.KeyMaterial' --raw-output > "$KEY_PAIR.privkey"
success "Created keypair successfully! Private key has been saved to ./$KEY_PAIR.privkey"
fi
fi
EXTRA_VARS=""
if [ -f ./user.tfvars ]; then
EXTRA_VARS="-var-file=user.tfvars"
fi
# Save variables used by Terraform modules
if [ ! -f ./backend.tfvars ]; then
# shellcheck disable=SC2154
region="$TF_VAR_region"
if [ -z "$region" ]; then
# Try to pull region from local config
if [ -f "$HOME/.aws/config" ]; then
region=$(grep 'region' ~/.aws/config | sed -e 's/region = //')
fi
fi
if [ -z "$region" ]; then
# If unset still, use default of us-east-2
region='us-east-2'
fi
# Backend config only!
{
echo "region = \"$region\""
echo "bucket = \"$INFRA_PREFIX-poa-terraform-state\""
echo "dynamodb_table = \"$INFRA_PREFIX-poa-terraform-locks\""
echo "key = \"terraform.tfstate\""
} > ./backend.tfvars
# Other configuration needs to go in main.tfvars or init will break
{
echo "region = \"$region\""
echo "bucket = \"poa-terraform-state\""
echo "dynamodb_table = \"poa-terraform-locks\""
echo "key_name = \"$KEY_PAIR\""
echo "prefix = \"$INFRA_PREFIX\""
echo "secret_key_base = \"$SECRET_KEY_BASE\""
} > ./main.tfvars
fi
workspace="$(terraform workspace show)"
if [ ! "$workspace" == "main" ]; then
if [ "$workspace" == "default" ]; then
# Setup base workspace
if workspaces=$(terraform workspace list); then
if ! echo "$workspaces" | grep "base$" >/dev/null; then
terraform workspace new base setup
fi
else
exit 2
fi
# We first switch to the base workspace using the setup directory as the initial config
terraform workspace select base setup
# Init the base workspace using the setup directory
terraform init -backend-config=backend.tfvars setup
# Generate the plan for the S3 backend resources
terraform plan $EXTRA_VARS -var-file=main.tfvars -out plans/setup.planfile setup
if [ "$DRY_RUN" == "true" ]; then
action "DRY RUN: Would have executed Terraform plan for S3 backend"
else
# Apply the plan to provision the S3 backend resources
terraform apply plans/setup.planfile
fi
fi
if [ "$DRY_RUN" == "true" ]; then
action "DRY RUN: Would have migrated Terraform state to S3 backend"
else
# Now initialize the base workspace using the S3 backend
# This has the effect of migrating the local state to S3
#init_env=$(sed -r -e 's/^([^\s]*) = /TF_VAR_\1=/' backend.tfvars)
#for ev in $init_env; do
#export $ev
#done
terraform init -backend-config=backend.tfvars base
fi
# Setup main workspace
if ! terraform workspace list | grep "main$" >/dev/null; then
terraform workspace new main main
fi
# Switch to the main workspace which contains the rest of our infra
terraform workspace select main main
fi
if [ "$DRY_RUN" == "true" ]; then
action "DRY RUN: Would have initialized Terraform state for remaining infrastructure"
action "DRY RUN: Would have generated Terraform plan for remaining infrastructure"
action "DRY RUN: Would have executed Terraform plan for remaining infrastructure"
else
# Initialize main workspace using S3 backend
terraform init -backend-config=backend.tfvars main
# Generate the plan for the remaining infra
terraform plan $EXTRA_VARS -var-file=main.tfvars -out plans/main.planfile main
# Apply the plan to provision the remaining infra
terraform apply plans/main.planfile
success "Infrastructure has been successfully provisioned!"
fi
}
# Print all resource ARNs tagged with prefix=INFRA_PREFIX
function resources() {
if [ -z "$INFRA_PREFIX" ]; then
error "No prefix set, unable to locate tagged resources"
exit 1
fi
# Yes, stagging, blame Amazon
aws resourcegroupstaggingapi get-resources \
--no-paginate \
--tag-filters="Key=prefix,Values=$INFRA_PREFIX" | \
jq '.ResourceTagMappingList[].ResourceARN' --raw-output
}
# Provide test data for validation
function precheck() {
# Save variables used by Terraform modules
if [ ! -f ./ignore.tfvars ]; then
{
echo "bucket = \"poa-terraform-state\""
echo "dynamodb_table = \"poa-terraform-locks\""
echo "key = \"terraform.tfstate\""
echo "key_name = \"poa\""
echo "prefix = \"prefix\""
} > ./ignore.tfvars
fi
}
# Parse options for this script
VERBOSE=false
HELP=false
DRY_RUN=false
COMMAND=
while [ "$1" != "" ]; do
param=$(echo "$1" | sed -re 's/^([^=]*)=.*/\1/')
#val=$(echo "$1" | sed -e 's/^[^=]*=//g')
case $param in
-h | --help)
HELP=true
;;
-v | --verbose)
VERBOSE=true
;;
--dry-run)
DRY_RUN=true
;;
--no-color)
disable_color
;;
--)
shift
break
;;
*)
COMMAND="$param"
shift
break
;;
esac
shift
done
# Turn on debug mode if --verbose was set
if [ "$VERBOSE" == "true" ]; then
set -x
fi
# Set working directory to the project root
cd "$(dirname "${BASH_SOURCE[0]}")/.."
# If cached prefix is in PREFIX file, then use it
if [ -f ./PREFIX ]; then
INFRA_PREFIX=$(cat ./PREFIX)
fi
# Override command if --help or -h was passed
if [ "$HELP" == "true" ]; then
# If we ever want to show help for a specific command we'll need this
# HELP_COMMAND="$COMMAND"
COMMAND=help
fi
check_prereqs
case $COMMAND in
help)
help
;;
provision)
provision
;;
destroy)
destroy
;;
resources)
resources
;;
precheck)
precheck
;;
*)
error "Unknown task '$COMMAND'. Try 'help' to see valid tasks"
exit 1
esac
exit 0

3
common/backend.tf Normal file
@@ -0,0 +1,3 @@
terraform {
backend "s3" {}
}

5
common/provider.tf Normal file
@@ -0,0 +1,5 @@
provider "aws" {
version = "~> 1.15"
region = "${var.region}"
}

16
common/variables.tf Normal file
@@ -0,0 +1,16 @@
variable "bucket" {
description = "The name of the S3 bucket which will hold Terraform state"
}
variable "dynamodb_table" {
description = "The name of the DynamoDB table which will hold Terraform locks"
}
variable "region" {
description = "The AWS region to use"
default = "us-east-2"
}
variable "prefix" {
description = "The prefix used to identify all resources generated with this plan"
}

1
main/backend.tf Symbolic link
@@ -0,0 +1 @@
../common/backend.tf

1
main/backend_vars.tf Symbolic link
@@ -0,0 +1 @@
../common/variables.tf

37
main/main.tf Normal file
@@ -0,0 +1,37 @@
module "backend" {
source = "../modules/backend"
bootstrap = "0"
bucket = "${var.bucket}"
dynamodb_table = "${var.dynamodb_table}"
prefix = "${var.prefix}"
}
module "stack" {
source = "../modules/stack"
prefix = "${var.prefix}"
region = "${var.region}"
key_name = "${var.key_name}"
chains = "${var.chains}"
chain_trace_endpoints = "${var.chain_trace_endpoints}"
vpc_cidr = "${var.vpc_cidr}"
public_subnet_cidr = "${var.public_subnet_cidr}"
instance_type = "${var.instance_type}"
db_subnet_cidr = "${var.db_subnet_cidr}"
redis_subnet_cidr = "${var.redis_subnet_cidr}"
dns_zone_name = "${var.dns_zone_name}"
db_id = "${var.db_id}"
db_name = "${var.db_name}"
db_username = "${var.db_username}"
db_password = "${var.db_password}"
db_storage = "${var.db_storage}"
db_storage_type = "${var.db_storage_type}"
db_instance_class = "${var.db_instance_class}"
secret_key_base = "${var.secret_key_base}"
new_relic_app_name = "${var.new_relic_app_name}"
new_relic_license_key = "${var.new_relic_license_key}"
}

35
main/outputs.tf Normal file
@@ -0,0 +1,35 @@
output "instructions" {
description = "Instructions for executing deployments"
value = <<OUTPUT
To deploy a new version of the application manually:
1) Run the following command to upload the application to S3.
aws deploy push --application-name=${module.stack.codedeploy_app} --s3-location s3://${module.stack.codedeploy_bucket}/path/to/release.zip --source=path/to/repo
2) Follow the instructions in the output from the `aws deploy push` command
to deploy the uploaded application. Use the deployment group names shown below:
- ${join("\n - ", formatlist("%s", module.stack.codedeploy_deployment_group_names))}
You will also need to specify a deployment config name. Example:
--deployment-config-name=CodeDeployDefault.OneAtATime
A deployment description is optional.
3) Monitor the deployment using the deployment id returned by the `aws deploy create-deployment` command:
aws deploy get-deployment --deployment-id=<deployment-id>
4) Once the deployment is complete, you can access each chain explorer from its respective url:
- ${join("\n - ", formatlist("%s: %s", keys(module.stack.explorer_urls), values(module.stack.explorer_urls)))}
OUTPUT
}
output "db_instance_address" {
description = "The internal IP address of the RDS instance"
value = "${module.stack.db_instance_address}"
}

1
main/provider.tf Symbolic link
@@ -0,0 +1 @@
../common/provider.tf

97
main/variables.tf Normal file
@@ -0,0 +1,97 @@
variable "key_name" {
description = "The name of the SSH key to use with EC2 hosts"
default = "poa"
}
variable "vpc_cidr" {
description = "Virtual Private Cloud CIDR block"
default = "10.0.0.0/16"
}
variable "public_subnet_cidr" {
description = "The CIDR block for the public subnet"
default = "10.0.0.0/24"
}
variable "db_subnet_cidr" {
description = "The CIDR block for the database subnet"
default = "10.0.1.0/16"
}
variable "redis_subnet_cidr" {
description = "The CIDR block for the redis subnet"
default = "10.0.128.0/24"
}
variable "dns_zone_name" {
description = "The internal DNS name"
default = "poa.internal"
}
variable "instance_type" {
description = "The EC2 instance type to use for app servers"
default = "m5.xlarge"
}
variable "chains" {
description = "A map of chain names to urls"
default = {
"sokol" = "https://sokol-trace.poa.network"
}
}
variable "chain_trace_endpoints" {
description = "A map of chain names to trace urls"
default = {
"sokol" = "https://sokol-trace.poa.network"
}
}
# RDS/Database configuration
variable "db_id" {
description = "The identifier for the RDS database"
default = "poa"
}
variable "db_name" {
description = "The name of the database associated with the application"
default = "poa"
}
variable "db_username" {
description = "The name of the user which will be used to connect to the database"
default = "poa"
}
variable "db_password" {
description = "The password associated with the database user"
}
variable "db_storage" {
description = "The database storage size in GB"
default = "20"
}
variable "db_storage_type" {
description = "The type of database storage to use: magnetic, gp2, io1"
default = "gp2"
}
variable "db_instance_class" {
description = "The instance class of the database"
default = "db.m4.large"
}
variable "secret_key_base" {
description = "The secret key base to use for Explorer"
}
variable "new_relic_app_name" {
description = "The name of the application in New Relic"
}
variable "new_relic_license_key" {
description = "The license key for talking to New Relic"
}

45
modules/backend/main.tf Normal file
@@ -0,0 +1,45 @@
# S3 bucket
resource "aws_s3_bucket" "terraform_state" {
count = "${var.bootstrap}"
bucket = "${var.prefix}-${var.bucket}"
acl = "private"
versioning {
enabled = true
}
lifecycle_rule {
id = "expire"
enabled = true
noncurrent_version_expiration {
days = 90
}
}
tags {
origin = "terraform"
prefix = "${var.prefix}"
}
}
# DynamoDB table
resource "aws_dynamodb_table" "terraform_statelock" {
count = "${var.bootstrap}"
name = "${var.prefix}-${var.dynamodb_table}"
read_capacity = 1
write_capacity = 1
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
tags {
origin = "terraform"
prefix = "${var.prefix}"
}
}

8
modules/backend/variables.tf Normal file
@@ -0,0 +1,8 @@
variable "bootstrap" {
description = "Whether we are bootstrapping the required infra or not"
default = 0
}
variable "bucket" {}
variable "dynamodb_table" {}
variable "prefix" {}

130
modules/stack/config.tf Normal file
@@ -0,0 +1,130 @@
resource "aws_ssm_parameter" "new_relic_app_name" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/new_relic_app_name"
value = "${var.new_relic_app_name}"
type = "String"
}
resource "aws_ssm_parameter" "new_relic_license_key" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/new_relic_license_key"
value = "${var.new_relic_license_key}"
type = "String"
}
locals {
redis_host = "${aws_elasticache_cluster.default.cache_nodes.0.address}"
redis_port = "${aws_elasticache_cluster.default.cache_nodes.0.port}"
}
resource "aws_ssm_parameter" "redis_url" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/redis_url"
value = "redis://${local.redis_host}:${local.redis_host}/${var.prefix}"
type = "String"
}
resource "aws_ssm_parameter" "pool_size" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/pool_size"
value = "10"
type = "String"
}
resource "aws_ssm_parameter" "ecto_use_ssl" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/ecto_use_ssl"
value = "false"
type = "String"
}
resource "aws_ssm_parameter" "ethereum_url" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/ethereum_url"
value = "${element(values(var.chains),count.index)}"
type = "String"
}
resource "aws_ssm_parameter" "trace_url" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chain_trace_endpoints),count.index)}/trace_url"
value = "${element(values(var.chain_trace_endpoints), count.index)}"
type = "String"
}
resource "aws_ssm_parameter" "exq_blocks_concurrency" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/exq_blocks_concurrency"
value = "1"
type = "String"
}
resource "aws_ssm_parameter" "exq_concurrency" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/exq_concurrency"
value = "1"
type = "String"
}
resource "aws_ssm_parameter" "exq_internal_transactions_concurrency" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/exq_internal_transactions_concurrency"
value = "1"
type = "String"
}
resource "aws_ssm_parameter" "exq_receipts_concurrency" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/exq_receipts_concurrency"
value = "1"
type = "String"
}
resource "aws_ssm_parameter" "exq_transactions_concurrency" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/exq_transactions_concurrency"
value = "1"
type = "String"
}
resource "aws_ssm_parameter" "secret_key_base" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/secret_key_base"
value = "${var.secret_key_base}"
type = "String"
}
resource "aws_ssm_parameter" "port" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/port"
value = "80"
type = "String"
}
resource "aws_ssm_parameter" "db_username" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/db_username"
value = "${var.db_username}"
type = "String"
}
resource "aws_ssm_parameter" "db_password" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/db_password"
value = "${var.db_password}"
type = "String"
}
resource "aws_ssm_parameter" "db_host" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/db_host"
value = "${aws_route53_record.db.fqdn}"
type = "String"
}
resource "aws_ssm_parameter" "db_port" {
count = "${length(var.chains)}"
name = "/${var.prefix}/${element(keys(var.chains),count.index)}/db_port"
value = "${aws_db_instance.default.port}"
type = "String"
}

46
modules/stack/deploy.tf Normal file
@@ -0,0 +1,46 @@
resource "aws_s3_bucket" "explorer_releases" {
bucket = "${var.prefix}-explorer-codedeploy-releases"
acl = "private"
versioning {
enabled = true
}
}
resource "aws_codedeploy_app" "explorer" {
name = "${var.prefix}-explorer"
}
resource "aws_codedeploy_deployment_group" "explorer" {
count = "${length(var.chains)}"
app_name = "${aws_codedeploy_app.explorer.name}"
deployment_group_name = "${var.prefix}-explorer-dg${count.index}"
service_role_arn = "${aws_iam_role.deployer.arn}"
autoscaling_groups = ["${aws_autoscaling_group.explorer.*.id[count.index]}"]
deployment_style {
deployment_option = "WITH_TRAFFIC_CONTROL"
deployment_type = "BLUE_GREEN"
}
load_balancer_info {
elb_info {
name = "${var.prefix}-explorer-${element(keys(var.chains),count.index)}-elb"
}
}
blue_green_deployment_config {
deployment_ready_option {
action_on_timeout = "STOP_DEPLOYMENT"
wait_time_in_minutes = 30
}
green_fleet_provisioning_option {
action = "DISCOVER_EXISTING"
}
terminate_blue_instances_on_deployment_success {
action = "KEEP_ALIVE"
}
}
}

23
modules/stack/dns.tf Normal file
@@ -0,0 +1,23 @@
# Internal DNS
resource "aws_route53_zone" "main" {
name = "${var.prefix}.${var.dns_zone_name}"
vpc_id = "${aws_vpc.vpc.id}"
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}
# DNS Records
resource "aws_route53_record" "db" {
zone_id = "${aws_route53_zone.main.zone_id}"
name = "db"
type = "A"
alias {
name = "${aws_db_instance.default.address}"
zone_id = "${aws_db_instance.default.hosted_zone_id}"
evaluate_target_health = false
}
}

120
modules/stack/hosts.tf Normal file
@@ -0,0 +1,120 @@
data "aws_ami" "explorer" {
most_recent = true
filter {
name = "name"
values = ["amzn-ami-*-x86_64-gp2"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "owner-alias"
values = ["amazon"]
}
}
resource "aws_launch_configuration" "explorer" {
name_prefix = "${var.prefix}-explorer-launchconfig-"
image_id = "${data.aws_ami.explorer.id}"
instance_type = "${var.instance_type}"
security_groups = ["${aws_security_group.app.id}"]
key_name = "${var.key_name}"
iam_instance_profile = "${aws_iam_instance_profile.explorer.id}"
associate_public_ip_address = false
depends_on = ["aws_db_instance.default"]
user_data = "${file("${path.module}/libexec/init.sh")}"
lifecycle {
create_before_destroy = true
}
}
resource "aws_placement_group" "explorer" {
count = "${length(var.chains)}"
name = "${var.prefix}-explorer-placement-group${count.index}"
strategy = "cluster"
}
resource "aws_autoscaling_group" "explorer" {
count = "${length(var.chains)}"
name = "${aws_launch_configuration.explorer.name}-asg${count.index}"
max_size = "${length(var.chains) * 4}"
min_size = "${length(var.chains)}"
desired_capacity = "${length(var.chains)}"
placement_group = "${aws_placement_group.explorer.*.id[count.index]}"
launch_configuration = "${aws_launch_configuration.explorer.name}"
vpc_zone_identifier = ["${aws_subnet.default.id}"]
availability_zones = ["${data.aws_availability_zones.available.names}"]
load_balancers = ["${aws_elb.explorer.*.name[count.index]}"]
# Health checks are performed by CodeDeploy hooks
health_check_type = "EC2"
enabled_metrics = [
"GroupMinSize",
"GroupMaxSize",
"GroupDesiredCapacity",
"GroupInServiceInstances",
"GroupTotalInstances",
]
depends_on = [
"aws_ssm_parameter.new_relic_app_name",
"aws_ssm_parameter.new_relic_license_key",
"aws_ssm_parameter.redis_url",
"aws_ssm_parameter.pool_size",
"aws_ssm_parameter.ecto_use_ssl",
"aws_ssm_parameter.exq_blocks_concurrency",
"aws_ssm_parameter.exq_concurrency",
"aws_ssm_parameter.exq_internal_transactions_concurrency",
"aws_ssm_parameter.exq_receipts_concurrency",
"aws_ssm_parameter.exq_transactions_concurrency",
"aws_ssm_parameter.secret_key_base",
"aws_ssm_parameter.port",
"aws_ssm_parameter.db_username",
"aws_ssm_parameter.db_password",
"aws_ssm_parameter.db_host",
"aws_ssm_parameter.db_port",
"aws_ssm_parameter.ethereum_url",
"aws_ssm_parameter.trace_url",
]
lifecycle {
create_before_destroy = true
}
tag {
key = "prefix"
value = "${var.prefix}"
propagate_at_launch = true
}
tag {
key = "chain"
value = "${element(keys(var.chains),count.index)}"
propagate_at_launch = true
}
}
# TODO: These autoscaling policies are not currently wired up to any triggers
resource "aws_autoscaling_policy" "explorer-up" {
name = "${var.prefix}-explorer-autoscaling-policy-up"
autoscaling_group_name = "${aws_autoscaling_group.explorer.name}"
adjustment_type = "ChangeInCapacity"
scaling_adjustment = 1
cooldown = 300
}
resource "aws_autoscaling_policy" "explorer-down" {
name = "${var.prefix}-explorer-autoscaling-policy-down"
autoscaling_group_name = "${aws_autoscaling_group.explorer.name}"
adjustment_type = "ChangeInCapacity"
scaling_adjustment = -1
cooldown = 300
}

158
modules/stack/libexec/init.sh Executable file
@@ -0,0 +1,158 @@
#!/usr/bin/env bash
set -e
LOG=/var/log/user_data.log
METADATA_URL="http://169.254.169.254/latest/meta-data"
DYNDATA_URL="http://169.254.169.254/latest/dynamic"
INSTANCE_ID="$(curl -s $METADATA_URL/instance-id)"
HOSTNAME="$(curl -s $METADATA_URL/local-hostname)"
function log() {
ts=$(date '+%Y-%m-%dT%H:%M:%SZ')
printf '%s [init.sh] %s\n' "$ts" "$1" | tee -a "$LOG"
}
yum update -y
yum upgrade -y --enablerepo=epel >>"$LOG"
if ! which wget >/dev/null; then
yum install -y wget >>"$LOG"
fi
if ! which unzip >/dev/null; then
yum install -y unzip >>"$LOG"
fi
if ! which ruby >/dev/null; then
yum install -y ruby >>"$LOG"
fi
if ! which jq >/dev/null; then
log "Installing jq.."
yum install -y --enablerepo=epel jq >>"$LOG"
fi
log "Determining region this instance is in.."
REGION="$(curl -s $DYNDATA_URL/instance-identity/document | jq -r '.region')"
log "Region is: $REGION"
log "Installing CodeDeploy agent.."
pushd /home/ec2-user
aws s3 cp "s3://aws-codedeploy-$REGION/latest/install" . --region="$REGION" >>"$LOG"
chmod +x ./install
./install auto >>"$LOG"
service codedeploy-agent stop >>"$LOG"
log "CodeDeploy agent installed successfully!"
log "Fetching instance tags.."
aws ec2 describe-tags --region="$REGION" --filters "Name=resource-id,Values=$INSTANCE_ID" > tags.json
tags="$(jq '.Tags[] as $t | if ($t["Key"] | contains(":") | not) then "\($t["Key"])=\($t["Value"])" else "" end' --raw-output tags.json | awk NF)"
log "$(printf 'Tags:\n%s' "$tags")"
# PREFIX and CHAIN are key to configuring this instance
PREFIX="$(jq '.Tags[] | select(.Key == "prefix") | .Value' tags.json --raw-output)"
CHAIN="$(jq '.Tags[] | select(.Key == "chain") | .Value' tags.json --raw-output)"
log "Setting up application environment.."
mkdir -p /opt/app
log "Installing Erlang.."
wget https://packages.erlang-solutions.com/erlang-solutions-1.0-1.noarch.rpm >>"$LOG"
rpm -Uvh erlang-solutions-1.0-1.noarch.rpm >>"$LOG"
yum install -y \
erlang-erts \
erlang-kernel \
erlang-stdlib \
erlang-compiler \
erlang-asn1 \
erlang-crypto \
erlang-debugger \
erlang-dialyzer \
erlang-edoc \
erlang-erl_interface \
erlang-eunit \
erlang-hipe \
erlang-inets \
erlang-mnesia \
erlang-os_mon \
erlang-parsetools \
erlang-public_key \
erlang-runtime_tools \
erlang-sasl \
erlang-ssh \
erlang-ssl \
erlang-syntax_tools \
erlang-tools \
>"$LOG"
log "Installing Elixir to /opt/elixir.."
mkdir -p /opt/elixir
wget https://github.com/elixir-lang/elixir/releases/download/v1.6.4/Precompiled.zip >>"$LOG"
unzip Precompiled.zip -d /opt/elixir >>"$LOG"
log "Elixir installed successfully!"
log "Fetching configuration from Parameter Store..."
parameters_json=$(aws ssm get-parameters-by-path --region "$REGION" --path "/$PREFIX/$CHAIN")
params=$(echo "$parameters_json" | jq '.Parameters[].Name' --raw-output)
log "$(printf 'Found the following parameters:\n\n%s\n' "$params")"
function get_param() {
echo "$parameters_json" |\
jq ".Parameters[] | select(.Name == \"/$PREFIX/$CHAIN/$1\") | .Value" \
--raw-output
}
DB_USER="$(get_param 'db_username')"
DB_PASS="$(get_param 'db_password')"
DB_HOST="$(get_param 'db_host')"
DB_PORT="$(get_param 'db_port')"
DB_NAME="$CHAIN"
DATABASE_URL="postgresql://$DB_USER:$DB_PASS@$DB_HOST:$DB_PORT"
# Need to map the Parameter Store response to a set of NAME="<value>" entries,
# one per line, which will then be written to /etc/environment so that they are
# set for all users on the system
old_env="$(cat /etc/environment)"
{
echo "$old_env"
# shellcheck disable=SC2016
echo 'PATH=/opt/elixir/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:$PATH'
# shellcheck disable=SC1117
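# e.g. parameter "/$PREFIX/$CHAIN/new_relic_app_name" with value "foo" becomes NEW_RELIC_APP_NAME="foo"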
echo "$parameters_json" | \
jq ".Parameters[] as \$ps | \"\(\$ps[\"Name\"] | gsub(\"-\"; \"_\") | ltrimstr(\"/$PREFIX/$CHAIN/\") | ascii_upcase)=\\\"\(\$ps[\"Value\"])\\\"\"" --raw-output
echo "DYNO=\"$HOSTNAME\""
echo "HOSTNAME=\"$HOSTNAME\""
echo "DATABASE_URL=\"$DATABASE_URL/$DB_NAME\""
} > /etc/environment
log "Parameters have been written to /etc/environment successfully!"
log "Setting permissions on /opt/app"
chown -R ec2-user /opt/app
log "Creating pgsql database for $CHAIN"
if ! which psql >/dev/null; then
log "Installing psql.."
yum install -y --enablerepo=epel postgresql >>"$LOG"
fi
function has_db() {
psql --tuples-only --no-align \
"$DATABASE_URL/postgres" \
-c "SELECT COUNT(*) FROM pg_catalog.pg_database WHERE datname = '$DB_NAME';"
}
if [ "$(has_db)" != "1" ]; then
psql "$DATABASE_URL/postgres" \
-c "CREATE DATABASE $DB_NAME;" >"$LOG"
fi
log "Application environment is ready!"
log "Starting CodeDeploy agent.."
service codedeploy-agent start >>"$LOG"
exit 0

29
modules/stack/outputs.tf Normal file
@@ -0,0 +1,29 @@
output "codedeploy_app" {
description = "The name of the CodeDeploy application"
value = "${aws_codedeploy_app.explorer.name}"
}
output "codedeploy_deployment_group_names" {
description = "The names of all the CodeDeploy deployment groups"
value = "${aws_codedeploy_deployment_group.explorer.*.deployment_group_name}"
}
output "codedeploy_bucket" {
description = "The name of the CodeDeploy S3 bucket for applciation revisions"
value = "${aws_s3_bucket.explorer_releases.id}"
}
output "codedeploy_bucket_path" {
description = "The path for releases in the CodeDeploy S3 bucket"
value = "/"
}
output "explorer_urls" {
description = "A map of each chain to the DNS name of its corresponding Explorer instance"
value = "${zipmap(keys(var.chains), aws_elb.explorer.*.dns_name)}"
}
output "db_instance_address" {
description = "The IP address of the RDS instance"
value = "${aws_db_instance.default.address}"
}

25
modules/stack/rds.tf Normal file
@@ -0,0 +1,25 @@
resource "aws_db_instance" "default" {
identifier = "${var.prefix}-${var.db_id}"
engine = "postgres"
engine_version = "9.6"
instance_class = "${var.db_instance_class}"
storage_type = "${var.db_storage_type}"
allocated_storage = "${var.db_storage}"
copy_tags_to_snapshot = true
skip_final_snapshot = true
username = "${var.db_username}"
password = "${var.db_password}"
vpc_security_group_ids = ["${aws_security_group.database.id}"]
db_subnet_group_name = "${aws_db_subnet_group.database.id}"
depends_on = ["aws_security_group.database"]
lifecycle {
prevent_destroy = true
}
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}

21
modules/stack/redis.tf Normal file
@@ -0,0 +1,21 @@
resource "aws_elasticache_cluster" "default" {
cluster_id = "${var.prefix}-explorer-redis"
engine = "redis"
node_type = "cache.m4.large"
num_cache_nodes = 1
parameter_group_name = "default.redis3.2"
port = 6379
security_group_ids = ["${aws_security_group.redis.id}"]
availability_zone = "${data.aws_availability_zones.available.names[0]}"
subnet_group_name = "${aws_elasticache_subnet_group.redis.id}"
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}
resource "aws_elasticache_subnet_group" "redis" {
name = "${var.prefix}-redis-subnet-group"
subnet_ids = ["${aws_subnet.redis.id}"]
}

64
modules/stack/routing.tf Normal file
@@ -0,0 +1,64 @@
# Create a gateway to provide access to the outside world
resource "aws_internet_gateway" "default" {
vpc_id = "${aws_vpc.vpc.id}"
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}
# Grant the VPC internet access in its main route table
resource "aws_route" "internet_access" {
route_table_id = "${aws_vpc.vpc.main_route_table_id}"
destination_cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.default.id}"
}
# The ELB for the app server
resource "aws_elb" "explorer" {
count = "${length(var.chains)}"
name = "${var.prefix}-explorer-${element(keys(var.chains),count.index)}-elb"
subnets = ["${aws_subnet.default.id}"]
security_groups = ["${aws_security_group.elb.id}"]
cross_zone_load_balancing = true
connection_draining = true
connection_draining_timeout = 400
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
interval = 30
target = "HTTP:80/"
}
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
#listener {
# instance_port = 443
# instance_protocol = "http"
# lb_port = 443
# lb_protocol = "https"
# ssl_certificate_id = "arn:aws:iam::ID:server-certificate/NAME"
#}
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}
resource "aws_lb_cookie_stickiness_policy" "explorer" {
count = "${length(var.chains)}"
name = "${var.prefix}-explorer-${element(keys(var.chains),count.index)}-stickiness-policy"
load_balancer = "${aws_elb.explorer.*.id[count.index]}"
lb_port = 80
cookie_expiration_period = 600
}

279
modules/stack/security.tf Normal file
@@ -0,0 +1,279 @@
data "aws_iam_policy_document" "instance-assume-role-policy" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ec2.amazonaws.com"]
}
}
}
data "aws_iam_policy_document" "deployer-assume-role-policy" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["codedeploy.amazonaws.com"]
}
}
}
data "aws_iam_policy_document" "config-policy" {
statement {
effect = "Allow"
actions = ["ssm:DescribeParameters"]
resources = ["*"]
}
statement {
effect = "Allow"
actions = ["ssm:GetParameter", "ssm:GetParameters", "ssm:GetParametersByPath"]
resources = [
"arn:aws:ssm:*:*:parameter/${var.prefix}/*",
"arn:aws:ssm:*:*:parameter/${var.prefix}/*/*"
]
}
statement {
effect = "Allow"
actions = ["ec2:DescribeTags"]
resources = ["*"]
}
statement {
effect = "Allow"
actions = ["s3:*"]
resources = [
"arn:aws:s3:::aws-codedeploy-us-east-1/*",
"arn:aws:s3:::aws-codedeploy-us-east-2/*",
"arn:aws:s3:::aws-codedeploy-us-west-1/*",
"arn:aws:s3:::aws-codedeploy-us-west-2/*",
"arn:aws:s3:::aws-codedeploy-ap-northeast-1/*",
"arn:aws:s3:::aws-codedeploy-ap-northeast-2/*",
"arn:aws:s3:::aws-codedeploy-ap-south-1/*",
"arn:aws:s3:::aws-codedeploy-ap-southeast-1/*",
"arn:aws:s3:::aws-codedeploy-ap-southeast-2/*",
"arn:aws:s3:::aws-codedeploy-eu-central-1/*",
"arn:aws:s3:::aws-codedeploy-eu-west-1/*",
"arn:aws:s3:::aws-codedeploy-sa-east-1/*",
]
}
}
data "aws_iam_policy_document" "codedeploy-policy" {
statement {
effect = "Allow"
actions = [
"autoscaling:CompleteLifecycleAction",
"autoscaling:DeleteLifecycleHook",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeLifecycleHooks",
"autoscaling:PutLifecycleHook",
"autoscaling:RecordLifecycleActionHeartbeat",
"codedeploy:*",
"ec2:DescribeInstances",
"ec2:DescribeInstanceStatus",
"tag:GetTags",
"tag:GetResources",
"sns:Publish",
]
resources = ["*"]
}
statement {
effect = "Allow"
actions = ["s3:Get*", "s3:List*"]
resources = [
"${aws_s3_bucket.explorer_releases.arn}",
"${aws_s3_bucket.explorer_releases.arn}/*",
"arn:aws:s3:::aws-codedeploy-us-east-1/*",
"arn:aws:s3:::aws-codedeploy-us-east-2/*",
"arn:aws:s3:::aws-codedeploy-us-west-1/*",
"arn:aws:s3:::aws-codedeploy-us-west-2/*",
"arn:aws:s3:::aws-codedeploy-ap-northeast-1/*",
"arn:aws:s3:::aws-codedeploy-ap-northeast-2/*",
"arn:aws:s3:::aws-codedeploy-ap-south-1/*",
"arn:aws:s3:::aws-codedeploy-ap-southeast-1/*",
"arn:aws:s3:::aws-codedeploy-ap-southeast-2/*",
"arn:aws:s3:::aws-codedeploy-eu-central-1/*",
"arn:aws:s3:::aws-codedeploy-eu-west-1/*",
"arn:aws:s3:::aws-codedeploy-sa-east-1/*",
]
}
}
resource "aws_iam_instance_profile" "explorer" {
name = "${var.prefix}-explorer-profile"
role = "${aws_iam_role.role.name}"
path = "/${var.prefix}/"
}
resource "aws_iam_role_policy" "config" {
name = "${var.prefix}-config-policy"
role = "${aws_iam_role.role.id}"
policy = "${data.aws_iam_policy_document.config-policy.json}"
}
resource "aws_iam_role" "role" {
name = "${var.prefix}-explorer-role"
description = "The IAM role given to each Explorer instance"
path = "/${var.prefix}/"
assume_role_policy = "${data.aws_iam_policy_document.instance-assume-role-policy.json}"
}
resource "aws_iam_role_policy" "deployer" {
name = "${var.prefix}-codedeploy-policy"
role = "${aws_iam_role.deployer.id}"
policy = "${data.aws_iam_policy_document.codedeploy-policy.json}"
}
resource "aws_iam_role" "deployer" {
name = "${var.prefix}-deployer-role"
description = "The IAM role given to the CodeDeploy service"
assume_role_policy = "${data.aws_iam_policy_document.deployer-assume-role-policy.json}"
}
# A security group for the ELB so it is accessible via the web
resource "aws_security_group" "elb" {
name = "${var.prefix}-poa-elb"
description = "A security group for the app server ELB, so it is accessible via the web"
vpc_id = "${aws_vpc.vpc.id}"
# HTTP from anywhere
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
# HTTPS from anywhere
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
# Unrestricted outbound
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}
resource "aws_security_group" "app" {
name = "${var.prefix}-poa-app"
description = "A security group for the app server, allowing SSH and HTTP(S)"
vpc_id = "${aws_vpc.vpc.id}"
# HTTP from the VPC
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["${var.vpc_cidr}"]
}
# HTTPS from the VPC
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["${var.vpc_cidr}"]
}
# SSH from anywhere
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
# Unrestricted outbound
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}
resource "aws_security_group" "database" {
name = "${var.prefix}-poa-database"
description = "Allow any inbound traffic from public/private subnet"
vpc_id = "${aws_vpc.vpc.id}"
# Allow anything from within the app server subnet
ingress {
from_port = 0
to_port = 65535
protocol = "tcp"
cidr_blocks = ["${var.public_subnet_cidr}"]
}
# Unrestricted outbound
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}
resource "aws_security_group" "redis" {
name = "${var.prefix}-poa-redis"
description = "Allow any inbound traffic from public/private subnet"
vpc_id = "${aws_vpc.vpc.id}"
# Allow traffic from within app server subnet
ingress {
from_port = 6379
to_port = 6379
protocol = "tcp"
cidr_blocks = ["${var.public_subnet_cidr}"]
}
# Unrestricted outbound
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}

54
modules/stack/subnets.tf Normal file
@@ -0,0 +1,54 @@
## Public subnet
resource "aws_subnet" "default" {
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "${var.public_subnet_cidr}"
availability_zone = "${data.aws_availability_zones.available.names[0]}"
map_public_ip_on_launch = true
tags {
name = "${var.prefix}-default-subnet"
prefix = "${var.prefix}"
origin = "terraform"
}
}
## Redis subnet
resource "aws_subnet" "redis" {
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "${var.redis_subnet_cidr}"
availability_zone = "${data.aws_availability_zones.available.names[0]}"
map_public_ip_on_launch = false
tags {
name = "${var.prefix}-redis-subnet"
prefix = "${var.prefix}"
origin = "terraform"
}
}
## Database subnet
resource "aws_subnet" "database" {
count = "${length(data.aws_availability_zones.available.names)}"
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "${cidrsubnet(var.db_subnet_cidr, 8, 1 + count.index)}"
availability_zone = "${data.aws_availability_zones.available.names[count.index]}"
map_public_ip_on_launch = false
tags {
name = "${var.prefix}-database-subnet${count.index}"
prefix = "${var.prefix}"
origin = "terraform"
}
}
resource "aws_db_subnet_group" "database" {
name = "${var.prefix}-database"
description = "The group of database subnets"
subnet_ids = ["${aws_subnet.database.*.id}"]
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}

29
modules/stack/variables.tf Normal file
@@ -0,0 +1,29 @@
variable "region" {}
variable "prefix" {}
variable "key_name" {}
variable "vpc_cidr" {}
variable "public_subnet_cidr" {}
variable "db_subnet_cidr" {}
variable "redis_subnet_cidr" {}
variable "dns_zone_name" {}
variable "instance_type" {}
variable "chains" {
default = {}
}
variable "chain_trace_endpoints" {
default = {}
}
variable "db_id" {}
variable "db_name" {}
variable "db_username" {}
variable "db_password" {}
variable "db_storage" {}
variable "db_storage_type" {}
variable "db_instance_class" {}
variable "new_relic_app_name" {}
variable "new_relic_license_key" {}
variable "secret_key_base" {}

35
modules/stack/vpc.tf Normal file
@@ -0,0 +1,35 @@
# This module will create the VPC for POA
# It is composed of:
# - VPC
# - Security group for VPC
# - A public subnet
# - A private subnet
# - NAT to give the private subnet access to internet
data "aws_availability_zones" "available" {}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
enable_dns_hostnames = true
enable_dns_support = true
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}
resource "aws_vpc_dhcp_options" "poa_dhcp" {
domain_name = "${var.dns_zone_name}"
domain_name_servers = ["AmazonProvidedDNS"]
tags {
prefix = "${var.prefix}"
origin = "terraform"
}
}
resource "aws_vpc_dhcp_options_association" "poa_dhcp" {
vpc_id = "${aws_vpc.vpc.id}"
dhcp_options_id = "${aws_vpc_dhcp_options.poa_dhcp.id}"
}

0
plans/.gitkeep Normal file

8
setup/main.tf Normal file
@@ -0,0 +1,8 @@
module "backend" {
source = "../modules/backend"
bootstrap = "${terraform.workspace == "base" ? 1 : 0}"
bucket = "${var.bucket}"
dynamodb_table = "${var.dynamodb_table}"
prefix = "${var.prefix}"
}

1
setup/provider.tf Symbolic link
@@ -0,0 +1 @@
../common/provider.tf

1
setup/variables.tf Symbolic link
@@ -0,0 +1 @@
../common/variables.tf