Merge branch 'master' into fast-dev-dp

This commit is contained in:
lcaggio 2022-02-04 13:55:20 +01:00 committed by GitHub
commit 16a36b2452
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
42 changed files with 1170 additions and 1129 deletions

View File

@@ -57,6 +57,11 @@ jobs:
         run: |
           python3 tools/check_documentation.py examples modules fast
+      - name: Check documentation links (fabric)
+        id: documentation-links-fabric
+        run: |
+          python3 tools/check_links.py .
       # markdown-link-check:
       #   runs-on: ubuntu-latest
       #   steps:

View File

@@ -32,33 +32,10 @@ A resource factory consumes a simple representation of a resource (e.g., in YAML
 FAST uses YAML-based factories to deploy subnets and firewall rules and, as its name suggests, in the [project factory](./stages/03-project-factory/) stage.
-## High level design
-As mentioned before, fast relies on multiple stages to progressively bring up your GCP organization(s). In this section we briefly describe each stage.
-### Organizational level (00-01)
-- [Bootstrap](stages/00-bootstrap/README.md)<br/>
-  Enables critical organization-level functionality that directly depends on Organization Administrator permissions. It has two primary purposes. The first is to bootstrap the resources needed to automate this and the following stages (service accounts, GCS buckets). And secondly, it applies the minimum amount of configuration needed at the organization level to avoid the need to grant organization-level permissions via Organization Administrator later on, and to implement a minimum of security features like sinks and exports from the start.
-- [Resource Management](stages/01-resman/README.md)<br/>
-  Creates the base resource hierarchy (folders) and the automation resources required to delegate each part of the hierarchy to separate stages. This stage also configures organization-level policies and any exceptions needed by different branches of the resource hierarchy.
-### Shared resources (02)
-- [Security](stages/02-security/README.md)<br/>
-  Manages centralized security configurations in a separate stage, typically owned by the security team. This stage implements VPC Security Controls via separate perimeters for environments and central services, and creates projects to host centralized KMS keys used by the whole organization. It's intentionally easy to extend to include other security-related resources, like Secret Manager.
-- Networking ([VPN](02-networking/README.md)/[NVA](02-networking-nva/README.md))
-  Manages centralized network resources in a separate stage, and is typically owned by the networking team. This stage implements a hub-and-spoke design, and includes connectivity via VPN to on-premises, and YAML-based factories for firewall rules (hierarchical and VPC-level) and subnets. It's currently available in two versions: [spokes connected via VPN](02-networking/README.md), [and spokes connected via appliances](02-networking-nva/README.md).
-### Environment-level resources (03)
-- [Project Factory](stages/03-project-factory/prod/README.md)<br/>
-  YAML-based factory to create and configure application- or team-level projects. Configuration includes VPC-level settings for Shared VPC, service-level configuration for CMEK encryption via centralized keys, and service account creation for workloads and applications. This stage is meant to be used once per environment.
-- Data Platform (in development)
-- GKE Multitenant (in development)
-- GCE Migration (in development)
-Please refer to the READMEs of each stage for further details.
+## Stages and high level design
+As mentioned before, fast relies on multiple stages to progressively bring up your GCP organization(s).
+Please refer to the [stages](./stages/) section for further details.
 ## Implementation
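The factory idea described above (a simple declarative representation expanded into full resource definitions) can be sketched in a few lines. The subnet schema used here is hypothetical, for illustration only, and is not FAST's actual YAML format:

```python
# Minimal sketch of a resource factory: a short description (here a
# dict, as if loaded from a YAML file) is expanded into full resource
# definitions with defaults filled in. Schema is made up for the example.

def subnet_factory(project, vpc, subnets):
    'Expand short subnet descriptions into full resource definitions.'
    resources = []
    for name, spec in subnets.items():
        resources.append({
            'name': f'{vpc}-{name}',
            'project': project,
            'network': vpc,
            'ip_cidr_range': spec['cidr'],
            'region': spec['region'],
            # factories typically apply sane defaults for omitted fields
            'private_ip_google_access': spec.get('pga', True),
        })
    return resources

subnets = {
    'app': {'cidr': '10.0.0.0/24', 'region': 'europe-west1'},
    'db': {'cidr': '10.0.1.0/24', 'region': 'europe-west1', 'pga': False},
}
for r in subnet_factory('my-project', 'dev-vpc', subnets):
    print(r['name'], r['ip_cidr_range'])
```

The value of the pattern is that consumers only ever touch the short descriptions, while defaults and naming conventions live in one place.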


View File

@@ -305,9 +305,9 @@ Names used in internal references (e.g. `module.foo-prod.id`) are only used by T
 | name | description | sensitive | consumers |
 |---|---|:---:|---|
-| [billing_dataset](outputs.tf#L91) | BigQuery dataset prepared for billing export. | | |
-| [project_ids](outputs.tf#L96) | Projects created by this stage. | | |
-| [providers](outputs.tf#L107) | Terraform provider files for this stage and dependent stages. | ✓ | <code>stage-01</code> |
-| [tfvars](outputs.tf#L116) | Terraform variable files for the following stages. | ✓ | |
+| [billing_dataset](outputs.tf#L85) | BigQuery dataset prepared for billing export. | | |
+| [project_ids](outputs.tf#L90) | Projects created by this stage. | | |
+| [providers](outputs.tf#L101) | Terraform provider files for this stage and dependent stages. | ✓ | <code>stage-01</code> |
+| [tfvars](outputs.tf#L110) | Terraform variable files for the following stages. | ✓ | |
 <!-- END TFDOC -->

View File

@@ -42,12 +42,6 @@ locals {
       organization = var.organization
       prefix = var.prefix
     })
-    "02-networking-nva" = jsonencode({
-      billing_account_id = var.billing_account.id
-      custom_roles = module.organization.custom_role_id
-      organization = var.organization
-      prefix = var.prefix
-    })
     "02-security" = jsonencode({
       billing_account_id = var.billing_account.id
       organization = var.organization

View File

@@ -175,12 +175,12 @@ Due to its simplicity, this stage lends itself easily to customizations: adding
 | name | description | sensitive | consumers |
 |---|---|:---:|---|
-| [networking](outputs.tf#L88) | Data for the networking stage. | | <code>02-networking</code> |
-| [project_factories](outputs.tf#L98) | Data for the project factories stage. | | <code>xx-teams</code> |
-| [providers](outputs.tf#L115) | Terraform provider files for this stage and dependent stages. | ✓ | <code>02-networking</code> · <code>02-security</code> · <code>xx-sandbox</code> · <code>xx-teams</code> |
-| [sandbox](outputs.tf#L122) | Data for the sandbox stage. | | <code>xx-sandbox</code> |
-| [security](outputs.tf#L132) | Data for the networking stage. | | <code>02-security</code> |
-| [teams](outputs.tf#L142) | Data for the teams stage. | | |
-| [tfvars](outputs.tf#L155) | Terraform variable files for the following stages. | ✓ | |
+| [networking](outputs.tf#L84) | Data for the networking stage. | | <code>02-networking</code> |
+| [project_factories](outputs.tf#L94) | Data for the project factories stage. | | <code>xx-teams</code> |
+| [providers](outputs.tf#L111) | Terraform provider files for this stage and dependent stages. | ✓ | <code>02-networking</code> · <code>02-security</code> · <code>xx-sandbox</code> · <code>xx-teams</code> |
+| [sandbox](outputs.tf#L118) | Data for the sandbox stage. | | <code>xx-sandbox</code> |
+| [security](outputs.tf#L128) | Data for the networking stage. | | <code>02-security</code> |
+| [teams](outputs.tf#L138) | Data for the teams stage. | | |
+| [tfvars](outputs.tf#L151) | Terraform variable files for the following stages. | ✓ | |
 <!-- END TFDOC -->

View File

@@ -56,10 +56,6 @@ locals {
       folder_id = module.branch-network-folder.id
       project_factory_sa = local._project_factory_sas
     })
-    "02-networkin-nva" = jsonencode({
-      folder_id = module.branch-network-folder.id
-      project_factory_sa = local._project_factory_sas
-    })
     "02-security" = jsonencode({
       folder_id = module.branch-security-folder.id
       kms_restricted_admins = {

View File

@@ -1,7 +1,6 @@
 # Networking with Network Virtual Appliance
 This stage sets up the shared network infrastructure for the whole organization.
-It is an alternative to the [02-networking stage](../02-networking/README.md).
 It is designed for those who would like to leverage Network Virtual Appliances (NVAs) between trusted and untrusted areas of the network, for example for Intrusion Prevention System (IPS) purposes.
@@ -145,7 +144,7 @@ This configuration is battle-tested, and flexible enough to lend itself to simpl
 ## How to run this stage
-This stage is meant to be executed after the [resman](../01-resman) stage has run. It leverages the automation service account and the storage bucket created there, and additional resources configured in the [bootstrap](../00-boostrap) stage.
+This stage is meant to be executed after the [resman](../01-resman) stage has run. It leverages the automation service account and the storage bucket created there, and additional resources configured in the [bootstrap](../00-bootstrap) stage.
 It's possible to run this stage in isolation, but that's outside of the scope of this document. Please, refer to the previous stages for the environment requirements.
@@ -153,7 +152,7 @@ Before running this stage, you need to make sure you have the correct credential
 ### Providers configuration
-The default way of making sure you have the right permissions, is to use the identity of the service account pre-created for this stage, during the [resource management](./01-resman) stage, and that you are a member of the group that can impersonate it via provider-level configuration (`gcp-devops` or `organization-admins`).
+The default way of making sure you have the right permissions, is to use the identity of the service account pre-created for this stage, during the [resource management](../01-resman) stage, and that you are a member of the group that can impersonate it via provider-level configuration (`gcp-devops` or `organization-admins`).
 To simplify the setup, the previous stage pre-configures a valid providers file in its output and optionally writes it to a local file if the `outputs_location` variable is set to a valid path.
@@ -161,15 +160,15 @@ If you have set a valid value for `outputs_location` in the bootstrap stage, sim
 ```bash
 # `outputs_location` is set to `../../configs/example`
-ln -s ../../configs/example/02-networking-nva/providers.tf
+ln -s ../../configs/example/02-networking/providers.tf
 ```
 If you have not configured `outputs_location` in bootstrap, you can derive the providers file from that stage outputs:
 ```bash
 cd ../00-bootstrap
-terraform output -json providers | jq -r '.["02-networking-nva"]' \
-  > ../02-networking-nva-nva/providers.tf
+terraform output -json providers | jq -r '.["02-networking"]' \
+  > ../02-networking-nva/providers.tf
 ```
 ### Variable configuration
@@ -185,8 +184,8 @@ If you have set a valid value for `outputs_location` in the bootstrap and in the
 ```bash
 # `outputs_location` is set to `../../configs/example`
-ln -s ../../configs/example/02-networking-nva/terraform-bootstrap.auto.tfvars.json
-ln -s ../../configs/example/02-networking-nva/terraform-resman.auto.tfvars.json
+ln -s ../../configs/example/02-networking/terraform-bootstrap.auto.tfvars.json
+ln -s ../../configs/example/02-networking/terraform-resman.auto.tfvars.json
 ```
 Please, refer to the [variables](#variables) table below for a map of the variable origins, and use the sections below to understand how to adapt this stage to your networking configuration.
@@ -290,7 +289,7 @@ Variables managing L7 Internal Load Balancers (`l7ilb_subnets`) and Private Serv
 VPC network peering connectivity to the `trusted landing VPC` is managed by the `vpc-peering-*.tf` files.
 Copy `vpc-peering-prod.tf` to `vpc-peering-staging.tf` and replace "prod" with "staging", where relevant.
-Configure the NVAs deployed or update the sample NVA config files ([ew1](data/nva-startup-script-ew1.tftpl) and [ew4](data/nva-startup-script-ew1.tftpl)), thus making sure they support the new subnets.
+Configure the NVAs deployed or update the sample [NVA config file](data/nva-startup-script.tftpl) making sure they support the new subnets.
 DNS configurations are managed in the `dns-*.tf` files.
 Copy the `dns-prod.tf` to `dns-staging.tf` and replace within the files "prod" with "staging", where relevant.
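The `terraform output -json providers | jq -r '.["02-networking"]'` step above simply picks one key out of a JSON map of per-stage provider files. A stdlib-Python equivalent, with a made-up sample payload standing in for the real `terraform output`, looks like this:

```python
import json

# Equivalent of: terraform output -json providers | jq -r '.["02-networking"]'
# The sample payload is invented for the example; real values come from
# `terraform output -json providers` in the bootstrap stage directory.
def extract_provider(json_text, stage):
    'Return the provider file body for one stage from terraform JSON output.'
    providers = json.loads(json_text)
    return providers[stage]

sample = json.dumps({'02-networking': 'provider "google" {}\n'})
print(extract_provider(sample, '02-networking'), end='')
```

The resulting text is what gets redirected into the stage's `providers.tf`.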

View File

@@ -121,7 +121,7 @@ This configuration is battle-tested, and flexible enough to lend itself to simpl
 ## How to run this stage
-This stage is meant to be executed after the [resman](../01-resman) stage has run, as it leverages the automation service account and bucket created there, and additional resources configured in the [bootstrap](../00-boostrap) stage.
+This stage is meant to be executed after the [resman](../01-resman) stage has run, as it leverages the automation service account and bucket created there, and additional resources configured in the [bootstrap](../00-bootstrap) stage.
 It's of course possible to run this stage in isolation, but that's outside the scope of this document, and you would need to refer to the code for the previous stages for the environmental requirements.
@@ -129,7 +129,7 @@ Before running this stage, you need to make sure you have the correct credential
 ### Providers configuration
-The default way of making sure you have the right permissions, is to use the identity of the service account pre-created for this stage during the [resource management](./01-resman) stage, and that you are a member of the group that can impersonate it via provider-level configuration (`gcp-devops` or `organization-admins`).
+The default way of making sure you have the right permissions, is to use the identity of the service account pre-created for this stage during the [resource management](../01-resman) stage, and that you are a member of the group that can impersonate it via provider-level configuration (`gcp-devops` or `organization-admins`).
 To simplify setup, the previous stage pre-configures a valid providers file in its output, and optionally writes it to a local file if the `outputs_location` variable is set to a valid path.
@@ -209,7 +209,7 @@ To add a new firewall rule, create a new file or edit an existing one in the `da
 ### DNS architecture
-The DNS ([`dns`](https://github.com/terraform-google-modules/cloud-foundation-fabric/tree/master/modules/dns)) infrastructure is defined in [`dns.tf`](dns.tf).
+The DNS ([`dns`](https://github.com/terraform-google-modules/cloud-foundation-fabric/tree/master/modules/dns)) infrastructure is defined in the respective `vpc-xxx.tf` files.
 Cloud DNS manages onprem forwarding, the main GCP zone (in this example `gcp.example.com`) and is peered to environment-specific zones (i.e. `dev.gcp.example.com` and `prod.gcp.example.com`).
@@ -226,7 +226,7 @@ DNS queries sent to the on-premises infrastructure come from the `35.199.192.0/1
 #### On-prem to cloud
-The [Inbound DNS Policy](https://cloud.google.com/dns/docs/server-policies-overview#dns-server-policy-in) defined in module `landing-vpc` ([`landing.tf`](./landing.tf)) automatically reserves the first available IP address on each created subnet (typically the third one in a CIDR) to expose the Cloud DNS service so that it can be consumed from outside of GCP.
+The [Inbound DNS Policy](https://cloud.google.com/dns/docs/server-policies-overview#dns-server-policy-in) defined in module `landing-vpc` ([`landing.tf`](./vpc-landing.tf)) automatically reserves the first available IP address on each created subnet (typically the third one in a CIDR) to expose the Cloud DNS service so that it can be consumed from outside of GCP.
 ### Private Google Access
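The "first available IP address, typically the third one in a CIDR" mentioned above follows from GCP reserving the network address (`.0`) and the default gateway (`.1`) in every subnet, so the inbound DNS forwarder usually lands on `.2`. A quick stdlib sketch of that arithmetic:

```python
import ipaddress

def inbound_dns_ip(cidr):
    'First free address in a GCP subnet: .0 (network) and .1 (gateway) are taken.'
    net = ipaddress.ip_network(cidr)
    return net.network_address + 2

print(inbound_dns_ip('10.128.0.0/24'))  # 10.128.0.2
```

This is only the typical case; the policy reserves whatever the first *available* address is, which can differ if earlier addresses are already in use.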


View File

@@ -28,7 +28,7 @@ The project factory takes care of the following activities:
 ## How to run this stage
-This stage is meant to be executed after "foundational stages" (i.e., stages [`00-bootstrap`](../../00-bootstrap), [`01-resman`](../../01-resman), [`02-networking`](../../02-networking) and [`02-security`](../../02-security)) have been run.
+This stage is meant to be executed after "foundational stages" (i.e., stages [`00-bootstrap`](../../00-bootstrap), [`01-resman`](../../01-resman), 02-networking (either [VPN](../../02-networking-vpn) or [NVA](../../02-networking-nva)) and [`02-security`](../../02-security)) have been run.
 It's of course possible to run this stage in isolation, by making sure the architectural prerequisites are satisfied (e.g., networking), and that the Service Account running the stage is granted the roles/permissions below:
@@ -73,7 +73,7 @@ To avoid the tedious job of filling in the first group of variables with values
 If you configured a valid path for `outputs_location` in the bootstrap and networking stage, simply link the relevant `terraform-*.auto.tfvars.json` files from this stage's outputs folder (under the path you specified), where the `*` above is set to the name of the stage that produced it. For this stage, a single `.tfvars` file is available:
 ```bash
-# Variable `outputs_location` is set to `../../config` in stages 01-bootstrap and 02-networking
+# Variable `outputs_location` is set to `../../config` in stages 01-bootstrap and the 02-networking stage in use
 ln -s ../../../config/03-project-factory-prod/terraform-bootstrap.auto.tfvars.json
 ln -s ../../../config/03-project-factory-prod/terraform-networking.auto.tfvars.json
 ```

View File

@@ -17,8 +17,8 @@ Refer to each stage's documentation for a detailed description of its purpose, t
 - [Security](02-security/README.md)
   Manages centralized security configurations in a separate stage, and is typically owned by the security team. This stage implements VPC Security Controls via separate perimeters for environments and central services, and creates projects to host centralized KMS keys used by the whole organization. It's meant to be easily extended to include other security-related resources which are required, like Secret Manager.
-- Networking ([VPN](02-networking/README.md)/[NVA](02-networking-nva/README.md))
-  Manages centralized network resources in a separate stage, and is typically owned by the networking team. This stage implements a hub-and-spoke design, and includes connectivity via VPN to on-premises, and YAML-based factories for firewall rules (hierarchical and VPC-level) and subnets. It's currently available in two versions: [spokes connected via VPN](02-networking/README.md), [and spokes connected via appliances](02-networking-nva/README.md).
+- Networking ([VPN](02-networking-vpn/README.md)/[NVA](02-networking-nva/README.md))
+  Manages centralized network resources in a separate stage, and is typically owned by the networking team. This stage implements a hub-and-spoke design, and includes connectivity via VPN to on-premises, and YAML-based factories for firewall rules (hierarchical and VPC-level) and subnets. It's currently available in two versions: [spokes connected via VPN](02-networking-vpn/README.md), [and spokes connected via appliances](02-networking-nva/README.md).
 ## Environment-level resources (03)

View File

@@ -15,7 +15,7 @@
  */
 module "stage" {
-  source = "../../../../../fast/stages/02-networking"
+  source = "../../../../../fast/stages/02-networking-vpn"
   billing_account_id = "000000-111111-222222"
   organization = {
     domain = "gcp-pso-italy.net"
@@ -27,5 +27,5 @@ module "stage" {
     dev = "foo@iam"
     prod = "bar@iam"
   }
-  data_dir = "../../../../../fast/stages/02-networking/data/"
+  data_dir = "../../../../../fast/stages/02-networking-vpn/data/"
 }

View File

@@ -1,2 +1,3 @@
 click
+marko
 yamale

tools/check_links.py (new executable file, 80 lines)
View File

@@ -0,0 +1,80 @@
#!/usr/bin/env python3
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''Recursively check link destination validity in Markdown files.
This tool recursively checks that local links in Markdown files point to valid
destinations. Its main use is in CI pipelines triggered by pull requests.
'''
import collections
import pathlib
import urllib.parse
import click
import marko
BASEDIR = pathlib.Path(__file__).resolve().parents[1]
DOC = collections.namedtuple('DOC', 'path relpath links')
LINK = collections.namedtuple('LINK', 'dest valid')
def check_docs(dir_name):
  'Traverse dir_name and check links in Markdown files.'
  dir_path = BASEDIR / dir_name
  for readme_path in sorted(dir_path.glob('**/*.md')):
    if '.terraform' in str(readme_path) or '.pytest' in str(readme_path):
      continue
    links = []
    for el in marko.parser.Parser().parse(readme_path.read_text()).children:
      if not isinstance(el, marko.block.Paragraph):
        continue
      for subel in el.children:
        if not isinstance(subel, marko.inline.Link):
          continue
        link_valid = None
        url = urllib.parse.urlparse(subel.dest)
        if url.scheme:
          link_valid = True
        else:
          link_valid = (readme_path.parent / url.path).exists()
        links.append(LINK(subel.dest, link_valid))
    yield DOC(readme_path, str(readme_path.relative_to(dir_path)), links)


@click.command()
@click.argument('dirs', type=str, nargs=-1)
def main(dirs):
  'Check links in Markdown files contained in dirs.'
  errors = 0
  for dir_name in dirs:
    print(f'----- {dir_name} -----')
    for doc in check_docs(dir_name):
      state = '✓' if all(l.valid for l in doc.links) else '✗'
      print(f'[{state}] {doc.relpath} ({len(doc.links)})')
      if state == '✗':
        errors += 1
        for l in doc.links:
          if not l.valid:
            print(f'  {l.dest}')
  if errors:
    raise SystemExit('Errors found.')


if __name__ == '__main__':
  main()
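The core classification in `check_links.py` — a link with a URL scheme is taken on trust, while a scheme-less link must resolve to a file on disk relative to the Markdown file — can be exercised in isolation with only the standard library:

```python
import pathlib
import urllib.parse

def link_is_external(dest):
    'A link with any URL scheme (https, mailto, ...) is treated as external.'
    return bool(urllib.parse.urlparse(dest).scheme)

def link_is_valid(dest, base):
    'External links pass; local ones must exist relative to the Markdown file.'
    if link_is_external(dest):
        return True
    url = urllib.parse.urlparse(dest)
    # note: for pure-fragment links like '#variables', url.path is empty and
    # the check degenerates to testing that the base directory itself exists
    return (base / url.path).exists()

base = pathlib.Path('.')
print(link_is_external('https://cloud.google.com/dns'))  # True
print(link_is_valid('no-such-file-xyz.md', base))        # False
```

This mirrors the script's logic, including its limitation: external URLs are never fetched, so only local link rot is caught.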