# Networking with separated single environment

This stage sets up the shared network infrastructure for the whole organization. It implements a single shared VPC per environment, where each environment is independently connected to the on-premises environment, to maintain fully separated routing domains on GCP.

While no communication between environments is implemented in this design, it can be achieved with a number of different options:

- [VPC Peering](https://cloud.google.com/vpc/docs/vpc-peering) - not recommended, as it would effectively create a full line of sight between workloads belonging to different environments
- [HA VPN](https://cloud.google.com/network-connectivity/docs/vpn/concepts/topologies) tunnels between environments, exchanging a subset of well-defined routes
- [Multi-NIC appliances](https://cloud.google.com/architecture/best-practices-vpc-design#multi-nic) connecting the different environments, allowing the use of NVAs to enforce networking policies

The following diagram illustrates the high-level design and should be used as a reference for the following sections. The final number of subnets and their IP addressing will of course depend on customer-specific requirements, and can be easily changed via variables or external data files without having to edit the actual code.

Networking diagram

## Table of contents

- [Design overview and choices](#design-overview-and-choices)
  - [VPC design](#vpc-design)
  - [External connectivity](#external-connectivity)
  - [IP ranges, subnetting, routing](#ip-ranges-subnetting-routing)
  - [Internet egress](#internet-egress)
  - [VPC and Hierarchical Firewall](#vpc-and-hierarchical-firewall)
  - [DNS](#dns)
- [Stage structure and files layout](#stage-structure-and-files-layout)
  - [VPCs](#vpcs)
  - [VPNs](#vpns)
  - [Routing and BGP](#routing-and-bgp)
  - [Firewall](#firewall)
  - [DNS architecture](#dns-architecture)
  - [Private Google Access](#private-google-access)
- [How to run this stage](#how-to-run-this-stage)
  - [Provider and Terraform variables](#provider-and-terraform-variables)
  - [Impersonating the automation service account](#impersonating-the-automation-service-account)
  - [Variable configuration](#variable-configuration)
  - [Running the stage](#running-the-stage)
  - [Post-deployment activities](#post-deployment-activities)
- [Customizations](#customizations)
  - [Changing default regions](#changing-default-regions)

## Design overview and choices

### VPC design

This architecture creates one VPC for each environment, each in its respective project. Each VPC hosts external connectivity and shared services solely serving its own environment. As each environment is fully independent, this design trivialises the creation of new environments.

### External connectivity

External connectivity to on-prem is implemented here via HA VPN (two tunnels per region), as this is the minimum common denominator often used directly, or as a stop-gap solution to validate routing and transfer data while waiting for [interconnects](https://cloud.google.com/network-connectivity/docs/interconnect) to be provisioned.

Connectivity to additional on-prem sites or other cloud providers should be implemented in a similar fashion, via VPN tunnels or interconnects on each of the environment VPCs, sharing the same regional router.

### IP ranges, subnetting, routing

Minimizing the number of routes (and subnets) in use in the cloud environment is an important consideration, as it simplifies management and avoids hitting [Cloud Router](https://cloud.google.com/network-connectivity/docs/router/quotas) and [VPC](https://cloud.google.com/vpc/docs/quota) quotas and limits. For this reason, we recommend careful planning of the IP space used in your cloud environment, so that large CIDR blocks can be used in routes whenever possible.

This stage uses a dedicated /16 block (which should of course be sized to your needs) shared by all regions and environments, and subnets created in each VPC derive their ranges from the relevant block. Each VPC also defines and reserves two "special" CIDR ranges dedicated to [PSA (Private Service Access)](https://cloud.google.com/vpc/docs/private-services-access) and [Internal HTTP(S) Load Balancers (L7 ILB)](https://cloud.google.com/load-balancing/docs/l7-internal).

Routes in GCP are either automatically created for VPC subnets, manually created via static routes, or dynamically programmed by [Cloud Routers](https://cloud.google.com/network-connectivity/docs/router#docs) via BGP sessions, which can be configured to advertise VPC ranges and/or custom ranges via custom advertisements.
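As an illustration of custom advertisements, the sketch below shows a possible `terraform.tfvars` entry for the `custom_adv` variable (defined in name => range format); the key names and ranges are placeholders to be replaced with the CIDR plan actually in use.

```hcl
# terraform.tfvars (sketch): custom advertisement definitions, name => range.
# Key names and ranges below are placeholders, adapt them to your IP plan.
custom_adv = {
  cloud_dns             = "35.199.192.0/19"
  gcp_all               = "10.128.0.0/16"
  googleapis_private    = "199.36.153.8/30"
  googleapis_restricted = "199.36.153.4/30"
}
```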
In this setup:

- routes between multiple subnets within the same VPC are automatically programmed by GCP
- on-premises is connected to each environment VPC and dynamically exchanges BGP routes with GCP using HA VPN

### Internet egress

The path of least resistance for Internet egress is using Cloud NAT, and that is what's implemented in this setup, with a NAT gateway configured for each VPC. Several other scenarios are possible of course, with varying degrees of complexity:

- a forward proxy, with optional URL filters
- a default route to on-prem to leverage existing egress infrastructure
- a full-fledged perimeter firewall to control egress and implement additional security features like IPS

Future pluggable modules will allow you to easily experiment with, or deploy, the above scenarios.

### VPC and Hierarchical Firewall

The GCP firewall is a stateful, distributed feature that allows the creation of L4 policies, either via VPC-level rules or, more recently, via hierarchical policies applied to the resource hierarchy (organization, folders).

The current setup adopts both firewall types, using hierarchical rules on the Networking folder for common ingress rules (egress is open by default), e.g. from health check or IAP forwarder ranges, and VPC rules for environment- or workload-level ingress. Rules and policies are defined in simple YAML files, described below.

### DNS

DNS often goes hand in hand with networking, especially on GCP where Cloud DNS zones and policies are associated at the VPC level. This setup implements both DNS flows:

- on-prem to cloud via private zones for cloud-managed domains, and an [inbound policy](https://cloud.google.com/dns/docs/server-policies-overview#dns-server-policy-in) used as forwarding target or via delegation (requires some extra configuration) from on-prem DNS resolvers
- cloud to on-prem via forwarding zones for the on-prem managed domains
- Private Google Access is enabled for a selection of the [supported domains](https://cloud.google.com/vpc/docs/configure-private-google-access#domain-options), namely
  - `private.googleapis.com`
  - `restricted.googleapis.com`
  - `gcr.io`
  - `packages.cloud.google.com`
  - `pkg.dev`
  - `pki.goog`

To complete the configuration, the 35.199.192.0/19 range should be routed on the VPN tunnels from on-prem, and the following names configured for DNS forwarding to cloud:

- `private.googleapis.com`
- `restricted.googleapis.com`
- `gcp.example.com` (used as a placeholder)

From cloud, the `example.com` domain (used as a placeholder) is forwarded to on-prem.

This configuration is battle-tested, and flexible enough to lend itself to simple modifications without subverting its design, for example by forwarding and peering root zones to bypass Cloud DNS external resolution.

## Stage structure and files layout

### VPCs

VPCs are defined in separate files, one for each of `prod` and `dev`. Each file contains the same resources, described in the following paragraphs.

The **project** ([`project`](../../../modules/project)) contains the VPC, enables the required APIs, and sets itself up as a "[host project](https://cloud.google.com/vpc/docs/shared-vpc)".

The **VPC** ([`net-vpc`](../../../modules/net-vpc)) manages the DNS inbound policy, explicit routes for `{private,restricted}.googleapis.com`, and its **subnets**. Subnets are created leveraging a "resource factory" paradigm, where the configuration is separated from the module that implements it, and stored in a well-structured file.
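As a rough illustration of this factory pattern (not the actual module interface, which is documented in the `net-vpc` module), a spoke VPC definition conceptually looks like the sketch below, with subnet configuration living in data files rather than in code; all values shown are placeholders.

```hcl
# Simplified sketch of the pattern used in the spoke VPC files; argument names
# other than data_folder are illustrative, refer to the net-vpc module docs.
module "dev-spoke-vpc" {
  source      = "../../../modules/net-vpc"
  project_id  = module.dev-spoke-project.project_id
  name        = "dev-spoke-0"
  # subnets are defined as data files, not inline in code
  data_folder = "data/subnets/dev"
}
```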
To add a new subnet, simply create a new file in the `data_folder` directory defined in the module, following the examples found in the [Fabric `net-vpc` documentation](../../../modules/net-vpc#subnet-factory). Sample subnets are shipped in [data/subnets](./data/subnets), and can be easily customised to fit your needs.

Subnets for [L7 ILBs](https://cloud.google.com/load-balancing/docs/l7-internal/proxy-only-subnets) are handled differently, and defined in the `l7ilb_subnets` variable, while ranges for [PSA](https://cloud.google.com/vpc/docs/configure-private-services-access#allocating-range) are configured via the `psa_ranges` variable; both variables are consumed by the spoke VPCs.

**Cloud NAT** ([`net-cloudnat`](../../../modules/net-cloudnat)) manages the networking infrastructure required to enable Internet egress.

### VPNs

Connectivity to on-prem is implemented with HA VPN ([`net-vpn-ha`](../../../modules/net-vpn-ha)) and defined in `vpn-onprem-{dev,prod}.tf`. Each file provisionally implements a single logical connection between on-prem and its environment in `europe-west1`; the relevant parameters for its configuration are found in the `vpn_onprem_configs` variable.

### Routing and BGP

Each VPC network ([`net-vpc`](../../../modules/net-vpc)) manages a separate routing table, which can define static routes (e.g. to `private.googleapis.com`) and receives dynamic routes from BGP sessions established with neighbor networks (e.g. from on-prem). Static routes are defined in the `net-*.tf` files, in the `routes` section of each `net-vpc` module.

### Firewall

**VPC firewall rules** ([`net-vpc-firewall`](../../../modules/net-vpc-firewall)) are defined per-VPC in each `net-*.tf` file and leverage a resource factory to massively create rules. To add a new firewall rule, create a new file or edit an existing one in the `data_folder` directory defined in the `net-vpc-firewall` module, following the examples in the "[Rules factory](../../../modules/net-vpc-firewall#rules-factory)" section of the module documentation. Sample firewall rules are shipped in [data/firewall-rules/dev](./data/firewall-rules/dev) and can be easily customised.

**Hierarchical firewall policies** ([`folder`](../../../modules/folder)) are defined in `main.tf` and managed through a policy factory implemented by the `folder` module, which applies the defined hierarchical rules to the `Networking` folder containing all the core networking infrastructure. Policies are defined in the `rules_file` file; to define a new one simply follow the instructions in "[Firewall policy factory](../../../modules/organization#firewall-policy-factory)". Sample hierarchical firewall policies are shipped in [data/hierarchical-policy-rules.yaml](./data/hierarchical-policy-rules.yaml) and can be easily customised.

### DNS architecture

The DNS ([`dns`](../../../modules/dns)) infrastructure is defined in the respective `dns-xxx.tf` files. Cloud DNS manages on-prem forwarding and environment-specific zones (i.e. `dev.gcp.example.com` and `prod.gcp.example.com`).

#### Cloud to on-prem

Leveraging the forwarding zones defined in each environment, the cloud environment can resolve `in-addr.arpa.` and `onprem.example.com.` using the on-premises DNS infrastructure. On-prem resolver IPs are set in the `dns` variable.

DNS queries sent to the on-premises infrastructure come from the `35.199.192.0/19` source range, which is only accessible from within a VPC or networks connected to one.
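A minimal `terraform.tfvars` sketch for the on-prem resolvers follows; the key name and IP addresses are purely illustrative, check `variables.tf` for the structure the `dns` variable actually expects.

```hcl
# terraform.tfvars (sketch): on-prem DNS resolver addresses used by the
# forwarding zones. Key name and IPs are placeholders.
dns = {
  onprem = ["10.10.10.2", "10.10.10.3"]
}
```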
When implementing this architecture, make sure you'll be able to route packets coming from the /19 range to the right environment (requests coming from prod should be routed to prod, and requests coming from dev to dev). As an alternative, consider leveraging self-managed DNS resolvers (e.g. CoreDNS forwarders) in each environment.

#### On-prem to cloud

The [Inbound DNS Policy](https://cloud.google.com/dns/docs/server-policies-overview#dns-server-policy-in) defined on each VPC automatically reserves the first available IP address on each created subnet (typically the third one in a CIDR) to expose the Cloud DNS service, so that it can be consumed from outside of GCP.

## How to run this stage

This stage is meant to be executed after the [resource management](../1-resman) stage has run, as it leverages the automation service account and bucket created there, as well as additional resources configured in the [bootstrap](../0-bootstrap) stage.

It's of course possible to run this stage in isolation, but that's outside the scope of this document, and you would need to refer to the code for the previous stages for the environmental requirements.

Before running this stage, make sure you have the correct credentials and permissions, and localize variables by assigning values that match your configuration.

### Provider and Terraform variables

As in all other FAST stages, the [mechanism used to pass variable values and pre-built provider files from one stage to the next](../0-bootstrap/README.md#output-files-and-cross-stage-variables) is also leveraged here.

The commands to link or copy the provider and Terraform variable files can be easily derived from the `stage-links.sh` script in the FAST root folder, passing it a single argument with the local output files folder (if configured) or the GCS output bucket in the automation project (derived from stage 0 outputs). The following examples demonstrate both cases, and the resulting commands that then need to be copy/pasted and run.

```bash
../../stage-links.sh ~/fast-config

# copy and paste the following commands for '2-networking-a-peering'
ln -s ~/fast-config/providers/2-networking-providers.tf ./
ln -s ~/fast-config/tfvars/globals.auto.tfvars.json ./
ln -s ~/fast-config/tfvars/0-bootstrap.auto.tfvars.json ./
ln -s ~/fast-config/tfvars/1-resman.auto.tfvars.json ./
```

```bash
../../stage-links.sh gs://xxx-prod-iac-core-outputs-0

# copy and paste the following commands for '2-networking-a-peering'
gcloud alpha storage cp gs://xxx-prod-iac-core-outputs-0/providers/2-networking-providers.tf ./
gcloud alpha storage cp gs://xxx-prod-iac-core-outputs-0/tfvars/globals.auto.tfvars.json ./
gcloud alpha storage cp gs://xxx-prod-iac-core-outputs-0/tfvars/0-bootstrap.auto.tfvars.json ./
gcloud alpha storage cp gs://xxx-prod-iac-core-outputs-0/tfvars/1-resman.auto.tfvars.json ./
```

### Impersonating the automation service account

The preconfigured provider file uses impersonation to run with this stage's automation service account's credentials. The `gcp-devops` and `organization-admins` groups have the necessary IAM bindings in place to do that, so make sure the current user is a member of one of those groups.
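For reference, the pre-generated provider file roughly follows the pattern sketched below; the bucket and service account names are placeholders, and the real file is produced by the previous stages and should not be edited by hand.

```hcl
# Sketch of the stage provider file; names are placeholders, the actual file
# is generated by stages 0/1 and linked or copied as shown above.
provider "google" {
  impersonate_service_account = "xxx-prod-networking-0@xxx-prod-iac-core-0.iam.gserviceaccount.com"
}

provider "google-beta" {
  impersonate_service_account = "xxx-prod-networking-0@xxx-prod-iac-core-0.iam.gserviceaccount.com"
}

terraform {
  backend "gcs" {
    bucket                      = "xxx-prod-iac-networking-0"
    impersonate_service_account = "xxx-prod-networking-0@xxx-prod-iac-core-0.iam.gserviceaccount.com"
  }
}
```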
### Variable configuration

Variables in this stage -- like in most other FAST stages -- are broadly divided into three separate sets:

- variables which refer to global values for the whole organization (org id, billing account id, prefix, etc.), which are pre-populated via the `globals.auto.tfvars.json` file linked or copied above
- variables which refer to resources managed by previous stages, which are prepopulated here via the `0-bootstrap.auto.tfvars.json` and `1-resman.auto.tfvars.json` files linked or copied above
- and finally variables that optionally control this stage's behaviour and customizations, which can be set in a custom `terraform.tfvars` file

The latter set is explained in the [Customizations](#customizations) section below, and the full list can be found in the [Variables](#variables) table at the bottom of this document.

### Using delayed billing association for projects

This configuration is possible but unsupported and only exists for development purposes; use at your own risk:

- temporarily switch `billing_account.id` to `null` in `globals.auto.tfvars.json`
- for each project resource in the project modules used in this stage (`dev-spoke-project`, `prod-spoke-project`)
  - apply using `-target`, for example `terraform apply -target 'module.prod-spoke-project.google_project.project[0]'`
  - untaint the project resource after applying, for example `terraform untaint 'module.prod-spoke-project.google_project.project[0]'`
- go through the process to associate the billing account with the two projects
- switch `billing_account.id` back to the real billing account id
- resume applying normally

### Running the stage

Once provider and variable values are in place and the correct user is configured, the stage can be run:

```bash
terraform init
terraform apply
```

### Post-deployment activities

- On-prem routers should be configured to advertise all relevant CIDRs to the GCP environments. To avoid hitting GCP quotas, we recommend aggregating routes as much as possible.
- On-prem routers should accept BGP sessions from their cloud peers.
- On-prem DNS servers should have forward zones for the GCP-managed domains.

#### Private Google Access

[Private Google Access](https://cloud.google.com/vpc/docs/private-google-access) (or PGA) enables VMs and on-prem systems to consume Google APIs from within the Google network, and is already fully configured on this environment. For PGA to work:

- Private Google Access should be enabled on the subnet. \
  Subnets created by the `net-vpc` module are PGA-enabled by default.
- 199.36.153.4/30 (`restricted.googleapis.com`) and 199.36.153.8/30 (`private.googleapis.com`) should be routed from on-prem to the VPC, and from there to the `default-internet-gateway`. \
  Per the `vpn_onprem_configs` variable such ranges are advertised to on-prem; furthermore, every VPC has explicit routes set in case the `0.0.0.0/0` route is changed.
- A private DNS zone for `googleapis.com` should be created and configured per [this article](https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid#config-domain).

## Customizations

### Changing default regions

Regions are defined via the `regions` variable, which sets up a mapping between the `regions.primary` and `regions.secondary` logical names and actual GCP region names.
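For example, a `terraform.tfvars` override mapping the logical names to different regions could look like the sketch below; the region names are purely illustrative.

```hcl
# terraform.tfvars (sketch): map the primary/secondary logical names to the
# GCP regions you intend to use.
regions = {
  primary   = "europe-west8"
  secondary = "europe-west12"
}
```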
If you need to change regions from the defaults:

- change the values of the mappings in the `regions` variable to the regions you are going to use
- change the regions in the factory subnet files in the `data` folder

## Files

| name | description | modules | resources |
|---|---|---|---|
| [dns-dev.tf](./dns-dev.tf) | Development spoke DNS zones and peerings setup. | dns |  |
| [dns-prod.tf](./dns-prod.tf) | Production spoke DNS zones and peerings setup. | dns |  |
| [main.tf](./main.tf) | Networking folder and hierarchical policy. | folder |  |
| [monitoring.tf](./monitoring.tf) | Network monitoring dashboards. |  | google_monitoring_dashboard |
| [outputs.tf](./outputs.tf) | Module outputs. |  | google_storage_bucket_object · local_file |
| [regions.tf](./regions.tf) | Compute short names for regions. |  |  |
| [spoke-dev.tf](./spoke-dev.tf) | Dev spoke VPC and related resources. | net-cloudnat · net-vpc · net-vpc-firewall · project | google_project_iam_binding |
| [spoke-prod.tf](./spoke-prod.tf) | Production spoke VPC and related resources. | net-cloudnat · net-vpc · net-vpc-firewall · project | google_project_iam_binding |
| [test-resources.tf](./test-resources.tf) | Temporary instances for testing. | compute-vm |  |
| [variables.tf](./variables.tf) | Module variables. |  |  |
| [vpn-onprem-dev.tf](./vpn-onprem-dev.tf) | VPN between dev and onprem. | net-vpn-ha |  |
| [vpn-onprem-prod.tf](./vpn-onprem-prod.tf) | VPN between prod and onprem. | net-vpn-ha |  |

## Variables

| name | description | type | required | default | producer |
|---|---|:---:|:---:|:---:|:---:|
| [automation](variables.tf#L17) | Automation resources created by the bootstrap stage. | object({…}) | ✓ |  | 0-bootstrap |
| [billing_account](variables.tf#L25) | Billing account id. If billing account is not part of the same org set `is_org_level` to false. | object({…}) | ✓ |  | 0-bootstrap |
| [folder_ids](variables.tf#L92) | Folders to be used for the networking resources in folders/nnnnnnnnnnn format. If null, folder will be created. | object({…}) | ✓ |  | 1-resman |
| [organization](variables.tf#L102) | Organization details. | object({…}) | ✓ |  | 0-bootstrap |
| [prefix](variables.tf#L118) | Prefix used for resources that need unique names. Use 9 characters or less. | string | ✓ |  | 0-bootstrap |
| [custom_adv](variables.tf#L38) | Custom advertisement definitions in name => range format. | map(string) |  | {…} |  |
| [custom_roles](variables.tf#L54) | Custom roles defined at the org level, in key => id format. | object({…}) |  | null | 0-bootstrap |
| [dns](variables.tf#L63) | Onprem DNS resolvers. | map(list(string)) |  | {…} |  |
| [factories_config](variables.tf#L72) | Configuration for network resource factories. | object({…}) |  | {…} |  |
| [outputs_location](variables.tf#L112) | Path where providers and tfvars files for the following stages are written. Leave empty to disable. | string |  | null |  |
| [psa_ranges](variables.tf#L129) | IP ranges used for Private Service Access (e.g. CloudSQL). | object({…}) |  | null |  |
| [regions](variables.tf#L166) | Region definitions. | object({…}) |  | {…} |  |
| [router_onprem_configs](variables.tf#L176) | Configurations for routers used for onprem connectivity. | map(object({…})) |  | {…} |  |
| [service_accounts](variables.tf#L199) | Automation service accounts in name => email format. | object({…}) |  | null | 1-resman |
| [vpn_onprem_configs](variables.tf#L211) | VPN gateway configuration for onprem interconnection. | map(object({…})) |  | {…} |  |
## Outputs

| name | description | sensitive | consumers |
|---|---|:---:|---|
| [dev_cloud_dns_inbound_policy](outputs.tf#L59) | IP Addresses for Cloud DNS inbound policy for the dev environment. |  |  |
| [host_project_ids](outputs.tf#L64) | Network project ids. |  |  |
| [host_project_numbers](outputs.tf#L69) | Network project numbers. |  |  |
| [prod_cloud_dns_inbound_policy](outputs.tf#L74) | IP Addresses for Cloud DNS inbound policy for the prod environment. |  |  |
| [shared_vpc_self_links](outputs.tf#L79) | Shared VPC host projects. |  |  |
| [tfvars](outputs.tf#L84) | Terraform variables file for the following stages. | ✓ |  |
| [vpn_gateway_endpoints](outputs.tf#L90) | External IP Addresses for the GCP VPN gateways. |  |  |