Introduce mandatory OWNERS file for blueprint maintainership (#2131)

* Delete deprecated/broken blueprints

* Adding OWNERS to all blueprints

* Fix links

* Update OWNERS

---------

Co-authored-by: javiergp <javiergp@users.noreply.github.com>
Julio Castillo 2024-03-08 09:40:46 +01:00 committed by GitHub
parent 8a8d9bec2c
commit 993bef71aa
99 changed files with 55 additions and 3655 deletions

View File

@ -9,7 +9,7 @@ Currently available blueprints:
- **data solutions** - [GCE and GCS CMEK via centralized Cloud KMS](./data-solutions/cmek-via-centralized-kms), [Cloud Composer version 2 private instance, supporting Shared VPC and external CMEK key](./data-solutions/composer-2), [Cloud SQL instance with multi-region read replicas](./data-solutions/cloudsql-multiregion), [Data Platform](./data-solutions/data-platform-foundations), [Minimal Data Platform](./data-solutions/data-platform-minimal), [Spinning up a foundation data pipeline on Google Cloud using Cloud Storage, Dataflow and BigQuery](./data-solutions/gcs-to-bq-with-least-privileges), [SQL Server Always On Groups blueprint](./data-solutions/sqlserver-alwayson), [Data Playground](./data-solutions/data-playground), [MLOps with Vertex AI](./data-solutions/vertex-mlops), [Shielded Folder](./data-solutions/shielded-folder), [BigQuery ML and Vertex AI Pipeline](./data-solutions/bq-ml)
- **factories** - [Fabric resource factories](./factories)
- **GKE** - [Binary Authorization Pipeline Blueprint](./gke/binauthz), [Storage API](./gke/binauthz/image), [Multi-cluster mesh on GKE (fleet API)](./gke/multi-cluster-mesh-gke-fleet-api), [GKE Multitenant Blueprint](./gke/multitenant-fleet), [Shared VPC with GKE support](./networking/shared-vpc-gke/), [GKE Autopilot](./gke/autopilot)
- **networking** - [Calling a private Cloud Function from On-premises](./networking/private-cloud-function-from-onprem), [HA VPN over Interconnect](./networking/ha-vpn-over-interconnect/), [GLB and multi-regional daisy-chaining through hybrid NEGs](./networking/glb-hybrid-neg-internal), [Hybrid connectivity to on-premise services through PSC](./networking/psc-hybrid), [HTTP Load Balancer with Cloud Armor](./networking/glb-and-armor), [Hub and Spoke via VPN](./networking/hub-and-spoke-vpn), [Hub and Spoke via VPC Peering](./networking/hub-and-spoke-peering), [Internal Load Balancer as Next Hop](./networking/ilb-next-hop), On-prem DNS and Google Private Access, [PSC Producer](./networking/psc-hybrid/psc-producer), [PSC Consumer](./networking/psc-hybrid/psc-consumer), [Shared VPC with optional GKE cluster](./networking/shared-vpc-gke), [VPC Connectivity Lab](./networking/vpc-connectivity-lab/)
- **networking** - [Calling a private Cloud Function from On-premises](./networking/private-cloud-function-from-onprem), [HA VPN over Interconnect](./networking/ha-vpn-over-interconnect/), [GLB and multi-regional daisy-chaining through hybrid NEGs](./networking/glb-hybrid-neg-internal), [Hybrid connectivity to on-premise services through PSC](./networking/psc-hybrid), [HTTP Load Balancer with Cloud Armor](./networking/glb-and-armor), [Internal Load Balancer as Next Hop](./networking/ilb-next-hop), On-prem DNS and Google Private Access, [PSC Producer](./networking/psc-hybrid/psc-producer), [PSC Consumer](./networking/psc-hybrid/psc-consumer), [Shared VPC with optional GKE cluster](./networking/shared-vpc-gke), [VPC Connectivity Lab](./networking/vpc-connectivity-lab/)
- **serverless** - [Cloud Run series](./serverless/cloud-run-explore)
- **third party solutions** - [OpenShift on GCP user-provisioned infrastructure](./third-party-solutions/openshift), [Wordpress deployment on Cloud Run](./third-party-solutions/wordpress/cloudrun)

View File

@ -0,0 +1 @@
apichick

View File

@ -0,0 +1 @@
apichick

View File

@ -0,0 +1 @@
apichick

View File

@ -0,0 +1 @@
ludoo

View File

@ -0,0 +1 @@
ludoo

View File

@ -0,0 +1 @@
ludoo

View File

@ -0,0 +1 @@
ludoo

View File

@ -0,0 +1 @@
juliocc

View File

@ -0,0 +1 @@
aurelienlegrand, ludoo

View File

@ -0,0 +1 @@
averbuks

View File

@ -0,0 +1 @@
mikouaj

View File

@ -0,0 +1 @@
lcaggio

View File

@ -0,0 +1 @@
averbuks

View File

@ -0,0 +1 @@
averbuks

View File

@ -0,0 +1 @@
eliamaldini

View File

@ -0,0 +1 @@
apichick

View File

@ -0,0 +1 @@
lcaggio

View File

@ -0,0 +1 @@
juliocc, lcaggio

View File

@ -0,0 +1 @@
lcaggio

View File

@ -0,0 +1 @@
lcaggio

View File

@ -0,0 +1 @@
lcaggio

View File

@ -0,0 +1 @@
lcaggio

View File

@ -0,0 +1 @@
lcaggio

View File

@ -0,0 +1 @@
lcaggio

View File

@ -0,0 +1 @@
lcaggio

View File

@ -0,0 +1 @@
rosmo

View File

@ -0,0 +1 @@
javiergp, lcaggio

View File

@ -0,0 +1 @@
apichick

View File

@ -0,0 +1 @@
apichick

View File

@ -0,0 +1 @@
apichick

View File

@ -0,0 +1 @@
juliocc

View File

@ -0,0 +1 @@
juliocc, ludoo

View File

@ -0,0 +1 @@
apichick, juliocc

View File

@ -0,0 +1 @@
danielmarzini, juliocc

View File

@ -0,0 +1 @@
wiktorn, juliocc

View File

@ -0,0 +1 @@
danielmarzini, juliocc

View File

@ -30,22 +30,6 @@ They are meant to be used as minimal but complete starting points to create actu
<br clear="left">
### Hub and Spoke via Dynamic VPN
<a href="./hub-and-spoke-vpn/" title="Hub and spoke via dynamic VPN"><img src="./hub-and-spoke-vpn/diagram.png" align="left" width="280px"></a> This [blueprint](./hub-and-spoke-vpn/) implements a hub and spoke topology via dynamic VPN tunnels, a common design where peering cannot be used due to limitations on the number of spokes or connectivity to managed services.
The blueprint shows how to implement spoke transitivity via BGP advertisements, how to expose hub DNS zones to spokes via DNS peering, and allows easy testing of different VPN and BGP configurations.
<br clear="left">
### Hub and Spoke via Peering
<a href="./hub-and-spoke-peering/" title="Hub and spoke via peering blueprint"><img src="./hub-and-spoke-peering/diagram.png" align="left" width="280px"></a> This [blueprint](./hub-and-spoke-peering/) implements a hub and spoke topology via VPC peering, a common design where a landing zone VPC (hub) is connected to on-premises, and then peered with satellite VPCs (spokes) to further partition the infrastructure.
The sample highlights the lack of transitivity in peering: the absence of connectivity between spokes, and the need to create workarounds for private service access to managed services. One such workaround is shown for private GKE, allowing access from the hub and all spokes to GKE masters via a dedicated VPN.
<br clear="left">
### Internal Network LB as next hop
<a href="./ilb-next-hop/" title="Internal Network LB as next hop"><img src="./ilb-next-hop/diagram.png" align="left" width="280px"></a> This [blueprint](./ilb-next-hop/) allows testing [Internal Network LB as next hop](https://cloud.google.com/load-balancing/docs/internal/ilb-next-hop-overview) using simple Linux gateway VMS between two VPCs, to emulate virtual appliances. An optional additional Internal Network LB can be enabled to test multiple load balancer configurations and hashing.

View File

@ -1,6 +0,0 @@
# Deprecated or unsupported blueprints
The blueprints in this folder are either deprecated or still need work.
- nginx reverse proxy cluster needs tests and has a cycle that needs resolving
- filtering-proxy needs upstream `cloud-config-container/__need_fixing/squid` to be fixed

View File

@ -1,43 +0,0 @@
# Network filtering with Squid with isolated VPCs using Private Service Connect
This blueprint shows how to deploy a filtering HTTP proxy to restrict Internet access. Here we show one way to do this using isolated VPCs and Private Service Connect:
- The `app` subnet hosts the consumer VMs that will have their Internet access tightly controlled by a non-caching filtering forward proxy.
- The `proxy` subnet hosts a Cloud NAT instance and a [Squid](http://www.squid-cache.org/) server.
- The `psc` subnet is reserved for the Private Service Connect.
The reason for using Private Service Connect in this setup is to have a common proxy setup between all environments without having to share a VPC between projects. This allows us to enforce the `compute.vmExternalIpAccess` [organization policy](https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints), which prevents the service projects from having external IPs, thus forcing all outbound Internet connections through the proxy.
To allow Internet connectivity to the proxy subnet, a Cloud NAT instance is configured to allow usage from [that subnet only](https://cloud.google.com/nat/docs/set-up-manage-network-address-translation#specify_subnet_ranges_for_nat). All other subnets are not allowed to use the Cloud NAT instance.
To simplify the usage of the proxy, a Cloud DNS private zone is created in each consumer VPC and the IP address of the proxy is exposed with the FQDN `proxy.internal`. In addition, system-wide `http_proxy` and `https_proxy` environment variables and an APT configuration are rolled out via a [startup script](startup.sh).
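For reference, this is a minimal sketch of how the `compute.vmExternalIpAccess` constraint can be enforced on a folder holding the consumer projects with the Fabric `folder` module; the folder name and parent are illustrative, and the same pattern is used by the non-PSC variant of this blueprint:
```hcl
module "folder-apps" {
  source = "../../../../modules/folder"
  parent = "organizations/1234567890" # illustrative parent
  name   = "apps"
  org_policies = {
    # prevent VMs with public IPs in the apps folder
    "compute.vmExternalIpAccess" = {
      rules = [{ deny = { all = true } }]
    }
  }
}
```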
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [prefix](variables.tf#L44) | Prefix used for resource names. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L70) | Project id used for all resources. | <code>string</code> | ✓ | |
| [allowed_domains](variables.tf#L17) | List of domains allowed by the squid proxy. | <code>list&#40;string&#41;</code> | | <code title="&#91;&#10; &#34;.google.com&#34;,&#10; &#34;.github.com&#34;,&#10; &#34;.fastlydns.net&#34;,&#10; &#34;.debian.org&#34;&#10;&#93;">&#91;&#8230;&#93;</code> |
| [cidrs](variables.tf#L28) | CIDR ranges for subnets. | <code>map&#40;string&#41;</code> | | <code title="&#123;&#10; app &#61; &#34;10.0.0.0&#47;24&#34;&#10; proxy &#61; &#34;10.0.2.0&#47;28&#34;&#10; psc &#61; &#34;10.0.3.0&#47;28&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [nat_logging](variables.tf#L38) | Enables Cloud NAT logging if not null, value is one of 'ERRORS_ONLY', 'TRANSLATIONS_ONLY', 'ALL'. | <code>string</code> | | <code>&#34;ERRORS_ONLY&#34;</code> |
| [project_create](variables.tf#L53) | Set to non null if project needs to be created. | <code title="object&#40;&#123;&#10; billing_account &#61; string&#10; parent &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [region](variables.tf#L75) | Default region for resources. | <code>string</code> | | <code>&#34;europe-west1&#34;</code> |
<!-- END TFDOC -->
## Test
```hcl
module "test" {
source = "./fabric/blueprints/networking/__need_fixing/filtering-proxy-psc"
prefix = "fabric"
project_create = {
billing_account = "123456-ABCDEF-123456"
parent = "folders/1234567890"
}
project_id = "test-project"
}
# tftest modules=13 resources=41
```

View File

@ -1,105 +0,0 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
###############################################################################
# Consumer project and VPC #
###############################################################################
module "vpc-consumer" {
source = "../../../../modules/net-vpc"
project_id = module.project.project_id
name = "${var.prefix}-app"
subnets = [
{
name = "${var.prefix}-app"
ip_cidr_range = var.cidrs.app
region = var.region
}
]
}
###############################################################################
# Test VM #
###############################################################################
module "test-vm-consumer" {
source = "../../../../modules/compute-vm"
project_id = module.project.project_id
zone = "${var.region}-b"
name = "${var.prefix}-test-vm"
instance_type = "e2-micro"
tags = ["ssh"]
network_interfaces = [{
network = module.vpc-consumer.self_link
subnetwork = module.vpc-consumer.subnet_self_links["${var.region}/${var.prefix}-app"]
nat = false
addresses = null
}]
service_account = {
auto_create = true
}
metadata = {
startup-script = templatefile("${path.module}/startup.sh", { proxy_url = "http://proxy.internal:3128" })
}
}
###############################################################################
# PSC Consumer #
###############################################################################
resource "google_compute_address" "psc_endpoint_address" {
name = "${var.prefix}-psc-proxy-address"
project = module.project.project_id
address_type = "INTERNAL"
subnetwork = module.vpc-consumer.subnet_self_links["${var.region}/${var.prefix}-app"]
region = var.region
}
resource "google_compute_forwarding_rule" "psc_ilb_consumer" {
name = "${var.prefix}-psc-proxy-fw-rule"
project = module.project.project_id
region = var.region
target = google_compute_service_attachment.service_attachment.id
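# PSC consumer endpoints (forwarding rules targeting a service attachment) require an empty load balancing scheme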
load_balancing_scheme = ""
network = module.vpc-consumer.self_link
ip_address = google_compute_address.psc_endpoint_address.id
}
###############################################################################
# DNS and Firewall #
###############################################################################
module "private-dns" {
source = "../../../../modules/dns"
project_id = module.project.project_id
name = "${var.prefix}-internal"
zone_config = {
domain = "internal."
private = {
client_networks = [module.vpc-consumer.self_link]
}
}
recordsets = {
"A squid" = { ttl = 60, records = [google_compute_address.psc_endpoint_address.address] }
"CNAME proxy" = { ttl = 3600, records = ["squid.internal."] }
}
}
module "firewall-consumer" {
source = "../../../../modules/net-vpc-firewall"
project_id = module.project.project_id
network = module.vpc-consumer.name
}

View File

@ -1,229 +0,0 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
###############################################################################
# Host project and VPC resources #
###############################################################################
module "project" {
source = "../../../../modules/project"
project_create = var.project_create != null
billing_account = try(var.project_create.billing_account, null)
parent = try(var.project_create.parent, null)
name = var.project_id
services = [
"dns.googleapis.com",
"compute.googleapis.com",
"logging.googleapis.com",
"monitoring.googleapis.com"
]
}
module "vpc" {
source = "../../../../modules/net-vpc"
project_id = module.project.project_id
name = "${var.prefix}-vpc"
subnets = [
{
name = "proxy"
ip_cidr_range = var.cidrs.proxy
region = var.region
}
]
subnets_psc = [
{
name = "psc"
ip_cidr_range = var.cidrs.psc
region = var.region
}
]
}
module "firewall" {
source = "../../../../modules/net-vpc-firewall"
project_id = module.project.project_id
network = module.vpc.name
ingress_rules = {
allow-ingress-squid = {
description = "Allow squid ingress traffic"
source_ranges = [
var.cidrs.psc, "35.191.0.0/16", "130.211.0.0/22"
]
targets = [module.service-account-squid.email]
use_service_accounts = true
rules = [{
protocol = "tcp"
ports = [3128]
}]
}
}
}
module "nat" {
source = "../../../../modules/net-cloudnat"
project_id = module.project.project_id
region = var.region
name = "default"
router_network = module.vpc.name
config_source_subnets = "LIST_OF_SUBNETWORKS"
config_port_allocation = {
enable_endpoint_independent_mapping = false
enable_dynamic_port_allocation = true
# 64512 / 11 = 5864 (11 is the number of usable IPs in the proxy subnet)
min_ports_per_vm = 5864
}
subnetworks = [
{
self_link = module.vpc.subnet_self_links["${var.region}/proxy"]
config_source_ranges = ["ALL_IP_RANGES"]
secondary_ranges = null
}
]
logging_filter = var.nat_logging
}
###############################################################################
# PSC resources #
###############################################################################
resource "google_compute_service_attachment" "service_attachment" {
name = "psc"
project = module.project.project_id
region = var.region
enable_proxy_protocol = true
connection_preference = "ACCEPT_MANUAL"
nat_subnets = [module.vpc.subnets_psc["${var.region}/psc"].self_link]
target_service = module.squid-ilb.forwarding_rule_self_links[""]
consumer_accept_lists {
project_id_or_num = module.project.project_id
connection_limit = 10
}
}
###############################################################################
# Squid resources #
###############################################################################
module "service-account-squid" {
source = "../../../../modules/iam-service-account"
project_id = module.project.project_id
name = "svc-squid"
iam_project_roles = {
(module.project.project_id) = [
"roles/logging.logWriter",
"roles/monitoring.metricWriter",
]
}
}
module "cos-squid" {
source = "../../../../modules/cloud-config-container/__need_fixing/squid"
allow = var.allowed_domains
clients = [var.cidrs.app]
squid_config = "${path.module}/squid.conf"
config_variables = {
psc_cidr = var.cidrs.psc
}
}
module "squid-vm" {
source = "../../../../modules/compute-vm"
project_id = module.project.project_id
zone = "${var.region}-b"
name = "squid-vm"
instance_type = "e2-medium"
create_template = true
network_interfaces = [{
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["${var.region}/proxy"]
}]
boot_disk = {
initialize_params = {
image = "cos-cloud/cos-stable"
}
}
service_account = {
email = module.service-account-squid.email
}
metadata = {
user-data = module.cos-squid.cloud_config
google-logging-enabled = true
}
}
module "squid-mig" {
source = "../../../../modules/compute-mig"
project_id = module.project.project_id
location = "${var.region}-b"
name = "squid-mig"
instance_template = module.squid-vm.template.self_link
target_size = 1
auto_healing_policies = {
initial_delay_sec = 60
}
autoscaler_config = {
max_replicas = 10
min_replicas = 1
cooldown_period = 30
scaling_signals = {
cpu_utilization = {
target = 0.65
}
}
}
health_check_config = {
enable_logging = true
tcp = {
port = 3128
proxy_header = "PROXY_V1"
}
}
update_policy = {
minimal_action = "REPLACE"
type = "PROACTIVE"
max_surge = {
fixed = 3
}
min_ready_sec = 60
}
}
module "squid-ilb" {
source = "../../../../modules/net-lb-int"
project_id = module.project.project_id
region = var.region
name = "squid-ilb"
service_label = "squid-ilb"
forwarding_rules_config = {
"" = {
ports = [3128]
}
}
vpc_config = {
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["${var.region}/proxy"]
}
backends = [{
group = module.squid-mig.group_manager.instance_group
}]
health_check_config = {
enable_logging = true
tcp = {
port = 3128
proxy_header = "PROXY_V1"
}
}
}

View File

@ -1,60 +0,0 @@
# bind to port 3128 and require PROXY protocol
http_port 0.0.0.0:3128 require-proxy-header
# only proxy, don't cache
cache deny all
# redirect all logs to /dev/stdout
logfile_rotate 0
cache_log stdio:/dev/stdout
access_log stdio:/dev/stdout
cache_store_log stdio:/dev/stdout
pid_filename /var/run/squid/squid.pid
acl ssl_ports port 443
acl safe_ports port 80
acl safe_ports port 443
acl CONNECT method CONNECT
acl to_metadata dst 169.254.169.254
acl from_healthchecks src 130.211.0.0/22 35.191.0.0/16
acl psc src ${psc_cidr}
# read client CIDR ranges from clients.txt
acl clients src "/etc/squid/clients.txt"
# read allowed domains from allowlist.txt
acl allowlist dstdomain "/etc/squid/allowlist.txt"
# read denied domains from denylist.txt
acl denylist dstdomain "/etc/squid/denylist.txt"
# allow PROXY protocol from the PSC subnet
proxy_protocol_access allow psc
# allow PROXY protocol from the LB health checks
proxy_protocol_access allow from_healthchecks
# deny access to anything other than ports 80 and 443
http_access deny !safe_ports
# deny CONNECT if connection is not using ssl
http_access deny CONNECT !ssl_ports
# deny access to cachemgr
http_access deny manager
# deny access to localhost through the proxy
http_access deny to_localhost
# deny access to the local metadata server through the proxy
http_access deny to_metadata
# deny connection from allowed clients to any denied domains
http_access deny clients denylist
# allow connection from allowed clients only to the allowed domains
http_access allow clients allowlist
# deny everything else
http_access ${default_action} all

View File

@ -1,26 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
cat <<EOF > /etc/apt/apt.conf.d/proxy.conf
Acquire {
HTTP::proxy "${proxy_url}";
HTTPS::proxy "${proxy_url}";
}
EOF
cat <<EOF > /etc/profile.d/proxy.sh
export http_proxy="${proxy_url}"
export https_proxy="${proxy_url}"
export no_proxy="127.0.0.1,localhost"
EOF

View File

@ -1,79 +0,0 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "allowed_domains" {
description = "List of domains allowed by the squid proxy."
type = list(string)
default = [
".google.com",
".github.com",
".fastlydns.net",
".debian.org"
]
}
variable "cidrs" {
description = "CIDR ranges for subnets."
type = map(string)
default = {
app = "10.0.0.0/24"
proxy = "10.0.2.0/28"
psc = "10.0.3.0/28"
}
}
variable "nat_logging" {
description = "Enables Cloud NAT logging if not null, value is one of 'ERRORS_ONLY', 'TRANSLATIONS_ONLY', 'ALL'."
type = string
default = "ERRORS_ONLY"
}
variable "prefix" {
description = "Prefix used for resource names."
type = string
validation {
condition = var.prefix != ""
error_message = "Prefix cannot be empty."
}
}
variable "project_create" {
description = "Set to non null if project needs to be created."
type = object({
billing_account = string
parent = string
})
default = null
validation {
condition = (
var.project_create == null
? true
: can(regex("(organizations|folders)/[0-9]+", var.project_create.parent))
)
error_message = "Project parent must be of the form folders/folder_id or organizations/organization_id."
}
}
variable "project_id" {
description = "Project id used for all resources."
type = string
}
variable "region" {
description = "Default region for resources."
type = string
default = "europe-west1"
}

View File

@ -1,62 +0,0 @@
# Network filtering with Squid
This blueprint shows how to deploy a filtering HTTP proxy to restrict Internet access. Here we show one way to do this using a VPC with two subnets:
- The `apps` subnet hosts the VMs that will have their Internet access tightly controlled by a non-caching filtering forward proxy.
- The `proxy` subnet hosts a Cloud NAT instance and a [Squid](http://www.squid-cache.org/) server.
The VPC is a Shared VPC and all the service projects will be located under a folder enforcing the `compute.vmExternalIpAccess` [organization policy](https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints). This prevents the service projects from having external IPs, thus forcing all outbound Internet connections through the proxy.
To allow Internet connectivity to the proxy subnet, a Cloud NAT instance is configured to allow usage from [that subnet only](https://cloud.google.com/nat/docs/set-up-manage-network-address-translation#specify_subnet_ranges_for_nat). All other subnets are not allowed to use the Cloud NAT instance.
To simplify the usage of the proxy, a Cloud DNS private zone is created and the IP address of the proxy is exposed with the FQDN `proxy.internal`.
You can optionally deploy the Squid server as a [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups) by setting the `mig` option to `true`. This option defaults to `false`, which results in a standalone VM.
![High-level diagram](squid.png "High-level diagram")
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [billing_account](variables.tf#L26) | Billing account id used as default for new projects. | <code>string</code> | ✓ | |
| [prefix](variables.tf#L52) | Prefix used for resource names. | <code>string</code> | ✓ | |
| [root_node](variables.tf#L67) | Root node for the new hierarchy, either 'organizations/org_id' or 'folders/folder_id'. | <code>string</code> | ✓ | |
| [allowed_domains](variables.tf#L17) | List of domains allowed by the squid proxy. | <code>list&#40;string&#41;</code> | | <code title="&#91;&#10; &#34;.google.com&#34;,&#10; &#34;.github.com&#34;&#10;&#93;">&#91;&#8230;&#93;</code> |
| [cidrs](variables.tf#L31) | CIDR ranges for subnets. | <code>map&#40;string&#41;</code> | | <code title="&#123;&#10; apps &#61; &#34;10.0.0.0&#47;24&#34;&#10; proxy &#61; &#34;10.0.1.0&#47;28&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [mig](variables.tf#L40) | Enables the creation of an autoscaling managed instance group of squid instances. | <code>bool</code> | | <code>false</code> |
| [nat_logging](variables.tf#L46) | Enables Cloud NAT logging if not null, value is one of 'ERRORS_ONLY', 'TRANSLATIONS_ONLY', 'ALL'. | <code>string</code> | | <code>&#34;ERRORS_ONLY&#34;</code> |
| [region](variables.tf#L61) | Default region for resources. | <code>string</code> | | <code>&#34;europe-west1&#34;</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [squid-address](outputs.tf#L17) | IP address of the Squid proxy. | |
<!-- END TFDOC -->
## Test
```hcl
module "test1" {
source = "./fabric/blueprints/networking/__need_fixing/filtering-proxy"
billing_account = "123456-123456-123456"
mig = true
prefix = "fabric"
root_node = "folders/123456789"
}
# tftest modules=14 resources=38
```
```hcl
module "test2" {
source = "./fabric/blueprints/networking/__need_fixing/filtering-proxy"
billing_account = "123456-123456-123456"
mig = false
prefix = "fabric"
root_node = "folders/123456789"
}
# tftest modules=12 resources=32
```

View File

@ -1,281 +0,0 @@
/**
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
locals {
squid_address = (
var.mig
? module.squid-ilb.0.forwarding_rule_addresses[""]
: module.squid-vm.internal_ip
)
}
###############################################################################
# Folder with network-related resources #
###############################################################################
module "folder-netops" {
source = "../../../../modules/folder"
parent = var.root_node
name = "netops"
}
###############################################################################
# Host project and shared VPC resources #
###############################################################################
module "project-host" {
source = "../../../../modules/project"
billing_account = var.billing_account
name = "host"
parent = module.folder-netops.id
prefix = var.prefix
services = [
"compute.googleapis.com",
"dns.googleapis.com",
"logging.googleapis.com"
]
shared_vpc_host_config = {
enabled = true
}
}
module "vpc" {
source = "../../../../modules/net-vpc"
project_id = module.project-host.project_id
name = "vpc"
subnets = [
{
name = "apps"
ip_cidr_range = var.cidrs.apps
region = var.region
},
{
name = "proxy"
ip_cidr_range = var.cidrs.proxy
region = var.region
}
]
}
module "firewall" {
source = "../../../../modules/net-vpc-firewall"
project_id = module.project-host.project_id
network = module.vpc.name
ingress_rules = {
allow-ingress-squid = {
description = "Allow squid ingress traffic"
source_ranges = [
var.cidrs.apps, "35.191.0.0/16", "130.211.0.0/22"
]
targets = [module.service-account-squid.email]
use_service_accounts = true
rules = [{
protocol = "tcp"
ports = [3128]
}]
}
}
}
module "nat" {
source = "../../../../modules/net-cloudnat"
project_id = module.project-host.project_id
region = var.region
name = "default"
router_network = module.vpc.name
config_source_subnets = "LIST_OF_SUBNETWORKS"
# 64512 / 11 = 5864 (11 is the number of usable IPs in the proxy subnet)
config_port_allocation = {
enable_dynamic_port_allocation = true
enable_endpoint_independent_mapping = false
min_ports_per_vm = 5864
}
subnetworks = [
{
self_link = module.vpc.subnet_self_links["${var.region}/proxy"]
config_source_ranges = ["ALL_IP_RANGES"]
secondary_ranges = null
}
]
logging_filter = var.nat_logging
}
module "private-dns" {
source = "../../../../modules/dns"
project_id = module.project-host.project_id
name = "internal"
zone_config = {
domain = "internal."
private = {
client_networks = [module.vpc.self_link]
}
}
recordsets = {
"A squid" = { ttl = 60, records = [local.squid_address] }
"CNAME proxy" = { ttl = 3600, records = ["squid.internal."] }
}
}
###############################################################################
# Squid resources #
###############################################################################
module "service-account-squid" {
source = "../../../../modules/iam-service-account"
project_id = module.project-host.project_id
name = "svc-squid"
iam_project_roles = {
(module.project-host.project_id) = [
"roles/logging.logWriter",
"roles/monitoring.metricWriter",
]
}
}
module "cos-squid" {
source = "../../../../modules/cloud-config-container/__need_fixing/squid"
allow = var.allowed_domains
clients = [var.cidrs.apps]
}
module "squid-vm" {
source = "../../../../modules/compute-vm"
project_id = module.project-host.project_id
zone = "${var.region}-b"
name = "squid-vm"
instance_type = "e2-medium"
create_template = var.mig
network_interfaces = [{
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["${var.region}/proxy"]
}]
boot_disk = {
initialize_params = {
image = "cos-cloud/cos-stable"
}
}
service_account = {
email = module.service-account-squid.email
}
metadata = {
user-data = module.cos-squid.cloud_config
}
}
module "squid-mig" {
count = var.mig ? 1 : 0
source = "../../../../modules/compute-mig"
project_id = module.project-host.project_id
location = "${var.region}-b"
name = "squid-mig"
instance_template = module.squid-vm.template.self_link
target_size = 1
auto_healing_policies = {
initial_delay_sec = 60
}
autoscaler_config = {
max_replicas = 10
min_replicas = 1
cooldown_period = 30
scaling_signals = {
cpu_utilization = {
target = 0.65
}
}
}
health_check_config = {
enable_logging = true
tcp = {
port = 3128
}
}
}
module "squid-ilb" {
count = var.mig ? 1 : 0
source = "../../../../modules/net-lb-int"
project_id = module.project-host.project_id
region = var.region
name = "squid-ilb"
service_label = "squid-ilb"
forwarding_rules_config = {
"" = {
ports = [3128]
}
}
vpc_config = {
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["${var.region}/proxy"]
}
backends = [{
group = module.squid-mig.0.group_manager.instance_group
}]
health_check_config = {
enable_logging = true
tcp = {
port = 3128
}
}
}
###############################################################################
# Service project #
###############################################################################
module "folder-apps" {
source = "../../../../modules/folder"
parent = var.root_node
name = "apps"
org_policies = {
# prevent VMs with public IPs in the apps folder
"compute.vmExternalIpAccess" = {
rules = [{ deny = { all = true } }]
}
}
}
module "project-app" {
source = "../../../../modules/project"
billing_account = var.billing_account
name = "app1"
parent = module.folder-apps.id
prefix = var.prefix
services = ["compute.googleapis.com"]
shared_vpc_service_config = {
host_project = module.project-host.project_id
service_identity_iam = {
"roles/compute.networkUser" = ["cloudservices"]
}
}
}
module "test-vm" {
source = "../../../../modules/compute-vm"
project_id = module.project-app.project_id
zone = "${var.region}-b"
name = "test-vm"
instance_type = "e2-micro"
tags = ["ssh"]
network_interfaces = [{
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["${var.region}/apps"]
nat = false
addresses = null
}]
service_account = {
auto_create = true
}
}

View File

@ -1,20 +0,0 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
output "squid-address" {
description = "IP address of the Squid proxy."
value = local.squid_address
}

Binary file not shown.


View File

@ -1,70 +0,0 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "allowed_domains" {
description = "List of domains allowed by the squid proxy."
type = list(string)
default = [
".google.com",
".github.com"
]
}
variable "billing_account" {
description = "Billing account id used as default for new projects."
type = string
}
variable "cidrs" {
description = "CIDR ranges for subnets."
type = map(string)
default = {
apps = "10.0.0.0/24"
proxy = "10.0.1.0/28"
}
}
variable "mig" {
description = "Enables the creation of an autoscaling managed instance group of squid instances."
type = bool
default = false
}
variable "nat_logging" {
description = "Enables Cloud NAT logging if not null, value is one of 'ERRORS_ONLY', 'TRANSLATIONS_ONLY', 'ALL'."
type = string
default = "ERRORS_ONLY"
}
variable "prefix" {
description = "Prefix used for resource names."
type = string
validation {
condition = var.prefix != ""
error_message = "Prefix cannot be empty."
}
}
variable "region" {
description = "Default region for resources."
type = string
default = "europe-west1"
}
variable "root_node" {
description = "Root node for the new hierarchy, either 'organizations/org_id' or 'folders/folder_id'."
type = string
}

View File

@ -1,28 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM marketplace.gcr.io/google/debian11
RUN apt-get update && apt-get dist-upgrade -y && apt-get install -y curl gnupg2
RUN curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
RUN bash add-google-cloud-ops-agent-repo.sh --also-install
RUN rm -f add-google-cloud-ops-agent-repo.sh
RUN echo '#!/bin/bash' > /entrypoint.sh
RUN echo 'cd /tmp' >> /entrypoint.sh
RUN echo '/opt/google-cloud-ops-agent/libexec/google_cloud_ops_agent_engine -service=otel -in /etc/google-cloud-ops-agent/config.yaml' >> /entrypoint.sh
RUN echo '/opt/google-cloud-ops-agent/subagents/opentelemetry-collector/otelopscol --config=/tmp/otel.yaml --feature-gates=exporter.googlecloud.OTLPDirect' >> /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT /entrypoint.sh
CMD []

View File

@ -1,41 +0,0 @@
# Nginx-based reverse proxy cluster
This blueprint shows how to deploy an autoscaling reverse proxy cluster using Nginx, based on regional Managed Instance Groups.
![High-level diagram](reverse-proxy.png "High-level diagram")
Autoscaling is driven by the Nginx current connections metric, reported by the Cloud Ops Agent.
The example uses Nginx, but it can easily be adapted to any other reverse proxy software (e.g. Squid, Varnish, etc.).
## Ops Agent image
There is a simple [`Dockerfile`](Dockerfile) available for building the Ops Agent image to be run inside the Container-Optimized OS instance. Build the container, push it to your Container/Artifact Registry, and set the `ops_agent_image` variable to point to the image you built.
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [autoscaling_metric](variables.tf#L31) | Definition of metric to use for scaling. | <code title="object&#40;&#123;&#10; name &#61; string&#10; single_instance_assignment &#61; number&#10; target &#61; number&#10; type &#61; string &#35; GAUGE, DELTA_PER_SECOND, DELTA_PER_MINUTE&#10; filter &#61; string&#10;&#125;&#41;&#10;&#10;&#10;default &#61; &#123;&#10; name &#61; &#34;workload.googleapis.com&#47;nginx.connections_current&#34;&#10; single_instance_assignment &#61; null&#10; target &#61; 10 &#35; Target 10 connections per instance, just for demonstration purposes&#10; type &#61; &#34;GAUGE&#34;&#10; filter &#61; null&#10;&#125;">object&#40;&#123;&#8230;&#125;</code> | ✓ | |
| [prefix](variables.tf#L94) | Prefix used for resource names. | <code>string</code> | ✓ | |
| [project_name](variables.tf#L112) | Name of an existing project or of the new project. | <code>string</code> | ✓ | |
| [autoscaling](variables.tf#L17) | Autoscaling configuration for the instance group. | <code title="object&#40;&#123;&#10; min_replicas &#61; number&#10; max_replicas &#61; number&#10; cooldown_period &#61; number&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code title="&#123;&#10; min_replicas &#61; 1&#10; max_replicas &#61; 10&#10; cooldown_period &#61; 30&#10;&#125;">&#123;&#8230;&#125;</code> |
| [backends](variables.tf#L50) | Nginx locations configurations to proxy traffic to. | <code>string</code> | | <code title="&#34;&#60;&#60;-EOT&#10; location &#47; &#123;&#10; proxy_pass http:&#47;&#47;10.0.16.58:80;&#10; proxy_http_version 1.1;&#10; proxy_set_header Connection &#34;&#34;;&#10; &#125;&#10;EOT&#34;">&#34;&#60;&#60;-EOT&#8230;EOT&#34;</code> |
| [cidrs](variables.tf#L62) | Subnet IP CIDR ranges. | <code>map&#40;string&#41;</code> | | <code title="&#123;&#10; gce &#61; &#34;10.0.16.0&#47;24&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [network](variables.tf#L70) | Network name. | <code>string</code> | | <code>&#34;reverse-proxy-vpc&#34;</code> |
| [network_create](variables.tf#L76) | Create network or use existing one. | <code>bool</code> | | <code>true</code> |
| [nginx_image](variables.tf#L82) | Nginx container image to use. | <code>string</code> | | <code>&#34;gcr.io&#47;cloud-marketplace&#47;google&#47;nginx1:latest&#34;</code> |
| [ops_agent_image](variables.tf#L88) | Google Cloud Ops Agent container image to use. | <code>string</code> | | <code>&#34;gcr.io&#47;sfans-hub-project-d647&#47;ops-agent:latest&#34;</code> |
| [project_create](variables.tf#L103) | Parameters for the creation of the new project. | <code title="object&#40;&#123;&#10; billing_account_id &#61; string&#10; parent &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [region](variables.tf#L117) | Default region for resources. | <code>string</code> | | <code>&#34;europe-west4&#34;</code> |
| [subnetwork](variables.tf#L123) | Subnetwork name. | <code>string</code> | | <code>&#34;gce&#34;</code> |
| [tls](variables.tf#L129) | Also offer reverse proxying with TLS (self-signed certificate). | <code>bool</code> | | <code>false</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [load_balancer_url](outputs.tf#L17) | Load balancer for the reverse proxy instance group. | |
<!-- END TFDOC -->
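This blueprint has no automated test yet (see the note in the deprecated blueprints list); the following is a minimal usage sketch, in which the module path, project name and backend address are illustrative:
```hcl
module "reverse-proxy" {
  source       = "./fabric/blueprints/networking/nginx-reverse-proxy-cluster"
  prefix       = "fabric"
  project_name = "my-project"
  project_create = {
    billing_account_id = "123456-ABCDEF-123456"
    parent             = "folders/1234567890"
  }
  backends = <<-EOT
    location / {
      proxy_pass         http://10.0.16.58:80;
      proxy_http_version 1.1;
      proxy_set_header   Connection "";
    }
  EOT
}
```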

View File

@ -1,330 +0,0 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
locals {
monitoring_agent_unit = <<-EOT
[Unit]
Description=Start monitoring agent container
After=gcr-online.target docker.socket
Wants=gcr-online.target docker.socket docker-events-collector.service
[Service]
Environment="HOME=/home/opsagent"
ExecStartPre=/usr/bin/docker-credential-gcr configure-docker
ExecStart=/usr/bin/docker run --rm --name=monitoring-agent \
--network host \
-v /etc/google-cloud-ops-agent/config.yaml:/etc/google-cloud-ops-agent/config.yaml \
${var.ops_agent_image}
ExecStop=/usr/bin/docker stop monitoring-agent
EOT
monitoring_agent_config = <<-EOT
logging:
service:
pipelines:
default_pipeline:
receivers: []
metrics:
receivers:
hostmetrics:
type: hostmetrics
nginx:
type: nginx
collection_interval: 10s
stub_status_url: http://localhost/healthz
service:
pipelines:
default_pipeline:
receivers:
- hostmetrics
- nginx
EOT
nginx_config = <<-EOT
server {
listen 80;
server_name HOSTNAME localhost;
%{if var.tls}
listen 443 ssl;
ssl_certificate /etc/ssl/self-signed.crt;
ssl_certificate_key /etc/ssl/self-signed.key;
%{endif}
keepalive_timeout 650s;
keepalive_requests 10000;
proxy_connect_timeout 60s;
proxy_read_timeout 5m;
proxy_send_timeout 5m;
error_log stderr;
access_log /dev/stdout combined;
set_real_ip_from ${module.glb.address}/32;
set_real_ip_from 35.191.0.0/16;
set_real_ip_from 130.211.0.0/22;
real_ip_header X-Forwarded-For;
real_ip_recursive off;
location /healthz {
stub_status on;
access_log off;
allow 127.0.0.1;
allow 35.191.0.0/16;
allow 130.211.0.0/22;
deny all;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
${var.backends}
}
EOT
nginx_files = {
"/etc/systemd/system/monitoring-agent.service" = {
content = local.monitoring_agent_unit
owner = "root"
permissions = "0644"
}
"/etc/nginx/conf.d/default.conf" = {
content = local.nginx_config
owner = "root"
permissions = "0644"
}
"/etc/google-cloud-ops-agent/config.yaml" = {
content = local.monitoring_agent_config
owner = "root"
permissions = "0644"
}
}
users = [
{
username = "opsagent"
uid = 2001
}
]
}
module "project" {
source = "../../../modules/project"
billing_account = (
var.project_create != null
? var.project_create.billing_account_id
: null
)
name = var.project_name
parent = (var.project_create != null
? var.project_create.parent
: null
)
project_create = var.project_create != null
services = [
"cloudresourcemanager.googleapis.com",
"compute.googleapis.com",
"iam.googleapis.com",
"logging.googleapis.com",
"monitoring.googleapis.com",
]
}
module "vpc" {
source = "../../../modules/net-vpc"
project_id = module.project.project_id
name = var.network
subnets = [{
name = var.subnetwork
ip_cidr_range = var.cidrs[var.subnetwork]
region = var.region
}]
vpc_create = var.network_create
}
module "firewall" {
source = "../../../modules/net-vpc-firewall"
project_id = module.project.project_id
network = module.vpc.name
ingress_rules = {
"${var.prefix}-allow-http-to-proxy-cluster" = {
description = "Allow Nginx HTTP(S) ingress traffic"
source_ranges = [
var.cidrs[var.subnetwork], "35.191.0.0/16", "130.211.0.0/22"
]
targets = [module.service-account-proxy.email]
use_service_accounts = true
rules = [{ protocol = "tcp", ports = [80, 443] }]
}
"${var.prefix}-allow-iap-ssh" = {
description = "Allow Nginx SSH traffic from IAP"
source_ranges = ["35.235.240.0/20"]
targets = [module.service-account-proxy.email]
use_service_accounts = true
rules = [{ protocol = "tcp", ports = [22] }]
}
}
}
module "nat" {
source = "../../../modules/net-cloudnat"
project_id = module.project.project_id
region = var.region
name = "${var.prefix}-nat"
config_min_ports_per_vm = 4000
config_source_subnets = "LIST_OF_SUBNETWORKS"
logging_filter = "ALL"
router_network = module.vpc.name
subnetworks = [{
self_link = (
module.vpc.subnet_self_links[format("%s/%s", var.region, var.subnetwork)]
)
config_source_ranges = ["ALL_IP_RANGES"]
secondary_ranges = null
}]
}
###############################################################################
# Proxy resources #
###############################################################################
module "service-account-proxy" {
source = "../../../modules/iam-service-account"
project_id = module.project.project_id
name = "${var.prefix}-reverse-proxy"
iam_project_roles = {
(module.project.project_id) = [
"roles/logging.logWriter",
"roles/monitoring.metricWriter",
"roles/storage.objectViewer", // For pulling the Ops Agent image
]
}
}
module "cos-nginx" {
count = !var.tls ? 1 : 0
source = "../../../modules/cloud-config-container/nginx"
image = var.nginx_image
files = local.nginx_files
users = local.users
runcmd_pre = ["sed -i \"s/HOSTNAME/$${HOSTNAME}/\" /etc/nginx/conf.d/default.conf"]
runcmd_post = ["systemctl start monitoring-agent"]
}
module "cos-nginx-tls" {
count = var.tls ? 1 : 0
source = "../../../modules/cloud-config-container/nginx-tls"
nginx_image = var.nginx_image
files = local.nginx_files
users = local.users
runcmd_post = ["systemctl start monitoring-agent"]
}
module "mig-proxy" {
source = "../../../modules/compute-mig"
project_id = module.project.project_id
location = var.region
name = "${var.prefix}-proxy-cluster"
named_ports = {
http = "80"
https = "443"
}
autoscaler_config = var.autoscaling == null ? null : {
min_replicas = var.autoscaling.min_replicas
max_replicas = var.autoscaling.max_replicas
cooldown_period = var.autoscaling.cooldown_period
cpu_utilization_target = null
load_balancing_utilization_target = null
metric = var.autoscaling_metric
}
update_policy = {
minimal_action = "REPLACE"
type = "PROACTIVE"
min_ready_sec = 30
max_surge = {
fixed = 1
}
}
instance_template = module.proxy-vm.template.self_link
health_check_config = {
type = "http"
check = {
port = 80
request_path = "/healthz"
}
config = {
check_interval_sec = 10
healthy_threshold = 1
unhealthy_threshold = 1
timeout_sec = 10
}
logging = true
}
auto_healing_policies = {
health_check = module.mig-proxy.health_check.self_link
initial_delay_sec = 60
}
}
module "proxy-vm" {
source = "../../../modules/compute-vm"
project_id = module.project.project_id
zone = format("%s-c", var.region)
name = "nginx-test-vm"
instance_type = "e2-standard-2"
tags = ["proxy-cluster"]
network_interfaces = [{
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links[format("%s/%s", var.region, var.subnetwork)]
}]
boot_disk = {
initialize_params = {
image = "projects/cos-cloud/global/images/family/cos-stable"
}
}
create_template = true
metadata = {
user-data = !var.tls ? module.cos-nginx.0.cloud_config : module.cos-nginx-tls.0.cloud_config
google-logging-enabled = true
}
service_account = {
email = module.service-account-proxy.email
}
}
module "glb" {
source = "../../../modules/net-lb-app-ext"
project_id = module.project.project_id
name = "${var.prefix}-reverse-proxy-glb"
health_check_configs = {
default = {
check_interval_sec = 10
healthy_threshold = 1
unhealthy_threshold = 1
timeout_sec = 10
http = {
port_specification = "USE_NAMED_PORT"
port_name = "http"
request_path = "/healthz"
}
}
}
backend_service_configs = {
default = {
backends = [{ backend = module.mig-proxy.group_manager.instance_group }]
port_name = !var.tls ? "http" : "https"
protocol = !var.tls ? "HTTP" : "HTTPS"
}
}
}

View File

@ -1,20 +0,0 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
output "load_balancer_url" {
description = "Load balancer for the reverse proxy instance group."
value = format("http%s://%s/", var.tls ? "s" : "", module.glb.address)
}

Binary file not shown.


View File

@ -1,133 +0,0 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "autoscaling" {
description = "Autoscaling configuration for the instance group."
type = object({
min_replicas = number
max_replicas = number
cooldown_period = number
})
default = {
min_replicas = 1
max_replicas = 10
cooldown_period = 30
}
}
variable "autoscaling_metric" {
description = "Definition of metric to use for scaling."
type = object({
name = string
single_instance_assignment = number
target = number
type = string # GAUGE, DELTA_PER_SECOND, DELTA_PER_MINUTE
filter = string
})
default = {
name = "workload.googleapis.com/nginx.connections_current"
single_instance_assignment = null
target = 10 # Target 10 connections per instance, just for demonstration purposes
type = "GAUGE"
filter = null
}
}
variable "backends" {
description = "Nginx locations configurations to proxy traffic to."
type = string
default = <<-EOT
location / {
proxy_pass http://10.0.16.58:80;
proxy_http_version 1.1;
proxy_set_header Connection "";
}
EOT
}
variable "cidrs" {
description = "Subnet IP CIDR ranges."
type = map(string)
default = {
gce = "10.0.16.0/24"
}
}
variable "network" {
description = "Network name."
type = string
default = "reverse-proxy-vpc"
}
variable "network_create" {
description = "Create network or use existing one."
type = bool
default = true
}
variable "nginx_image" {
description = "Nginx container image to use."
type = string
default = "gcr.io/cloud-marketplace/google/nginx1:latest"
}
variable "ops_agent_image" {
description = "Google Cloud Ops Agent container image to use."
type = string
default = "gcr.io/sfans-hub-project-d647/ops-agent:latest"
}
variable "prefix" {
description = "Prefix used for resource names."
type = string
validation {
condition = var.prefix != ""
error_message = "Prefix cannot be empty."
}
}
variable "project_create" {
description = "Parameters for the creation of the new project."
type = object({
billing_account_id = string
parent = string
})
default = null
}
variable "project_name" {
description = "Name of an existing project or of the new project."
type = string
}
variable "region" {
description = "Default region for resources."
type = string
default = "europe-west4"
}
variable "subnetwork" {
description = "Subnetwork name."
type = string
default = "gce"
}
variable "tls" {
description = "Also offer reverse proxying with TLS (self-signed certificate)."
type = bool
default = false
}

View File

@ -1,226 +0,0 @@
# On-prem DNS and Google Private Access
This blueprint leverages the on-prem-in-a-box module to bootstrap an emulated on-premises environment on GCP, then connects it via VPN and sets up BGP and DNS so that several specific features can be tested:
- [Cloud DNS forwarding zone](https://cloud.google.com/dns/docs/overview#fz-targets) to on-prem
- DNS forwarding from on-prem via a [Cloud DNS inbound policy](https://cloud.google.com/dns/docs/policies#create-in)
- [Private Access for on-premises hosts](https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid)
The blueprint has been purposefully kept simple to show how to use and wire the on-prem module, but it lends itself well to experimenting and can be combined with the other [infrastructure blueprints](../) in this repository to test different GCP networking patterns in connection to on-prem. This is the high level diagram:
![High-level diagram](diagram.png "High-level diagram")
## Managed resources and services
This sample creates several distinct groups of resources:
- one VPC with two regions
- one set of firewall rules
- one Cloud NAT configuration per region
- one test instance on each region
- one service account for the test instances
- one service account for the onprem instance
- two dynamic VPN gateways in each of the regions with a single tunnel
- two DNS zones (private and forwarding) and a DNS inbound policy
- one emulated on-premises environment in a single GCP instance
## Cloud DNS inbound forwarder entry point
The Cloud DNS inbound policy reserves an IP address in the VPC, which is used by the on-prem DNS server to forward queries to Cloud DNS. This address of course needs to be explicitly set in the on-prem DNS configuration (see below for details), but since there is currently no way for Terraform to find the exact address (cf. [Google provider issue](https://github.com/terraform-providers/terraform-provider-google/issues/3753)), the following manual workaround needs to be applied.
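For reference, a Cloud DNS inbound policy is an ordinary DNS server policy with inbound forwarding enabled; a minimal sketch using the plain Google provider (project and network names are illustrative, and this is not necessarily how the blueprint itself defines it) looks like this:
```hcl
resource "google_dns_policy" "inbound" {
  project                   = "my-project" # illustrative
  name                      = "inbound-forwarding"
  enable_inbound_forwarding = true

  # attach the policy to the VPC that should receive queries from on-prem
  networks {
    network_url = "projects/my-project/global/networks/to-onprem" # illustrative
  }
}
```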
### Find out the forwarder entry point address
Run this gcloud command to [find out the address assigned to the inbound forwarder](https://cloud.google.com/dns/docs/policies#list-in-entrypoints):
```bash
gcloud compute addresses list --project [your project id]
```
In the list of addresses, look for the address with purpose `DNS_RESOLVER` in the subnet `to-onprem-default`. If its IP address is `10.0.0.2` it matches the default value in the Terraform `forwarder_address` variable, which means you're all set. If it's different, proceed to the next step.
### Update the forwarder address variable and recreate on-prem
If the forwarder address does not match the Terraform variable, add the correct value in your `terraform.tfvars` (or change the default value in `variables.tf`), then taint the onprem instance and apply to recreate it with the correct value in the DNS configuration:
```bash
tf apply
tf taint 'module.vm-onprem.google_compute_instance.default["onprem-1"]'
tf apply
```
## CoreDNS configuration for on-premises
The on-prem module uses a CoreDNS container to expose its DNS service, configured with four distinct blocks:
- the onprem block, serving static records for the `onprem.example.com` zone that map to each of the on-prem containers
- the forwarding block for the `gcp.example.com` zone and for Google Private Access, which forwards to the IP address of the Cloud DNS inbound policy
- the `google.internal` block, which exposes to containers a name for the instance metadata address
- the default block that forwards to Google public DNS resolvers
This is the CoreDNS configuration:
```coredns
onprem.example.org {
root /etc/coredns
hosts onprem.hosts
log
errors
}
gcp.example.org googleapis.com {
forward . ${dns_forwarder_address}
log
errors
}
google.internal {
hosts {
169.254.169.254 metadata.google.internal
}
}
. {
forward . 8.8.8.8
log
errors
}
```
## Testing
### Onprem to cloud
```bash
# check containers are running
sudo docker ps
# connect to the onprem instance
gcloud compute ssh onprem-1
# check that the VPN tunnels are up
sudo docker exec -it onprem_vpn_1 ipsec statusall
Status of IKE charon daemon (strongSwan 5.8.1, Linux 5.4.0-1029-gcp, x86_64):
uptime: 6 minutes, since Nov 30 08:42:08 2020
worker threads: 11 of 16 idle, 5/0/0/0 working, job queue: 0/0/0/0, scheduled: 8
loaded plugins: charon aesni mgf1 random nonce x509 revocation constraints pubkey pkcs1 pkcs7 pkcs8 pkcs12 pgp dnskey sshkey pem openssl fips-prf gmp curve25519 xcbc cmac curl sqlite attr kernel-netlink resolve socket-default farp stroke vici updown eap-identity eap-sim eap-aka eap-aka-3gpp2 eap-simaka-pseudonym eap-simaka-reauth eap-md5 eap-mschapv2 eap-radius eap-tls xauth-generic xauth-eap dhcp unity counters
Listening IP addresses:
10.0.16.2
169.254.1.2
169.254.2.2
Connections:
gcp: %any...35.233.104.67,0.0.0.0/0,::/0 IKEv2, dpddelay=30s
gcp: local: uses pre-shared key authentication
gcp: remote: [35.233.104.67] uses pre-shared key authentication
gcp: child: 0.0.0.0/0 === 0.0.0.0/0 TUNNEL, dpdaction=restart
gcp2: %any...35.246.101.51,0.0.0.0/0,::/0 IKEv2, dpddelay=30s
gcp2: local: uses pre-shared key authentication
gcp2: remote: [35.246.101.51] uses pre-shared key authentication
gcp2: child: 0.0.0.0/0 === 0.0.0.0/0 TUNNEL, dpdaction=restart
Security Associations (2 up, 0 connecting):
gcp2[4]: ESTABLISHED 6 minutes ago, 10.0.16.2[34.76.57.103]...35.246.101.51[35.246.101.51]
gcp2[4]: IKEv2 SPIs: 227cb2c52085a743_i 13b18b0ad5d4de2b_r*, pre-shared key reauthentication in 9 hours
gcp2[4]: IKE proposal: AES_GCM_16_256/PRF_HMAC_SHA2_512/MODP_2048
gcp2{4}: INSTALLED, TUNNEL, reqid 2, ESP in UDP SPIs: cb6fdb84_i eea28dee_o
gcp2{4}: AES_GCM_16_256, 3298 bytes_i, 3051 bytes_o (48 pkts, 3s ago), rekeying in 2 hours
gcp2{4}: 0.0.0.0/0 === 0.0.0.0/0
gcp[3]: ESTABLISHED 6 minutes ago, 10.0.16.2[34.76.57.103]...35.233.104.67[35.233.104.67]
gcp[3]: IKEv2 SPIs: e2cffed5395b63dd_i 99f343468625507c_r*, pre-shared key reauthentication in 9 hours
gcp[3]: IKE proposal: AES_GCM_16_256/PRF_HMAC_SHA2_512/MODP_2048
gcp{3}: INSTALLED, TUNNEL, reqid 1, ESP in UDP SPIs: c3f09701_i 4e8cc8d5_o
gcp{3}: AES_GCM_16_256, 3438 bytes_i, 3135 bytes_o (49 pkts, 8s ago), rekeying in 2 hours
gcp{3}: 0.0.0.0/0 === 0.0.0.0/0
# check that the BGP sessions work and the advertised routes are set
sudo docker exec -it onprem_bird_1 ip route
default via 10.0.16.1 dev eth0
10.0.0.0/24 proto bird src 10.0.16.2
nexthop via 169.254.1.1 dev vti0 weight 1
nexthop via 169.254.2.1 dev vti1 weight 1
10.0.16.0/24 dev eth0 proto kernel scope link src 10.0.16.2
10.10.0.0/24 proto bird src 10.0.16.2
nexthop via 169.254.1.1 dev vti0 weight 1
nexthop via 169.254.2.1 dev vti1 weight 1
35.199.192.0/19 proto bird src 10.0.16.2
nexthop via 169.254.1.1 dev vti0 weight 1
nexthop via 169.254.2.1 dev vti1 weight 1
169.254.1.0/30 dev vti0 proto kernel scope link src 169.254.1.2
169.254.2.0/30 dev vti1 proto kernel scope link src 169.254.2.2
199.36.153.4/30 proto bird src 10.0.16.2
nexthop via 169.254.1.1 dev vti0 weight 1
nexthop via 169.254.2.1 dev vti1 weight 1
199.36.153.8/30 proto bird src 10.0.16.2
nexthop via 169.254.1.1 dev vti0 weight 1
nexthop via 169.254.2.1 dev vti1 weight 1
# get a shell on the toolbox container
sudo docker exec -it onprem_toolbox_1 sh
# test pinging the IP address of the test instances (check outputs for it)
ping 10.0.0.3
ping 10.10.0.3
# note: if you are able to ping the IP but the DNS tests below do not work,
# refer to the sections above on configuring the DNS inbound fwd IP
# test forwarding from CoreDNS via the Cloud DNS inbound policy
dig test-1.gcp.example.org +short
10.0.0.3
dig test-2.gcp.example.org +short
10.10.0.3
# test that Private Access is configured correctly
dig compute.googleapis.com +short
private.googleapis.com.
199.36.153.8
199.36.153.9
199.36.153.10
199.36.153.11
# issue an API call via Private Access
gcloud config set project [your project id]
gcloud compute instances list
```
### Cloud to onprem
```bash
# connect to the test instance
gcloud compute ssh test-1
# test forwarding from Cloud DNS to onprem CoreDNS (address may differ)
dig gw.onprem.example.org +short
10.0.16.1
# test a request to the onprem web server
curl www.onprem.example.org -s |grep h1
<h1>On Prem in a Box</h1>
```
## Operational considerations
A single pre-existing project is used in this blueprint to keep variables and complexity to a minimum; in a real-world scenario each spoke would probably use a separate project.
The VPNs used to connect to the on-premises environment do not account for HA; upgrading to HA VPN is reasonably simple using the relevant [module](../../../modules/net-vpn-ha).
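As a rough sketch of that upgrade, assuming the `net-vpn-ha` module interface used elsewhere in this repository (the external peer gateway attributes and the shared secret below are illustrative and should be checked against the module documentation), the first gateway could be swapped along these lines:
```hcl
module "vpn1-ha" {
  source        = "../../../modules/net-vpn-ha"
  project_id    = var.project_id
  region        = var.region.gcp1
  network       = module.vpc.self_link
  name          = "to-onprem1"
  router_config = { asn = var.bgp_asn.gcp1 }
  # illustrative: a single external peer gateway pointing at the onprem VM address
  peer_gateways = {
    default = {
      external = {
        redundancy_type = "SINGLE_IP_INTERNALLY_REDUNDANT"
        interfaces      = [module.vm-onprem.external_ip]
      }
    }
  }
  tunnels = {
    onprem-0 = {
      bgp_peer = {
        address = local.bgp_interface_onprem1
        asn     = var.bgp_asn.onprem1
      }
      bgp_session_range               = "${local.bgp_interface_gcp1}/30"
      peer_external_gateway_interface = 0
      vpn_gateway_interface           = 0
      shared_secret                   = "replace-with-a-real-secret"
    }
  }
}
```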
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [project_id](variables.tf#L59) | Project id for all resources. | <code>string</code> | ✓ | |
| [bgp_asn](variables.tf#L17) | BGP ASNs. | <code>map&#40;number&#41;</code> | | <code title="&#123;&#10; gcp1 &#61; 64513&#10; gcp2 &#61; 64520&#10; onprem1 &#61; 64514&#10; onprem2 &#61; 64514&#10;&#125;">&#123;&#8230;&#125;</code> |
| [bgp_interface_ranges](variables.tf#L28) | BGP interface IP CIDR ranges. | <code>map&#40;string&#41;</code> | | <code title="&#123;&#10; gcp1 &#61; &#34;169.254.1.0&#47;30&#34;&#10; gcp2 &#61; &#34;169.254.2.0&#47;30&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [dns_forwarder_address](variables.tf#L37) | Address of the DNS server used to forward queries from on-premises. | <code>string</code> | | <code>&#34;10.0.0.2&#34;</code> |
| [forwarder_address](variables.tf#L43) | GCP DNS inbound policy forwarder address. | <code>string</code> | | <code>&#34;10.0.0.2&#34;</code> |
| [ip_ranges](variables.tf#L49) | IP CIDR ranges. | <code>map&#40;string&#41;</code> | | <code title="&#123;&#10; gcp1 &#61; &#34;10.0.0.0&#47;24&#34;&#10; gcp2 &#61; &#34;10.10.0.0&#47;24&#34;&#10; onprem &#61; &#34;10.0.16.0&#47;24&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [region](variables.tf#L64) | VPC region. | <code>map&#40;string&#41;</code> | | <code title="&#123;&#10; gcp1 &#61; &#34;europe-west1&#34;&#10; gcp2 &#61; &#34;europe-west2&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [ssh_source_ranges](variables.tf#L73) | IP CIDR ranges that will be allowed to connect via SSH to the onprem instance. | <code>list&#40;string&#41;</code> | | <code>&#91;&#34;0.0.0.0&#47;0&#34;&#93;</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [onprem-instance](outputs.tf#L17) | Onprem instance details. | |
| [test-instance1](outputs.tf#L26) | Test instance details. | |
| [test-instance2](outputs.tf#L33) | Test instance details. | |
<!-- END TFDOC -->

View File

@@ -1,21 +0,0 @@
onprem.example.org {
root /etc/coredns
hosts onprem.hosts
log
errors
}
gcp.example.org googleapis.com {
forward . ${dns_forwarder_address}
log
errors
}
google.internal {
hosts {
169.254.169.254 metadata.google.internal
}
}
. {
forward . 8.8.8.8
log
errors
}

View File

@@ -1,20 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
terraform {
backend "gcs" {
bucket = ""
}
}

Binary file not shown.


View File

@@ -1,331 +0,0 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
locals {
bgp_interface_gcp1 = cidrhost(var.bgp_interface_ranges.gcp1, 1)
bgp_interface_onprem1 = cidrhost(var.bgp_interface_ranges.gcp1, 2)
bgp_interface_gcp2 = cidrhost(var.bgp_interface_ranges.gcp2, 1)
bgp_interface_onprem2 = cidrhost(var.bgp_interface_ranges.gcp2, 2)
netblocks = {
dns = data.google_netblock_ip_ranges.dns-forwarders.cidr_blocks_ipv4.0
private = data.google_netblock_ip_ranges.private-googleapis.cidr_blocks_ipv4.0
restricted = data.google_netblock_ip_ranges.restricted-googleapis.cidr_blocks_ipv4.0
}
vips = {
private = [for i in range(4) : cidrhost(local.netblocks.private, i)]
restricted = [for i in range(4) : cidrhost(local.netblocks.restricted, i)]
}
vm-startup-script = join("\n", [
"#! /bin/bash",
"apt-get update && apt-get install -y bash-completion dnsutils kubectl"
])
}
data "google_netblock_ip_ranges" "dns-forwarders" {
range_type = "dns-forwarders"
}
data "google_netblock_ip_ranges" "private-googleapis" {
range_type = "private-googleapis"
}
data "google_netblock_ip_ranges" "restricted-googleapis" {
range_type = "restricted-googleapis"
}
################################################################################
# Networking #
################################################################################
module "vpc" {
source = "../../../modules/net-vpc"
project_id = var.project_id
name = "to-onprem"
subnets = [
{
ip_cidr_range = var.ip_ranges.gcp1
name = "subnet1"
region = var.region.gcp1
},
{
ip_cidr_range = var.ip_ranges.gcp2
name = "subnet2"
region = var.region.gcp2
}
]
}
module "vpc-firewall" {
source = "../../../modules/net-vpc-firewall"
project_id = var.project_id
network = module.vpc.name
default_rules_config = {
admin_ranges = values(var.ip_ranges)
ssh_ranges = var.ssh_source_ranges
}
}
module "vpn1" {
source = "../../../modules/net-vpn-dynamic"
project_id = var.project_id
region = var.region.gcp1
network = module.vpc.name
name = "to-onprem1"
router_config = { asn = var.bgp_asn.gcp1 }
tunnels = {
onprem = {
bgp_peer = {
address = local.bgp_interface_onprem1
asn = var.bgp_asn.onprem1
custom_advertise = {
all_subnets = true
all_vpc_subnets = false
all_peer_vpc_subnets = false
ip_ranges = {
(local.netblocks.dns) = "DNS resolvers"
(local.netblocks.private) = "private.googleapis.com"
(local.netblocks.restricted) = "restricted.googleapis.com"
}
}
}
bgp_session_range = "${local.bgp_interface_gcp1}/30"
peer_ip = module.vm-onprem.external_ip
}
}
}
module "vpn2" {
source = "../../../modules/net-vpn-dynamic"
project_id = var.project_id
region = var.region.gcp2
network = module.vpc.name
name = "to-onprem2"
router_config = { asn = var.bgp_asn.gcp2 }
tunnels = {
onprem = {
bgp_peer = {
address = local.bgp_interface_onprem2
asn = var.bgp_asn.onprem2
custom_advertise = {
all_subnets = true
all_vpc_subnets = false
all_peer_vpc_subnets = false
ip_ranges = {
(local.netblocks.dns) = "DNS resolvers"
(local.netblocks.private) = "private.googleapis.com"
(local.netblocks.restricted) = "restricted.googleapis.com"
}
}
}
bgp_session_range = "${local.bgp_interface_gcp2}/30"
peer_ip = module.vm-onprem.external_ip
}
}
}
module "nat1" {
source = "../../../modules/net-cloudnat"
project_id = var.project_id
region = var.region.gcp1
name = "default"
router_create = false
router_name = module.vpn1.router_name
}
module "nat2" {
source = "../../../modules/net-cloudnat"
project_id = var.project_id
region = var.region.gcp2
name = "default"
router_create = false
router_name = module.vpn2.router_name
}
################################################################################
# DNS #
################################################################################
module "dns-gcp" {
source = "../../../modules/dns"
project_id = var.project_id
name = "gcp-example"
zone_config = {
domain = "gcp.example.org."
private = {
client_networks = [module.vpc.self_link]
}
}
recordsets = {
"A localhost" = { records = ["127.0.0.1"] }
"A test-1" = { records = [module.vm-test1.internal_ip] }
"A test-2" = { records = [module.vm-test2.internal_ip] }
}
}
module "dns-api" {
source = "../../../modules/dns"
project_id = var.project_id
name = "googleapis"
zone_config = {
domain = "googleapis.com."
private = {
client_networks = [module.vpc.self_link]
}
}
recordsets = {
"CNAME *" = { records = ["private.googleapis.com."] }
"A private" = { records = local.vips.private }
"A restricted" = { records = local.vips.restricted }
}
}
module "dns-onprem" {
source = "../../../modules/dns"
project_id = var.project_id
name = "onprem-example"
zone_config = {
domain = "onprem.example.org."
forwarding = {
client_networks = [module.vpc.self_link]
forwarders = {
"${cidrhost(var.ip_ranges.onprem, 3)}" = null
}
}
}
}
resource "google_dns_policy" "inbound" {
provider = google-beta
project = var.project_id
name = "gcp-inbound"
enable_inbound_forwarding = true
networks {
network_url = module.vpc.self_link
}
}
################################################################################
# Test instance #
################################################################################
module "service-account-gce" {
source = "../../../modules/iam-service-account"
project_id = var.project_id
name = "gce-test"
iam_project_roles = {
(var.project_id) = [
"roles/logging.logWriter",
"roles/monitoring.metricWriter",
]
}
}
module "vm-test1" {
source = "../../../modules/compute-vm"
project_id = var.project_id
zone = "${var.region.gcp1}-b"
name = "test-1"
network_interfaces = [{
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["${var.region.gcp1}/subnet1"]
}]
metadata = { startup-script = local.vm-startup-script }
service_account = {
email = module.service-account-gce.email
}
tags = ["ssh"]
}
module "vm-test2" {
source = "../../../modules/compute-vm"
project_id = var.project_id
zone = "${var.region.gcp2}-b"
name = "test-2"
network_interfaces = [{
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["${var.region.gcp2}/subnet2"]
nat = false
addresses = null
}]
metadata = { startup-script = local.vm-startup-script }
service_account = {
email = module.service-account-gce.email
}
tags = ["ssh"]
}
################################################################################
# On prem #
################################################################################
module "config-onprem" {
source = "../../../modules/cloud-config-container/onprem"
config_variables = { dns_forwarder_address = var.dns_forwarder_address }
coredns_config = "${path.module}/assets/Corefile"
local_ip_cidr_range = var.ip_ranges.onprem
vpn_config = {
peer_ip = module.vpn1.address
peer_ip2 = module.vpn2.address
shared_secret = module.vpn1.random_secret
shared_secret2 = module.vpn2.random_secret
type = "dynamic"
}
vpn_dynamic_config = {
local_bgp_asn = var.bgp_asn.onprem1
local_bgp_address = local.bgp_interface_onprem1
peer_bgp_asn = var.bgp_asn.gcp1
peer_bgp_address = local.bgp_interface_gcp1
local_bgp_asn2 = var.bgp_asn.onprem2
local_bgp_address2 = local.bgp_interface_onprem2
peer_bgp_asn2 = var.bgp_asn.gcp2
peer_bgp_address2 = local.bgp_interface_gcp2
}
}
module "service-account-onprem" {
source = "../../../modules/iam-service-account"
project_id = var.project_id
name = "gce-onprem"
iam_project_roles = {
(var.project_id) = [
"roles/compute.viewer",
"roles/logging.logWriter",
"roles/monitoring.metricWriter",
]
}
}
module "vm-onprem" {
source = "../../../modules/compute-vm"
project_id = var.project_id
zone = "${var.region.gcp1}-b"
instance_type = "f1-micro"
name = "onprem"
boot_disk = {
initialize_params = {
image = "ubuntu-os-cloud/ubuntu-1804-lts"
}
}
metadata = {
user-data = module.config-onprem.cloud_config
}
network_interfaces = [{
network = module.vpc.name
subnetwork = module.vpc.subnet_self_links["${var.region.gcp1}/subnet1"]
}]
service_account = {
email = module.service-account-onprem.email
}
tags = ["ssh"]
}

View File

@@ -1,39 +0,0 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
output "onprem-instance" {
description = "Onprem instance details."
value = {
external_ip = module.vm-onprem.external_ip
internal_ip = module.vm-onprem.internal_ip
name = module.vm-onprem.instance.name
}
}
output "test-instance1" {
description = "Test instance details."
value = join(" ", [
module.vm-test1.instance.name,
module.vm-test1.internal_ip
])
}
output "test-instance2" {
description = "Test instance details."
value = join(" ", [
module.vm-test2.instance.name,
module.vm-test2.internal_ip
])
}

View File

@@ -1,77 +0,0 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "bgp_asn" {
description = "BGP ASNs."
type = map(number)
default = {
gcp1 = 64513
gcp2 = 64520
onprem1 = 64514
onprem2 = 64514
}
}
variable "bgp_interface_ranges" {
description = "BGP interface IP CIDR ranges."
type = map(string)
default = {
gcp1 = "169.254.1.0/30"
gcp2 = "169.254.2.0/30"
}
}
variable "dns_forwarder_address" {
description = "Address of the DNS server used to forward queries from on-premises."
type = string
default = "10.0.0.2"
}
variable "forwarder_address" {
description = "GCP DNS inbound policy forwarder address."
type = string
default = "10.0.0.2"
}
variable "ip_ranges" {
description = "IP CIDR ranges."
type = map(string)
default = {
gcp1 = "10.0.0.0/24"
gcp2 = "10.10.0.0/24"
onprem = "10.0.16.0/24"
}
}
variable "project_id" {
description = "Project id for all resources."
type = string
}
variable "region" {
description = "VPC region."
type = map(string)
default = {
gcp1 = "europe-west1"
gcp2 = "europe-west2"
}
}
variable "ssh_source_ranges" {
description = "IP CIDR ranges that will be allowed to connect via SSH to the onprem instance."
type = list(string)
default = ["0.0.0.0/0"]
}

View File

@@ -0,0 +1 @@
apichick

View File

@@ -0,0 +1 @@
LucaPrete

View File

@@ -0,0 +1 @@
sruffilli

View File

@@ -1,126 +0,0 @@
# Hub and Spoke using VPC Network Peering
This blueprint creates a simple **Hub and Spoke** setup, where the VPC network connects satellite locations (spokes) through a single intermediary location (hub) via [VPC Network Peering](https://cloud.google.com/vpc/docs/vpc-peering).
Since VPC Network Peering does not provide transitive routing, some things
don't work without additional configuration. By default, spokes cannot
talk with other spokes, and managed services in tenant networks can only be
reached from the attached spoke.
To get around these limitations, this blueprint uses [Cloud
VPN](https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview)
to provide transitive routing and to establish connectivity to the Google Kubernetes Engine (GKE)
masters in the tenant project ([courtesy of @drebes](https://github.com/drebes/tf-samples/blob/master/gke-master-from-hub/main.tf#L10)). Other solutions typically involve the use of proxies, as [described in this GKE article](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies).
One other topic that needs to be considered when using peering is the limit of 25 peerings in each peering group, which constrains the scalability of designs like the one presented here.
The blueprint has been purposefully kept simple to show how to use and wire the VPC modules together, and so that it can be used as a basis for more complex scenarios. This is the high level diagram:
![High-level diagram](diagram.png "High-level diagram")
## Managed resources and services
This sample creates several distinct groups of resources:
- three VPC networks, one each for the hub and spokes, each with one subnet
- VPC Network Peering configurations between the hub network and each spoke
- a Compute Engine VM instance for each VPC network. The VMs are created
using an accompanying service account
- private GKE cluster with a single node pool in the spoke-2 VPC network. The GKE nodes have an accompanying service account.
- one set of firewall rules for each VPC network
- one Cloud NAT configuration for each network
- one test instance for each spoke
- VPN gateways in the hub and spoke-2 networks with accompanying tunnels. These tunnels allow the Cloud Routers to exchange transitive routes so that resources in spoke-1 and spoke-2 can reach each other, and so that resources in the hub network can reach the control plane of the GKE cluster hosted in spoke-2.
## Testing GKE access from spoke 1
As mentioned above, VPN tunnels are used to provide transitive routing so that
the hub network can connect to the GKE master. This diagram illustrates the solution:
![Network-level diagram](diagram-network.png "Network-level diagram")
To test cluster access, first log on to the spoke 2 instance and confirm cluster and IAM roles are set up correctly:
```bash
gcloud container clusters get-credentials cluster-1 --zone europe-west1-b
kubectl get all
```
The blueprint configures the peering with the GKE master VPC network to export routes for you, so that VPN routes are passed through the peering. You can disable this by hand in the console or by editing the `peering_config` attribute in the `gke-cluster` module, to test non-working configurations or to switch to using the [GKE proxy](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies).
### Export routes via Terraform (recommended)
Change the GKE cluster module and add a `peering_config` block inside `private_cluster_config`:
```hcl
peering_config = {
export_routes = true
import_routes = false
}
```
If you added the configuration after applying, simply apply Terraform again.
### Export routes via gcloud (alternative)
If you prefer to use `gcloud` to export routes on the peering, first identify the peering (it has a name like `gke-xxxxxxxxxxxxxxxxxxxx-xxxx-xxxx-peer`) in the Cloud Console from the *VPC network peering* page, or using `gcloud`, then configure it to export routes:
```bash
gcloud compute networks peerings list
# find the gke-xxxxxxxxxxxxxxxxxxxx-xxxx-xxxx-peer in the spoke-2 network
gcloud compute networks peerings update [peering name from above] \
--network spoke-2 --export-custom-routes
```
### Test routes
Then connect via SSH to the hub VM instance and run the same commands you ran on the spoke 2 instance above. You should be able to run `kubectl` commands against the cluster. To test the default situation with no supporting VPN, just comment out the two VPN modules in `main.tf` and run `terraform apply` to bring down the VPN gateways and tunnels. GKE should then only be accessible from spoke 2.
## Operational considerations
A single pre-existing project is used in this blueprint to keep variables and complexity to a minimum; in a real-world scenario each spoke would use a separate project (and Shared VPC).
A few APIs need to be enabled in the project; if `apply` fails due to a service not being enabled, just click on the link in the error message to enable it for the project, then resume `apply`.
You can connect your hub to on-premises using Cloud Interconnect or HA VPN. On-premises networks would be able to reach the hub and all spokes, and the hub and all spokes would be able to reach on-premises, assuming the on-premises network is configured to allow access.
You can add additional spokes to the architecture, as sketched below. These spokes will have networking similar to spoke-1: they will have connectivity to the hub and to spoke-2, but not to each other unless you also create VPN tunnels for the new spokes.
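As a sketch of what that looks like with the modules already used in this blueprint, a hypothetical spoke-3 only needs its own VPC and one more peering towards the hub (the CIDR range below is illustrative and would normally be added to `var.ip_ranges`):
```hcl
module "vpc-spoke-3" {
  source     = "../../../modules/net-vpc"
  project_id = module.project.project_id
  name       = "${var.prefix}-spoke-3"
  subnets = [
    {
      ip_cidr_range = "10.0.48.0/24" # illustrative range
      name          = "${var.prefix}-spoke-3-1"
      region        = var.region
    }
  ]
}

module "hub-to-spoke-3-peering" {
  source        = "../../../modules/net-vpc-peering"
  local_network = module.vpc-hub.self_link
  peer_network  = module.vpc-spoke-3.self_link
  routes_config = {
    local = { export = true, import = false }
    peer  = { export = false, import = true }
  }
  # peerings towards the same hub need to be created serially
  depends_on = [module.hub-to-spoke-2-peering]
}
```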
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [prefix](variables.tf#L41) | Prefix used for resource names. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L76) | Project id used for all resources. | <code>string</code> | ✓ | |
| [deletion_protection](variables.tf#L15) | Prevent Terraform from destroying data storage resources (storage buckets, GKE clusters, CloudSQL instances) in this blueprint. When this field is set in Terraform state, a terraform destroy or terraform apply that would delete data storage resources will fail. | <code>bool</code> | | <code>false</code> |
| [ip_ranges](variables.tf#L22) | IP CIDR ranges. | <code>map&#40;string&#41;</code> | | <code title="&#123;&#10; hub &#61; &#34;10.0.0.0&#47;24&#34;&#10; spoke-1 &#61; &#34;10.0.16.0&#47;24&#34;&#10; spoke-2 &#61; &#34;10.0.32.0&#47;24&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [ip_secondary_ranges](variables.tf#L32) | Secondary IP CIDR ranges. | <code>map&#40;string&#41;</code> | | <code title="&#123;&#10; spoke-2-pods &#61; &#34;10.128.0.0&#47;18&#34;&#10; spoke-2-services &#61; &#34;172.16.0.0&#47;24&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [private_service_ranges](variables.tf#L50) | Private service IP CIDR ranges. | <code>map&#40;string&#41;</code> | | <code title="&#123;&#10; spoke-2-cluster-1 &#61; &#34;192.168.0.0&#47;28&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [project_create](variables.tf#L58) | Set to non null if project needs to be created. | <code title="object&#40;&#123;&#10; billing_account &#61; string&#10; oslogin &#61; bool&#10; parent &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [region](variables.tf#L81) | VPC region. | <code>string</code> | | <code>&#34;europe-west1&#34;</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [project](outputs.tf#L15) | Project id. | |
| [vms](outputs.tf#L20) | GCE VMs. | |
<!-- END TFDOC -->
## Test
```hcl
module "test" {
source = "./fabric/blueprints/networking/hub-and-spoke-peering"
prefix = "prefix"
project_create = {
billing_account = "123456-123456-123456"
oslogin = true
parent = "folders/123456789"
}
project_id = "project-1"
}
# tftest modules=22 resources=69
```

View File

@@ -1,20 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
terraform {
backend "gcs" {
bucket = ""
}
}

Binary file not shown.


Binary file not shown.


View File

@@ -1,391 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
locals {
vm-instances = [
module.vm-hub.instance,
module.vm-spoke-1.instance,
module.vm-spoke-2.instance
]
vm-startup-script = join("\n", [
"#! /bin/bash",
"apt-get update && apt-get install -y bash-completion dnsutils kubectl"
])
}
###############################################################################
# project #
###############################################################################
module "project" {
source = "../../../modules/project"
project_create = var.project_create != null
billing_account = try(var.project_create.billing_account, null)
compute_metadata = var.project_create.oslogin != true ? {} : {
enable-oslogin = "true"
}
parent = try(var.project_create.parent, null)
name = var.project_id
services = [
"compute.googleapis.com",
"container.googleapis.com"
]
}
################################################################################
# Hub networking #
################################################################################
module "vpc-hub" {
source = "../../../modules/net-vpc"
project_id = module.project.project_id
name = "${var.prefix}-hub"
subnets = [
{
ip_cidr_range = var.ip_ranges.hub
name = "${var.prefix}-hub-1"
region = var.region
}
]
}
module "nat-hub" {
source = "../../../modules/net-cloudnat"
project_id = module.project.project_id
region = var.region
name = "${var.prefix}-hub"
router_name = "${var.prefix}-hub"
router_network = module.vpc-hub.self_link
}
module "vpc-hub-firewall" {
source = "../../../modules/net-vpc-firewall"
project_id = var.project_id
network = module.vpc-hub.name
default_rules_config = {
admin_ranges = values(var.ip_ranges)
}
}
################################################################################
# Spoke 1 networking #
################################################################################
module "vpc-spoke-1" {
source = "../../../modules/net-vpc"
project_id = module.project.project_id
name = "${var.prefix}-spoke-1"
subnets = [
{
ip_cidr_range = var.ip_ranges.spoke-1
name = "${var.prefix}-spoke-1-1"
region = var.region
}
]
}
module "vpc-spoke-1-firewall" {
source = "../../../modules/net-vpc-firewall"
project_id = module.project.project_id
network = module.vpc-spoke-1.name
default_rules_config = {
admin_ranges = values(var.ip_ranges)
}
}
module "nat-spoke-1" {
source = "../../../modules/net-cloudnat"
project_id = module.project.project_id
region = var.region
name = "${var.prefix}-spoke-1"
router_name = "${var.prefix}-spoke-1"
router_network = module.vpc-spoke-1.self_link
}
module "hub-to-spoke-1-peering" {
source = "../../../modules/net-vpc-peering"
local_network = module.vpc-hub.self_link
peer_network = module.vpc-spoke-1.self_link
routes_config = {
local = { export = true, import = false }
peer = { export = false, import = true }
}
}
################################################################################
# Spoke 2 networking #
################################################################################
module "vpc-spoke-2" {
source = "../../../modules/net-vpc"
project_id = module.project.project_id
name = "${var.prefix}-spoke-2"
subnets = [
{
ip_cidr_range = var.ip_ranges.spoke-2
name = "${var.prefix}-spoke-2-1"
region = var.region
secondary_ip_ranges = {
pods = var.ip_secondary_ranges.spoke-2-pods
services = var.ip_secondary_ranges.spoke-2-services
}
}
]
}
module "vpc-spoke-2-firewall" {
source = "../../../modules/net-vpc-firewall"
project_id = module.project.project_id
network = module.vpc-spoke-2.name
default_rules_config = {
admin_ranges = values(var.ip_ranges)
}
}
module "nat-spoke-2" {
source = "../../../modules/net-cloudnat"
project_id = module.project.project_id
region = var.region
name = "${var.prefix}-spoke-2"
router_name = "${var.prefix}-spoke-2"
router_network = module.vpc-spoke-2.self_link
}
module "hub-to-spoke-2-peering" {
source = "../../../modules/net-vpc-peering"
local_network = module.vpc-hub.self_link
peer_network = module.vpc-spoke-2.self_link
routes_config = {
local = { export = true, import = false }
peer = { export = false, import = true }
}
depends_on = [module.hub-to-spoke-1-peering]
}
################################################################################
# Test VMs #
################################################################################
module "vm-hub" {
source = "../../../modules/compute-vm"
project_id = module.project.project_id
zone = "${var.region}-b"
name = "${var.prefix}-hub"
network_interfaces = [{
network = module.vpc-hub.self_link
subnetwork = module.vpc-hub.subnet_self_links["${var.region}/${var.prefix}-hub-1"]
nat = false
addresses = null
}]
metadata = { startup-script = local.vm-startup-script }
service_account = {
email = module.service-account-gce.email
}
tags = ["ssh"]
}
module "vm-spoke-1" {
source = "../../../modules/compute-vm"
project_id = module.project.project_id
zone = "${var.region}-b"
name = "${var.prefix}-spoke-1"
network_interfaces = [{
network = module.vpc-spoke-1.self_link
subnetwork = module.vpc-spoke-1.subnet_self_links["${var.region}/${var.prefix}-spoke-1-1"]
nat = false
addresses = null
}]
metadata = { startup-script = local.vm-startup-script }
service_account = {
email = module.service-account-gce.email
}
tags = ["ssh"]
}
module "vm-spoke-2" {
source = "../../../modules/compute-vm"
project_id = module.project.project_id
zone = "${var.region}-b"
name = "${var.prefix}-spoke-2"
network_interfaces = [{
network = module.vpc-spoke-2.self_link
subnetwork = module.vpc-spoke-2.subnet_self_links["${var.region}/${var.prefix}-spoke-2-1"]
nat = false
addresses = null
}]
metadata = { startup-script = local.vm-startup-script }
service_account = {
email = module.service-account-gce.email
}
tags = ["ssh"]
}
module "service-account-gce" {
source = "../../../modules/iam-service-account"
project_id = module.project.project_id
name = "${var.prefix}-gce-test"
iam_project_roles = {
(var.project_id) = [
"roles/container.developer",
"roles/logging.logWriter",
"roles/monitoring.metricWriter",
]
}
}
################################################################################
# GKE #
################################################################################
module "cluster-1" {
source = "../../../modules/gke-cluster-standard"
name = "${var.prefix}-cluster-1"
project_id = module.project.project_id
location = "${var.region}-b"
vpc_config = {
network = module.vpc-spoke-2.self_link
subnetwork = module.vpc-spoke-2.subnet_self_links["${var.region}/${var.prefix}-spoke-2-1"]
master_authorized_ranges = {
for name, range in var.ip_ranges : name => range
}
master_ipv4_cidr_block = var.private_service_ranges.spoke-2-cluster-1
}
max_pods_per_node = 32
labels = {
environment = "test"
}
private_cluster_config = {
enable_private_endpoint = true
master_global_access = true
peering_config = {
export_routes = true
import_routes = false
}
}
deletion_protection = var.deletion_protection
}
module "cluster-1-nodepool-1" {
source = "../../../modules/gke-nodepool"
name = "${var.prefix}-nodepool-1"
project_id = module.project.project_id
location = module.cluster-1.location
cluster_name = module.cluster-1.name
service_account = {
email = module.service-account-gke-node.email
}
}
# roles assigned via this module use non-authoritative IAM bindings at the
# project level, with no risk of conflicts with pre-existing roles
module "service-account-gke-node" {
source = "../../../modules/iam-service-account"
project_id = module.project.project_id
name = "${var.prefix}-gke-node"
iam_project_roles = {
(var.project_id) = [
"roles/logging.logWriter", "roles/monitoring.metricWriter",
]
}
}
################################################################################
# GKE peering VPN #
################################################################################
module "vpn-hub" {
source = "../../../modules/net-vpn-ha"
project_id = module.project.project_id
region = var.region
network = module.vpc-hub.name
name = "${var.prefix}-hub"
peer_gateways = {
default = { gcp = module.vpn-spoke-2.self_link }
}
router_config = {
asn = 64516
custom_advertise = {
all_subnets = true
all_vpc_subnets = true
all_peer_vpc_subnets = true
ip_ranges = {
"10.0.0.0/8" = "default"
}
}
}
tunnels = {
remote-0 = {
bgp_peer = {
address = "169.254.1.1"
asn = 64515
}
bgp_session_range = "169.254.1.2/30"
vpn_gateway_interface = 0
}
remote-1 = {
bgp_peer = {
address = "169.254.2.1"
asn = 64515
}
bgp_session_range = "169.254.2.2/30"
vpn_gateway_interface = 1
}
}
}
module "vpn-spoke-2" {
source = "../../../modules/net-vpn-ha"
project_id = module.project.project_id
region = var.region
network = module.vpc-spoke-2.name
name = "${var.prefix}-spoke-2"
router_config = {
asn = 64515
custom_advertise = {
all_subnets = true
all_vpc_subnets = true
all_peer_vpc_subnets = true
ip_ranges = {
"10.0.0.0/8" = "default"
"${var.private_service_ranges.spoke-2-cluster-1}" = "access to control plane"
}
}
}
peer_gateways = {
default = { gcp = module.vpn-hub.self_link }
}
tunnels = {
remote-0 = {
bgp_peer = {
address = "169.254.1.2"
asn = 64516
}
bgp_session_range = "169.254.1.1/30"
shared_secret = module.vpn-hub.random_secret
vpn_gateway_interface = 0
}
remote-1 = {
bgp_peer = {
address = "169.254.2.2"
asn = 64516
}
bgp_session_range = "169.254.2.1/30"
shared_secret = module.vpn-hub.random_secret
vpn_gateway_interface = 1
}
}
}

View File

@@ -1,26 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
output "project" {
description = "Project id."
value = module.project.project_id
}
output "vms" {
description = "GCE VMs."
value = {
for instance in local.vm-instances :
instance.name => instance.network_interface.0.network_ip
}
}

View File

@@ -1,85 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
variable "deletion_protection" {
description = "Prevent Terraform from destroying data storage resources (storage buckets, GKE clusters, CloudSQL instances) in this blueprint. When this field is set in Terraform state, a terraform destroy or terraform apply that would delete data storage resources will fail."
type = bool
default = false
nullable = false
}
variable "ip_ranges" {
description = "IP CIDR ranges."
type = map(string)
default = {
hub = "10.0.0.0/24"
spoke-1 = "10.0.16.0/24"
spoke-2 = "10.0.32.0/24"
}
}
variable "ip_secondary_ranges" {
description = "Secondary IP CIDR ranges."
type = map(string)
default = {
spoke-2-pods = "10.128.0.0/18"
spoke-2-services = "172.16.0.0/24"
}
}
variable "prefix" {
description = "Prefix used for resource names."
type = string
validation {
condition = var.prefix != ""
error_message = "Prefix cannot be empty."
}
}
variable "private_service_ranges" {
description = "Private service IP CIDR ranges."
type = map(string)
default = {
spoke-2-cluster-1 = "192.168.0.0/28"
}
}
variable "project_create" {
description = "Set to non null if project needs to be created."
type = object({
billing_account = string
oslogin = bool
parent = string
})
default = null
validation {
condition = (
var.project_create == null
? true
: can(regex("(organizations|folders)/[0-9]+", var.project_create.parent))
)
error_message = "Project parent must be of the form folders/folder_id or organizations/organization_id."
}
}
variable "project_id" {
description = "Project id used for all resources."
type = string
}
variable "region" {
description = "VPC region."
type = string
default = "europe-west1"
}

View File

@@ -1,114 +0,0 @@
# Hub and Spoke via VPN
This blueprint creates a simple **Hub and Spoke VPN** setup, where the VPC network connects satellite locations (spokes) through a single intermediary location (hub) via [IPsec HA VPN](https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview#ha-vpn).
A few additional features are also shown:
- [custom BGP advertisements](https://cloud.google.com/network-connectivity/docs/router/how-to/advertising-overview) to implement transitivity between spokes (see the sketch after this list)
- [VPC Global Routing](https://cloud.google.com/network-connectivity/docs/router/how-to/configuring-routing-mode) to leverage VPN gateways in different regions as next hops (used here for illustrative/study purposes, not usually done in real life)
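A minimal sketch of such a custom advertisement on the landing Cloud Router, mirroring the `router_config` structure used by the `net-vpn-ha` module elsewhere in this repository (the ASN and the aggregate range are the blueprint defaults, everything else is illustrative):
```hcl
router_config = {
  asn = 64513
  custom_advertise = {
    all_subnets          = true
    all_vpc_subnets      = true
    all_peer_vpc_subnets = true
    ip_ranges = {
      # aggregate covering every spoke, so each spoke learns a path to the others via the hub
      "10.0.0.0/8" = "internal default"
    }
  }
}
```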
The blueprint has been purposefully kept simple to show how to use and wire the VPC and VPN-HA modules together, and so that it can be used as a basis for experimentation. For a more complex scenario that better reflects real-life usage, including [Shared VPC](https://cloud.google.com/vpc/docs/shared-vpc) and [DNS cross-project binding](https://cloud.google.com/dns/docs/zones/cross-project-binding) please refer to the [FAST network stage](../../../fast/stages/2-networking-b-vpn/).
This is the high level diagram of this blueprint:
![High-level diagram](diagram.png "High-level diagram")
## Managed resources and services
This sample creates several distinct groups of resources:
- one VPC for each hub and each spoke
- one set of firewall rules for each VPC
- one HA VPN gateway with two tunnels and one Cloud Router for each spoke
- two HA VPN gateways with two tunnels each and a shared Cloud Router for the hub
- one DNS private zone in the hub
- one DNS peering zone and one DNS private zone in each spoke
- one test instance for the hub and each spoke
## Prerequisites
A single pre-existing project is used in this blueprint to keep variables and complexity to a minimum; in a real-world scenario each spoke would probably use a separate project.
The provided project needs a valid billing account; the Compute and DNS APIs are enabled by the blueprint.
You can easily create such a project by turning on project creation in the project module contained in `main.tf`, as shown in this snippet:
```hcl
module "project" {
source = "../../../modules/project"
name = var.project_id
# comment or remove this line to enable project creation
# project_create = false
# add the following line with your billing account id value
billing_account = "12345-ABCD-12345"
services = [
"compute.googleapis.com",
"dns.googleapis.com"
]
}
# tftest skip
```
## Testing
Once the blueprint is up, you can quickly test features by logging in to one of the test VMs:
```bash
gcloud compute ssh hs-ha-lnd-test-r1
# test DNS resolution of the landing zone
ping test-r1.example.com
# test DNS resolution of the prod zone, and prod reachability
ping test-r1.prd.example.com
# test DNS resolution of the dev zone, and dev reachability via global routing
ping test-r2.dev.example.com
```
<!-- TFDOC OPTS files:1 -->
<!-- BEGIN TFDOC -->
## Files
| name | description | modules |
|---|---|---|
| [main.tf](./main.tf) | Module-level locals and resources. | <code>compute-vm</code> · <code>project</code> |
| [net-dev.tf](./net-dev.tf) | Development spoke VPC. | <code>dns</code> · <code>net-vpc</code> · <code>net-vpc-firewall</code> |
| [net-landing.tf](./net-landing.tf) | Landing hub VPC. | <code>dns</code> · <code>net-vpc</code> · <code>net-vpc-firewall</code> |
| [net-prod.tf](./net-prod.tf) | Production spoke VPC. | <code>dns</code> · <code>net-vpc</code> · <code>net-vpc-firewall</code> |
| [outputs.tf](./outputs.tf) | Module outputs. | |
| [variables.tf](./variables.tf) | Module variables. | |
| [vpn-dev-r1.tf](./vpn-dev-r1.tf) | Landing to Development VPN for region 1. | <code>net-vpn-ha</code> |
| [vpn-prod-r1.tf](./vpn-prod-r1.tf) | Landing to Production VPN for region 1. | <code>net-vpn-ha</code> |
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [prefix](variables.tf#L34) | Prefix used for resource names. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L52) | Project id for all resources. | <code>string</code> | ✓ | |
| [ip_ranges](variables.tf#L15) | Subnet IP CIDR ranges. | <code>map&#40;string&#41;</code> | | <code title="&#123;&#10; land-0-r1 &#61; &#34;10.0.0.0&#47;24&#34;&#10; land-0-r2 &#61; &#34;10.0.8.0&#47;24&#34;&#10; dev-0-r1 &#61; &#34;10.0.16.0&#47;24&#34;&#10; dev-0-r2 &#61; &#34;10.0.24.0&#47;24&#34;&#10; prod-0-r1 &#61; &#34;10.0.32.0&#47;24&#34;&#10; prod-0-r2 &#61; &#34;10.0.40.0&#47;24&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [ip_secondary_ranges](variables.tf#L28) | Subnet secondary ranges. | <code>map&#40;map&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [project_create_config](variables.tf#L43) | Populate with billing account id to trigger project creation. | <code title="object&#40;&#123;&#10; billing_account_id &#61; string&#10; parent_id &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [regions](variables.tf#L57) | VPC regions. | <code>map&#40;string&#41;</code> | | <code title="&#123;&#10; r1 &#61; &#34;europe-west1&#34;&#10; r2 &#61; &#34;europe-west4&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [vpn_configs](variables.tf#L66) | VPN configurations. | <code title="map&#40;object&#40;&#123;&#10; asn &#61; number&#10; custom_ranges &#61; map&#40;string&#41;&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code title="&#123;&#10; land-r1 &#61; &#123;&#10; asn &#61; 64513&#10; custom_ranges &#61; &#123;&#10; &#34;10.0.0.0&#47;8&#34; &#61; &#34;internal default&#34;&#10; &#125;&#10; &#125;&#10; dev-r1 &#61; &#123;&#10; asn &#61; 64514&#10; custom_ranges &#61; null&#10; &#125;&#10; prod-r1 &#61; &#123;&#10; asn &#61; 64515&#10; custom_ranges &#61; null&#10; &#125;&#10;&#125;">&#123;&#8230;&#125;</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [subnets](outputs.tf#L15) | Subnet details. | |
| [vms](outputs.tf#L39) | GCE VMs. | |
<!-- END TFDOC -->
## Test
```hcl
module "test" {
source = "./fabric/blueprints/networking/hub-and-spoke-vpn"
prefix = "prefix"
project_create_config = {
billing_account_id = "123456-123456-123456"
parent_id = "folders/123456789"
}
project_id = "project-1"
}
# tftest modules=20 resources=79
```

View File

@@ -1,20 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
terraform {
backend "gcs" {
bucket = ""
}
}

Binary file not shown.


View File

@@ -1,75 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# enable services in the project used
module "project" {
source = "../../..//modules/project"
name = var.project_id
parent = try(var.project_create_config.parent, null)
billing_account = try(var.project_create_config.billing_account_id, null)
project_create = try(var.project_create_config.billing_account_id, null) != null
services = [
"compute.googleapis.com",
"dns.googleapis.com"
]
}
# test VM in landing region 1
module "landing-r1-vm" {
source = "../../../modules/compute-vm"
project_id = var.project_id
name = "${var.prefix}-lnd-test-r1"
zone = "${var.regions.r1}-b"
network_interfaces = [{
network = module.landing-vpc.self_link
subnetwork = module.landing-vpc.subnet_self_links["${var.regions.r1}/${var.prefix}-lnd-0"]
nat = false
addresses = null
}]
tags = ["ssh"]
}
# test VM in prod region 1
module "prod-r1-vm" {
source = "../../../modules/compute-vm"
project_id = var.project_id
name = "${var.prefix}-prd-test-r1"
zone = "${var.regions.r1}-b"
network_interfaces = [{
network = module.prod-vpc.self_link
subnetwork = module.prod-vpc.subnet_self_links["${var.regions.r1}/${var.prefix}-prd-0"]
nat = false
addresses = null
}]
tags = ["ssh"]
}
# test VM in dev region 1
module "dev-r2-vm" {
source = "../../../modules/compute-vm"
project_id = var.project_id
name = "${var.prefix}-dev-test-r2"
zone = "${var.regions.r2}-b"
network_interfaces = [{
network = module.dev-vpc.self_link
subnetwork = module.dev-vpc.subnet_self_links["${var.regions.r2}/${var.prefix}-dev-0"]
nat = false
addresses = null
}]
tags = ["ssh"]
}

View File

@@ -1,77 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tfdoc:file:description Development spoke VPC.
module "dev-vpc" {
source = "../../../modules/net-vpc"
project_id = var.project_id
name = "${var.prefix}-dev"
subnets = [
{
ip_cidr_range = var.ip_ranges.dev-0-r1
name = "${var.prefix}-dev-0"
region = var.regions.r1
secondary_ip_ranges = try(
var.ip_secondary_ranges.dev-0-r1, {}
)
},
{
ip_cidr_range = var.ip_ranges.dev-0-r2
name = "${var.prefix}-dev-0"
region = var.regions.r2
secondary_ip_ranges = try(
var.ip_secondary_ranges.dev-0-r2, {}
)
}
]
}
module "dev-firewall" {
source = "../../../modules/net-vpc-firewall"
project_id = var.project_id
network = module.dev-vpc.name
default_rules_config = {
admin_ranges = values(var.ip_ranges)
}
}
module "dev-dns-peering" {
source = "../../../modules/dns"
project_id = var.project_id
name = "${var.prefix}-example-com-dev-peering"
zone_config = {
domain = "example.com."
peering = {
client_networks = [module.dev-vpc.self_link]
peer_network = module.landing-vpc.self_link
}
}
}
module "dev-dns-zone" {
source = "../../../modules/dns"
project_id = var.project_id
name = "${var.prefix}-dev-example-com"
zone_config = {
domain = "dev.example.com."
private = {
client_networks = [module.landing-vpc.self_link]
}
}
recordsets = {
"A localhost" = { records = ["127.0.0.1"] }
"A test-r2" = { records = [module.dev-r2-vm.internal_ip] }
}
}

View File

@@ -1,64 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tfdoc:file:description Landing hub VPC.
module "landing-vpc" {
source = "../../../modules/net-vpc"
project_id = var.project_id
name = "${var.prefix}-lnd"
subnets = [
{
ip_cidr_range = var.ip_ranges.land-0-r1
name = "${var.prefix}-lnd-0"
region = var.regions.r1
secondary_ip_ranges = try(
var.ip_secondary_ranges.land-0-r1, {}
)
},
{
ip_cidr_range = var.ip_ranges.land-0-r2
name = "${var.prefix}-lnd-0"
region = var.regions.r2
secondary_ip_ranges = try(
var.ip_secondary_ranges.land-0-r2, {}
)
}
]
}
module "landing-firewall" {
source = "../../../modules/net-vpc-firewall"
project_id = var.project_id
network = module.landing-vpc.name
default_rules_config = {
admin_ranges = values(var.ip_ranges)
}
}
module "landing-dns-zone" {
source = "../../../modules/dns"
project_id = var.project_id
name = "${var.prefix}-example-com"
zone_config = {
domain = "example.com."
private = {
client_networks = [module.landing-vpc.self_link]
}
}
recordsets = {
"A localhost" = { records = ["127.0.0.1"] }
"A test-r1" = { records = [module.landing-r1-vm.internal_ip] }
}
}

View File

@@ -1,77 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tfdoc:file:description Production spoke VPC.
module "prod-vpc" {
source = "../../../modules/net-vpc"
project_id = var.project_id
name = "${var.prefix}-prd"
subnets = [
{
ip_cidr_range = var.ip_ranges.prod-0-r1
name = "${var.prefix}-prd-0"
region = var.regions.r1
secondary_ip_ranges = try(
var.ip_secondary_ranges.prod-0-r1, {}
)
},
{
ip_cidr_range = var.ip_ranges.prod-0-r2
name = "${var.prefix}-prd-0"
region = var.regions.r2
secondary_ip_ranges = try(
var.ip_secondary_ranges.prod-0-r2, {}
)
}
]
}
module "prod-firewall" {
source = "../../../modules/net-vpc-firewall"
project_id = var.project_id
network = module.prod-vpc.name
default_rules_config = {
admin_ranges = values(var.ip_ranges)
}
}
module "prod-dns-peering" {
source = "../../../modules/dns"
project_id = var.project_id
name = "${var.prefix}-example-com-prd-peering"
zone_config = {
domain = "example.com."
peering = {
client_networks = [module.prod-vpc.self_link]
peer_network = module.landing-vpc.self_link
}
}
}
module "prod-dns-zone" {
source = "../../../modules/dns"
project_id = var.project_id
name = "${var.prefix}-prd-example-com"
zone_config = {
domain = "prd.example.com."
private = {
client_networks = [module.landing-vpc.self_link]
}
}
recordsets = {
"A localhost" = { records = ["127.0.0.1"] }
"A test-r1" = { records = [module.prod-r1-vm.internal_ip] }
}
}

View File

@@ -1,45 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
output "subnets" {
description = "Subnet details."
value = {
dev = {
for k, v in module.dev-vpc.subnets : k => {
id = v.id
ip_cidr_range = v.ip_cidr_range
}
}
landing = {
for k, v in module.landing-vpc.subnets : k => {
id = v.id
ip_cidr_range = v.ip_cidr_range
}
}
prod = {
for k, v in module.prod-vpc.subnets : k => {
id = v.id
ip_cidr_range = v.ip_cidr_range
}
}
}
}
output "vms" {
description = "GCE VMs."
value = {
for mod in [module.landing-r1-vm, module.dev-r2-vm, module.prod-r1-vm] :
mod.instance.name => mod.internal_ip
}
}


@ -1,88 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
variable "ip_ranges" {
description = "Subnet IP CIDR ranges."
type = map(string)
default = {
land-0-r1 = "10.0.0.0/24"
land-0-r2 = "10.0.8.0/24"
dev-0-r1 = "10.0.16.0/24"
dev-0-r2 = "10.0.24.0/24"
prod-0-r1 = "10.0.32.0/24"
prod-0-r2 = "10.0.40.0/24"
}
}
variable "ip_secondary_ranges" {
description = "Subnet secondary ranges."
type = map(map(string))
default = {}
}
variable "prefix" {
description = "Prefix used for resource names."
type = string
validation {
condition = var.prefix != ""
error_message = "Prefix cannot be empty."
}
}
variable "project_create_config" {
description = "Populate with billing account id to trigger project creation."
type = object({
billing_account_id = string
parent_id = string
})
default = null
}
variable "project_id" {
description = "Project id for all resources."
type = string
}
variable "regions" {
description = "VPC regions."
type = map(string)
default = {
r1 = "europe-west1"
r2 = "europe-west4"
}
}
variable "vpn_configs" {
description = "VPN configurations."
type = map(object({
asn = number
custom_ranges = map(string)
}))
default = {
land-r1 = {
asn = 64513
custom_ranges = {
"10.0.0.0/8" = "internal default"
}
}
dev-r1 = {
asn = 64514
custom_ranges = null
}
prod-r1 = {
asn = 64515
custom_ranges = null
}
}
}


@ -1,91 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tfdoc:file:description Landing to Development VPN for region 1.
module "landing-to-dev-vpn-r1" {
source = "../../../modules/net-vpn-ha"
project_id = var.project_id
network = module.landing-vpc.self_link
region = var.regions.r1
name = "${var.prefix}-lnd-to-dev-r1"
  # the router is created and managed by the landing-to-production VPN
  # module, so no custom advertisements are configured here
router_config = {
create = false
name = "${var.prefix}-lnd-vpn-r1"
asn = 64514
}
peer_gateways = {
default = { gcp = module.dev-to-landing-vpn-r1.self_link }
}
tunnels = {
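    # one tunnel per HA VPN gateway interface; the /30 BGP session ranges
    # mirror the peer addresses configured on the dev side below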
0 = {
bgp_peer = {
address = "169.254.2.2"
asn = var.vpn_configs.dev-r1.asn
}
bgp_session_range = "169.254.2.1/30"
vpn_gateway_interface = 0
}
1 = {
bgp_peer = {
address = "169.254.2.6"
asn = var.vpn_configs.dev-r1.asn
}
bgp_session_range = "169.254.2.5/30"
vpn_gateway_interface = 1
}
}
}
module "dev-to-landing-vpn-r1" {
source = "../../../modules/net-vpn-ha"
project_id = var.project_id
network = module.dev-vpc.self_link
region = var.regions.r1
name = "${var.prefix}-dev-to-lnd-r1"
router_config = {
name = "${var.prefix}-dev-vpn-r1"
asn = var.vpn_configs.dev-r1.asn
custom_advertise = {
all_subnets = false
ip_ranges = coalesce(var.vpn_configs.dev-r1.custom_ranges, {})
mode = "CUSTOM"
}
}
peer_gateways = {
default = { gcp = module.landing-to-dev-vpn-r1.self_link }
}
tunnels = {
0 = {
bgp_peer = {
address = "169.254.2.1"
asn = var.vpn_configs.land-r1.asn
}
bgp_session_range = "169.254.2.2/30"
shared_secret = module.landing-to-dev-vpn-r1.random_secret
vpn_gateway_interface = 0
}
1 = {
bgp_peer = {
address = "169.254.2.5"
asn = var.vpn_configs.land-r1.asn
}
bgp_session_range = "169.254.2.6/30"
shared_secret = module.landing-to-dev-vpn-r1.random_secret
vpn_gateway_interface = 1
}
}
}


@ -1,92 +0,0 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tfdoc:file:description Landing to Production VPN for region 1.
module "landing-to-prod-vpn-r1" {
source = "../../../modules/net-vpn-ha"
project_id = var.project_id
network = module.landing-vpc.self_link
region = var.regions.r1
name = "${var.prefix}-lnd-to-prd-r1"
router_config = {
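    # this Cloud Router is created here and reused by the landing-to-dev
    # VPN module above (which sets create = false)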
name = "${var.prefix}-lnd-vpn-r1"
asn = var.vpn_configs.land-r1.asn
custom_advertise = {
all_subnets = false
ip_ranges = coalesce(var.vpn_configs.land-r1.custom_ranges, {})
}
}
peer_gateways = {
default = { gcp = module.prod-to-landing-vpn-r1.self_link }
}
tunnels = {
0 = {
bgp_peer = {
address = "169.254.0.2"
asn = var.vpn_configs.prod-r1.asn
}
bgp_session_range = "169.254.0.1/30"
vpn_gateway_interface = 0
}
1 = {
bgp_peer = {
address = "169.254.0.6"
asn = var.vpn_configs.prod-r1.asn
}
bgp_session_range = "169.254.0.5/30"
vpn_gateway_interface = 1
}
}
}
module "prod-to-landing-vpn-r1" {
source = "../../../modules/net-vpn-ha"
project_id = var.project_id
network = module.prod-vpc.self_link
region = var.regions.r1
name = "${var.prefix}-prd-to-lnd-r1"
router_config = {
name = "${var.prefix}-prd-vpn-r1"
asn = var.vpn_configs.prod-r1.asn
    # the prod spoke's Cloud Router is created and managed here
custom_advertise = {
all_subnets = false
ip_ranges = coalesce(var.vpn_configs.prod-r1.custom_ranges, {})
}
}
peer_gateways = {
default = { gcp = module.landing-to-prod-vpn-r1.self_link }
}
tunnels = {
0 = {
bgp_peer = {
address = "169.254.0.1"
asn = var.vpn_configs.land-r1.asn
}
bgp_session_range = "169.254.0.2/30"
shared_secret = module.landing-to-prod-vpn-r1.random_secret
vpn_gateway_interface = 0
}
1 = {
bgp_peer = {
address = "169.254.0.5"
asn = var.vpn_configs.land-r1.asn
}
bgp_session_range = "169.254.0.6/30"
shared_secret = module.landing-to-prod-vpn-r1.random_secret
vpn_gateway_interface = 1
}
}
}


@ -0,0 +1 @@
ludoo


@ -0,0 +1 @@
andgandolfi


@ -0,0 +1 @@
cgrotz


@ -0,0 +1 @@
LucaPrete


@ -0,0 +1 @@
juliocc


@ -0,0 +1 @@
sruffilli


@ -0,0 +1 @@
juliodiez


@ -0,0 +1 @@
juliodiez


@ -0,0 +1 @@
juliodiez


@ -0,0 +1 @@
LucaPrete


@ -0,0 +1 @@
simonebruzzechesse


@ -0,0 +1 @@
ludoo


@ -0,0 +1 @@
simonebruzzechesse


@ -0,0 +1 @@
skalolazka