Merge branch 'master' into net-dash-psa

Aurélien Legrand 2022-11-03 14:03:50 +01:00 committed by GitHub
commit 5f6eb135c1
272 changed files with 6473 additions and 3155 deletions

View File

@@ -53,12 +53,12 @@ jobs:
run: |
terraform fmt -recursive -check -diff $GITHUB_WORKSPACE
- name: Check documentation (fabric)
- name: Check documentation
id: documentation-fabric
run: |
python3 tools/check_documentation.py examples modules fast
python3 tools/check_documentation.py modules fast blueprints
- name: Check documentation links (fabric)
- name: Check documentation links
id: documentation-links-fabric
run: |
python3 tools/check_links.py .

View File

@@ -33,7 +33,7 @@ env:
TF_VERSION: 1.3.2
jobs:
doc-examples:
examples:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
@@ -68,7 +68,7 @@ jobs:
pip install -r tests/requirements.txt
pytest -vv tests/examples
examples:
blueprints:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2

.gitignore
View File

@@ -36,3 +36,4 @@ examples/cloud-operations/adfs/ansible/vars/vars.yaml
examples/cloud-operations/adfs/ansible/gssh.sh
examples/cloud-operations/multi-cluster-mesh-gke-fleet-api/ansible/vars.yaml
examples/cloud-operations/multi-cluster-mesh-gke-fleet-api/ansible/gssh.sh
blueprints/cloud-operations/network-dashboard/cloud-function.zip

View File

@@ -5,9 +5,21 @@ All notable changes to this project will be documented in this file.
## [Unreleased]
<!-- None < 2022-09-09 18:02:15+00:00 -->
- [[#939](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/939)] Temporarily duplicate cloud armor example ([ludoo](https://github.com/ludoo)) <!-- 2022-11-02 09:36:04+00:00 -->
### BLUEPRINTS
- [[#941](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/941)] **incompatible change:** Refactor ILB module for Terraform 1.3 ([ludoo](https://github.com/ludoo)) <!-- 2022-11-02 17:05:21+00:00 -->
- [[#936](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/936)] Enable org policy service and add README notice to modules ([ludoo](https://github.com/ludoo)) <!-- 2022-11-01 13:25:08+00:00 -->
- [[#931](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/931)] **incompatible change:** Refactor compute-mig module for Terraform 1.3 ([ludoo](https://github.com/ludoo)) <!-- 2022-11-01 08:39:00+00:00 -->
- [[#932](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/932)] feat(project-factory): introduce additive iam bindings to project-fac… ([Malet](https://github.com/Malet)) <!-- 2022-10-31 17:24:25+00:00 -->
- [[#925](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/925)] Network dashboard: update main.tf and README following #922 ([brianhmj](https://github.com/brianhmj)) <!-- 2022-10-28 15:49:12+00:00 -->
- [[#924](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/924)] Fix formatting for gcloud dataflow job launch command ([aymanfarhat](https://github.com/aymanfarhat)) <!-- 2022-10-27 14:07:25+00:00 -->
- [[#921](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/921)] Align documentation, move glb blueprint ([ludoo](https://github.com/ludoo)) <!-- 2022-10-26 12:31:04+00:00 -->
- [[#915](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/915)] TFE OIDC with GCP WIF blueprint added ([averbuks](https://github.com/averbuks)) <!-- 2022-10-25 19:06:43+00:00 -->
- [[#899](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/899)] Static routes monitoring metrics added to network dashboard BP ([maunope](https://github.com/maunope)) <!-- 2022-10-25 11:36:39+00:00 -->
- [[#909](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/909)] GCS2BQ: Move images and templates in sub-folders ([lcaggio](https://github.com/lcaggio)) <!-- 2022-10-25 08:31:25+00:00 -->
- [[#907](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/907)] Fix CloudSQL blueprint ([lcaggio](https://github.com/lcaggio)) <!-- 2022-10-25 07:08:08+00:00 -->
- [[#897](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/897)] Project-factory: allow folder_id to be defined in defaults_file ([Malet](https://github.com/Malet)) <!-- 2022-10-21 08:20:06+00:00 -->
- [[#900](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/900)] Improve net dashboard variables ([juliocc](https://github.com/juliocc)) <!-- 2022-10-20 20:59:31+00:00 -->
- [[#896](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/896)] Network Dashboard: CFv2 and performance improvements ([aurelienlegrand](https://github.com/aurelienlegrand)) <!-- 2022-10-19 16:59:29+00:00 -->
@@ -37,6 +49,8 @@ All notable changes to this project will be documented in this file.
### DOCUMENTATION
- [[#937](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/937)] Fix typos in blueprints README.md ([kumar-dhanagopal](https://github.com/kumar-dhanagopal)) <!-- 2022-11-02 07:39:26+00:00 -->
- [[#921](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/921)] Align documentation, move glb blueprint ([ludoo](https://github.com/ludoo)) <!-- 2022-10-26 12:31:04+00:00 -->
- [[#898](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/898)] Update FAST bootstrap README.md ([juliocc](https://github.com/juliocc)) <!-- 2022-10-19 15:15:36+00:00 -->
- [[#878](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/878)] chore: update cft and fabric ([bharathkkb](https://github.com/bharathkkb)) <!-- 2022-10-12 15:38:06+00:00 -->
- [[#863](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/863)] Fabric vs CFT doc ([ludoo](https://github.com/ludoo)) <!-- 2022-10-07 12:47:51+00:00 -->
@@ -44,6 +58,11 @@ All notable changes to this project will be documented in this file.
### FAST
- [[#941](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/941)] **incompatible change:** Refactor ILB module for Terraform 1.3 ([ludoo](https://github.com/ludoo)) <!-- 2022-11-02 17:05:21+00:00 -->
- [[#935](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/935)] FAST: enable org policy API, fix run.allowedIngress value ([ludoo](https://github.com/ludoo)) <!-- 2022-11-01 08:52:03+00:00 -->
- [[#931](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/931)] **incompatible change:** Refactor compute-mig module for Terraform 1.3 ([ludoo](https://github.com/ludoo)) <!-- 2022-11-01 08:39:00+00:00 -->
- [[#930](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/930)] **incompatible change:** Update organization/folder/project modules to use new org policies API and tf1.3 optionals ([juliocc](https://github.com/juliocc)) <!-- 2022-10-28 16:21:06+00:00 -->
- [[#911](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/911)] FAST: Additional PGA DNS records ([sruffilli](https://github.com/sruffilli)) <!-- 2022-10-25 12:28:29+00:00 -->
- [[#903](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/903)] Initial replacement for CI/CD stage ([ludoo](https://github.com/ludoo)) <!-- 2022-10-23 17:52:46+00:00 -->
- [[#898](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/898)] Update FAST bootstrap README.md ([juliocc](https://github.com/juliocc)) <!-- 2022-10-19 15:15:36+00:00 -->
- [[#880](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/880)] **incompatible change:** Refactor net-vpc module for Terraform 1.3 ([ludoo](https://github.com/ludoo)) <!-- 2022-10-14 09:02:34+00:00 -->
@@ -63,6 +82,19 @@ All notable changes to this project will be documented in this file.
### MODULES
- [[#941](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/941)] **incompatible change:** Refactor ILB module for Terraform 1.3 ([ludoo](https://github.com/ludoo)) <!-- 2022-11-02 17:05:21+00:00 -->
- [[#940](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/940)] Ensure the implementation of org policies is consistent ([juliocc](https://github.com/juliocc)) <!-- 2022-11-02 09:55:21+00:00 -->
- [[#936](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/936)] Enable org policy service and add README notice to modules ([ludoo](https://github.com/ludoo)) <!-- 2022-11-01 13:25:08+00:00 -->
- [[#931](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/931)] **incompatible change:** Refactor compute-mig module for Terraform 1.3 ([ludoo](https://github.com/ludoo)) <!-- 2022-11-01 08:39:00+00:00 -->
- [[#930](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/930)] **incompatible change:** Update organization/folder/project modules to use new org policies API and tf1.3 optionals ([juliocc](https://github.com/juliocc)) <!-- 2022-10-28 16:21:06+00:00 -->
- [[#926](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/926)] Fix backwards compatibility for vpc subnet descriptions ([ludoo](https://github.com/ludoo)) <!-- 2022-10-28 06:13:04+00:00 -->
- [[#927](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/927)] Add support for deployment type and api proxy type for Apigee org ([kmucha555](https://github.com/kmucha555)) <!-- 2022-10-27 19:56:41+00:00 -->
- [[#923](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/923)] Fix service account creation error in gke nodepool module ([ludoo](https://github.com/ludoo)) <!-- 2022-10-27 15:12:05+00:00 -->
- [[#908](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/908)] GKE module: autopilot fixes ([ludoo](https://github.com/ludoo)) <!-- 2022-10-25 21:33:49+00:00 -->
- [[#906](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/906)] GKE module: add managed_prometheus to features ([apichick](https://github.com/apichick)) <!-- 2022-10-25 21:18:50+00:00 -->
- [[#916](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/916)] Add support for DNS routing policies ([juliocc](https://github.com/juliocc)) <!-- 2022-10-25 14:20:53+00:00 -->
- [[#918](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/918)] Fix race condition in SimpleNVA ([sruffilli](https://github.com/sruffilli)) <!-- 2022-10-25 13:04:38+00:00 -->
- [[#914](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/914)] **incompatible change:** Update DNS module ([juliocc](https://github.com/juliocc)) <!-- 2022-10-25 10:31:11+00:00 -->
- [[#904](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/904)] Add missing description field ([dsbutler101](https://github.com/dsbutler101)) <!-- 2022-10-21 15:05:11+00:00 -->
- [[#891](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/891)] Add internal_ips output to compute-vm module ([LucaPrete](https://github.com/LucaPrete)) <!-- 2022-10-21 08:38:27+00:00 -->
- [[#890](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/890)] Add auto_delete and instance_redistribution_type to compute-vm and compute-mig modules. ([giovannibaratta](https://github.com/giovannibaratta)) <!-- 2022-10-16 19:19:46+00:00 -->
@@ -95,6 +127,7 @@ All notable changes to this project will be documented in this file.
### TOOLS
- [[#919](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/919)] Rename workflow names ([juliocc](https://github.com/juliocc)) <!-- 2022-10-25 15:22:51+00:00 -->
- [[#902](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/902)] Bring back sorted variables check ([juliocc](https://github.com/juliocc)) <!-- 2022-10-20 17:08:17+00:00 -->
- [[#887](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/887)] Disable parallel execution of tests and plugin cache ([ludoo](https://github.com/ludoo)) <!-- 2022-10-14 17:52:38+00:00 -->
- [[#886](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/886)] Revert "Improve handling of tf plugin cache in tests" ([ludoo](https://github.com/ludoo)) <!-- 2022-10-14 17:35:31+00:00 -->

View File

@@ -11,7 +11,7 @@ This repository provides **end-to-end blueprints** and a **suite of Terraform mo
- organization-wide [landing zone blueprint](fast/) used to bootstrap real-world cloud foundations
- reference [blueprints](./blueprints/) used to deep dive on network patterns or product features
- a comprehensive source of lean [modules](./modules/dns) that lend themselves well to changes
- a comprehensive source of lean [modules](./modules/) that lend themselves well to changes
The whole repository is meant to be cloned as a single unit, and then forked into separate owned repositories to seed production usage, or used as-is and periodically updated as a complete toolkit for prototyping. You can read more on this approach in our [contributing guide](./CONTRIBUTING.md), and a comparison against similar toolkits [here](./FABRIC-AND-CFT.md).
@@ -29,16 +29,16 @@ The current list of modules supports most of the core foundational and networkin
Currently available modules:
- **foundational** - [folder](./modules/folder), [organization](./modules/organization), [project](./modules/project), [service accounts](./modules/iam-service-account), [logging bucket](./modules/logging-bucket), [billing budget](./modules/billing-budget), [projects-data-source](./modules/projects-data-source), [organization-policy](./modules/organization-policy)
- **networking** - [VPC](./modules/net-vpc), [VPC firewall](./modules/net-vpc-firewall), [VPC peering](./modules/net-vpc-peering), [VPN static](./modules/net-vpn-static), [VPN dynamic](./modules/net-vpn-dynamic), [HA VPN](./modules/net-vpn-ha), [NAT](./modules/net-cloudnat), [address reservation](./modules/net-address), [DNS](./modules/dns), [L4 ILB](./modules/net-ilb), [L7 ILB](./modules/net-ilb-l7), [Service Directory](./modules/service-directory), [Cloud Endpoints](./modules/endpoints)
- **compute** - [VM/VM group](./modules/compute-vm), [MIG](./modules/compute-mig), [GKE cluster](./modules/gke-cluster), [GKE nodepool](./modules/gke-nodepool), [GKE hub](./modules/gke-hub), [COS container](./modules/cloud-config-container/cos-generic-metadata/) (coredns, mysql, onprem, squid)
- **data** - [GCS](./modules/gcs), [BigQuery dataset](./modules/bigquery-dataset), [Pub/Sub](./modules/pubsub), [Datafusion](./modules/datafusion), [Bigtable instance](./modules/bigtable-instance), [Cloud SQL instance](./modules/cloudsql-instance), [Data Catalog Policy Tag](./modules/data-catalog-policy-tag)
- **development** - [Cloud Source Repository](./modules/source-repository), [Container Registry](./modules/container-registry), [Artifact Registry](./modules/artifact-registry), [Apigee Organization](./modules/apigee-organization), [Apigee X Instance](./modules/apigee-x-instance), [API Gateway](./modules/api-gateway)
- **security** - [KMS](./modules/kms), [SecretManager](./modules/secret-manager), [VPC Service Control](./modules/vpc-sc)
- **foundational** - [billing budget](./modules/billing-budget), [Cloud Identity group](./modules/cloud-identity-group/), [folder](./modules/folder), [service accounts](./modules/iam-service-account), [logging bucket](./modules/logging-bucket), [organization](./modules/organization), [project](./modules/project), [projects-data-source](./modules/projects-data-source)
- **networking** - [DNS](./modules/dns), [Cloud Endpoints](./modules/endpoints), [address reservation](./modules/net-address), [NAT](./modules/net-cloudnat), [Global Load Balancer (classic)](./modules/net-glb/), [L4 ILB](./modules/net-ilb), [L7 ILB](./modules/net-ilb-l7), [VPC](./modules/net-vpc), [VPC firewall](./modules/net-vpc-firewall), [VPC peering](./modules/net-vpc-peering), [VPN dynamic](./modules/net-vpn-dynamic), [HA VPN](./modules/net-vpn-ha), [VPN static](./modules/net-vpn-static), [Service Directory](./modules/service-directory)
- **compute** - [VM/VM group](./modules/compute-vm), [MIG](./modules/compute-mig), [COS container](./modules/cloud-config-container/cos-generic-metadata/) (coredns, mysql, onprem, squid), [GKE cluster](./modules/gke-cluster), [GKE hub](./modules/gke-hub), [GKE nodepool](./modules/gke-nodepool)
- **data** - [BigQuery dataset](./modules/bigquery-dataset), [Bigtable instance](./modules/bigtable-instance), [Cloud SQL instance](./modules/cloudsql-instance), [Data Catalog Policy Tag](./modules/data-catalog-policy-tag), [Datafusion](./modules/datafusion), [GCS](./modules/gcs), [Pub/Sub](./modules/pubsub)
- **development** - [API Gateway](./modules/api-gateway), [Apigee Organization](./modules/apigee-organization), [Apigee X Instance](./modules/apigee-x-instance), [Artifact Registry](./modules/artifact-registry), [Container Registry](./modules/container-registry), [Cloud Source Repository](./modules/source-repository)
- **security** - [Binauthz](./modules/binauthz/), [KMS](./modules/kms), [SecretManager](./modules/secret-manager), [VPC Service Control](./modules/vpc-sc)
- **serverless** - [Cloud Function](./modules/cloud-function), [Cloud Run](./modules/cloud-run)
For more information and usage examples see each module's README file.
## End-to-end blueprints
The [blueprints](./blueprints/) in this repository are split in several main sections: **[networking blueprints](./blueprints/networking/)** that implement core patterns or features, **[data solutions blueprints](./blueprints/data-solutions/)** that demonstrate how to integrate data services in complete scenarios, **[cloud operations blueprints](./blueprints/cloud-operations/)** that leverage specific products to meet specific operational needs and **[factories](./blueprints/factories/)** that implement resource factories for the repetitive creation of specific resources, and finally **[GKE](./blueprints/gke)** and **[serverless](./blueprints/serverless)** design blueprints.
The [blueprints](./blueprints/) in this repository are split in several main sections: **[networking blueprints](./blueprints/networking/)** that implement core patterns or features, **[data solutions blueprints](./blueprints/data-solutions/)** that demonstrate how to integrate data services in complete scenarios, **[cloud operations blueprints](./blueprints/cloud-operations/)** that leverage specific products to meet specific operational needs and **[factories](./blueprints/factories/)** that implement resource factories for the repetitive creation of specific resources, and finally **[GKE](./blueprints/gke)**, **[serverless](./blueprints/serverless)**, and **[third-party solutions](./blueprints/third-party-solutions/)** design blueprints.

View File

(binary image file changed; 10 KiB before, 10 KiB after)

View File

@@ -1,15 +1,15 @@
# Terraform end-to-end blueprints for Google Cloud
This section **[networking blueprints](./networking/)** that implement core patterns or features, **[data solutions blueprints](./data-solutions/)** that demonstrate how to integrate data services in complete scenarios, **[cloud operations blueprints](./cloud-operations/)** that leverage specific products to meet specific operational needs, **[GKE](./gke/)** and **[Serverless](./serverless/)** blueprints, and **[factories](./factories/)** that implement resource factories for the repetitive creation of specific resources.
This section provides **[networking blueprints](./networking/)** that implement core patterns or features, **[data solutions blueprints](./data-solutions/)** that demonstrate how to integrate data services in complete scenarios, **[cloud operations blueprints](./cloud-operations/)** that leverage specific products to meet specific operational needs, **[GKE](./gke/)** and **[Serverless](./serverless/)** blueprints, and **[factories](./factories/)** that implement resource factories for the repetitive creation of specific resources.
Currently available blueprints:
- **cloud operations** - [Resource tracking and remediation via Cloud Asset feeds](./cloud-operations/asset-inventory-feed-remediation), [Granular Cloud DNS IAM via Service Directory](./cloud-operations/dns-fine-grained-iam), [Granular Cloud DNS IAM for Shared VPC](./cloud-operations/dns-shared-vpc), [Compute Engine quota monitoring](./cloud-operations/quota-monitoring), [Scheduled Cloud Asset Inventory Export to Bigquery](./cloud-operations/scheduled-asset-inventory-export-bq), [Packer image builder](./cloud-operations/packer-image-builder), [On-prem SA key management](./cloud-operations/onprem-sa-key-management), [TCP healthcheck for unmanaged GCE instances](./cloud-operations/unmanaged-instances-healthcheck), [HTTP Load Balancer with Cloud Armor](./cloud-operations/glb_and_armor)
- **data solutions** - [GCE/GCS CMEK via centralized Cloud KMS](./data-solutions/cmek-via-centralized-kms/), [Cloud Storage to Bigquery with Cloud Dataflow with least privileges](./data-solutions/gcs-to-bq-with-least-privileges/), [Data Platform Foundations](./data-solutions/data-platform-foundations/), [SQL Server AlwaysOn availability groups blueprint](./data-solutions/sqlserver-alwayson), [Cloud SQL instance with multi-region read replicas](./data-solutions/cloudsql-multiregion/), [Cloud Composer version 2 private instance, supporting Shared VPC and external CMEK key](./data-solutions/composer-2/)
- **factories** - [The why and the how of resource factories](./factories/README.md)
- **GKE** - [GKE multitenant fleet](./gke/multitenant-fleet/), [Shared VPC with GKE support](./networking/shared-vpc-gke/), [Binary Authorization Pipeline](./gke/binauthz/), [Multi-cluster mesh on GKE (fleet API)](./gke/multi-cluster-mesh-gke-fleet-api/)
- **networking** - [hub and spoke via peering](./networking/hub-and-spoke-peering/), [hub and spoke via VPN](./networking/hub-and-spoke-vpn/), [DNS and Google Private Access for on-premises](./networking/onprem-google-access-dns/), [Shared VPC with GKE support](./networking/shared-vpc-gke/), [ILB as next hop](./networking/ilb-next-hop), [Connecting to on-premise services leveraging PSC and hybrid NEGs](./networking/psc-hybrid/), [decentralized firewall](./networking/decentralized-firewall)
- **serverless** - [Multi-region deployments for API Gateway](./serverless/api-gateway/)
- **third party solutions** - [OpenShift cluster on Shared VPC](./third-party-solutions/openshift)
- **cloud operations** - [Active Directory Federation Services](./cloud-operations/adfs), [Cloud Asset Inventory feeds for resource change tracking and remediation](./cloud-operations/asset-inventory-feed-remediation), [Fine-grained Cloud DNS IAM via Service Directory](./cloud-operations/dns-fine-grained-iam), [Cloud DNS & Shared VPC design](./cloud-operations/dns-shared-vpc), [Delegated Role Grants](./cloud-operations/iam-delegated-role-grants), [Networking Dashboard](./cloud-operations/network-dashboard), [Managing on-prem service account keys by uploading public keys](./cloud-operations/onprem-sa-key-management), [Compute Image builder with Hashicorp Packer](./cloud-operations/packer-image-builder), [Packer example](./cloud-operations/packer-image-builder/packer), [Compute Engine quota monitoring](./cloud-operations/quota-monitoring), [Scheduled Cloud Asset Inventory Export to Bigquery](./cloud-operations/scheduled-asset-inventory-export-bq), [Configuring workload identity federation for Terraform Cloud/Enterprise workflow](./cloud-operations/terraform-enterprise-wif), [TCP healthcheck and restart for unmanaged GCE instances](./cloud-operations/unmanaged-instances-healthcheck), [Migrate for Compute Engine (v5) blueprints](./cloud-operations/vm-migration), [Configuring workload identity federation to access Google Cloud resources from apps running on Azure](./cloud-operations/workload-identity-federation)
- **data solutions** - [GCE and GCS CMEK via centralized Cloud KMS](./data-solutions/cmek-via-centralized-kms), [Cloud Composer version 2 private instance, supporting Shared VPC and external CMEK key](./data-solutions/composer-2), [Cloud SQL instance with multi-region read replicas](./data-solutions/cloudsql-multiregion), [Data Platform](./data-solutions/data-platform-foundations), [Spinning up a foundation data pipeline on Google Cloud using Cloud Storage, Dataflow and BigQuery](./data-solutions/gcs-to-bq-with-least-privileges), [SQL Server Always On Groups blueprint](./data-solutions/sqlserver-alwayson), [Data Playground](./data-solutions/data-playground)
- **factories** - [The why and the how of Resource Factories](./factories), [Google Cloud Identity Group Factory](./factories/cloud-identity-group-factory), [Google Cloud BQ Factory](./factories/bigquery-factory), [Google Cloud VPC Firewall Factory](./factories/net-vpc-firewall-yaml), [Minimal Project Factory](./factories/project-factory)
- **GKE** - [Binary Authorization Pipeline Blueprint](./gke/binauthz), [Storage API](./gke/binauthz/image), [Multi-cluster mesh on GKE (fleet API)](./gke/multi-cluster-mesh-gke-fleet-api), [GKE Multitenant Blueprint](./gke/multitenant-fleet), [Shared VPC with GKE support](./networking/shared-vpc-gke/)
- **networking** - [Decentralized firewall management](./networking/decentralized-firewall), [Decentralized firewall validator](./networking/decentralized-firewall/validator), [Network filtering with Squid](./networking/filtering-proxy), [HTTP Load Balancer with Cloud Armor](./networking/glb-and-armor), [Hub and Spoke via VPN](./networking/hub-and-spoke-vpn), [Hub and Spoke via VPC Peering](./networking/hub-and-spoke-peering), [Internal Load Balancer as Next Hop](./networking/ilb-next-hop), [Nginx-based reverse proxy cluster](./networking/nginx-reverse-proxy-cluster), [On-prem DNS and Google Private Access](./networking/onprem-google-access-dns), [Calling a private Cloud Function from On-premises](./networking/private-cloud-function-from-onprem), [Hybrid connectivity to on-premise services through PSC](./networking/psc-hybrid), [PSC Producer](./networking/psc-hybrid/psc-producer), [PSC Consumer](./networking/psc-hybrid/psc-consumer), [Shared VPC with optional GKE cluster](./networking/shared-vpc-gke)
- **serverless** - [Creating multi-region deployments for API Gateway](./serverless/api-gateway)
- **third party solutions** - [OpenShift on GCP user-provisioned infrastructure](./third-party-solutions/openshift), [Wordpress deployment on Cloud Run](./third-party-solutions/wordpress/cloudrun)
For more information see the individual README files in each section.

View File

@@ -2,6 +2,12 @@
The blueprints in this folder show how to wire together different Google Cloud services to simplify operations, and are meant for testing, or as minimal but sufficiently complete starting points for actual use.
## Active Directory Federation Services
<a href="./adfs/" title="Active Directory Federation Services"><img src="./adfs/architecture.png" align="left" width="280px"></a> This [blueprint](./adfs/) Sets up managed AD, creates a server where AD FS will be installed which will also act as admin workstation for AD, and exposes ADFS using GLB. It can also optionally set up a GCP project and VPC if needed
<br clear="left">
## Resource tracking and remediation via Cloud Asset feeds
<a href="./asset-inventory-feed-remediation" title="Resource tracking and remediation via Cloud Asset feeds"><img src="./asset-inventory-feed-remediation/diagram.png" align="left" width="280px"></a> This [blueprint](./asset-inventory-feed-remediation) shows how to leverage [Cloud Asset Inventory feeds](https://cloud.google.com/asset-inventory/docs/monitoring-asset-changes) to stream resource changes in real time, and how to programmatically use the feed change notifications for alerting or remediation, via a Cloud Function wired to the feed PubSub queue.
@@ -10,12 +16,6 @@ The blueprint's feed tracks changes to Google Compute instances, and the Cloud F
<br clear="left">
## Scheduled Cloud Asset Inventory Export to Bigquery
<a href="./scheduled-asset-inventory-export-bq" title="Scheduled Cloud Asset Inventory Export to Bigquery"><img src="./scheduled-asset-inventory-export-bq/diagram.png" align="left" width="280px"></a> This [blueprint](./scheduled-asset-inventory-export-bq) shows how to leverage the [Cloud Asset Inventory Exporting to Bigquery](https://cloud.google.com/asset-inventory/docs/exporting-to-bigquery) feature, to keep track of your organization's assets over time storing information in Bigquery. Data stored in Bigquery can then be used for different purposes like dashboarding or analysis.
<br clear="left">
## Granular Cloud DNS IAM via Service Directory
<a href="./dns-fine-grained-iam" title="Fine-grained Cloud DNS IAM with Service Directory"><img src="./dns-fine-grained-iam/diagram.png" align="left" width="280px"></a> This [blueprint](./dns-fine-grained-iam) shows how to leverage [Service Directory](https://cloud.google.com/blog/products/networking/introducing-service-directory) and Cloud DNS Service Directory private zones, to implement fine-grained IAM controls on DNS. The blueprint creates a Service Directory namespace, a Cloud DNS private zone that uses it as its authoritative source, service accounts with different levels of permissions, and VMs to test them.
@@ -28,37 +28,62 @@ The blueprint's feed tracks changes to Google Compute instances, and the Cloud F
<br clear="left">
## Compute Engine quota monitoring
<a href="./quota-monitoring" title="Compute Engine quota monitoring"><img src="./quota-monitoring/diagram.png" align="left" width="280px"></a> This [blueprint](./quota-monitoring) shows a practical way of collecting and monitoring [Compute Engine resource quotas](https://cloud.google.com/compute/quotas) via Cloud Monitoring metrics as an alternative to the recently released [built-in quota metrics](https://cloud.google.com/monitoring/alerts/using-quota-metrics). A simple alert on quota thresholds is also part of the blueprint.
<br clear="left">
## Delegated Role Grants
<a href="./iam-delegated-role-grants" title="Delegated Role Grants"><img src="./iam-delegated-role-grants/diagram.png" align="left" width="280px"></a> This [blueprint](./iam-delegated-role-grants) shows how to use delegated role grants to restrict service usage.
<br clear="left">
## Network Dashboard
<a href="./network-dashboard" title="Network Dashboard"><img src="./network-dashboard/metric.png" align="left" width="280px"></a> This [blueprint](./network-dashboard/) provides an end-to-end solution to gather some GCP Networking quotas and limits (that cannot be seen in the GCP console today) and display them in a dashboard. The goal is to allow for better visibility of these limits, facilitating capacity planning and avoiding hitting these limits..
<br clear="left">
## On-prem Service Account key management
This [blueprint](./onprem-sa-key-management) shows how to manage IAM Service Account Keys by manually generating a key pair and uploading the public part of the key to GCP.
<br clear="left">
## Packer image builder
<a href="./packer-image-builder" title="Packer image builder"><img src="./packer-image-builder/diagram.png" align="left" width="280px"></a> This [blueprint](./packer-image-builder) shows how to deploy infrastructure for a Compute Engine image builder based on [Hashicorp's Packer tool](https://www.packer.io).
<br clear="left">
## On-prem Service Account key management
## Compute Engine quota monitoring
This [blueprint](./onprem-sa-key-management) shows how to manage IAM Service Account Keys by manually generating a key pair and uploading the public part of the key to GCP.
<a href="./quota-monitoring" title="Compute Engine quota monitoring"><img src="./quota-monitoring/diagram.png" align="left" width="280px"></a> This [blueprint](./quota-monitoring) shows a practical way of collecting and monitoring [Compute Engine resource quotas](https://cloud.google.com/compute/quotas) via Cloud Monitoring metrics as an alternative to the recently released [built-in quota metrics](https://cloud.google.com/monitoring/alerts/using-quota-metrics). A simple alert on quota thresholds is also part of the blueprint.
<br clear="left">
## Migrate for Compute Engine (v5)
<a href="./vm-migration" title="Packer image builder"><img src="./vm-migration/host-target-projects/diagram.png" align="left" width="280px"></a> This set of [blueprints](./vm-migration) shows how to deploy Migrate for Compute Engine (v5) on top of existing Cloud Foundations on different scenarios. An blueprint on how to deploy the M4CE connector on VMWare ESXi is also part of the blueprints.
## Scheduled Cloud Asset Inventory Export to Bigquery
<a href="./scheduled-asset-inventory-export-bq" title="Scheduled Cloud Asset Inventory Export to Bigquery"><img src="./scheduled-asset-inventory-export-bq/diagram.png" align="left" width="280px"></a> This [blueprint](./scheduled-asset-inventory-export-bq) shows how to leverage the [Cloud Asset Inventory Exporting to Bigquery](https://cloud.google.com/asset-inventory/docs/exporting-to-bigquery) feature, to keep track of your organization's assets over time storing information in Bigquery. Data stored in Bigquery can then be used for different purposes like dashboarding or analysis.
<br clear="left">
## Workload identity federation for Terraform Enterprise workflow
<a href="./terraform-enterprise-wif" title="Workload identity federation for Terraform Cloud/Enterprise workflow"><img src="./terraform-enterprise-wif/diagram.png" align="left" width="280px"></a> This [blueprint](./terraform-enterprise-wif) shows how to configure [Wokload Identity Federation](https://cloud.google.com/iam/docs/workload-identity-federation) between [Terraform Cloud/Enterprise](https://developer.hashicorp.com/terraform/enterprise) instance and Google Cloud.
<br clear="left">
## TCP healthcheck for unmanaged GCE instances
<a href="./unmanaged-instances-healthcheck" title="Unmanaged GCE Instance healthchecker"><img src="./unmanaged-instances-healthcheck/diagram.png" align="left" width="280px"></a> This [blueprint](./unmanaged-instances-healthcheck) shows how to leverage [Serverless VPC Access](https://cloud.google.com/vpc/docs/configure-serverless-vpc-access) and Cloud Functions to organize a highly performant TCP healtheck for unmanaged GCE instances.
<br clear="left">
## Migrate for Compute Engine (v5)
<a href="./vm-migration" title="Packer image builder"><img src="./vm-migration/host-target-projects/diagram.png" align="left" width="280px"></a> This set of [blueprints](./vm-migration) shows how to deploy Migrate for Compute Engine (v5) on top of existing Cloud Foundations on different scenarios. An blueprint on how to deploy the M4CE connector on VMWare ESXi is also part of the blueprints.
<br clear="left">
## Configuring Workload Identity Federation from apps running on Azure
<a href="./workload-identity-federation" title="Configuring Workload Identity Federation from apps running on Azure"><img src="./workload-identity-federation/host-target-projects/architecture.png" align="left" width="280px"></a> This [blueprint](./workload-identity-federation) shows how to set up everything, both in Azure and Google Cloud, so a workload in Azure can access Google Cloud resources without a service account key. This will be possible by configuring workload identity federation to trust access tokens generated for a specific application in an Azure Active Directory (AAD) tenant.
<br clear="left">

View File

@@ -1,19 +1,19 @@
# AD FS
# Active Directory Federation Services
This blueprint does the following:
Terraform:
- (Optional) Creates a project.
- (Optional) Creates a VPC.
- Sets up managed AD
- Creates a server where AD FS will be installed. This machine will also act as admin workstation for AD.
- Exposes AD FS using GLB.
Ansible:
- Installs the required Windows features and joins the computer to the AD domain.
- Provisions some test users, groups and group memberships in AD. The data to provision is in the files directory of the ad-provisioning ansible role. There is a script available in the scripts/ad-provisioning folder that you can use to generate an alternative users or memberships file.
- Installs AD FS
In addition, we include a PowerShell script that facilitates the configuration required for Anthos when authenticating users with AD FS as IdP.
@@ -26,8 +26,8 @@ The diagram below depicts the architecture of the blueprint:
Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fcloud-foundation-fabric&cloudshell_print=cloud-shell-readme.txt&cloudshell_working_dir=blueprints%2Fcloud-operations%2Fadfs), then go through the following steps to create resources:
* `terraform init`
* `terraform apply -var project_id=my-project-id -var ad_dns_domain_name=my-domain.org -var adfs_dns_domain_name=adfs.my-domain.org`
- `terraform init`
- `terraform apply -var project_id=my-project-id -var ad_dns_domain_name=my-domain.org -var adfs_dns_domain_name=adfs.my-domain.org`
Once the resources have been created, do the following:

View File

@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@@ -2,7 +2,7 @@
## Introduction
This repository contains all necessary Terraform modules to build a multi-regional infrastructure with horizontally scalable managed instance group backends, HTTP load balancing and Googles advanced WAF security tool (Cloud Armor) on top to securely deploy an application at global scale.
This blueprint contains all necessary Terraform modules to build a multi-regional infrastructure with horizontally scalable managed instance group backends, HTTP load balancing and Google's advanced WAF security tool (Cloud Armor) on top to securely deploy an application at global scale.
This tutorial is general enough to fit a variety of use cases, from hosting a mobile app's backend to deploying proprietary workloads at scale.
@@ -62,7 +62,7 @@ Note: To grant a user a role, take a look at the [Granting and Revoking Access](
Click on the button below, sign in if required and when the prompt appears, click on “confirm”.
[![Open Cloudshell](shell_button.png)](https://goo.gle/GoCloudArmor)
[![Open Cloudshell](../../../assets/images/cloud-shell-button.png)](https://goo.gle/GoCloudArmor)
This will clone the repository to your cloud shell and a screen like this one will appear:

View File

@@ -153,22 +153,20 @@ module "vm_siege" {
}
module "mig_ew1" {
source = "../../../modules/compute-mig"
project_id = module.project.project_id
location = "europe-west1"
name = "${local.prefix}europe-west1-mig"
regional = true
default_version = {
instance_template = module.instance_template_ew1.template.self_link
name = "default"
}
source = "../../../modules/compute-mig"
project_id = module.project.project_id
location = "europe-west1"
name = "${local.prefix}europe-west1-mig"
instance_template = module.instance_template_ew1.template.self_link
autoscaler_config = {
max_replicas = 5
min_replicas = 1
cooldown_period = 45
cpu_utilization_target = 0.8
load_balancing_utilization_target = null
metric = null
max_replicas = 5
min_replicas = 1
cooldown_period = 45
scaling_signals = {
cpu_utilization = {
target = 0.65
}
}
}
named_ports = {
http = 80
@@ -179,22 +177,20 @@ module "mig_ew1" {
}
module "mig_ue1" {
source = "../../../modules/compute-mig"
project_id = module.project.project_id
location = "us-east1"
name = "${local.prefix}us-east1-mig"
regional = true
default_version = {
instance_template = module.instance_template_ue1.template.self_link
name = "default"
}
source = "../../../modules/compute-mig"
project_id = module.project.project_id
location = "us-east1"
name = "${local.prefix}us-east1-mig"
instance_template = module.instance_template_ue1.template.self_link
autoscaler_config = {
max_replicas = 5
min_replicas = 1
cooldown_period = 45
cpu_utilization_target = 0.8
load_balancing_utilization_target = null
metric = null
max_replicas = 5
min_replicas = 1
cooldown_period = 45
scaling_signals = {
cpu_utilization = {
target = 0.65
}
}
}
named_ports = {
http = 80

View File

@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@@ -15,15 +15,19 @@ Three metric descriptors are created for each monitored resource: usage, limit a
Clone this repository, then go through the following steps to create resources:
- Create a terraform.tfvars file with the following content:
- organization_id = "<YOUR-ORG-ID>"
- billing_account = "<YOUR-BILLING-ACCOUNT>"
- monitoring_project_id = "project-0" # Monitoring project where the dashboard will be created and the solution deployed
- monitored_projects_list = ["project-1", "project2"] # Projects to be monitored by the solution
- monitored_folders_list = ["folder_id"] # Folders to be monitored by the solution
- v2 = true|false # Set to true to use V2 Cloud Functions environment
```tfvars
organization_id = "<YOUR-ORG-ID>"
billing_account = "<YOUR-BILLING-ACCOUNT>"
monitoring_project_id = "project-0" # Monitoring project where the dashboard will be created and the solution deployed
monitored_projects_list = ["project-1", "project2"] # Projects to be monitored by the solution
monitored_folders_list = ["folder_id"] # Folders to be monitored by the solution
v2 = false # Set to true to use V2 Cloud Functions environment
```
- `terraform init`
- `terraform apply`
Note: Org level viewing permission is required for some metrics such as firewall policies.
Once the resources are deployed, go to the following page to see the dashboard: https://console.cloud.google.com/monitoring/dashboards?project=<YOUR-MONITORING-PROJECT>.
A dashboard called "quotas-utilization" should be created.
@@ -46,22 +50,33 @@ The Cloud Function currently tracks usage, limit and utilization of:
- internal forwarding rules for internal L7 load balancers per VPC
- internal forwarding rules for internal L4 load balancers per VPC peering group
- internal forwarding rules for internal L7 load balancers per VPC peering group
- Dynamic routes per VPC
- Dynamic routes per VPC peering group
- Static routes per project (VPC drill down is available for usage)
- Static routes per VPC peering group
- IP utilization per subnet (% of IP addresses used in a subnet; see the sketch after this list)
- VPC firewall rules per project (VPC drill down is available for usage)
- Tuples per Firewall Policy
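As a rough illustration of the IP utilization item in the list above, the sketch below computes the used fraction of a subnet's primary range. The function name is illustrative, not taken from the Cloud Function's code; the four reserved addresses per GCP primary IPv4 range (network, default gateway, second-to-last, broadcast) are a documented GCP behavior.

```python
import ipaddress

def subnet_ip_utilization(cidr: str, used_addresses: int) -> float:
    # GCP reserves four addresses in each primary IPv4 subnet range
    # (network, default gateway, second-to-last and broadcast).
    usable = ipaddress.ip_network(cidr).num_addresses - 4
    return used_addresses / usable

print(subnet_ip_utilization("10.0.0.0/24", 63))  # 0.25
```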
It writes these values to custom metrics in Cloud Monitoring and creates a dashboard to visualize the current utilization of these metrics in Cloud Monitoring.
Note that metrics are created in the cloud-function/metrics.yaml file.
Note that metrics are created in the cloud-function/metrics.yaml file. You can also edit default limits for a specific network in that file. See the example for `vpc_peering_per_network`.
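A minimal sketch of how such an override could be resolved, following the `values`/`default_value` schema visible in the metrics.yaml hunk further down; keying overrides by network self link mirrors the `vpc_peering_per_network` example the note refers to and is an assumption here:

```python
import yaml  # PyYAML

with open("cloud-function/metrics.yaml") as f:
    metrics = yaml.safe_load(f)

# fall back to default_value when no per-network override is present
values = (metrics["metrics_per_network"]["static_routes_per_project"]
          ["limit"]["values"])
network = "https://www.googleapis.com/compute/v1/projects/p1/global/networks/vpc1"
limit = values.get(network, values["default_value"])
```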
## Assumptions and limitations
- The CF assumes that all VPCs in peering groups are within the same organization, except for PSA peerings
- The CF will only fetch subnet utilization data from the PSA peerings (not the VMs, ILB or routes usage)
- The CF assumes global routing is ON; this impacts dynamic routes usage calculation
- The CF assumes custom routes importing/exporting is ON; this impacts static and dynamic routes usage calculation
- The CF assumes all networks in peering groups have the same global routing and custom routes sharing configuration
You can also edit default limits for a specific network in that file. See the example for `vpc_peering_per_network`.
## Next steps and ideas
In a future release, we could support:
- Static routes per VPC / per VPC peering group
- Google managed VPCs that are peered with PSA (such as Cloud SQL or Memorystore)
- Dynamic routes calculation for VPCs/PPGs with "global routing" set to OFF
- Static routes calculation for projects/PPGs with "custom routes importing/exporting" set to OFF
- Calculations for cross-organization peering groups
- Support different scopes (reduced and fine-grained)
If you are interested in this and/or would like to contribute, please contact legranda@google.com.
<!-- BEGIN TFDOC -->

View File

@@ -163,6 +163,9 @@ def main(event, context=None):
l4_forwarding_rules_dict = ilb_fwrules.get_forwarding_rules_dict(config, "L4")
l7_forwarding_rules_dict = ilb_fwrules.get_forwarding_rules_dict(config, "L7")
subnet_range_dict = networks.get_subnet_ranges_dict(config)
static_routes_dict = routes.get_static_routes_dict(config)
dynamic_routes_dict = routes.get_dynamic_routes(
config, metrics_dict, limits_dict['dynamic_routes_per_network_limit'])
try:
@@ -181,10 +184,12 @@
ilb_fwrules.get_forwarding_rules_data(
config, metrics_dict, l7_forwarding_rules_dict,
limits_dict['internal_forwarding_rules_l7_limit'], "L7")
routes.get_static_routes_data(config, metrics_dict, static_routes_dict,
project_quotas_dict)
peerings.get_vpc_peering_data(config, metrics_dict,
limits_dict['number_of_vpc_peerings_limit'])
dynamic_routes_dict = routes.get_dynamic_routes(
config, metrics_dict, limits_dict['dynamic_routes_per_network_limit'])
# Per VPC peering group metrics
metrics.get_pgg_data(
@@ -207,7 +212,13 @@
["subnet_ranges_per_peering_group"], subnet_range_dict,
config["limit_names"]["SUBNET_RANGES"],
limits_dict['number_of_subnet_IP_ranges_ppg_limit'])
routes.get_dynamic_routes_ppg(
# static
routes.get_routes_ppg(
config, metrics_dict["metrics_per_peering_group"]
["static_routes_per_peering_group"], static_routes_dict,
limits_dict['static_routes_per_peering_group_limit'])
# dynamic
routes.get_routes_ppg(
config, metrics_dict["metrics_per_peering_group"]
["dynamic_routes_per_peering_group"], dynamic_routes_dict,
limits_dict['dynamic_routes_per_peering_group_limit'])

View File

@@ -99,6 +99,19 @@ metrics_per_network:
utilization:
name: dynamic_routes_per_network_utilization
description: Number of Dynamic routes per network - utilization.
# static routes limit is per project, but usage is per network
static_routes_per_project:
usage:
name: static_routes_per_project_vpc_usage
description: Number of Static routes per project and network - usage.
limit:
name: static_routes_per_project_limit
description: Number of Static routes per project - limit.
values:
default_value: 250
utilization:
name: static_routes_per_project_utilization
description: Number of Static routes per project - utilization.
metrics_per_peering_group:
l4_forwarding_rules_per_peering_group:
usage:
@@ -160,6 +173,18 @@ metrics_per_peering_group:
utilization:
name: dynamic_routes_per_peering_group_utilization
description: Number of Dynamic routes per peering group - utilization.
static_routes_per_peering_group:
usage:
name: static_routes_per_peering_group_usage
description: Number of Static routes per peering group - usage.
limit:
name: static_routes_per_peering_group_limit
description: Number of Static routes per peering group - limit.
values:
default_value: 300
utilization:
name: static_routes_per_peering_group_utilization
description: Number of Static routes per peering group - utilization.
metrics_per_project:
firewalls:
usage:

View File

@@ -26,8 +26,8 @@ from . import metrics, networks, limits
def get_firewall_policies_dict(config: dict):
'''
Calls the Asset Inventory API to get all Firewall Policies under the GCP organization
Calls the Asset Inventory API to get all Firewall Policies under the GCP organization, including children
Ignores monitored projects list: returns all policies regardless of their parent resource
Parameters:
config (dict): The dict containing config like clients and limits
Returns:
@@ -55,8 +55,8 @@
def get_firewal_policies_data(config, metrics_dict, firewall_policies_dict):
'''
Gets the data for VPC Firewall lorem ipsum
Gets the data for VPC Firewall Policies in an organization, including children. All folders are considered;
only projects in the monitored projects list are considered.
Parameters:
config (dict): The dict containing config like clients and limits
metrics_dict (dictionary of dictionary of string: string): metrics names and descriptions.
@@ -91,6 +91,9 @@ def get_firewal_policies_data(config, metrics_dict, firewall_policies_dict):
parent_type = re.search("(^\w+)", firewall_policy["parent"]).group(
1) if "parent" in firewall_policy else "projects"
if parent_type == "projects" and parent not in config["monitored_projects"]:
continue
metric_labels = {'parent': parent, 'parent_type': parent_type}
metric_labels["name"] = firewall_policy[

View File

@@ -42,7 +42,7 @@ def get_quotas_dict(quotas_list):
def get_quota_project_limit(config, regions=["global"]):
'''
Retrieves limit for a specific project quota
Retrieves quotas for all monitored projects in the selected regions (default: 'global')
Parameters:
project_link (string): Project link.
Returns:
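For context, a hedged sketch of fetching per-project Compute Engine quotas via the `projects.get` API, which returns the kind of data this docstring describes; the helper name and return shape are illustrative, not the module's actual code:

```python
from googleapiclient import discovery

def get_project_quotas(project_id: str) -> dict:
    compute = discovery.build("compute", "v1")
    # each quota entry looks like {'metric': 'ROUTES', 'limit': 250.0, 'usage': 10.0}
    quotas = compute.projects().get(project=project_id).execute().get("quotas", [])
    return {q["metric"]: q for q in quotas}
```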
@@ -158,7 +158,7 @@ def get_quota_current_limit(config, project_link, metric_name):
def count_effective_limit(config, project_id, network_dict, usage_metric_name,
limit_metric_name, utilization_metric_name,
limit_dict):
limit_dict, timestamp=None):
'''
Calculates the effective limits (using algorithm in the link below) for peering groups and writes data (usage, limit, utilization) to the custom metrics.
Source: https://cloud.google.com/vpc/docs/quota#vpc-peering-effective-limit
@@ -171,11 +171,13 @@ def count_effective_limit(config, project_id, network_dict, usage_metric_name,
limit_metric_name (string): Name of the custom metric to be populated for limit per VPC peering group.
utilization_metric_name (string): Name of the custom metric to be populated for utilization per VPC peering group.
limit_dict (dictionary of string:int): Dictionary containing the limit per peering group (either VPC specific or default limit).
timestamp (time): timestamp to be recorded for all points
Returns:
None
'''
timestamp = time.time()
if timestamp is None:
timestamp = time.time()
if network_dict['peerings'] == []:
return
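The effective-limit rule on the linked VPC quota page can be paraphrased as: take the higher of the network's own limit and the lowest limit among its peered networks. A simplified sketch of that idea (ignoring the per-peering-group overrides the real function also considers):

```python
def effective_peering_group_limit(own_limit: int, peer_limits: list[int]) -> int:
    # max(own limit, min(peer limits)); with no peers the own limit applies
    if not peer_limits:
        return own_limit
    return max(own_limit, min(peer_limits))

# the utilization written to the custom metric is usage / effective limit
print(effective_peering_group_limit(25, [50, 30]))  # 30
```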

View File

@@ -91,7 +91,8 @@ def create_metric(metric_name, description, monitoring_project, config):
def append_data_to_series_buffer(config, metric_name, metric_value,
metric_labels, timestamp=None):
'''
Writes data to Cloud Monitoring custom metrics.
Appends data to Cloud Monitoring custom metrics, using a buffer. The buffer is flushed every BUFFER_LEN
elements; any series still unflushed when the function exits is discarded
Parameters:
config (dict): The dict containing config like clients and limits
metric_name (string): Name of the metric
@@ -139,7 +140,7 @@ def append_data_to_series_buffer(config, metric_name, metric_value,
def flush_series_buffer(config):
'''
writes buffered metrics to Google Cloud Monitoring, empties buffer upon failure
writes buffered metrics to Google Cloud Monitoring, emptying the buffer on both success and failure
config (dict): The dict containing config like clients and limits
'''
try:
@@ -188,6 +189,7 @@ def get_pgg_data(config, metric_dict, usage_dict, limit_metric, limit_dict):
current_quota_limit_view = customize_quota_view(current_quota_limit)
timestamp = time.time()
# For each network in this GCP project
for network_dict in network_dict_list:
if network_dict['network_id'] == 0:
@@ -238,7 +240,7 @@ def get_pgg_data(config, metric_dict, usage_dict, limit_metric, limit_dict):
metric_dict["usage"]["name"],
metric_dict["limit"]["name"],
metric_dict["utilization"]["name"],
limit_dict)
limit_dict, timestamp)
print(
f"Buffered {metric_dict['usage']['name']} for peering group {network_dict['network_name']} in {project_id}"
)
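A self-contained sketch of the buffering behavior the updated docstrings describe: append until BUFFER_LEN, then flush, emptying the buffer whether or not the write succeeds. `SeriesBuffer` and `flush_fn` are illustrative names, not the module's actual API:

```python
import time

BUFFER_LEN = 10  # illustrative flush threshold

class SeriesBuffer:
    def __init__(self, flush_fn, buffer_len=BUFFER_LEN):
        self.flush_fn = flush_fn  # e.g. a wrapper around a create_time_series call
        self.buffer_len = buffer_len
        self.series = []

    def append(self, name, value, labels, timestamp=None):
        self.series.append({
            "name": name,
            "value": value,
            "labels": labels,
            "timestamp": timestamp or time.time(),
        })
        if len(self.series) >= self.buffer_len:
            self.flush()

    def flush(self):
        try:
            if self.series:
                self.flush_fn(self.series)
        finally:
            self.series = []  # emptied on both success and failure
```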

View File

@@ -17,6 +17,7 @@
import time
from collections import defaultdict
from google.protobuf import field_mask_pb2
from . import metrics, networks, limits, peerings, routers
@@ -78,8 +79,8 @@ def get_routes_for_network(config, network_link, project_id, routers_dict):
def get_dynamic_routes(config, metrics_dict, limits_dict):
'''
Writes all dynamic routes per VPC to custom metrics.
This function gets the usage, limit and utilization for the dynamic routes per VPC
note: assumes global routing is ON for all VPCs
Parameters:
config (dict): The dict containing config like clients and limits
metrics_dict (dictionary of dictionary of string: string): metrics names and descriptions.
@@ -128,10 +129,10 @@ def get_dynamic_routes(config, metrics_dict, limits_dict):
return dynamic_routes_dict
def get_dynamic_routes_ppg(config, metric_dict, usage_dict, limit_dict):
def get_routes_ppg(config, metric_dict, usage_dict, limit_dict):
'''
This function gets the usage, limit and utilization for the dynamic routes per VPC peering group.
This function gets the usage, limit and utilization for the static or dynamic routes per VPC peering group.
note: assumes global routing is ON for all VPCs for dynamic routes, and that custom route sharing is ON for all peered networks
Parameters:
config (dict): The dict containing config like clients and limits
metric_dict (dictionary of string: string): Dictionary with the metric names and description, that will be used to populate the metrics
@@ -140,11 +141,12 @@ def get_dynamic_routes_ppg(config, metric_dict, usage_dict, limit_dict):
Returns:
None
'''
for project in config["monitored_projects"]:
network_dict_list = peerings.gather_peering_data(config, project)
timestamp = time.time()
for project_id in config["monitored_projects"]:
network_dict_list = peerings.gather_peering_data(config, project_id)
for network_dict in network_dict_list:
network_link = f"https://www.googleapis.com/compute/v1/projects/{project}/global/networks/{network_dict['network_name']}"
network_link = f"https://www.googleapis.com/compute/v1/projects/{project_id}/global/networks/{network_dict['network_name']}"
limit = limits.get_ppg(network_link, limit_dict)
@@ -169,11 +171,119 @@
peered_network_dict["usage"] = peered_usage
peered_network_dict["limit"] = peered_limit
limits.count_effective_limit(config, project, network_dict,
limits.count_effective_limit(config, project_id, network_dict,
metric_dict["usage"]["name"],
metric_dict["limit"]["name"],
metric_dict["utilization"]["name"],
limit_dict)
limit_dict, timestamp)
print(
f"Wrote {metric_dict['usage']['name']} for peering group {network_dict['network_name']} in {project}"
f"Buffered {metric_dict['usage']['name']} for peering group {network_dict['network_name']} in {project_id}"
)
def get_static_routes_dict(config):
'''
Calls the Asset Inventory API to get all static custom routes under the GCP organization.
Parameters:
config (dict): The dict containing config like clients and limits
Returns:
routes_per_vpc_dict (dictionary of string: int): Keys are the network links and values are the number of custom static routes per network.
'''
routes_per_vpc_dict = defaultdict()
usage_dict = defaultdict()
read_mask = field_mask_pb2.FieldMask()
read_mask.FromJsonString('name,versionedResources')
response = config["clients"]["asset_client"].search_all_resources(
request={
"scope": f"organizations/{config['organization']}",
"asset_types": ["compute.googleapis.com/Route"],
"read_mask": read_mask
})
for resource in response:
for versioned in resource.versioned_resources:
static_route = dict()
for field_name, field_value in versioned.resource.items():
static_route[field_name] = field_value
static_route["project_id"] = static_route["network"].split('/')[6]
static_route["network_name"] = static_route["network"].split('/')[-1]
network_link = f"https://www.googleapis.com/compute/v1/projects/{static_route['project_id']}/global/networks/{static_route['network_name']}"
# exclude default VPC and peering routes; dynamic routes are not in Cloud Asset Inventory
if "nextHopPeering" not in static_route and "nextHopNetwork" not in static_route:
if network_link not in routes_per_vpc_dict:
routes_per_vpc_dict[network_link] = dict()
routes_per_vpc_dict[network_link]["project_id"] = static_route[
"project_id"]
routes_per_vpc_dict[network_link]["network_name"] = static_route[
"network_name"]
if static_route["destRange"] not in routes_per_vpc_dict[network_link]:
routes_per_vpc_dict[network_link][static_route["destRange"]] = {}
if "usage" not in routes_per_vpc_dict[network_link]:
routes_per_vpc_dict[network_link]["usage"] = 0
routes_per_vpc_dict[network_link][
"usage"] = routes_per_vpc_dict[network_link]["usage"] + 1
# output a dict with network links and usage only
return {
network_link_out: routes_per_vpc_dict[network_link_out]["usage"]
for network_link_out in routes_per_vpc_dict
}
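For reference, the returned structure maps each network self link to a plain count; a hypothetical example of the output (illustrative values):

```python
# Hypothetical output of get_static_routes_dict (illustrative values).
static_routes_dict = {
    "https://www.googleapis.com/compute/v1/projects/prj-a/global/networks/vpc-a": 42,
    "https://www.googleapis.com/compute/v1/projects/prj-b/global/networks/vpc-b": 7,
}
```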
def get_static_routes_data(config, metrics_dict, static_routes_dict,
project_quotas_dict):
'''
Determines and writes the number of static routes for each VPC in monitored projects, the per-project limit and the per-project utilization
note: assumes custom routes sharing is ON for all VPCs
Parameters:
config (dict): The dict containing config like clients and limits
metric_dict (dictionary of string: string): Dictionary with the metric names and description, that will be used to populate the metrics
static_routes_dict (dictionary of dictionary: int): Keys are the network links and values are the number of custom static routes per network.
project_quotas_dict (dictionary of string:int): Dictionary with the network link as key and the limit as value.
Returns:
None
'''
timestamp = time.time()
project_usage = {project: 0 for project in config["monitored_projects"]}
#usage is drilled down by network
for network_link in static_routes_dict:
project_id = network_link.split('/')[6]
if (project_id not in config["monitored_projects"]):
continue
network_name = network_link.split('/')[-1]
project_usage[project_id] = project_usage[project_id] + static_routes_dict[
network_link]
metric_labels = {"project": project_id, "network_name": network_name}
metrics.append_data_to_series_buffer(
config, metrics_dict["metrics_per_network"]["static_routes_per_project"]
["usage"]["name"], static_routes_dict[network_link], metric_labels,
timestamp=timestamp)
#limit and utilization are calculated by project
for project_id in project_usage:
current_quota_limit = project_quotas_dict[project_id]['global']["routes"][
"limit"]
if current_quota_limit is None:
print(
f"Could not determine static routes metric for projects/{project_id} due to missing quotas"
)
continue
# limit and utilization are calculated by project
metric_labels = {"project": project_id}
metrics.append_data_to_series_buffer(
config, metrics_dict["metrics_per_network"]["static_routes_per_project"]
["limit"]["name"], current_quota_limit, metric_labels,
timestamp=timestamp)
metrics.append_data_to_series_buffer(
config, metrics_dict["metrics_per_network"]["static_routes_per_project"]
["utilization"]["name"],
project_usage[project_id] / current_quota_limit, metric_labels,
timestamp=timestamp)
return

View File

@ -84,7 +84,7 @@ def get_firewalls_data(config, metrics_dict, project_quotas_dict,
current_quota_limit = project_quotas_dict[project_id]['global']["firewalls"]
if current_quota_limit is None:
print(
f"Could not determine VPC firewal rules to metric for projects/{project_id} due to missing quotas"
f"Could not determine VPC firewal rules metric for projects/{project_id} due to missing quotas"
)
continue

View File

@ -1,6 +1,6 @@
{
"category": "CUSTOM",
"displayName": "quotas_utilization_updated",
"displayName": "quotas_utilization",
"mosaicLayout": {
"columns": 12,
"tiles": [
@ -22,13 +22,11 @@
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/internal_forwarding_rules_l4_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "1800s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_MEAN"
}
}
@ -64,13 +62,11 @@
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/internal_forwarding_rules_l7_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "60s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_MEAN"
}
}
@ -106,13 +102,11 @@
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/number_of_instances_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "60s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_MEAN"
}
}
@ -148,13 +142,11 @@
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/number_of_vpc_peerings_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "60s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_MEAN"
}
}
@ -190,13 +182,11 @@
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/number_of_active_vpc_peerings_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "60s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_INTERPOLATE"
}
}
@ -232,13 +222,11 @@
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/number_of_subnet_IP_ranges_ppg_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "3600s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_MEAN"
}
}
@ -274,13 +262,11 @@
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/internal_forwarding_rules_l4_ppg_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "3600s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_MEAN"
}
}
@ -316,13 +302,11 @@
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/internal_forwarding_rules_l7_ppg_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "60s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_MEAN"
}
}
@ -358,7 +342,6 @@
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/number_of_instances_ppg_utilization\" resource.type=\"global\""
@ -395,7 +378,6 @@
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "60s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_MEAN"
},
"filter": "metric.type=\"custom.googleapis.com/dynamic_routes_per_network_utilization\" resource.type=\"global\""
@ -452,7 +434,7 @@
},
"width": 6,
"xPos": 0,
"yPos": 24
"yPos": 32
},
{
"height": 4,
@ -492,7 +474,7 @@
},
"width": 6,
"xPos": 6,
"yPos": 24
"yPos": 32
},
{
"height": 4,
@ -512,7 +494,6 @@
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "60s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_MEAN"
},
"filter": "metric.type=\"custom.googleapis.com/firewall_policy_tuples_per_policy_utilization\" resource.type=\"global\""
@ -528,7 +509,7 @@
}
},
"width": 6,
"xPos": 0,
"xPos": 6,
"yPos": 28
},
{
@ -549,7 +530,6 @@
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "60s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_MEAN"
},
"filter": "metric.type=\"custom.googleapis.com/ip_addresses_per_subnet_utilization\" resource.type=\"global\""
@ -586,7 +566,6 @@
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "60s",
"crossSeriesReducer": "REDUCE_NONE",
"perSeriesAligner": "ALIGN_MEAN"
},
"filter": "metric.type=\"custom.googleapis.com/dynamic_routes_per_peering_group_utilization\" resource.type=\"global\""
@ -604,6 +583,124 @@
"width": 6,
"xPos": 6,
"yPos": 20
},
{
"height": 4,
"widget": {
"title": "static_routes_per_project_vpc_usage",
"xyChart": {
"chartOptions": {
"mode": "COLOR"
},
"dataSets": [
{
"minAlignmentPeriod": "60s",
"plotType": "LINE",
"targetAxis": "Y1",
"timeSeriesQuery": {
"apiSource": "DEFAULT_CLOUD",
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "60s",
"crossSeriesReducer": "REDUCE_SUM",
"groupByFields": [
"metric.label.\"project\""
],
"perSeriesAligner": "ALIGN_MEAN"
},
"filter": "metric.type=\"custom.googleapis.com/static_routes_per_project_vpc_usage\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "60s",
"perSeriesAligner": "ALIGN_NONE"
}
}
}
}
],
"thresholds": [],
"timeshiftDuration": "0s",
"yAxis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
},
"width": 6,
"xPos": 0,
"yPos": 24
},
{
"height": 4,
"widget": {
"title": "static_routes_per_ppg_utilization",
"xyChart": {
"chartOptions": {
"mode": "COLOR"
},
"dataSets": [
{
"minAlignmentPeriod": "60s",
"plotType": "LINE",
"targetAxis": "Y1",
"timeSeriesQuery": {
"apiSource": "DEFAULT_CLOUD",
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "60s",
"perSeriesAligner": "ALIGN_MEAN"
},
"filter": "metric.type=\"custom.googleapis.com/static_routes_per_peering_group_utilization\" resource.type=\"global\""
}
}
}
],
"thresholds": [],
"timeshiftDuration": "0s",
"yAxis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
},
"width": 6,
"xPos": 0,
"yPos": 28
},
{
"height": 4,
"widget": {
"title": "static_routes_per_project_utilization",
"xyChart": {
"chartOptions": {
"mode": "COLOR"
},
"dataSets": [
{
"minAlignmentPeriod": "60s",
"plotType": "LINE",
"targetAxis": "Y1",
"timeSeriesQuery": {
"apiSource": "DEFAULT_CLOUD",
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "60s",
"perSeriesAligner": "ALIGN_MEAN"
},
"filter": "metric.type=\"custom.googleapis.com/static_routes_per_project_utilization\" resource.type=\"global\""
}
}
}
],
"timeshiftDuration": "0s",
"yAxis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
},
"width": 6,
"xPos": 6,
"yPos": 24
}
]
}

View File

@ -50,7 +50,6 @@ module "service-account-function" {
# Required IAM permissions for this service account are:
# 1) compute.networkViewer on projects to be monitored (granted at organization level here for simplicity)
# 2) Monitoring Viewer on the projects to be monitored (granted at organization level here for simplicity)
# 3) if you dont have permission to create service account and assign permission at organization Level, move these 3 roles to project level.
iam_organization_roles = {
"${var.organization_id}" = [
@ -184,4 +183,4 @@ module "cloud-function" {
resource "google_monitoring_dashboard" "dashboard" {
dashboard_json = file("${path.module}/dashboards/quotas-utilization.json")
project = local.monitoring_project
}
}

View File

@ -17,13 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@ -0,0 +1,115 @@
# Configuring workload identity federation for Terraform Cloud/Enterprise workflow
The most common way to use Terraform Cloud for GCP deployments is to store a GCP Service Account Key as part of the TFE workflow configuration. This comes with well-known security risks, since keys are long-term credentials that could be compromised.
Workload identity federation enables applications running outside of Google Cloud to replace long-lived service account keys with short-lived access tokens. This is achieved by configuring Google Cloud to trust an external identity provider, so applications can use the credentials issued by the external identity provider to impersonate a service account.
This blueprint shows how to set up [Workload Identity Federation](https://cloud.google.com/iam/docs/workload-identity-federation) between a [Terraform Cloud/Enterprise](https://developer.hashicorp.com/terraform/enterprise) instance and Google Cloud, by configuring workload identity federation to trust OIDC tokens generated for a specific workflow in a Terraform Enterprise organization.
The following diagram illustrates how the workflow gets a short-lived access token and uses it to access a resource:
![Sequence diagram](diagram.png)
## Running the blueprint
### Create Terraform Enterprise Workflow
If you don't have an existing Terraform Enterprise organization, you can sign up for a [free trial](https://app.terraform.io/public/signup/account) account.
Create a new Workspace for a `CLI-driven workflow` (identity federation works for any workflow type, but for simplicity this blueprint uses a CLI-driven workflow).
Note the workspace name and ID (the ID starts with `ws-`); we will use them at a later stage.
Go to the organization settings and note the organization name and ID (the ID starts with `org-`).
### Deploy GCP Workload Identity Pool Provider for Terraform Enterprise
> **_NOTE:_** This is a preparation step and should be executed by a user with sufficient permissions.
Required permissions when a new project is created:
- Project Creator on the parent folder/org.
Required permissions when an existing project is used:
- Workload Identity Admin at the project level
- Project IAM Admin at the project level
Fill out the required variables, using the TFE organization and workspace IDs from the previous steps (the IDs, not the names).
```bash
cd gcp-workload-identity-provider
mv terraform.auto.tfvars.template terraform.auto.tfvars
vi terraform.auto.tfvars
```
Authenticate using application default credentials, then run Terraform to deploy the resources:
```bash
gcloud auth application-default login
terraform init
terraform apply
```
As a result, a set of outputs will be provided (your values will be different); note them down, since we will use them in the next steps.
```
impersonate_service_account_email = "sa-tfe@fe-test-oidc.iam.gserviceaccount.com"
project_id = "tfe-test-oidc"
workload_identity_audience = "//iam.googleapis.com/projects/476538149566/locations/global/workloadIdentityPools/tfe-pool/providers/tfe-provider"
workload_identity_pool_provider_id = "projects/476538149566/locations/global/workloadIdentityPools/tfe-pool/providers/tfe-provider"
```
### Configure OIDC provider for your TFE Workflow
To enable OIDC for a TFE workflow, it is enough to set the environment variable `TFC_WORKLOAD_IDENTITY_AUDIENCE`.
Go to the Workflow -> Variables and add a new variable `TFC_WORKLOAD_IDENTITY_AUDIENCE` equal to the value of the `workload_identity_audience` output; in our example it's:
```
TFC_WORKLOAD_IDENTITY_AUDIENCE = "//iam.googleapis.com/projects/476538149566/locations/global/workloadIdentityPools/tfe-pool/providers/tfe-provider"
```
At this point GCP identity federation is set up to trust TFE-generated OIDC tokens, so the TFE workflow can use the token to impersonate a GCP service account.
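Under the hood, the workflow's google provider consumes an `external_account` credential whose audience points at the pool provider; the helper module in this blueprint renders it (see the `tfc-oidc` module below). A sketch of that credential, built here in Python with illustrative values:

```python
import json

# Sketch of the external_account credential used for impersonation
# (illustrative values; the blueprint's tfc-oidc helper renders the real one).
credentials = {
    "type": "external_account",
    "audience": "//iam.googleapis.com/projects/476538149566/locations/global/workloadIdentityPools/tfe-pool/providers/tfe-provider",
    "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
    "token_url": "https://sts.googleapis.com/v1/token",
    # File where the TFC OIDC token is written by the workflow.
    "credential_source": {"file": ".oidc_token"},
    "service_account_impersonation_url": (
        "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/"
        "sa-tfe@tfe-test-oidc.iam.gserviceaccount.com:generateAccessToken"
    ),
}
print(json.dumps(credentials, indent=2))
```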
## Testing the blueprint
In order to test the setup, we will deploy a GCS bucket from the TFE workflow, using the OIDC token for service account impersonation.
### Configure backend and variables
First, we need to configure the TFE remote backend for our test Terraform code, using the TFE organization name and workspace name (the names, not the IDs):
```bash
cd ../tfc-workflow-using-wif
mv backend.tf.template backend.tf
vi backend.tf
```
Fill out the variables based on the outputs from the preparation steps:
```bash
mv terraform.auto.tfvars.template terraform.auto.tfvars
vi terraform.auto.tfvars
```
### Authenticate Terraform to trigger the CLI-driven workflow
Follow this [documentation](https://learn.hashicorp.com/tutorials/terraform/cloud-login) to log in to Terraform Cloud from the CLI.
### Trigger the workflow
```bash
terraform init
terraform apply
```
As a result, we have a GCS bucket successfully deployed from a Terraform Enterprise workflow using Workload Identity Federation.
Once done testing, you can clean up resources by running `terraform destroy` first in the `tfc-workflow-using-wif` and then `gcp-workload-identity-provider` folders.

Binary file not shown (image added, 28 KiB)

View File

@ -0,0 +1,33 @@
# GCP Workload Identity Provider for Terraform Enterprise
This Terraform code is part of the [GCP Workload Identity Federation for Terraform Enterprise](../) blueprint.
The codebase provisions the following list of resources:
- GCP project (optional)
- Workload identity pool and provider
- Service account and IAM bindings
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [billing_account](variables.tf#L16) | Billing account id used as default for new projects. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L43) | Existing project id. | <code>string</code> | ✓ | |
| [tfe_organization_id](variables.tf#L48) | TFE organization id. | <code>string</code> | ✓ | |
| [tfe_workspace_id](variables.tf#L53) | TFE workspace id. | <code>string</code> | ✓ | |
| [issuer_uri](variables.tf#L21) | Terraform Enterprise URI. Replace it if a self-hosted instance is used. | <code>string</code> | | <code>&#34;https:&#47;&#47;app.terraform.io&#47;&#34;</code> |
| [parent](variables.tf#L27) | Parent folder or organization in 'folders/folder_id' or 'organizations/org_id' format. | <code>string</code> | | <code>null</code> |
| [project_create](variables.tf#L37) | Create project instead of using an existing one. | <code>bool</code> | | <code>true</code> |
| [workload_identity_pool_id](variables.tf#L58) | Workload identity pool id. | <code>string</code> | | <code>&#34;tfe-pool&#34;</code> |
| [workload_identity_pool_provider_id](variables.tf#L64) | Workload identity pool provider id. | <code>string</code> | | <code>&#34;tfe-provider&#34;</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [impersonate_service_account_email](outputs.tf#L16) | Service account to be impersonated by workload identity. | |
| [project_id](outputs.tf#L21) | GCP Project ID. | |
| [workload_identity_audience](outputs.tf#L26) | TFC Workload Identity Audience. | |
| [workload_identity_pool_provider_id](outputs.tf#L31) | GCP workload identity pool provider ID. | |
<!-- END TFDOC -->

View File

@ -0,0 +1,91 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################
# GCP PROJECT #
###############################################################################
module "project" {
source = "../../../../modules/project"
name = var.project_id
project_create = var.project_create
parent = var.parent
billing_account = var.billing_account
services = [
"iam.googleapis.com",
"cloudresourcemanager.googleapis.com",
"iamcredentials.googleapis.com",
"sts.googleapis.com",
"storage.googleapis.com"
]
}
###############################################################################
# Workload Identity Pool and Provider #
###############################################################################
resource "google_iam_workload_identity_pool" "tfe-pool" {
project = module.project.project_id
workload_identity_pool_id = var.workload_identity_pool_id
display_name = "TFE Pool"
description = "Identity pool for Terraform Enterprise OIDC integration"
}
resource "google_iam_workload_identity_pool_provider" "tfe-pool-provider" {
project = module.project.project_id
workload_identity_pool_id = google_iam_workload_identity_pool.tfe-pool.workload_identity_pool_id
workload_identity_pool_provider_id = var.workload_identity_pool_provider_id
display_name = "TFE Pool Provider"
description = "OIDC identity pool provider for TFE Integration"
# Use a condition to make sure only tokens generated for a specific TFE org can be used across the org's workspaces
attribute_condition = "attribute.terraform_organization_id == \"${var.tfe_organization_id}\""
attribute_mapping = {
"google.subject" = "assertion.sub"
"attribute.aud" = "assertion.aud"
"attribute.terraform_run_phase" = "assertion.terraform_run_phase"
"attribute.terraform_workspace_id" = "assertion.terraform_workspace_id"
"attribute.terraform_workspace_name" = "assertion.terraform_workspace_name"
"attribute.terraform_organization_id" = "assertion.terraform_organization_id"
"attribute.terraform_organization_name" = "assertion.terraform_organization_name"
"attribute.terraform_run_id" = "assertion.terraform_run_id"
"attribute.terraform_full_workspace" = "assertion.terraform_full_workspace"
}
oidc {
# Should be different if self hosted TFE instance is used
issuer_uri = var.issuer_uri
}
}
###############################################################################
# Service Account and IAM bindings #
###############################################################################
module "sa-tfe" {
source = "../../../../modules/iam-service-account"
project_id = module.project.project_id
name = "sa-tfe"
iam = {
# We allow impersonation of the service account only for tokens generated by a specific TFE workspace;
# that way one identity pool can be used for a TFE organization, but every workspace can impersonate only a specific SA
"roles/iam.workloadIdentityUser" = ["principalSet://iam.googleapis.com/${google_iam_workload_identity_pool.tfe-pool.name}/attribute.terraform_workspace_id/${var.tfe_workspace_id}"]
}
iam_project_roles = {
"${module.project.project_id}" = [
"roles/storage.admin"
]
}
}

View File

@ -0,0 +1,34 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
output "impersonate_service_account_email" {
description = "Service account to be impersonated by workload identity."
value = module.sa-tfe.email
}
output "project_id" {
description = "GCP Project ID."
value = module.project.project_id
}
output "workload_identity_audience" {
description = "TFC Workload Identity Audience."
value = "//iam.googleapis.com/${google_iam_workload_identity_pool_provider.tfe-pool-provider.name}"
}
output "workload_identity_pool_provider_id" {
description = "GCP workload identity pool provider ID."
value = google_iam_workload_identity_pool_provider.tfe-pool-provider.name
}

View File

@ -0,0 +1,20 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
parent = "folders/437102807785"
project_id = "my-project-id"
tfe_organization_id = "org-W3bz9neazHrZz99U"
tfe_workspace_id = "ws-DFxEE3NmeMdaAvoK"
billing_account = "015617-1B8CBC-AF10D9"

View File

@ -0,0 +1,68 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
variable "billing_account" {
description = "Billing account id used as default for new projects."
type = string
}
variable "issuer_uri" {
description = "Terraform Enterprise uri. Replace the uri if a self hosted instance is used."
type = string
default = "https://app.terraform.io/"
}
variable "parent" {
description = "Parent folder or organization in 'folders/folder_id' or 'organizations/org_id' format."
type = string
default = null
validation {
condition = var.parent == null || can(regex("(organizations|folders)/[0-9]+", var.parent))
error_message = "Parent must be of the form folders/folder_id or organizations/organization_id."
}
}
variable "project_create" {
description = "Create project instead of using an existing one."
type = bool
default = true
}
variable "project_id" {
description = "Existing project id."
type = string
}
variable "tfe_organization_id" {
description = "TFE organization id."
type = string
}
variable "tfe_workspace_id" {
description = "TFE workspace id."
type = string
}
variable "workload_identity_pool_id" {
description = "Workload identity pool id."
type = string
default = "tfe-pool"
}
variable "workload_identity_pool_provider_id" {
description = "Workload identity pool provider id."
type = string
default = "tfe-provider"
}

View File

@ -0,0 +1,18 @@
# GCP Workload Identity Provider for Terraform Enterprise
This Terraform code is part of the [GCP Workload Identity Federation for Terraform Enterprise](../) blueprint. For instructions please refer to the blueprint [readme](../README.md).
The codebase provisions the following list of resources:
- GCS Bucket
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [impersonate_service_account_email](variables.tf#L21) | Service account to be impersonated by workload identity. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L16) | GCP project ID. | <code>string</code> | ✓ | |
| [workload_identity_pool_provider_id](variables.tf#L26) | GCP workload identity pool provider ID. | <code>string</code> | ✓ | |
<!-- END TFDOC -->

View File

@ -0,0 +1,29 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The block below configures Terraform to use the 'remote' backend with Terraform Cloud.
# For more information, see https://www.terraform.io/docs/backends/types/remote.html
terraform {
backend "remote" {
organization = "<TFE-ORG-NAME>"
workspaces {
name = "<TFE-WORKSPACE-NAME>"
}
}
required_version = ">= 0.14.0"
}

View File

@ -0,0 +1,25 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################
# TEST RESOURCE TO VALIDATE WIF #
###############################################################################
resource "google_storage_bucket" "test-bucket" {
project = var.project_id
name = "${var.project_id}-tfe-oidc-test-bucket"
location = "US"
force_destroy = true
}

View File

@ -0,0 +1,25 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module "tfe_oidc" {
source = "./tfc-oidc"
workload_identity_pool_provider_id = var.workload_identity_pool_provider_id
impersonate_service_account_email = var.impersonate_service_account_email
}
provider "google" {
credentials = module.tfe_oidc.credentials
}

View File

@ -0,0 +1,17 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
project_id = "tfe-oidc-workflow"
workload_identity_pool_provider_id = "projects/683987109094/locations/global/workloadIdentityPools/tfe-pool/providers/tfe-provider"
impersonate_service_account_email = "sa-tfe@tfe-oidc-workflow2.iam.gserviceaccount.com"

View File

@ -0,0 +1,40 @@
# Terraform Enterprise OIDC Credential for GCP Workload Identity Federation
This is a helper module to prepare GCP credentials from a Terraform Enterprise workload identity token. For more information see the [Terraform Enterprise Workload Identity Federation](../) blueprint.
## Example
```hcl
module "tfe_oidc" {
source = "./tfc-oidc"
workload_identity_pool_provider_id = "projects/683987109094/locations/global/workloadIdentityPools/tfe-pool/providers/tfe-provider"
impersonate_service_account_email = "tfe-test@tfe-test-wif.iam.gserviceaccount.com"
}
provider "google" {
credentials = module.tfe_oidc.credentials
}
provider "google-beta" {
credentials = module.tfe_oidc.credentials
}
# tftest skip
```
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [impersonate_service_account_email](variables.tf#L22) | Service account to be impersonated by workload identity federation. | <code>string</code> | ✓ | |
| [workload_identity_pool_provider_id](variables.tf#L17) | GCP workload identity pool provider ID. | <code>string</code> | ✓ | |
| [tmp_oidc_token_path](variables.tf#L27) | Name of the temporary file where the TFC OIDC token will be stored to authenticate the Terraform google provider. | <code>string</code> | | <code>&#34;.oidc_token&#34;</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [credentials](outputs.tf#L17) | | |
<!-- END TFDOC -->

View File

@ -14,9 +14,10 @@
* limitations under the License.
*/
module "org-policy" {
source = "../../../../modules/organization-policy"
config_directory = var.config_directory
policies = var.policies
locals {
audience = "//iam.googleapis.com/${var.workload_identity_pool_provider_id}"
}
data "external" "oidc_token_file" {
program = ["bash", "${path.module}/write_token.sh", "${var.tmp_oidc_token_path}"]
}

View File

@ -0,0 +1,26 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
output "credentials" {
value = jsonencode({
"type" : "external_account",
"audience" : "${local.audience}",
"subject_token_type" : "urn:ietf:params:oauth:token-type:jwt",
"token_url" : "https://sts.googleapis.com/v1/token",
"credential_source" : data.external.oidc_token_file.result
"service_account_impersonation_url" : "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/${var.impersonate_service_account_email}:generateAccessToken"
})
}

View File

@ -0,0 +1,31 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "workload_identity_pool_provider_id" {
description = "GCP workload identity pool provider ID."
type = string
}
variable "impersonate_service_account_email" {
description = "Service account to be impersonated by workload identity federation."
type = string
}
variable "tmp_oidc_token_path" {
description = "Name of the temporary file where TFC OIDC token will be stored to authentificate terraform provider google."
type = string
default = ".oidc_token"
}

View File

@ -0,0 +1,17 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
terraform {
required_version = ">= 1.3.1"
}

View File

@ -0,0 +1,23 @@
#!/bin/bash
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Exit if any of the intermediate steps fail
set -e
# First and only argument: path of the file the OIDC token is written to.
FILENAME="$1"
# Terraform Cloud injects the workload identity token via this environment variable.
echo "$TFC_WORKLOAD_IDENTITY_TOKEN" > "$FILENAME"
# Emit the JSON object expected by the Terraform external data source.
echo -n "{\"file\":\"${FILENAME}\"}"

View File

@ -0,0 +1,29 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
variable "project_id" {
description = "GCP project ID."
type = string
}
variable "impersonate_service_account_email" {
description = "Service account to be impersonated by workload identity."
type = string
}
variable "workload_identity_pool_provider_id" {
description = "GCP workload identity pool provider ID."
type = string
}

View File

@ -1,9 +1,9 @@
# Configuring workload identity federation to access Google Cloud resources from apps running on Azure
# Configuring Workload Identity Federation to access Google Cloud resources from apps running on Azure
The most straightforward way for workloads running outside of Google Cloud to call Google Cloud APIs is by using a downloaded service account key. However, this approach has 2 major pain points:
* A management hassle, keys need to be stored securely and rotated often.
* A security risk, keys are long term credentials that could be compromised.
* A security risk, keys are long-term credentials that could be compromised.
Workload identity federation enables applications running outside of Google Cloud to replace long-lived service account keys with short-lived access tokens. This is achieved by configuring Google Cloud to trust an external identity provider, so applications can use the credentials issued by the external identity provider to impersonate a service account.
@ -19,17 +19,17 @@ The provided terraform configuration will set up the following architecture:
* On Azure:
* An Azure Active Directory application and a service principal. By default, the new application grants all users in the Azure AD tenant permission to obtain access tokens. So an app role assignment will be required to restrict which identities can obtain access tokens for the application.
* An Azure Active Directory application and a service principal. By default, the new application grants all users in the Azure AD tenant permission to obtain access tokens. So an app role assignment will be required to restrict which identities can obtain access tokens for the application.
* Optionally, all the resources required to have a VM configured to run with a system-assigned managed identity and accessible via SSH on a public IP using public key authentication, so we can log in to the machine and run the `gcloud` command to verify that everything works as expected.
* Optionally, all the resources required to have a VM configured to run with a system-assigned managed identity and accessible via SSH on a public IP using public key authentication, so we can log in to the machine and run the `gcloud` command to verify that everything works as expected.
* On Google Cloud:
* A Google Cloud project with:
* A Google Cloud project with:
* A workload identity pool and provider configured to trust the AAD application
* A workload identity pool and provider configured to trust the AAD application
* A service account with the Viewer role granted on the project. The external identities in the workload identity pool would be assigned the Workload Identity User role on that service account.
* A service account with the Viewer role granted on the project. The external identities in the workload identity pool would be assigned the Workload Identity User role on that service account.
## Running the blueprint
@ -42,7 +42,7 @@ Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/c
Once the resources have been created, do the following to verify that everything works as expected:
1. Log in to the VM.
1. Log in to the VM.
If you have created the VM using this terraform configuration proceed the following way:
@ -72,7 +72,6 @@ Once the resources have been created, do the following to verify that everything
`gcloud projects describe PROJECT_ID`
Once done testing, you can clean up resources by running `terraform destroy`.
<!-- BEGIN TFDOC -->

View File

@ -6,32 +6,32 @@ They are meant to be used as minimal but complete starting points to create actu
## Blueprints
### Cloud SQL instance with multi-region read replicas
<a href="./cloudsql-multiregion/" title="Cloud SQL instance with multi-region read replicas"><img src="./cloudsql-multiregion/images/diagram.png" align="left" width="280px"></a>
This [blueprint](./cloudsql-multiregion/) creates a [Cloud SQL instance](https://cloud.google.com/sql) with multi-region read replicas as described in the [Cloud SQL for PostgreSQL disaster recovery](https://cloud.google.com/architecture/cloud-sql-postgres-disaster-recovery-complete-failover-fallback) article.
<br clear="left">
### GCE and GCS CMEK via centralized Cloud KMS
<a href="./cmek-via-centralized-kms/" title="CMEK on Cloud Storage and Compute Engine via centralized Cloud KMS"><img src="./cmek-via-centralized-kms/diagram.png" align="left" width="280px"></a> This [blueprint](./cmek-via-centralized-kms/) implements [CMEK](https://cloud.google.com/kms/docs/cmek) for GCS and GCE, via keys hosted in KMS running in a centralized project. The blueprint shows the basic resources and permissions for the typical use case of application projects implementing encryption at rest via a centrally managed KMS service.
<br clear="left">
### Cloud Storage to Bigquery with Cloud Dataflow with least privileges
### Cloud Composer version 2 private instance, supporting Shared VPC and external CMEK key
<a href="./composer-2/" title="# Cloud Composer version 2 private instance, supporting Shared VPC and external CMEK key
"><img src="./composer-2/diagram.png" align="left" width="280px"></a>
This [blueprint](./composer-2/) creates a [Cloud Composer](https://cloud.google.com/composer/) version 2 instance on a VPC with a dedicated service account. The solution supports as inputs: a Shared VPC and Cloud KMS CMEK keys.
<a href="./gcs-to-bq-with-least-privileges/" title="Cloud Storage to Bigquery with Cloud Dataflow with least privileges"><img src="./gcs-to-bq-with-least-privileges/images/diagram.png" align="left" width="280px"></a> This [blueprint](./gcs-to-bq-with-least-privileges/) implements resources required to run GCS to BigQuery Dataflow pipelines. The solution rely on a set of Services account created with the least privileges principle.
<br clear="left">
### Data Platform Foundations
<a href="./data-platform-foundations/" title="Data Platform Foundations"><img src="./data-platform-foundations/images/overview_diagram.png" align="left" width="280px"></a>
This [blueprint](./data-platform-foundations/) implements a robust and flexible Data Foundation on GCP that provides opinionated defaults, allowing customers to build and scale out additional data pipelines quickly and reliably.
<br clear="left">
### SQL Server Always On Availability Groups
<a href="./sqlserver-alwayson/" title="SQL Server Always On Availability Groups"><img src="https://cloud.google.com/compute/images/sqlserver-ag-architecture.svg" align="left" width="280px"></a>
This [blueprint](./data-platform-foundations/) implements SQL Server Always On Availability Groups using Fabric modules. It builds a two node cluster with a fileshare witness instance in an existing VPC and adds the necessary firewalling. The actual setup process (apart from Active Directory operations) has been scripted, so that least amount of manual works needs to performed.
<br clear="left">
### Cloud SQL instance with multi-region read replicas
<a href="./cloudsql-multiregion/" title="Cloud SQL instance with multi-region read replicas"><img src="./cloudsql-multiregion/images/diagram.png" align="left" width="280px"></a>
This [blueprint](./cloudsql-multiregion/) creates a [Cloud SQL instance](https://cloud.google.com/sql) with multi-region read replicas as described in the [Cloud SQL for PostgreSQL disaster recovery](https://cloud.google.com/architecture/cloud-sql-postgres-disaster-recovery-complete-failover-fallback) article.
<br clear="left">
### Data Playground starter with Cloud Vertex AI Notebook and GCS
@ -40,11 +40,18 @@ This [blueprint](./cloudsql-multiregion/) creates a [Cloud SQL instance](https:/
This [blueprint](./data-playground/) creates a [Vertex AI
Notebook](https://cloud.google.com/vertex-ai/docs/workbench/introduction)
running on a VPC with a private IP and a dedicated Service Account. A GCS bucket and a BigQuery dataset are created to store inputs and outputs of data experiments.
<br clear="left">
### Cloud Composer version 2 private instance, supporting Shared VPC and external CMEK key
### Cloud Storage to Bigquery with Cloud Dataflow with least privileges
<a href="./composer-2/" title="# Cloud Composer version 2 private instance, supporting Shared VPC and external CMEK key
"><img src="./composer-2/diagram.png" align="left" width="280px"></a>
This [blueprint](./composer-2/) creates a [Cloud Composer](https://cloud.google.com/composer/) version 2 instance on a VPC with a dedicated service account. The solution supports as inputs: a Shared VPC and Cloud KMS CMEK keys.
<br clear="left">
<a href="./gcs-to-bq-with-least-privileges/" title="Cloud Storage to Bigquery with Cloud Dataflow with least privileges"><img src="./gcs-to-bq-with-least-privileges/images/diagram.png" align="left" width="280px"></a> This [blueprint](./gcs-to-bq-with-least-privileges/) implements resources required to run GCS to BigQuery Dataflow pipelines. The solution rely on a set of Services account created with the least privileges principle.
<br clear="left">
### SQL Server Always On Availability Groups
<a href="./sqlserver-alwayson/" title="SQL Server Always On Availability Groups"><img src="https://cloud.google.com/compute/images/sqlserver-ag-architecture.svg" align="left" width="280px"></a>
This [blueprint](./sqlserver-alwayson/) implements SQL Server Always On Availability Groups using Fabric modules. It builds a two node cluster with a fileshare witness instance in an existing VPC and adds the necessary firewalling. The actual setup process (apart from Active Directory operations) has been scripted, so that the least amount of manual work needs to be performed.
<br clear="left">

View File

@ -39,7 +39,7 @@ If `project_create` is left to `null`, the identity performing the deployment ne
Click on the image below, sign in if required and when the prompt appears, click on “confirm”.
[![Open Cloudshell](images/button.png)](https://goo.gle/GoCloudSQL)
[![Open Cloudshell](../../../assets/images/cloud-shell-button.png)](https://goo.gle/GoCloudSQL)
This will clone the repository to your cloud shell and a screen like this one will appear:
@ -81,7 +81,8 @@ This implementation is intentionally minimal and easy to read. A real world use
- Using VPC-SC to mitigate data exfiltration
### Shared VPC
The example supports the configuration of a Shared VPC as an input variable.
The example supports the configuration of a Shared VPC as an input variable.
To deploy the solution on a Shared VPC, you have to configure the `network_config` variable:
```
@ -94,12 +95,14 @@ network_config = {
```
To run this example, the Shared VPC project needs to have:
- A Private Service Connect with a range of `/24` (example: `10.60.0.0/24`) to deploy the Cloud SQL instance.
- Internet access configured (for example Cloud NAT) to let the Test VM download packages.
- A Private Service Connect with a range of `/24` (example: `10.60.0.0/24`) to deploy the Cloud SQL instance.
- Internet access configured (for example Cloud NAT) to let the Test VM download packages.
In order to run the example and deploy Cloud SQL on a Shared VPC, the identity running Terraform must have the following IAM roles on the Shared VPC Host project:
- Compute Network Admin (roles/compute.networkAdmin)
- Compute Shared VPC Admin (roles/compute.xpnAdmin)
- Compute Network Admin (roles/compute.networkAdmin)
- Compute Shared VPC Admin (roles/compute.xpnAdmin)
## Test your environment

Binary file not shown (image removed, 10 KiB)

View File

@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@ -67,8 +67,10 @@ module "orch-project" {
"roles/storage.objectViewer" = [module.load-sa-df-0.iam_email]
}
oslogin = false
policy_boolean = {
"constraints/compute.requireOsLogin" = false
org_policies = {
"constraints/compute.requireOsLogin" = {
enforce = false
}
}
services = concat(var.project_services, [
"artifactregistry.googleapis.com",
@ -82,6 +84,7 @@ module "orch-project" {
"container.googleapis.com",
"containerregistry.googleapis.com",
"dataflow.googleapis.com",
"orgpolicy.googleapis.com",
"pubsub.googleapis.com",
"servicenetworking.googleapis.com",
"storage.googleapis.com",

View File

@ -160,9 +160,10 @@ You can find more details and best practices on using DLP to De-identification a
[Data Catalog](https://cloud.google.com/data-catalog) helps you document your data entries at scale. Data Catalog relies on [tags](https://cloud.google.com/data-catalog/docs/tags-and-tag-templates#tags) and [tag templates](https://cloud.google.com/data-catalog/docs/tags-and-tag-templates#tag-templates) to manage metadata for all data entries in a unified and centralized service. To implement [column-level security](https://cloud.google.com/bigquery/docs/column-level-security-intro) on BigQuery, we suggest using `Tags` and `Tag templates`.
The default configuration will implement 3 tags:
- `3_Confidential`: policy tag for columns that include very sensitive information, such as credit card numbers.
- `2_Private`: policy tag for columns that include sensitive personal identifiable information (PII) information, such as a person's first name.
- `1_Sensitive`: policy tag for columns that include data that cannot be made public, such as the credit limit.
- `3_Confidential`: policy tag for columns that include very sensitive information, such as credit card numbers.
- `2_Private`: policy tag for columns that include sensitive personally identifiable information (PII), such as a person's first name.
- `1_Sensitive`: policy tag for columns that include data that cannot be made public, such as the credit limit.
Anything that is not tagged is available to all users who have access to the data warehouse.
@ -222,7 +223,7 @@ module "data-platform" {
prefix = "myprefix"
}
# tftest modules=42 resources=315
# tftest modules=42 resources=316
```
## Customizations
@ -238,7 +239,7 @@ To do this, you need to remove IAM binding at project-level for the `data-analys
## Demo pipeline
The application layer is out of scope of this script. As a demo purpuse only, several Cloud Composer DAGs are provided. Demos will import data from the `drop off` area to the `Data Warehouse Confidential` dataset suing different features.
The application layer is out of scope of this script. For demo purposes only, several Cloud Composer DAGs are provided. The demos import data from the `drop off` area to the `Data Warehouse Confidential` dataset using different features.
You can find examples in the `[demo](./demo)` folder.
<!-- BEGIN TFDOC -->

View File

@ -35,13 +35,16 @@ module "project" {
"dataflow.googleapis.com",
"ml.googleapis.com",
"notebooks.googleapis.com",
"orgpolicy.googleapis.com",
"servicenetworking.googleapis.com",
"stackdriver.googleapis.com",
"storage.googleapis.com",
"storage-component.googleapis.com"
]
policy_boolean = {
# "constraints/compute.requireOsLogin" = false
org_policies = {
# "constraints/compute.requireOsLogin" = {
# enforce = false
# }
# Example of applying a project wide policy, mainly useful for Composer
}
service_encryption_key_ids = {

View File

@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@ -60,8 +60,7 @@ __Note__: To grant a user a role, take a look at the [Granting and Revoking Acce
Click on the button below, sign in if required and when the prompt appears, click on “confirm”.
[![Open Cloudshell](images/shell_button.png)](https://goo.gle/GoDataPipe)
[![Open Cloudshell](../../../assets/images/cloud-shell-button.png)](https://goo.gle/GoDataPipe)
This will clone the repository to your cloud shell and a screen like this one will appear:
@ -146,13 +145,13 @@ Once this is done, the 3 files necessary to run the Dataflow Job will have been
Run the following command to start the dataflow job:
gcloud --impersonate-service-account=orchestrator@$SERVICE_PROJECT_ID.iam.gserviceaccount.com dataflow jobs run test_batch_01 \
gcloud --impersonate-service-account=orchestrator@$SERVICE_PROJECT_ID.iam.gserviceaccount.com dataflow jobs run test_batch_01 \
--gcs-location gs://dataflow-templates/latest/GCS_Text_to_BigQuery \
--project $SERVICE_PROJECT_ID \
--region europe-west1 \
--disable-public-ips \
--subnetwork https://www.googleapis.com/compute/v1/projects/$SERVICE_PROJECT_ID/regions/europe-west1/subnetworks/subnet \
--staging-location gs://$PREFIX-df-tmp\
--staging-location gs://$PREFIX-df-tmp \
--service-account-email df-loading@$SERVICE_PROJECT_ID.iam.gserviceaccount.com \
--parameters \
javascriptTextTransformFunctionName=transform,\

View File

@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@ -13,25 +13,54 @@
# limitations under the License.
locals {
prefix = var.prefix != "" ? format("%s-", var.prefix) : ""
vpc_project = var.shared_vpc_project_id != null ? var.shared_vpc_project_id : module.project.project_id
network = module.vpc.self_link
subnetwork = var.project_create != null ? module.vpc.subnet_self_links[format("%s/%s", var.region, var.subnetwork)] : data.google_compute_subnetwork.subnetwork[0].self_link
node_base = format("%s%s", local.prefix, var.node_name)
node_prefix = length(local.node_base) > 12 ? substr(local.node_base, 0, 12) : local.node_base
node_netbios_names = [for idx in range(1, 3) : format("%s-%02d", local.node_prefix, idx)]
witness_name = format("%s%s", local.prefix, var.witness_name)
witness_netbios_name = length(local.witness_name) > 15 ? substr(local.witness_name, 0, 15) : local.witness_name
zones = var.project_create == null ? data.google_compute_zones.zones[0].names : formatlist("${var.region}-%s", ["a", "b", "c"])
node_zones = merge({ for idx, node_name in local.node_netbios_names : node_name => local.zones[idx] },
{ (local.witness_netbios_name) = local.zones[length(local.zones) - 1] })
cluster_full_name = format("%s%s", local.prefix, var.cluster_name)
cluster_netbios_name = length(local.cluster_full_name) > 15 ? substr(local.cluster_full_name, 0, 15) : local.cluster_full_name
ad_user_password_secret = format("%s%s-password", local.prefix, var.cluster_name)
ad_user_password_secret = "${local.cluster_full_name}-password"
cluster_full_name = "${local.prefix}${var.cluster_name}"
cluster_netbios_name = (
length(local.cluster_full_name) > 15
? substr(local.cluster_full_name, 0, 15)
: local.cluster_full_name
)
network = module.vpc.self_link
node_base = "${local.prefix}${var.node_name}"
node_prefix = (
length(local.node_base) > 12
? substr(local.node_base, 0, 12)
: local.node_base
)
node_netbios_names = [
for idx in range(1, 3) : format("%s-%02d", local.node_prefix, idx)
]
node_zones = merge(
{
for idx, node_name in local.node_netbios_names :
node_name => local.zones[idx]
},
{
(local.witness_netbios_name) = local.zones[length(local.zones) - 1]
}
)
prefix = var.prefix != "" ? "${var.prefix}-" : ""
subnetwork = (
var.project_create != null
? module.vpc.subnet_self_links["${var.region}/${var.subnetwork}"]
: data.google_compute_subnetwork.subnetwork[0].self_link
)
vpc_project = (
var.shared_vpc_project_id != null
? var.shared_vpc_project_id
: module.project.project_id
)
witness_name = "${local.prefix}${var.witness_name}"
witness_netbios_name = (
length(local.witness_name) > 15
? substr(local.witness_name, 0, 15)
: local.witness_name
)
zones = (
var.project_create == null
? data.google_compute_zones.zones[0].names
: formatlist("${var.region}-%s", ["a", "b", "c"])
)
}
module "project" {

View File

@ -15,26 +15,36 @@
# tfdoc:file:description Creates the VPC and manages the firewall rules and ILB.
locals {
listeners = { for aog in var.always_on_groups : format("%slb-%s", local.prefix, aog) => {
region = var.region
subnetwork = local.subnetwork
internal_addresses = merge(
local.listeners,
local.node_ips,
{
"${local.prefix}cluster" = {
region = var.region
subnetwork = local.subnetwork
}
(local.witness_netbios_name) = {
region = var.region
subnetwork = local.subnetwork
}
}
)
internal_address_ips = {
for k, v in module.ip-addresses.internal_addresses :
k => v.address
}
node_ips = { for node_name in local.node_netbios_names : node_name => {
region = var.region
subnetwork = local.subnetwork
}
}
internal_addresses = merge({
format("%scluster", local.prefix) = {
listeners = {
for aog in var.always_on_groups : "${local.prefix}lb-${aog}" => {
region = var.region
subnetwork = local.subnetwork
}
(local.witness_netbios_name) = {
}
node_ips = {
for node_name in local.node_netbios_names : node_name => {
region = var.region
subnetwork = local.subnetwork
}
}, local.listeners, local.node_ips)
}
}
data "google_compute_zones" "zones" {
@ -50,7 +60,6 @@ data "google_compute_subnetwork" "subnetwork" {
region = var.region
}
# Create VPC if required
module "vpc" {
source = "../../../modules/net-vpc"
@ -66,7 +75,6 @@ module "vpc" {
vpc_create = var.project_create != null ? true : false
}
# Firewall rules required for WSFC nodes
module "firewall" {
source = "../../../modules/net-vpc-firewall"
project_id = local.vpc_project
@ -76,7 +84,7 @@ module "firewall" {
https_source_ranges = []
ssh_source_ranges = []
custom_rules = {
format("%sallow-all-between-wsfc-nodes", local.prefix) = {
"${local.prefix}allow-all-between-wsfc-nodes" = {
description = "Allow all between WSFC nodes"
direction = "INGRESS"
action = "allow"
@ -91,7 +99,7 @@ module "firewall" {
]
extra_attributes = {}
}
format("%sallow-all-between-wsfc-witness", local.prefix) = {
"${local.prefix}allow-all-between-wsfc-witness" = {
description = "Allow all between WSFC witness nodes"
direction = "INGRESS"
action = "allow"
@ -106,7 +114,7 @@ module "firewall" {
]
extra_attributes = {}
}
format("%sallow-sql-to-wsfc-nodes", local.prefix) = {
"${local.prefix}allow-sql-to-wsfc-nodes" = {
description = "Allow SQL connections to WSFC nodes"
direction = "INGRESS"
action = "allow"
@ -119,7 +127,7 @@ module "firewall" {
]
extra_attributes = {}
}
format("%sallow-health-check-to-wsfc-nodes", local.prefix) = {
"${local.prefix}allow-health-check-to-wsfc-nodes" = {
description = "Allow health checks to WSFC nodes"
direction = "INGRESS"
action = "allow"
@ -135,39 +143,31 @@ module "firewall" {
}
}
# IP Address reservation for cluster and listener
module "ip-addresses" {
source = "../../../modules/net-address"
project_id = local.vpc_project
source = "../../../modules/net-address"
project_id = local.vpc_project
internal_addresses = local.internal_addresses
}
# L4 Internal Load Balancer for SQL Listener
module "listener-ilb" {
source = "../../../modules/net-ilb"
for_each = toset(var.always_on_groups)
project_id = var.project_id
region = var.region
name = format("%s-%s-ilb", var.prefix, each.value)
service_label = format("%s-%s-ilb", var.prefix, each.value)
address = module.ip-addresses.internal_addresses[format("%slb-%s", local.prefix, each.value)].address
network = local.network
subnetwork = local.subnetwork
source = "../../../modules/net-ilb"
for_each = toset(var.always_on_groups)
project_id = var.project_id
region = var.region
name = "${var.prefix}-${each.value}-ilb"
service_label = "${var.prefix}-${each.value}-ilb"
address = local.internal_address_ips["${local.prefix}lb-${each.value}"]
vpc_config = {
network = local.network
subnetwork = local.subnetwork
}
backends = [for k, node in module.nodes : {
failover = false
group = node.group.self_link
balancing_mode = "CONNECTION"
group = node.group.self_link
}]
health_check_config = {
type = "tcp",
check = { port = var.health_check_port },
config = var.health_check_config,
logging = true
enable_logging = true
tcp = {
port = var.health_check_port
}
}
}

View File

@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@ -68,13 +68,13 @@ module "projects" {
iam = try(each.value.iam, {})
kms_service_agents = try(each.value.kms, {})
labels = try(each.value.labels, {})
org_policies = try(each.value.org_policies, null)
org_policies = try(each.value.org_policies, {})
service_accounts = try(each.value.service_accounts, {})
services = try(each.value.services, [])
service_identities_iam = try(each.value.service_identities_iam, {})
vpc = try(each.value.vpc, null)
}
# tftest modules=7 resources=27
# tftest modules=7 resources=29
```
### Projects configuration
@ -153,16 +153,16 @@ labels:
environment: prod
# [opt] Org policy overrides defined at project level
org_policies:
policy_boolean:
constraints/compute.disableGuestAttributesAccess: true
policy_list:
constraints/compute.trustedImageProjects:
inherit_from_parent: null
status: true
suggested_value: null
org_policies:
constraints/compute.disableGuestAttributesAccess:
enforce: true
constraints/compute.trustedImageProjects:
allow:
values:
- projects/fast-prod-iac-core-0
- projects/fast-dev-iac-core-0
constraints/compute.vmExternalIpAccess:
deny:
all: true
# [opt] Service accounts to create for the project and their roles on the project
# in name => [roles] format
@ -221,23 +221,28 @@ vpc:
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [billing_account_id](variables.tf#L17) | Billing account id. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L119) | Project id. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L157) | Project id. | <code>string</code> | ✓ | |
| [billing_alert](variables.tf#L22) | Billing alert configuration. | <code title="object&#40;&#123;&#10; amount &#61; number&#10; thresholds &#61; object&#40;&#123;&#10; current &#61; list&#40;number&#41;&#10; forecasted &#61; list&#40;number&#41;&#10; &#125;&#41;&#10; credit_treatment &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [defaults](variables.tf#L35) | Project factory default values. | <code title="object&#40;&#123;&#10; billing_account_id &#61; string&#10; billing_alert &#61; object&#40;&#123;&#10; amount &#61; number&#10; thresholds &#61; object&#40;&#123;&#10; current &#61; list&#40;number&#41;&#10; forecasted &#61; list&#40;number&#41;&#10; &#125;&#41;&#10; credit_treatment &#61; string&#10; &#125;&#41;&#10; environment_dns_zone &#61; string&#10; essential_contacts &#61; list&#40;string&#41;&#10; labels &#61; map&#40;string&#41;&#10; notification_channels &#61; list&#40;string&#41;&#10; shared_vpc_self_link &#61; string&#10; vpc_host_project &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [dns_zones](variables.tf#L57) | DNS private zones to create as child of var.defaults.environment_dns_zone. | <code>list&#40;string&#41;</code> | | <code>&#91;&#93;</code> |
| [essential_contacts](variables.tf#L63) | Email contacts to be used for billing and GCP notifications. | <code>list&#40;string&#41;</code> | | <code>&#91;&#93;</code> |
| [folder_id](variables.tf#L69) | Folder ID for the folder where the project will be created. | <code>string</code> | | <code>null</code> |
| [group_iam](variables.tf#L75) | Custom IAM settings in group => [role] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [iam](variables.tf#L81) | Custom IAM settings in role => [principal] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [kms_service_agents](variables.tf#L87) | KMS IAM configuration in service => [key] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [labels](variables.tf#L93) | Labels to be assigned at project level. | <code>map&#40;string&#41;</code> | | <code>&#123;&#125;</code> |
| [org_policies](variables.tf#L99) | Org-policy overrides at project level. | <code title="object&#40;&#123;&#10; policy_boolean &#61; map&#40;bool&#41;&#10; policy_list &#61; map&#40;object&#40;&#123;&#10; inherit_from_parent &#61; bool&#10; suggested_value &#61; string&#10; status &#61; bool&#10; values &#61; list&#40;string&#41;&#10; &#125;&#41;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [prefix](variables.tf#L113) | Prefix used for the project id. | <code>string</code> | | <code>null</code> |
| [service_accounts](variables.tf#L124) | Service accounts to be created, and roles assigned them on the project. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_accounts_iam](variables.tf#L130) | IAM bindings on service account resources. Format is KEY => {ROLE => [MEMBERS]} | <code>map&#40;map&#40;list&#40;string&#41;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_identities_iam](variables.tf#L144) | Custom IAM settings for service identities in service => [role] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [services](variables.tf#L137) | Services to be enabled for the project. | <code>list&#40;string&#41;</code> | | <code>&#91;&#93;</code> |
| [vpc](variables.tf#L151) | VPC configuration for the project. | <code title="object&#40;&#123;&#10; host_project &#61; string&#10; gke_setup &#61; object&#40;&#123;&#10; enable_security_admin &#61; bool&#10; enable_host_service_agent &#61; bool&#10; &#125;&#41;&#10; subnets_iam &#61; map&#40;list&#40;string&#41;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [group_iam_additive](variables.tf#L81) | Custom additive IAM settings in group => [role] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [iam](variables.tf#L87) | Custom IAM settings in role => [principal] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [iam_additive](variables.tf#L93) | Custom additive IAM settings in role => [principal] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [kms_service_agents](variables.tf#L99) | KMS IAM configuration in service => [key] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [labels](variables.tf#L105) | Labels to be assigned at project level. | <code>map&#40;string&#41;</code> | | <code>&#123;&#125;</code> |
| [org_policies](variables.tf#L111) | Org-policy overrides at project level. | <code title="map&#40;object&#40;&#123;&#10; inherit_from_parent &#61; optional&#40;bool&#41; &#35; for list policies only.&#10; reset &#61; optional&#40;bool&#41;&#10; allow &#61; optional&#40;object&#40;&#123;&#10; all &#61; optional&#40;bool&#41;&#10; values &#61; optional&#40;list&#40;string&#41;&#41;&#10; &#125;&#41;&#41;&#10; deny &#61; optional&#40;object&#40;&#123;&#10; all &#61; optional&#40;bool&#41;&#10; values &#61; optional&#40;list&#40;string&#41;&#41;&#10; &#125;&#41;&#41;&#10; enforce &#61; optional&#40;bool, true&#41; &#35; for boolean policies only.&#10; rules &#61; optional&#40;list&#40;object&#40;&#123;&#10; allow &#61; optional&#40;object&#40;&#123;&#10; all &#61; optional&#40;bool&#41;&#10; values &#61; optional&#40;list&#40;string&#41;&#41;&#10; &#125;&#41;&#41;&#10; deny &#61; optional&#40;object&#40;&#123;&#10; all &#61; optional&#40;bool&#41;&#10; values &#61; optional&#40;list&#40;string&#41;&#41;&#10; &#125;&#41;&#41;&#10; enforce &#61; optional&#40;bool, true&#41; &#35; for boolean policies only.&#10; condition &#61; object&#40;&#123;&#10; description &#61; optional&#40;string&#41;&#10; expression &#61; optional&#40;string&#41;&#10; location &#61; optional&#40;string&#41;&#10; title &#61; optional&#40;string&#41;&#10; &#125;&#41;&#10; &#125;&#41;&#41;, &#91;&#93;&#41;&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [prefix](variables.tf#L151) | Prefix used for the project id. | <code>string</code> | | <code>null</code> |
| [service_accounts](variables.tf#L162) | Service accounts to be created, and roles assigned them on the project. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_accounts_additive](variables.tf#L168) | Service accounts to be created, and roles assigned them on the project additively. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_accounts_iam](variables.tf#L174) | IAM bindings on service account resources. Format is KEY => {ROLE => [MEMBERS]} | <code>map&#40;map&#40;list&#40;string&#41;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_accounts_iam_additive](variables.tf#L181) | IAM additive bindings on service account resources. Format is KEY => {ROLE => [MEMBERS]} | <code>map&#40;map&#40;list&#40;string&#41;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_identities_iam](variables.tf#L195) | Custom IAM settings for service identities in service => [role] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_identities_iam_additive](variables.tf#L202) | Custom additive IAM settings for service identities in service => [role] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [services](variables.tf#L188) | Services to be enabled for the project. | <code>list&#40;string&#41;</code> | | <code>&#91;&#93;</code> |
| [vpc](variables.tf#L209) | VPC configuration for the project. | <code title="object&#40;&#123;&#10; host_project &#61; string&#10; gke_setup &#61; object&#40;&#123;&#10; enable_security_admin &#61; bool&#10; enable_host_service_agent &#61; bool&#10; &#125;&#41;&#10; subnets_iam &#61; map&#40;list&#40;string&#41;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
## Outputs

View File

@ -21,7 +21,14 @@ locals {
"group:${k}" if try(index(v, r), null) != null
]
}
_group_iam_bindings = distinct(flatten(values(var.group_iam)))
_group_iam_additive = {
for r in local._group_iam_additive_bindings : r => [
for k, v in var.group_iam_additive :
"group:${k}" if try(index(v, r), null) != null
]
}
_group_iam_bindings = distinct(flatten(values(var.group_iam)))
_group_iam_additive_bindings = distinct(flatten(values(var.group_iam_additive)))
_project_id = (
var.prefix == null || var.prefix == ""
? var.project_id
@ -37,9 +44,20 @@ locals {
_service_accounts_iam_bindings = distinct(flatten(
values(var.service_accounts)
))
_service_accounts_iam_additive = {
for r in local._service_accounts_iam_additive_bindings : r => [
for k, v in var.service_accounts_additive :
module.service-accounts[k].iam_email
if try(index(v, r), null) != null
]
}
_service_accounts_iam_additive_bindings = distinct(flatten(
values(var.service_accounts_additive)
))
_services = concat([
"billingbudgets.googleapis.com",
"essentialcontacts.googleapis.com"
"essentialcontacts.googleapis.com",
"orgpolicy.googleapis.com",
],
length(var.dns_zones) > 0 ? ["dns.googleapis.com"] : [],
try(var.vpc.gke_setup, null) != null ? ["container.googleapis.com"] : [],
@ -53,6 +71,14 @@ locals {
if contains(roles, role)
]
}
_service_identities_roles_additive = distinct(flatten(values(var.service_identities_iam_additive)))
_service_identities_iam_additive = {
for role in local._service_identities_roles_additive : role => [
for service, roles in var.service_identities_iam_additive :
"serviceAccount:${module.project.service_accounts.robots[service]}"
if contains(roles, role)
]
}
_vpc_subnet_bindings = (
local.vpc.subnets_iam == null || local.vpc.host_project == null
? []
@ -91,6 +117,20 @@ locals {
try(local._service_identities_iam[role], []),
)
}
iam_additive = {
for role in distinct(concat(
keys(var.iam_additive),
keys(local._group_iam_additive),
keys(local._service_accounts_iam_additive),
keys(local._service_identities_iam_additive),
)) :
role => concat(
try(var.iam_additive[role], []),
try(local._group_iam_additive[role], []),
try(local._service_accounts_iam_additive[role], []),
try(local._service_identities_iam_additive[role], []),
)
}
labels = merge(
coalesce(var.labels, {}), coalesce(try(var.defaults.labels, {}), {})
)
@ -147,10 +187,10 @@ module "project" {
prefix = var.prefix
contacts = { for c in local.essential_contacts : c => ["ALL"] }
iam = local.iam
iam_additive = local.iam_additive
labels = local.labels
org_policies = try(var.org_policies, {})
parent = var.folder_id
policy_boolean = try(var.org_policies.policy_boolean, {})
policy_list = try(var.org_policies.policy_list, {})
service_encryption_key_ids = var.kms_service_agents
services = local.services
shared_vpc_service_config = var.vpc == null ? null : {

View File

@ -48,15 +48,15 @@ labels:
# [opt] Org policy overrides defined at project level
org_policies:
policy_boolean:
constraints/compute.disableGuestAttributesAccess: true
policy_list:
constraints/compute.trustedImageProjects:
inherit_from_parent: null
status: true
suggested_value: null
constraints/compute.disableGuestAttributesAccess:
enforce: true
constraints/compute.trustedImageProjects:
allow:
values:
- projects/fast-dev-iac-core-0
constraints/compute.vmExternalIpAccess:
deny:
all: true
# [opt] Service accounts to create for the project and their roles on the project
# in name => [roles] format

View File

@ -78,12 +78,24 @@ variable "group_iam" {
default = {}
}
variable "group_iam_additive" {
description = "Custom additive IAM settings in group => [role] format."
type = map(list(string))
default = {}
}
variable "iam" {
description = "Custom IAM settings in role => [principal] format."
type = map(list(string))
default = {}
}
variable "iam_additive" {
description = "Custom additive IAM settings in role => [principal] format."
type = map(list(string))
default = {}
}
variable "kms_service_agents" {
description = "KMS IAM configuration in as service => [key]."
type = map(list(string))
@ -98,16 +110,42 @@ variable "labels" {
variable "org_policies" {
description = "Org-policy overrides at project level."
type = object({
policy_boolean = map(bool)
policy_list = map(object({
inherit_from_parent = bool
suggested_value = string
status = bool
values = list(string)
type = map(object({
inherit_from_parent = optional(bool) # for list policies only.
reset = optional(bool)
# default (unconditional) values
allow = optional(object({
all = optional(bool)
values = optional(list(string))
}))
})
default = null
deny = optional(object({
all = optional(bool)
values = optional(list(string))
}))
enforce = optional(bool, true) # for boolean policies only.
# conditional values
rules = optional(list(object({
allow = optional(object({
all = optional(bool)
values = optional(list(string))
}))
deny = optional(object({
all = optional(bool)
values = optional(list(string))
}))
enforce = optional(bool, true) # for boolean policies only.
condition = object({
description = optional(string)
expression = optional(string)
location = optional(string)
title = optional(string)
})
})), [])
}))
default = {}
nullable = false
}
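# Illustrative usage only: the new map-based schema expressed in HCL, mirroring
# the YAML override shown in the README (constraint values are examples):
#
# org_policies = {
#   "constraints/compute.disableGuestAttributesAccess" = {
#     enforce = true
#   }
#   "constraints/compute.trustedImageProjects" = {
#     allow = { values = ["projects/fast-dev-iac-core-0"] }
#   }
#   "constraints/compute.vmExternalIpAccess" = {
#     deny = { all = true }
#   }
# }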
variable "prefix" {
@ -127,6 +165,12 @@ variable "service_accounts" {
default = {}
}
variable "service_accounts_additive" {
description = "Service accounts to be created, and roles assigned them on the project additively."
type = map(list(string))
default = {}
}
variable "service_accounts_iam" {
description = "IAM bindings on service account resources. Format is KEY => {ROLE => [MEMBERS]}"
type = map(map(list(string)))
@ -134,6 +178,13 @@ variable "service_accounts_iam" {
nullable = false
}
variable "service_accounts_iam_additive" {
description = "IAM additive bindings on service account resources. Format is KEY => {ROLE => [MEMBERS]}"
type = map(map(list(string)))
default = {}
nullable = false
}
variable "services" {
description = "Services to be enabled for the project."
type = list(string)
@ -148,6 +199,13 @@ variable "service_identities_iam" {
nullable = false
}
variable "service_identities_iam_additive" {
description = "Custom additive IAM settings for service identities in service => [role] format."
type = map(list(string))
default = {}
nullable = false
}
variable "vpc" {
description = "VPC configuration for the project."
type = object({
@ -160,6 +218,3 @@ variable "vpc" {
})
default = null
}

View File

@ -6,6 +6,18 @@ They are meant to be used as minimal but complete starting points to create actu
## Blueprints
### Binary Authorization Pipeline
<a href="../gke/binauthz/" title="Binary Authorization Pipeline"><img src="../gke/binauthz/diagram.png" align="left" width="280px"></a> This [blueprint](../gke/binauthz/) shows how to create a CI and a CD pipeline in Cloud Build for the deployment of an application to a private GKE cluster with unrestricted access to a public endpoint. The blueprint enables a Binary Authorization policy in the project so only images that have been attested can be deployed to the cluster. The attestations are created using a cryptographic key pair that has been provisioned in KMS.
<br clear="left">
### Multi-cluster mesh on GKE (fleet API)
<a href="../gke/multi-cluster-mesh-gke-fleet-api/" title="Binary Authorization Pipeline"><img src="../gke/multi-cluster-mesh-gke-fleet-api/diagram.png" align="left" width="280px"></a> This [blueprint](../gke/multi-cluster-mesh-gke-fleet-api/) shows how to create a multi-cluster mesh for two private clusters on GKE. Anthos Service Mesh with automatic control plane management is set up for clusters using the Fleet API. This can only be done if the clusters are in a single project and in the same VPC. In this particular case both clusters having being deployed to different subnets in a shared VPC.
<br clear="left">
### Multitenant GKE fleet
<a href="./multitenant-fleet/" title="GKE multitenant fleet"><img src="./multitenant-fleet/diagram.png" align="left" width="280px"></a> This [blueprint](./multitenant-fleet/) allows simple centralized management of similar sets of GKE clusters and their nodepools in a single project, and optional fleet management via GKE Hub templated configurations.
@ -16,14 +28,5 @@ They are meant to be used as minimal but complete starting points to create actu
<a href="../networking/shared-vpc-gke/" title="Shared VPC with GKE"><img src="../networking/shared-vpc-gke/diagram.png" align="left" width="280px"></a> This [blueprint](../networking/shared-vpc-gke/) shows how to configure a Shared VPC, including the specific IAM configurations needed for GKE, and to give different level of access to the VPC subnets to different identities.
It is meant to be used as a starting point for most Shared VPC configurations, and to be integrated to the above blueprints where Shared VPC is needed in more complex network topologies.
<br clear="left">
### Binary Authorization Pipeline
<a href="../gke/binauthz/" title="Binary Authorization Pipeline"><img src="../gke/binauthz/diagram.png" align="left" width="280px"></a> This [blueprint](../gke/binauthz/) shows how to create a CI and a CD pipeline in Cloud Build for the deployment of an application to a private GKE cluster with unrestricted access to a public endpoint. The blueprint enables a Binary Authorization policy in the project so only images that have been attested can be deployed to the cluster. The attestations are created using a cryptographic key pair that has been provisioned in KMS.
<br clear="left">
### Multi-cluster mesh on GKE (fleet API)
<a href="../gke/multi-cluster-mesh-gke-fleet-api/" title="Binary Authorization Pipeline"><img src="../gke/multi-cluster-mesh-gke-fleet-api/diagram.png" align="left" width="280px"></a> This [blueprint](../gke/multi-cluster-mesh-gke-fleet-api/) shows how to create a multi-cluster mesh for two private clusters on GKE. Anthos Service Mesh with automatic control plane management is set up for clusters using the Fleet API. This can only be done if the clusters are in a single project and in the same VPC. In this particular case both clusters having being deployed to different subnets in a shared VPC.
<br clear="left">

View File

@ -99,13 +99,15 @@ module "cluster" {
}
module "cluster_nodepool" {
source = "../../../modules/gke-nodepool"
project_id = module.project.project_id
cluster_name = module.cluster.name
location = var.zone
name = "nodepool"
service_account = {}
node_count = { initial = 3 }
source = "../../../modules/gke-nodepool"
project_id = module.project.project_id
cluster_name = module.cluster.name
location = var.zone
name = "nodepool"
service_account = {
create = true
}
node_count = { initial = 3 }
}
module "kms" {

View File

@ -44,15 +44,17 @@ module "clusters" {
}
module "cluster_nodepools" {
for_each = var.clusters_config
source = "../../../modules/gke-nodepool"
project_id = module.fleet_project.project_id
cluster_name = module.clusters[each.key].name
location = var.region
name = "nodepool-${each.key}"
node_count = { initial = 1 }
service_account = {}
tags = ["${each.key}-node"]
for_each = var.clusters_config
source = "../../../modules/gke-nodepool"
project_id = module.fleet_project.project_id
cluster_name = module.clusters[each.key].name
location = var.region
name = "nodepool-${each.key}"
node_count = { initial = 1 }
service_account = {
create = true
}
tags = ["${each.key}-node"]
}
module "hub" {

View File

@ -246,20 +246,20 @@ module "gke" {
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [billing_account_id](variables.tf#L17) | Billing account id. | <code>string</code> | ✓ | |
| [folder_id](variables.tf#L129) | Folder used for the GKE project in folders/nnnnnnnnnnn format. | <code>string</code> | ✓ | |
| [prefix](variables.tf#L176) | Prefix used for resources that need unique names. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L181) | ID of the project that will contain all the clusters. | <code>string</code> | ✓ | |
| [vpc_config](variables.tf#L193) | Shared VPC project and VPC details. | <code title="object&#40;&#123;&#10; host_project_id &#61; string&#10; vpc_self_link &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | ✓ | |
| [clusters](variables.tf#L22) | Clusters configuration. Refer to the gke-cluster module for type details. | <code title="map&#40;object&#40;&#123;&#10; cluster_autoscaling &#61; optional&#40;any&#41;&#10; description &#61; optional&#40;string&#41;&#10; enable_addons &#61; optional&#40;any, &#123;&#10; horizontal_pod_autoscaling &#61; true, http_load_balancing &#61; true&#10; &#125;&#41;&#10; enable_features &#61; optional&#40;any, &#123;&#10; workload_identity &#61; true&#10; &#125;&#41;&#10; issue_client_certificate &#61; optional&#40;bool, false&#41;&#10; labels &#61; optional&#40;map&#40;string&#41;&#41;&#10; location &#61; string&#10; logging_config &#61; optional&#40;list&#40;string&#41;, &#91;&#34;SYSTEM_COMPONENTS&#34;&#93;&#41;&#10; maintenance_config &#61; optional&#40;any, &#123;&#10; daily_window_start_time &#61; &#34;03:00&#34;&#10; recurring_window &#61; null&#10; maintenance_exclusion &#61; &#91;&#93;&#10; &#125;&#41;&#10; max_pods_per_node &#61; optional&#40;number, 110&#41;&#10; min_master_version &#61; optional&#40;string&#41;&#10; monitoring_config &#61; optional&#40;list&#40;string&#41;, &#91;&#34;SYSTEM_COMPONENTS&#34;&#93;&#41;&#10; node_locations &#61; optional&#40;list&#40;string&#41;&#41;&#10; private_cluster_config &#61; optional&#40;any&#41;&#10; release_channel &#61; optional&#40;string&#41;&#10; vpc_config &#61; object&#40;&#123;&#10; subnetwork &#61; string&#10; network &#61; optional&#40;string&#41;&#10; secondary_range_blocks &#61; optional&#40;object&#40;&#123;&#10; pods &#61; string&#10; services &#61; string&#10; &#125;&#41;&#41;&#10; secondary_range_names &#61; optional&#40;object&#40;&#123;&#10; pods &#61; string&#10; services &#61; string&#10; &#125;&#41;, &#123; pods &#61; &#34;pods&#34;, services &#61; &#34;services&#34; &#125;&#41;&#10; master_authorized_ranges &#61; optional&#40;map&#40;string&#41;&#41;&#10; master_ipv4_cidr_block &#61; optional&#40;string&#41;&#10; &#125;&#41;&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [fleet_configmanagement_clusters](variables.tf#L67) | Config management features enabled on specific sets of member clusters, in config name => [cluster name] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [fleet_configmanagement_templates](variables.tf#L74) | Sets of config management configurations that can be applied to member clusters, in config name => {options} format. | <code title="map&#40;object&#40;&#123;&#10; binauthz &#61; bool&#10; config_sync &#61; object&#40;&#123;&#10; git &#61; object&#40;&#123;&#10; gcp_service_account_email &#61; string&#10; https_proxy &#61; string&#10; policy_dir &#61; string&#10; secret_type &#61; string&#10; sync_branch &#61; string&#10; sync_repo &#61; string&#10; sync_rev &#61; string&#10; sync_wait_secs &#61; number&#10; &#125;&#41;&#10; prevent_drift &#61; string&#10; source_format &#61; string&#10; &#125;&#41;&#10; hierarchy_controller &#61; object&#40;&#123;&#10; enable_hierarchical_resource_quota &#61; bool&#10; enable_pod_tree_labels &#61; bool&#10; &#125;&#41;&#10; policy_controller &#61; object&#40;&#123;&#10; audit_interval_seconds &#61; number&#10; exemptable_namespaces &#61; list&#40;string&#41;&#10; log_denies_enabled &#61; bool&#10; referential_rules_enabled &#61; bool&#10; template_library_installed &#61; bool&#10; &#125;&#41;&#10; version &#61; string&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [fleet_features](variables.tf#L109) | Enable and configure fleet features. Set to null to disable GKE Hub if fleet workload identity is not used. | <code title="object&#40;&#123;&#10; appdevexperience &#61; bool&#10; configmanagement &#61; bool&#10; identityservice &#61; bool&#10; multiclusteringress &#61; string&#10; multiclusterservicediscovery &#61; bool&#10; servicemesh &#61; bool&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [fleet_workload_identity](variables.tf#L122) | Use Fleet Workload Identity for clusters. Enables GKE Hub if set to true. | <code>bool</code> | | <code>false</code> |
| [group_iam](variables.tf#L134) | Project-level IAM bindings for groups. Use group emails as keys, list of roles as values. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [iam](variables.tf#L141) | Project-level authoritative IAM bindings for users and service accounts in {ROLE => [MEMBERS]} format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [labels](variables.tf#L148) | Project-level labels. | <code>map&#40;string&#41;</code> | | <code>&#123;&#125;</code> |
| [nodepools](variables.tf#L154) | Nodepools configuration. Refer to the gke-nodepool module for type details. | <code title="map&#40;map&#40;object&#40;&#123;&#10; gke_version &#61; optional&#40;string&#41;&#10; labels &#61; optional&#40;map&#40;string&#41;, &#123;&#125;&#41;&#10; max_pods_per_node &#61; optional&#40;number&#41;&#10; name &#61; optional&#40;string&#41;&#10; node_config &#61; optional&#40;any, &#123; disk_type &#61; &#34;pd-balanced&#34; &#125;&#41;&#10; node_count &#61; optional&#40;map&#40;number&#41;, &#123; initial &#61; 1 &#125;&#41;&#10; node_locations &#61; optional&#40;list&#40;string&#41;&#41;&#10; nodepool_config &#61; optional&#40;any&#41;&#10; pod_range &#61; optional&#40;any&#41;&#10; reservation_affinity &#61; optional&#40;any&#41;&#10; service_account &#61; optional&#40;any&#41;&#10; sole_tenant_nodegroup &#61; optional&#40;string&#41;&#10; tags &#61; optional&#40;list&#40;string&#41;&#41;&#10; taints &#61; optional&#40;list&#40;any&#41;&#41;&#10;&#125;&#41;&#41;&#41;">map&#40;map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [project_services](variables.tf#L186) | Additional project services to enable. | <code>list&#40;string&#41;</code> | | <code>&#91;&#93;</code> |
| [folder_id](variables.tf#L132) | Folder used for the GKE project in folders/nnnnnnnnnnn format. | <code>string</code> | ✓ | |
| [prefix](variables.tf#L179) | Prefix used for resources that need unique names. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L184) | ID of the project that will contain all the clusters. | <code>string</code> | ✓ | |
| [vpc_config](variables.tf#L196) | Shared VPC project and VPC details. | <code title="object&#40;&#123;&#10; host_project_id &#61; string&#10; vpc_self_link &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | ✓ | |
| [clusters](variables.tf#L22) | Clusters configuration. Refer to the gke-cluster module for type details. | <code title="map&#40;object&#40;&#123;&#10; cluster_autoscaling &#61; optional&#40;any&#41;&#10; description &#61; optional&#40;string&#41;&#10; enable_addons &#61; optional&#40;any, &#123;&#10; horizontal_pod_autoscaling &#61; true, http_load_balancing &#61; true&#10; &#125;&#41;&#10; enable_features &#61; optional&#40;any, &#123;&#10; workload_identity &#61; true&#10; &#125;&#41;&#10; issue_client_certificate &#61; optional&#40;bool, false&#41;&#10; labels &#61; optional&#40;map&#40;string&#41;&#41;&#10; location &#61; string&#10; logging_config &#61; optional&#40;list&#40;string&#41;, &#91;&#34;SYSTEM_COMPONENTS&#34;&#93;&#41;&#10; maintenance_config &#61; optional&#40;any, &#123;&#10; daily_window_start_time &#61; &#34;03:00&#34;&#10; recurring_window &#61; null&#10; maintenance_exclusion &#61; &#91;&#93;&#10; &#125;&#41;&#10; max_pods_per_node &#61; optional&#40;number, 110&#41;&#10; min_master_version &#61; optional&#40;string&#41;&#10; monitoring_config &#61; optional&#40;object&#40;&#123;&#10; enable_components &#61; optional&#40;list&#40;string&#41;, &#91;&#34;SYSTEM_COMPONENTS&#34;&#93;&#41;&#10; managed_prometheus &#61; optional&#40;bool&#41;&#10; &#125;&#41;&#41;&#10; node_locations &#61; optional&#40;list&#40;string&#41;&#41;&#10; private_cluster_config &#61; optional&#40;any&#41;&#10; release_channel &#61; optional&#40;string&#41;&#10; vpc_config &#61; object&#40;&#123;&#10; subnetwork &#61; string&#10; network &#61; optional&#40;string&#41;&#10; secondary_range_blocks &#61; optional&#40;object&#40;&#123;&#10; pods &#61; string&#10; services &#61; string&#10; &#125;&#41;&#41;&#10; secondary_range_names &#61; optional&#40;object&#40;&#123;&#10; pods &#61; string&#10; services &#61; string&#10; &#125;&#41;, &#123; pods &#61; &#34;pods&#34;, services &#61; &#34;services&#34; &#125;&#41;&#10; master_authorized_ranges &#61; optional&#40;map&#40;string&#41;&#41;&#10; master_ipv4_cidr_block &#61; optional&#40;string&#41;&#10; &#125;&#41;&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [fleet_configmanagement_clusters](variables.tf#L70) | Config management features enabled on specific sets of member clusters, in config name => [cluster name] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [fleet_configmanagement_templates](variables.tf#L77) | Sets of config management configurations that can be applied to member clusters, in config name => {options} format. | <code title="map&#40;object&#40;&#123;&#10; binauthz &#61; bool&#10; config_sync &#61; object&#40;&#123;&#10; git &#61; object&#40;&#123;&#10; gcp_service_account_email &#61; string&#10; https_proxy &#61; string&#10; policy_dir &#61; string&#10; secret_type &#61; string&#10; sync_branch &#61; string&#10; sync_repo &#61; string&#10; sync_rev &#61; string&#10; sync_wait_secs &#61; number&#10; &#125;&#41;&#10; prevent_drift &#61; string&#10; source_format &#61; string&#10; &#125;&#41;&#10; hierarchy_controller &#61; object&#40;&#123;&#10; enable_hierarchical_resource_quota &#61; bool&#10; enable_pod_tree_labels &#61; bool&#10; &#125;&#41;&#10; policy_controller &#61; object&#40;&#123;&#10; audit_interval_seconds &#61; number&#10; exemptable_namespaces &#61; list&#40;string&#41;&#10; log_denies_enabled &#61; bool&#10; referential_rules_enabled &#61; bool&#10; template_library_installed &#61; bool&#10; &#125;&#41;&#10; version &#61; string&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [fleet_features](variables.tf#L112) | Enable and configure fleet features. Set to null to disable GKE Hub if fleet workload identity is not used. | <code title="object&#40;&#123;&#10; appdevexperience &#61; bool&#10; configmanagement &#61; bool&#10; identityservice &#61; bool&#10; multiclusteringress &#61; string&#10; multiclusterservicediscovery &#61; bool&#10; servicemesh &#61; bool&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [fleet_workload_identity](variables.tf#L125) | Use Fleet Workload Identity for clusters. Enables GKE Hub if set to true. | <code>bool</code> | | <code>false</code> |
| [group_iam](variables.tf#L137) | Project-level IAM bindings for groups. Use group emails as keys, list of roles as values. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [iam](variables.tf#L144) | Project-level authoritative IAM bindings for users and service accounts in {ROLE => [MEMBERS]} format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [labels](variables.tf#L151) | Project-level labels. | <code>map&#40;string&#41;</code> | | <code>&#123;&#125;</code> |
| [nodepools](variables.tf#L157) | Nodepools configuration. Refer to the gke-nodepool module for type details. | <code title="map&#40;map&#40;object&#40;&#123;&#10; gke_version &#61; optional&#40;string&#41;&#10; labels &#61; optional&#40;map&#40;string&#41;, &#123;&#125;&#41;&#10; max_pods_per_node &#61; optional&#40;number&#41;&#10; name &#61; optional&#40;string&#41;&#10; node_config &#61; optional&#40;any, &#123; disk_type &#61; &#34;pd-balanced&#34; &#125;&#41;&#10; node_count &#61; optional&#40;map&#40;number&#41;, &#123; initial &#61; 1 &#125;&#41;&#10; node_locations &#61; optional&#40;list&#40;string&#41;&#41;&#10; nodepool_config &#61; optional&#40;any&#41;&#10; pod_range &#61; optional&#40;any&#41;&#10; reservation_affinity &#61; optional&#40;any&#41;&#10; service_account &#61; optional&#40;any&#41;&#10; sole_tenant_nodegroup &#61; optional&#40;string&#41;&#10; tags &#61; optional&#40;list&#40;string&#41;&#41;&#10; taints &#61; optional&#40;list&#40;any&#41;&#41;&#10;&#125;&#41;&#41;&#41;">map&#40;map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [project_services](variables.tf#L189) | Additional project services to enable. | <code>list&#40;string&#41;</code> | | <code>&#91;&#93;</code> |
## Outputs

View File

@ -39,9 +39,12 @@ variable "clusters" {
recurring_window = null
maintenance_exclusion = []
})
max_pods_per_node = optional(number, 110)
min_master_version = optional(string)
monitoring_config = optional(list(string), ["SYSTEM_COMPONENTS"])
max_pods_per_node = optional(number, 110)
min_master_version = optional(string)
monitoring_config = optional(object({
enable_components = optional(list(string), ["SYSTEM_COMPONENTS"])
managed_prometheus = optional(bool)
}))
node_locations = optional(list(string))
private_cluster_config = optional(any)
release_channel = optional(string)

View File

@ -6,11 +6,30 @@ They are meant to be used as minimal but complete starting points to create actu
## Blueprints
### Decentralized firewall management
<a href="./decentralized-firewall/" title="Decentralized firewall management"><img src="./decentralized-firewall/diagram.png" align="left" width="280px"></a> This [blueprint](./decentralized-firewall/) shows how a decentralized firewall management can be organized using the [firewall factory](../factories/net-vpc-firewall-yaml/).
<br clear="left">
### Network filtering with Squid
<a href="./filtering-proxy/" title="Network filtering with Squid"><img src="./filtering-proxy/squid.png" align="left" width="280px"></a> This [blueprint](./filtering-proxy/) how to deploy a filtering HTTP proxy to restrict Internet access, in a simplified setup using a VPC with two subnets and a Cloud DNS zone, and an optional MIG for scaling.
<br clear="left">
### HTTP Load Balancer with Cloud Armor
<a href="./glb-and-armor/" title="HTTP Load Balancer with Cloud Armor"><img src="./glb-and-armor/architecture.png" align="left" width="280px"></a> This [blueprint](./glb-and-armor/) contains all necessary Terraform modules to build a multi-regional infrastructure with horizontally scalable managed instance group backends, HTTP load balancing and Googles advanced WAF security tool (Cloud Armor) on top to securely deploy an application at global scale.
<br clear="left">
### Hub and Spoke via Peering
<a href="./hub-and-spoke-peering/" title="Hub and spoke via peering blueprint"><img src="./hub-and-spoke-peering/diagram.png" align="left" width="280px"></a> This [blueprint](./hub-and-spoke-peering/) implements a hub and spoke topology via VPC peering, a common design where a landing zone VPC (hub) is connected to on-premises, and then peered with satellite VPCs (spokes) to further partition the infrastructure.
The sample highlights the lack of transitivity in peering: the absence of connectivity between spokes, and the need to create workarounds for private service access to managed services. One such workaround is shown for private GKE, allowing access from hub and all spokes to GKE masters via a dedicated VPN.
<br clear="left">
### Hub and Spoke via Dynamic VPN
@ -18,6 +37,19 @@ The sample highlights the lack of transitivity in peering: the absence of connec
<a href="./hub-and-spoke-vpn/" title="Hub and spoke via dynamic VPN"><img src="./hub-and-spoke-vpn/diagram.png" align="left" width="280px"></a> This [blueprint](./hub-and-spoke-vpn/) implements a hub and spoke topology via dynamic VPN tunnels, a common design where peering cannot be used due to limitations on the number of spokes or connectivity to managed services.
The blueprint shows how to implement spoke transitivity via BGP advertisements, how to expose hub DNS zones to spokes via DNS peering, and allows easy testing of different VPN and BGP configurations.
<br clear="left">
### ILB as next hop
<a href="./ilb-next-hop/" title="ILB as next hop"><img src="./ilb-next-hop/diagram.png" align="left" width="280px"></a> This [blueprint](./ilb-next-hop/) allows testing [ILB as next hop](https://cloud.google.com/load-balancing/docs/internal/ilb-next-hop-overview) using simple Linux gateway VMS between two VPCs, to emulate virtual appliances. An optional additional ILB can be enabled to test multiple load balancer configurations and hashing.
<br clear="left">
### Nginx-based reverse proxy cluster
<a href="./nginx-reverse-proxy-cluster/" title="Nginx-based reverse proxy cluster"><img src="./nginx-reverse-proxy-cluster/reverse-proxy.png" align="left" width="280px"></a> This [blueprint](./nginx-reverse-proxy-cluster/) how to deploy an autoscaling reverse proxy cluster using Nginx, based on regional Managed Instance Groups. The autoscaling is driven by Nginx current connections metric, sent by Cloud Ops Agent.
<br clear="left">
### DNS and Private Access for On-premises
@ -25,6 +57,19 @@ The blueprint shows how to implement spoke transitivity via BGP advertisements,
<a href="./onprem-google-access-dns/" title="DNS and Private Access for On-premises"><img src="./onprem-google-access-dns/diagram.png" align="left" width="280px"></a> This [blueprint](./onprem-google-access-dns/) uses an emulated on-premises environment running in Docker containers inside a GCE instance, to allow testing specific features like DNS policies, DNS forwarding zones across VPN, and Private Access for On-premises hosts.
The emulated on-premises environment can be used to test access to different services from outside Google Cloud, by implementing a VPN connection and BGP to Google Cloud via Strongswan and Bird.
<br clear="left">
### Calling a private Cloud Function from on-premises
<a href="./private-cloud-function-from-onprem/" title="Private Cloud Function from On-premises"><img src="./private-cloud-function-from-onprem/diagram.png" align="left" width="280px"></a> This [blueprint](./private-cloud-function-from-onprem/) shows how to invoke a [private Google Cloud Function](https://cloud.google.com/functions/docs/networking/network-settings) from the on-prem environment via a [Private Service Connect endpoint](https://cloud.google.com/vpc/docs/private-service-connect#benefits-apis).
<br clear="left">
### Calling on-premise services through PSC and hybrid NEGs
<a href="./psc-hybrid/" title="Hybrid connectivity to on-premise services thrugh PSC"><img src="./psc-hybrid/diagram.png" align="left" width="280px"></a> This [blueprint](./psc-hybrid/) shows how to privately connect to on-premise services (IP + port) from GCP, leveraging [Private Service Connect (PSC)](https://cloud.google.com/vpc/docs/private-service-connect) and [Hybrid Network Endpoint Groups](https://cloud.google.com/load-balancing/docs/negs/hybrid-neg-concepts).
<br clear="left">
### Shared VPC with GKE and per-subnet support
@ -32,24 +77,5 @@ The emulated on-premises environment can be used to test access to different ser
<a href="./shared-vpc-gke/" title="Shared VPC with GKE"><img src="./shared-vpc-gke/diagram.png" align="left" width="280px"></a> This [blueprint](./shared-vpc-gke/) shows how to configure a Shared VPC, including the specific IAM configurations needed for GKE, and to give different level of access to the VPC subnets to different identities.
It is meant to be used as a starting point for most Shared VPC configurations, and to be integrated to the above blueprints where Shared VPC is needed in more complex network topologies.
<br clear="left">
### ILB as next hop
<a href="./ilb-next-hop/" title="ILB as next hop"><img src="./ilb-next-hop/diagram.png" align="left" width="280px"></a> This [blueprint](./ilb-next-hop/) allows testing [ILB as next hop](https://cloud.google.com/load-balancing/docs/internal/ilb-next-hop-overview) using simple Linux gateway VMS between two VPCs, to emulate virtual appliances. An optional additional ILB can be enabled to test multiple load balancer configurations and hashing.
<br clear="left">
### Calling a private Cloud Function from on-premises
<a href="./private-cloud-function-from-onprem/" title="Private Cloud Function from On-premises"><img src="./private-cloud-function-from-onprem/diagram.png" align="left" width="280px"></a> This [blueprint](./private-cloud-function-from-onprem/) shows how to invoke a [private Google Cloud Function](https://cloud.google.com/functions/docs/networking/network-settings) from the on-prem environment via a [Private Service Connect endpoint](https://cloud.google.com/vpc/docs/private-service-connect#benefits-apis).
<br clear="left">
### Calling on-premise services through PSC and hybrid NEGs
<a href="./psc-hybrid/" title="Hybrid connectivity to on-premise services thrugh PSC"><img src="./psc-hybrid/diagram.png" align="left" width="280px"></a> This [blueprint](./psc-hybrid/) shows how to privately connect to on-premise services (IP + port) from GCP, leveraging [Private Service Connect (PSC)](https://cloud.google.com/vpc/docs/private-service-connect) and [Hybrid Network Endpoint Groups](https://cloud.google.com/load-balancing/docs/negs/hybrid-neg-concepts).
<br clear="left">
### Decentralized firewall management
<a href="./decentralized-firewall/" title="Decentralized firewall management"><img src="./decentralized-firewall/diagram.png" align="left" width="280px"></a> This [blueprint](./decentralized-firewall/) shows how a decentralized firewall management can be organized using the [firewall factory](../factories/net-vpc-firewall-yaml/).
<br clear="left">

View File

@ -84,7 +84,7 @@ module "dns-api-prod" {
domain = "googleapis.com."
client_networks = [module.vpc-prod.self_link]
recordsets = {
"CNAME *" = { ttl = 300, records = ["private.googleapis.com."] }
"CNAME *" = { records = ["private.googleapis.com."] }
}
}
@ -96,7 +96,7 @@ module "dns-api-dev" {
domain = "googleapis.com."
client_networks = [module.vpc-dev.self_link]
recordsets = {
"CNAME *" = { ttl = 300, records = ["private.googleapis.com."] }
"CNAME *" = { records = ["private.googleapis.com."] }
}
}

View File

@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@ -165,33 +165,31 @@ module "squid-vm" {
}
module "squid-mig" {
count = var.mig ? 1 : 0
source = "../../../modules/compute-mig"
project_id = module.project-host.project_id
location = "${var.region}-b"
name = "squid-mig"
target_size = 1
autoscaler_config = {
max_replicas = 10
min_replicas = 1
cooldown_period = 30
cpu_utilization_target = 0.65
load_balancing_utilization_target = null
metric = null
count = var.mig ? 1 : 0
source = "../../../modules/compute-mig"
project_id = module.project-host.project_id
location = "${var.region}-b"
name = "squid-mig"
instance_template = module.squid-vm.template.self_link
target_size = 1
auto_healing_policies = {
initial_delay_sec = 60
}
default_version = {
instance_template = module.squid-vm.template.self_link
name = "default"
autoscaler_config = {
max_replicas = 10
min_replicas = 1
cooldown_period = 30
scaling_signals = {
cpu_utilization = {
target = 0.65
}
}
}
health_check_config = {
type = "tcp"
check = { port = 3128 }
config = {}
logging = true
}
auto_healing_policies = {
health_check = module.squid-mig.0.health_check.self_link
initial_delay_sec = 60
enable_logging = true
tcp = {
port = 3128
}
}
}
@ -201,20 +199,20 @@ module "squid-ilb" {
project_id = module.project-host.project_id
region = var.region
name = "squid-ilb"
service_label = "squid-ilb"
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["${var.region}/proxy"]
ports = [3128]
service_label = "squid-ilb"
vpc_config = {
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["${var.region}/proxy"]
}
backends = [{
failover = false
group = module.squid-mig.0.group_manager.instance_group
balancing_mode = "CONNECTION"
group = module.squid-mig.0.group_manager.instance_group
}]
health_check_config = {
type = "tcp"
check = { port = 3128 }
config = {}
logging = true
enable_logging = true
tcp = {
port = 3128
}
}
}
@ -226,13 +224,10 @@ module "folder-apps" {
source = "../../../modules/folder"
parent = var.root_node
name = "apps"
policy_list = {
org_policies = {
# prevent VMs with public IPs in the apps folder
"constraints/compute.vmExternalIpAccess" = {
inherit_from_parent = false
suggested_value = null
status = false
values = []
deny = { all = true }
}
}
}

View File

@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

View File

@ -0,0 +1,140 @@
# HTTP Load Balancer with Cloud Armor
## Introduction
This blueprint contains all necessary Terraform modules to build a multi-regional infrastructure with horizontally scalable managed instance group backends, HTTP load balancing and Google's advanced WAF security tool (Cloud Armor) on top to securely deploy an application at global scale.
This tutorial is general enough to fit a variety of use cases, from hosting a mobile app's backend to deploying proprietary workloads at scale.
## Use cases
Even though there are many ways to implement an architecture, some workloads require high compute power or specific licenses, while the services still need to be secured by a managed service and highly available across multiple regions. An architecture consisting of Managed Instance Groups in multiple regions, available through an HTTP Load Balancer with Cloud Armor enabled, is suitable for such use cases.
This architecture caters to multiple workloads, ranging from those requiring compliance with specific data access restrictions to compute-intensive proprietary applications with specific licensing and OS requirements. Some possible use cases are described below:
* __Proprietary OS workloads__: Some applications require specific operating systems (enterprise-grade Linux distributions, for example) with specific licensing requirements or low-level access to the kernel. In such cases, since the applications cannot be containerised and horizontal scaling is required, multi-region Managed Instance Groups (MIGs) with custom instance images are the ideal implementation.
* __Industry-specific applications__: Other applications may require high compute power alongside a sophisticated layer of networking security. This architecture satisfies both requirements by providing configurable compute power on the instances, backed by the various features Cloud Armor offers, such as traffic restriction and DDoS protection.
* __Workloads requiring GDPR compliance__: Many applications must restrict data access and usage from outside a certain region (mostly to comply with data residency requirements). This architecture caters to such workloads, as Cloud Armor allows you to restrict access to your workloads based on a variety of fine-grained identifiers.
* __Medical queuing systems__: Another example usage for this architecture is applications requiring high compute power, availability and limited memory access, such as a medical queuing system.
* __DDoS Protection and WAF__: Applications and workloads exposed to the internet are at risk of DDoS attacks. While L3/L4 and protocol-based attacks are handled at Google's edge, L7 attacks can still be effective with botnets. A setup of an external Cloud Load Balancer with Cloud Armor and appropriate WAF rules can mitigate such attacks.
* __Geofencing__: If you need to restrict content served by your application due to licensing restrictions (similar to OTT content in the US), geofencing allows you to create a virtual perimeter that stops the service from being accessed outside the allowed region. The architecture of a Cloud Load Balancer with Cloud Armor enables you to implement geofencing around your applications and services.
## Architecture
<p align="center"> <img src="architecture.png" width="700"> </p>
The main components that we will be setting up are listed below (to learn more about these products, click on the hyperlinks); a minimal Terraform sketch of how they fit together follows the list:
* [Cloud Armor](https://cloud.google.com/armor) - Google Cloud Armor is the web-application firewall (WAF) and DDoS mitigation service that helps users defend their web apps and services at Google scale at the edge of Google's network.
* [Cloud Load Balancer](https://cloud.google.com/load-balancing) - When your app usage spikes, it is important to scale, optimize and secure the app. Cloud Load Balancing is a fully distributed solution that balances user traffic to multiple backends to avoid congestion, reduce latency and increase security. Some important features it offers that we use here are:
  * Single global anycast IP and autoscaling - CLB acts as a frontend to all your backend instances across all regions. It provides cross-region load balancing and automatic multi-region failover, and scales to support increases in traffic.
  * Global Forwarding Rule - To route traffic to different regions, global load balancers use global forwarding rules, which bind the global IP address and a single target proxy.
  * Target Proxy - For external HTTP(S) load balancers, proxies route incoming requests to a URL map. This is essentially where incoming connections are terminated and handled.
  * URL Map - URL maps are used to route requests to a backend service based on the rules that you define for the host and path of an incoming URL.
  * Backend Service - A backend service defines how Cloud Load Balancing distributes traffic. Its configuration consists of a set of values such as the protocol used to connect to backends, session settings, health checks and timeouts.
  * Health Check - A health check determines whether the corresponding backends respond to traffic. Health checks connect to backends on a configurable, periodic basis; each connection attempt is called a probe, and Google Cloud records the success or failure of each probe.
* [Firewall Rules](https://cloud.google.com/vpc/docs/firewalls) - Firewall rules let you allow or deny connections to or from your VM instances based on a configuration you specify.
* [Managed Instance Groups (MIG)](https://cloud.google.com/compute/docs/instance-groups) - An instance group is a collection of VM instances that you can manage as a single entity. MIGs allow you to operate apps and workloads on multiple identical VMs, and to leverage features like autoscaling, autohealing and regional (multi-zone) deployments.
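To make the chain described above concrete, here is a minimal, illustrative Terraform sketch of the forwarding rule → target proxy → URL map → backend service → health check wiring. The blueprint itself builds the equivalent chain through the `net-glb` module, so the resource names and the instance group placeholder below are hypothetical:

```hcl
# Illustrative sketch only: the blueprint wires these up via the net-glb module.
resource "google_compute_health_check" "default" {
  name = "hc-http"
  http_health_check {
    port = 80
  }
}

resource "google_compute_backend_service" "default" {
  name          = "http-backend"
  protocol      = "HTTP"
  port_name     = "http"
  health_checks = [google_compute_health_check.default.id]
  backend {
    # Placeholder: self link of a MIG's instance group.
    group = "INSTANCE_GROUP_SELF_LINK"
  }
}

resource "google_compute_url_map" "default" {
  name            = "http-lb"
  default_service = google_compute_backend_service.default.id
}

resource "google_compute_target_http_proxy" "default" {
  name    = "http-proxy"
  url_map = google_compute_url_map.default.id
}

# Binds the global anycast IP and port 80 to the target proxy.
resource "google_compute_global_forwarding_rule" "default" {
  name       = "http-fw-rule"
  target     = google_compute_target_http_proxy.default.id
  port_range = "80"
}
```

The `main.tf` shown later in this diff builds the equivalent chain, with the two MIGs as backends and the optional Cloud Armor policy attached to the backend service.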
## Costs
Pricing estimates - We have created a sample estimate based on usage we see from new startups looking to scale. This estimate gives you an idea of how much this deployment would cost per month at this scale, and you can extend it to the scale you prefer. Here's the [link](https://cloud.google.com/products/calculator/#id=3105bbf2-4ee0-4289-978e-9ab6855d37ed).
## Setup
This solution assumes you already have a project created and set up where you wish to host these resources. If not, and you would like the deployment to create a new project as well, please refer to the [github repository](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/tree/master/blueprints/data-solutions/gcs-to-bq-with-least-privileges) for instructions.
### Prerequisites
* Have an [organization](https://cloud.google.com/resource-manager/docs/creating-managing-organization) set up in Google Cloud.
* Have a [billing account](https://cloud.google.com/billing/docs/how-to/manage-billing-account) set up.
* Have an existing [project](https://cloud.google.com/resource-manager/docs/creating-managing-projects) with [billing enabled](https://cloud.google.com/billing/docs/how-to/modify-project).
### Roles & Permissions
In order to spin up this architecture, you will need to be a user with the “__Project owner__” [IAM](https://cloud.google.com/iam) role on the existing project.
Note: To grant a user a role, take a look at the [Granting and Revoking Access](https://cloud.google.com/iam/docs/granting-changing-revoking-access#grant-single-role) documentation.
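If you prefer to manage this grant with Terraform instead of the console, a minimal sketch follows; the project ID and user email are placeholders, not values from this blueprint:

```hcl
# Hypothetical example: grant the Project Owner role on an existing project.
resource "google_project_iam_member" "owner" {
  project = "my-project-id"        # placeholder
  role    = "roles/owner"
  member  = "user:you@example.com" # placeholder
}
```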
### Spinning up the architecture
#### Step 1: Cloning the repository
Click on the button below, sign in if required, and when the prompt appears, click on “confirm”.
[![Open Cloudshell](../../../assets/images/cloud-shell-button.png)](https://goo.gle/GoCloudArmor)
This will clone the repository to your Cloud Shell, and a screen like this one will appear:
![cloud_shell](cloud_shell.png)
Before we deploy the architecture, you will need the following information:
* The __project ID__.
#### Step 2: Deploying the resources
1. After cloning the repo and going through the prerequisites, head back to the Cloud Shell editor.
2. Make sure you're in the following directory; if not, you can change to it via the `cd` command:

   ```
   cloudshell_open/cloud-foundation-fabric/blueprints/cloud-operations/glb_and_armor
   ```
3. Run the following command to initialize the terraform working directory:

   ```
   terraform init
   ```
4. Copy the following command into a console and replace __[my-project-id]__ with your project's ID, then run it to execute the Terraform script and create all relevant resources for this architecture:

   ```
   terraform apply -var project_id=[my-project-id]
   ```
The resource creation will take a few minutes, but when it's complete you should see an output stating that the command completed successfully, along with a list of the created resources.
__Congratulations__! You have successfully deployed an HTTP Load Balancer with two Managed Instance Group backends and Cloud Armor security.
## Testing your architecture
1. Connect to the siege VM using SSH (from Cloud Console or CLI) and run the following command, which simulates 250 concurrent users for 150 seconds:

   ```
   siege -c 250 -t150s http://$LB_IP
   ```
2. In the Cloud Console, on the Navigation menu, click __Network Services > Load balancing__.
3. Click __Backends__, then click __http-backend__ and navigate to __http-lb__.
4. Click on the __Monitoring__ tab.
5. Monitor the Frontend Location (Total inbound traffic) between North America and the two backends for 2 to 3 minutes. At first, traffic should be directed only to __us-east1-mig__, but as the RPS increases traffic is also directed to __europe-west1-mig__. This demonstrates that by default traffic is forwarded to the closest backend, but if the load is very high it can be distributed across the backends.
6. Now, to test the IP denylisting, rerun Terraform as follows:

   ```
   terraform apply -var project_id=my-project-id -var enforce_security_policy=true
   ```

   This applies a security policy that denylists the IP address of the siege VM.
7. To test this, from the siege VM run the following command and verify that you get a __403 Forbidden__ error code back.

   ```
   curl http://$LB_IP
   ```
## Cleaning up your environment
The easiest way to remove all the deployed resources is to run the following command in Cloud Shell:

```
terraform destroy
```

The above command will delete the associated resources, so there will be no billable charges afterwards.
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [project_id](variables.tf#L26) | Identifier of the project. | <code>string</code> | ✓ | |
| [enforce_security_policy](variables.tf#L31) | Enforce security policy. | <code>bool</code> | | <code>true</code> |
| [prefix](variables.tf#L37) | Prefix used for created resources. | <code>string</code> | | <code>null</code> |
| [project_create](variables.tf#L17) | Parameters for the creation of the new project. | <code title="object&#40;&#123;&#10; billing_account_id &#61; string&#10; parent &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [glb_ip_address](outputs.tf#L18) | Load balancer IP address. | |
| [vm_siege_external_ip](outputs.tf#L23) | Siege VM external IP address. | |
<!-- END TFDOC -->

@@ -0,0 +1,286 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
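# Optional prefix for resource names; empty when var.prefix is unset or empty.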
locals {
prefix = (var.prefix == null || var.prefix == "") ? "" : "${var.prefix}-"
}
module "project" {
source = "../../../modules/project"
billing_account = (var.project_create != null
? var.project_create.billing_account_id
: null
)
parent = (var.project_create != null
? var.project_create.parent
: null
)
prefix = var.project_create == null ? null : var.prefix
name = var.project_id
services = [
"compute.googleapis.com"
]
project_create = var.project_create != null
}
module "vpc" {
source = "../../../modules/net-vpc"
project_id = module.project.project_id
name = "${local.prefix}vpc"
subnets = [
{
ip_cidr_range = "10.0.1.0/24"
name = "subnet-ew1"
region = "europe-west1"
},
{
ip_cidr_range = "10.0.2.0/24"
name = "subnet-ue1"
region = "us-east1"
},
{
ip_cidr_range = "10.0.3.0/24"
name = "subnet-uw1"
region = "us-west1"
}
]
}
module "firewall" {
source = "../../../modules/net-vpc-firewall"
project_id = module.project.project_id
network = module.vpc.name
}
module "nat_ew1" {
source = "../../../modules/net-cloudnat"
project_id = module.project.project_id
region = "europe-west1"
name = "${local.prefix}nat-eu1"
router_network = module.vpc.name
}
module "nat_ue1" {
source = "../../../modules/net-cloudnat"
project_id = module.project.project_id
region = "us-east1"
name = "${local.prefix}nat-ue1"
router_network = module.vpc.name
}
module "instance_template_ew1" {
source = "../../../modules/compute-vm"
project_id = module.project.project_id
zone = "europe-west1-b"
name = "${local.prefix}europe-west1-template"
instance_type = "n1-standard-2"
network_interfaces = [{
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["europe-west1/subnet-ew1"]
}]
boot_disk = {
image = "projects/debian-cloud/global/images/family/debian-11"
}
metadata = {
startup-script-url = "gs://cloud-training/gcpnet/httplb/startup.sh"
}
create_template = true
tags = [
"http-server"
]
}
module "instance_template_ue1" {
source = "../../../modules/compute-vm"
project_id = module.project.project_id
zone = "us-east1-b"
name = "${local.prefix}us-east1-template"
network_interfaces = [{
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["us-east1/subnet-ue1"]
}]
boot_disk = {
image = "projects/debian-cloud/global/images/family/debian-11"
}
metadata = {
startup-script-url = "gs://cloud-training/gcpnet/httplb/startup.sh"
}
create_template = true
tags = [
"http-server"
]
}
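# Load generator VM: the startup script installs the siege HTTP load testing tool.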
module "vm_siege" {
source = "../../../modules/compute-vm"
project_id = module.project.project_id
zone = "us-west1-c"
name = "siege-vm"
instance_type = "n1-standard-2"
network_interfaces = [{
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["us-west1/subnet-uw1"]
nat = true
}]
boot_disk = {
image = "projects/debian-cloud/global/images/family/debian-11"
}
metadata = {
startup-script = <<EOT
#!/bin/bash
apt update -y
apt install -y siege
EOT
}
tags = [
"ssh"
]
}
module "mig_ew1" {
source = "../../../modules/compute-mig"
project_id = module.project.project_id
location = "europe-west1"
name = "${local.prefix}europe-west1-mig"
instance_template = module.instance_template_ew1.template.self_link
autoscaler_config = {
max_replicas = 5
min_replicas = 1
cooldown_period = 45
scaling_signals = {
cpu_utilization = {
target = 0.65
}
}
}
named_ports = {
http = 80
}
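  # Create the NAT first, so instances have outbound connectivity during startup.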
depends_on = [
module.nat_ew1
]
}
module "mig_ue1" {
source = "../../../modules/compute-mig"
project_id = module.project.project_id
location = "us-east1"
name = "${local.prefix}us-east1-mig"
instance_template = module.instance_template_ue1.template.self_link
autoscaler_config = {
max_replicas = 5
min_replicas = 1
cooldown_period = 45
scaling_signals = {
cpu_utilization = {
target = 0.65
}
}
}
named_ports = {
http = 80
}
depends_on = [
module.nat_ue1
]
}
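# Global external HTTP load balancer fronting both MIGs; the optional Cloud Armor
# policy is attached to the backend service via security_policy.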
module "glb" {
source = "../../../modules/net-glb"
name = "${local.prefix}http-lb"
project_id = module.project.project_id
backend_services_config = {
http-backend = {
bucket_config = null
enable_cdn = false
cdn_config = null
group_config = {
backends = [
{
group = module.mig_ew1.group_manager.instance_group
options = null
},
{
group = module.mig_ue1.group_manager.instance_group
options = null
}
],
health_checks = ["hc"]
log_config = {
enable = true
sample_rate = 1
}
options = {
affinity_cookie_ttl_sec = null
custom_request_headers = null
custom_response_headers = null
connection_draining_timeout_sec = null
load_balancing_scheme = null
locality_lb_policy = null
port_name = "http"
security_policy = try(google_compute_security_policy.policy[0].name, null)
session_affinity = null
timeout_sec = null
circuits_breakers = null
consistent_hash = null
iap = null
protocol = "HTTP"
}
}
}
}
health_checks_config = {
hc = {
type = "http"
logging = true
options = null
check = {
port_name = "http"
port_specification = "USE_NAMED_PORT"
}
}
}
}
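# Cloud Armor policy, created only when var.enforce_security_policy is true:
# denies the siege VM's external IP with a 403 and allows all other traffic.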
resource "google_compute_security_policy" "policy" {
count = var.enforce_security_policy ? 1 : 0
name = "${local.prefix}denylist-siege"
project = module.project.project_id
rule {
action = "deny(403)"
priority = "1000"
match {
versioned_expr = "SRC_IPS_V1"
config {
src_ip_ranges = [module.vm_siege.external_ip]
}
}
description = "Deny access to siege VM IP"
}
rule {
action = "allow"
priority = "2147483647"
match {
versioned_expr = "SRC_IPS_V1"
config {
src_ip_ranges = ["*"]
}
}
description = "default rule"
}
}

@@ -14,7 +14,13 @@
* limitations under the License.
*/
output "policies" {
description = "Organization policies."
value = google_org_policy_policy.primary
output "glb_ip_address" {
description = "Load balancer IP address."
value = module.glb.global_forwarding_rule.ip_address
}
output "vm_siege_external_ip" {
description = "Siege VM external IP address."
value = module.vm_siege.external_ip
}

@@ -0,0 +1,41 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
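# Leave null to deploy into an existing project; set to have the blueprint
# create a new project.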
variable "project_create" {
description = "Parameters for the creation of the new project."
type = object({
billing_account_id = string
parent = string
})
default = null
}
variable "project_id" {
description = "Identifier of the project."
type = string
}
variable "enforce_security_policy" {
description = "Enforce security policy."
type = bool
default = true
}
variable "prefix" {
description = "Prefix used for created resources."
type = string
default = null
}

@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

@@ -63,7 +63,7 @@ module "dev-dns-zone" {
domain = "dev.example.com."
client_networks = [module.landing-vpc.self_link]
recordsets = {
"A localhost" = { ttl = 300, records = ["127.0.0.1"] }
"A test-r2" = { ttl = 300, records = [module.dev-r2-vm.internal_ip] }
"A localhost" = { records = ["127.0.0.1"] }
"A test-r2" = { records = [module.dev-r2-vm.internal_ip] }
}
}

@@ -53,7 +53,7 @@ module "landing-dns-zone" {
domain = "example.com."
client_networks = [module.landing-vpc.self_link]
recordsets = {
"A localhost" = { ttl = 300, records = ["127.0.0.1"] }
"A test-r1" = { ttl = 300, records = [module.landing-r1-vm.internal_ip] }
"A localhost" = { records = ["127.0.0.1"] }
"A test-r1" = { records = [module.landing-r1-vm.internal_ip] }
}
}

@@ -63,7 +63,7 @@ module "prod-dns-zone" {
domain = "prd.example.com."
client_networks = [module.landing-vpc.self_link]
recordsets = {
"A localhost" = { ttl = 300, records = ["127.0.0.1"] }
"A test-r1" = { ttl = 300, records = [module.prod-r1-vm.internal_ip] }
"A localhost" = { records = ["127.0.0.1"] }
"A test-r1" = { records = [module.prod-r1-vm.internal_ip] }
}
}

@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

@@ -62,22 +62,22 @@ module "ilb-left" {
project_id = module.project.project_id
region = var.region
name = "${local.prefix}ilb-left"
network = module.vpc-left.self_link
subnetwork = values(module.vpc-left.subnet_self_links)[0]
address = local.addresses.ilb-left
ports = null
backend_config = {
session_affinity = var.ilb_session_affinity
timeout_sec = null
connection_draining_timeout_sec = null
vpc_config = {
network = module.vpc-left.self_link
subnetwork = values(module.vpc-left.subnet_self_links)[0]
}
address = local.addresses.ilb-left
backend_service_config = {
session_affinity = var.ilb_session_affinity
}
backends = [for z, mod in module.gw : {
failover = false
group = mod.group.self_link
balancing_mode = "CONNECTION"
group = mod.group.self_link
}]
health_check_config = {
type = "tcp", check = { port = 22 }, config = {}, logging = true
enable_logging = true
tcp = {
port = 22
}
}
}
@@ -86,21 +86,21 @@ module "ilb-right" {
project_id = module.project.project_id
region = var.region
name = "${local.prefix}ilb-right"
network = module.vpc-right.self_link
subnetwork = values(module.vpc-right.subnet_self_links)[0]
address = local.addresses.ilb-right
ports = null
backend_config = {
session_affinity = var.ilb_session_affinity
timeout_sec = null
connection_draining_timeout_sec = null
vpc_config = {
network = module.vpc-right.self_link
subnetwork = values(module.vpc-right.subnet_self_links)[0]
}
address = local.addresses.ilb-right
backend_service_config = {
session_affinity = var.ilb_session_affinity
}
backends = [for z, mod in module.gw : {
failover = false
group = mod.group.self_link
balancing_mode = "CONNECTION"
group = mod.group.self_link
}]
health_check_config = {
type = "tcp", check = { port = 22 }, config = {}, logging = true
enable_logging = true
tcp = {
port = 22
}
}
}

@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

@@ -1,20 +1,17 @@
# Nginx-based reverse proxy cluster
This blueprint shows how to deploy an autoscaling reverse proxy cluster using Nginx, based on regional
Managed Instance Groups.
This blueprint shows how to deploy an autoscaling reverse proxy cluster using Nginx, based on regional Managed Instance Groups.
![High-level diagram](reverse-proxy.png "High-level diagram")
The autoscaling is driven by Nginx current connections metric, sent by Cloud Ops Agent.
The autoscaling is driven by Nginx current connections metric, sent by Cloud Ops Agent.
The example is for Nginx, but it could be easily adapted to any other reverse proxy software (eg.
Squid, Varnish, etc).
The example is for Nginx, but it could be easily adapted to any other reverse proxy software (eg. Squid, Varnish, etc).
## Ops Agent image
There is a simple [`Dockerfile`](Dockerfile) available for building Ops Agent to be run
inside the ContainerOS instance. Build the container, push it to your Container/Artifact
Repository and set the `ops_agent_image` to point to the image you built.
There is a simple [`Dockerfile`](Dockerfile) available for building Ops Agent to be run inside the ContainerOS instance. Build the container, push it to your Container/Artifact Repository and set the `ops_agent_image` to point to the image you built.
<!-- BEGIN TFDOC -->
## Variables

@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

@@ -169,9 +169,9 @@ module "dns-gcp" {
domain = "gcp.example.org."
client_networks = [module.vpc.self_link]
recordsets = {
"A localhost" = { ttl = 300, records = ["127.0.0.1"] }
"A test-1" = { ttl = 300, records = [module.vm-test1.internal_ip] }
"A test-2" = { ttl = 300, records = [module.vm-test2.internal_ip] }
"A localhost" = { records = ["127.0.0.1"] }
"A test-1" = { records = [module.vm-test1.internal_ip] }
"A test-2" = { records = [module.vm-test2.internal_ip] }
}
}
@@ -183,9 +183,9 @@ module "dns-api" {
domain = "googleapis.com."
client_networks = [module.vpc.self_link]
recordsets = {
"CNAME *" = { ttl = 300, records = ["private.googleapis.com."] }
"A private" = { ttl = 300, records = local.vips.private }
"A restricted" = { ttl = 300, records = local.vips.restricted }
"CNAME *" = { records = ["private.googleapis.com."] }
"A private" = { records = local.vips.private }
"A restricted" = { records = local.vips.restricted }
}
}

@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

@@ -218,7 +218,7 @@ module "private-dns-onprem" {
domain = "${var.region}-${module.project.project_id}.cloudfunctions.net."
client_networks = [module.vpc-onprem.self_link]
recordsets = {
"A " = { ttl = 300, records = [module.addresses.psc_addresses[local.psc_name].address] }
"A " = { records = [module.addresses.psc_addresses[local.psc_name].address] }
}
}

@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

@@ -157,8 +157,8 @@ module "host-dns" {
domain = "example.com."
client_networks = [module.vpc-shared.self_link]
recordsets = {
"A localhost" = { ttl = 300, records = ["127.0.0.1"] }
"A bastion" = { ttl = 300, records = [module.vm-bastion.internal_ip] }
"A localhost" = { records = ["127.0.0.1"] }
"A bastion" = { records = [module.vm-bastion.internal_ip] }
}
}
@@ -219,11 +219,13 @@
}
module "cluster-1-nodepool-1" {
source = "../../../modules/gke-nodepool"
count = var.cluster_create ? 1 : 0
name = "nodepool-1"
project_id = module.project-svc-gke.project_id
location = module.cluster-1.0.location
cluster_name = module.cluster-1.0.name
service_account = {}
source = "../../../modules/gke-nodepool"
count = var.cluster_create ? 1 : 0
name = "nodepool-1"
project_id = module.project-svc-gke.project_id
location = module.cluster-1.0.location
cluster_name = module.cluster-1.0.name
service_account = {
create = true
}
}

@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.36.0" # tftest
version = ">= 4.40.0" # tftest
}
}
}

@@ -7,3 +7,11 @@ The blueprints in this folder show how to automate installation of specific thir
### OpenShift cluster bootstrap on Shared VPC
<a href="./openshift/" title="HubOpenShift boostrap example"><img src="./openshift/diagram.png" align="left" width="280px"></a> This [example](./openshift/) shows how to quickly bootstrap an OpenShift 4.7 cluster on GCP, using typical enterprise features like Shared VPC and CMEK for instance disks.
<br clear="left">
### Wordpress deployment on Cloud Run
<a href="./wordpress/cloudrun/" title="Wordpress deployment on Cloud Run"><img src="./wordpress/cloudrun/architecture.png" align="left" width="280px"></a> This [example](./wordpress/cloudrun/) shows how to deploy a functioning new Wordpress website exposed to the public internet via CloudRun and Cloud SQL, with minimal technical overhead.
<br clear="left">
