Merge pull request #8 from juliodiez/master

Sync branch
This commit is contained in:
Julio Diez 2023-02-10 10:27:54 +01:00 committed by GitHub
commit e8303e15ba
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
526 changed files with 17577 additions and 14947 deletions

.gitignore vendored

@@ -21,13 +21,13 @@ bundle.zip
**/*.pkrvars.hcl
fixture_*
fast/configs
fast/stages/**/[0-9]*providers.tf
fast/stages/**/terraform.tfvars
fast/stages/**/terraform.tfvars.json
fast/stages/**/terraform-*.auto.tfvars.json
fast/stages/**/0*.auto.tfvars*
fast/**/[0-9]*providers.tf
fast/**/terraform.tfvars
fast/**/terraform.tfvars.json
fast/**/terraform-*.auto.tfvars.json
fast/**/[0-9]*.auto.tfvars*
**/node_modules
fast/stages/**/globals.auto.tfvars.json
fast/**/globals.auto.tfvars.json
cloud_sql_proxy
examples/cloud-operations/binauthz/tenant-setup.yaml
examples/cloud-operations/binauthz/app/app.yaml


@@ -4,10 +4,33 @@ All notable changes to this project will be documented in this file.
<!-- markdownlint-disable MD024 -->
## [Unreleased]
<!-- None < 2022-12-13 10:03:24+00:00 -->
<!-- None < 2023-02-04 13:47:22+00:00 -->
### DOCUMENTATION
- [[#1052](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1052)] **incompatible change:** FAST multitenant bootstrap and resource management, rename org-level FAST stages ([ludoo](https://github.com/ludoo)) <!-- 2023-02-04 14:00:46+00:00 -->
### FAST
- [[#1052](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1052)] **incompatible change:** FAST multitenant bootstrap and resource management, rename org-level FAST stages ([ludoo](https://github.com/ludoo)) <!-- 2023-02-04 14:00:46+00:00 -->
### MODULES
- [[#1052](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1052)] **incompatible change:** FAST multitenant bootstrap and resource management, rename org-level FAST stages ([ludoo](https://github.com/ludoo)) <!-- 2023-02-04 14:00:46+00:00 -->
### TOOLS
- [[#1052](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1052)] **incompatible change:** FAST multitenant bootstrap and resource management, rename org-level FAST stages ([ludoo](https://github.com/ludoo)) <!-- 2023-02-04 14:00:46+00:00 -->
## [20.0.0] - 2023-02-04
<!-- 2023-02-04 13:47:22+00:00 < 2022-12-13 10:03:24+00:00 -->
### BLUEPRINTS
- [[#1038](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1038)] Vertex Pipelines MLOps framework blueprint ([javiergp](https://github.com/javiergp)) <!-- 2023-02-02 18:13:13+00:00 -->
- [[#1124](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1124)] Removed unused file package-lock.json ([apichick](https://github.com/apichick)) <!-- 2023-02-01 17:54:25+00:00 -->
- [[#1119](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1119)] **incompatible change:** Multi-Cluster Ingress gateway api config ([wiktorn](https://github.com/wiktorn)) <!-- 2023-01-31 13:16:52+00:00 -->
- [[#1111](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1111)] **incompatible change:** In the apigee module now both the /22 and /28 peering IP ranges are p… ([apichick](https://github.com/apichick)) <!-- 2023-01-31 10:46:38+00:00 -->
- [[#1106](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1106)] Network Dashboard: PSA support for Filestore and Memorystore ([aurelienlegrand](https://github.com/aurelienlegrand)) <!-- 2023-01-25 15:02:31+00:00 -->
- [[#1110](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1110)] Bump cookiejar from 2.1.3 to 2.1.4 in /blueprints/apigee/bigquery-analytics/functions/export ([dependabot[bot]](https://github.com/dependabot[bot])) <!-- 2023-01-24 15:07:12+00:00 -->
- [[#1097](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1097)] Use terraform resource to activate Anthos Service Mesh ([wiktorn](https://github.com/wiktorn)) <!-- 2023-01-23 08:25:31+00:00 -->
@@ -49,6 +72,11 @@ All notable changes to this project will be documented in this file.
### MODULES
- [[#1127](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1127)] Skip node config for autopilot ([ludoo](https://github.com/ludoo)) <!-- 2023-02-02 15:13:57+00:00 -->
- [[#1125](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1125)] Added mesh_certificates setting in GKE cluster ([rosmo](https://github.com/rosmo)) <!-- 2023-02-02 10:19:01+00:00 -->
- [[#1094](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1094)] Added GLB example with MIG as backend ([eliamaldini](https://github.com/eliamaldini)) <!-- 2023-01-31 13:49:13+00:00 -->
- [[#1119](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1119)] **incompatible change:** Multi-Cluster Ingress gateway api config ([wiktorn](https://github.com/wiktorn)) <!-- 2023-01-31 13:16:52+00:00 -->
- [[#1111](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1111)] **incompatible change:** In the apigee module now both the /22 and /28 peering IP ranges are p… ([apichick](https://github.com/apichick)) <!-- 2023-01-31 10:46:38+00:00 -->
- [[#1116](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1116)] Include cloudbuild API in project module ([aymanfarhat](https://github.com/aymanfarhat)) <!-- 2023-01-27 20:38:01+00:00 -->
- [[#1115](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1115)] add new parameters support in apigee module ([blackillzone](https://github.com/blackillzone)) <!-- 2023-01-27 16:39:46+00:00 -->
- [[#1112](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1112)] Add HTTPS frontend with SNEG example ([juliodiez](https://github.com/juliodiez)) <!-- 2023-01-26 19:17:31+00:00 -->
@@ -470,7 +498,7 @@ All notable changes to this project will be documented in this file.
- fix `tag` output on `data-catalog-policy-tag` module
- add shared-vpc support on `gcs-to-bq-with-least-privileges`
- new `net-ilb-l7` module
- new [02-networking-peering](fast/stages/02-networking-peering) networking stage
- new `02-networking-peering` networking stage
- **incompatible change** the variable for PSA ranges in networking stages has changed
## [14.0.0] - 2022-02-25
@@ -489,8 +517,8 @@ All notable changes to this project will be documented in this file.
- **incompatible change** removed `ingress_settings` configuration option in the `cloud-functions` module.
- new [m4ce VM example](blueprints/cloud-operations/vm-migration/)
- Support for resource management tags in the `organization`, `folder`, `project`, `compute-vm`, and `kms` modules
- new [data platform](fast/stages/03-data-platform) stage 3
- new [02-networking-nva](fast/stages/02-networking-nva) networking stage
- new `data platform` stage 3
- new `02-networking-nva` networking stage
- allow customizing the names of custom roles
- added `environment` and `context` resource management tags
- use resource management tags to restrict scope of roles/orgpolicy.policyAdmin
@@ -925,7 +953,8 @@ All notable changes to this project will be documented in this file.
- merge development branch with suite of new modules and end-to-end examples
<!-- markdown-link-check-disable -->
[Unreleased]: https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/compare/v19.0.0...HEAD
[Unreleased]: https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/compare/v20.0.0...HEAD
[20.0.0]: https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/compare/v19.0.0...v20.0.0
[19.0.0]: https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/compare/v18.0.0...v19.0.0
[18.0.0]: https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/compare/v16.0.0...v18.0.0
[16.0.0]: https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/compare/v15.0.0...v16.0.0


@@ -6,7 +6,7 @@ Currently available blueprints:
- **apigee** - [Apigee Hybrid on GKE](./apigee/hybrid-gke/), [Apigee X analytics in BigQuery](./apigee/bigquery-analytics), [Apigee network patterns](./apigee/network-patterns/)
- **cloud operations** - [Active Directory Federation Services](./cloud-operations/adfs), [Cloud Asset Inventory feeds for resource change tracking and remediation](./cloud-operations/asset-inventory-feed-remediation), [Fine-grained Cloud DNS IAM via Service Directory](./cloud-operations/dns-fine-grained-iam), [Cloud DNS & Shared VPC design](./cloud-operations/dns-shared-vpc), [Delegated Role Grants](./cloud-operations/iam-delegated-role-grants), [Networking Dashboard](./cloud-operations/network-dashboard), [Managing on-prem service account keys by uploading public keys](./cloud-operations/onprem-sa-key-management), [Compute Image builder with Hashicorp Packer](./cloud-operations/packer-image-builder), [Packer example](./cloud-operations/packer-image-builder/packer), [Compute Engine quota monitoring](./cloud-operations/quota-monitoring), [Scheduled Cloud Asset Inventory Export to Bigquery](./cloud-operations/scheduled-asset-inventory-export-bq), [Configuring workload identity federation for Terraform Cloud/Enterprise workflow](./cloud-operations/terraform-enterprise-wif), [TCP healthcheck and restart for unmanaged GCE instances](./cloud-operations/unmanaged-instances-healthcheck), [Migrate for Compute Engine (v5) blueprints](./cloud-operations/vm-migration), [Configuring workload identity federation to access Google Cloud resources from apps running on Azure](./cloud-operations/workload-identity-federation)
- **data solutions** - [GCE and GCS CMEK via centralized Cloud KMS](./data-solutions/cmek-via-centralized-kms), [Cloud Composer version 2 private instance, supporting Shared VPC and external CMEK key](./data-solutions/composer-2), [Cloud SQL instance with multi-region read replicas](./data-solutions/cloudsql-multiregion), [Data Platform](./data-solutions/data-platform-foundations), [Spinning up a foundation data pipeline on Google Cloud using Cloud Storage, Dataflow and BigQuery](./data-solutions/gcs-to-bq-with-least-privileges), [#SQL Server Always On Groups blueprint](./data-solutions/sqlserver-alwayson), [Data Playground](./data-solutions/data-playground)
- **data solutions** - [GCE and GCS CMEK via centralized Cloud KMS](./data-solutions/cmek-via-centralized-kms), [Cloud Composer version 2 private instance, supporting Shared VPC and external CMEK key](./data-solutions/composer-2), [Cloud SQL instance with multi-region read replicas](./data-solutions/cloudsql-multiregion), [Data Platform](./data-solutions/data-platform-foundations), [Spinning up a foundation data pipeline on Google Cloud using Cloud Storage, Dataflow and BigQuery](./data-solutions/gcs-to-bq-with-least-privileges), [#SQL Server Always On Groups blueprint](./data-solutions/sqlserver-alwayson), [Data Playground](./data-solutions/data-playground), [MLOps with Vertex AI](./data-solutions/vertex-mlops), [Shielded Folder](./data-solutions/shielded-folder)
- **factories** - [The why and the how of Resource Factories](./factories), [Google Cloud Identity Group Factory](./factories/cloud-identity-group-factory), [Google Cloud BQ Factory](./factories/bigquery-factory), [Google Cloud VPC Firewall Factory](./factories/net-vpc-firewall-yaml), [Minimal Project Factory](./factories/project-factory)
- **GKE** - [Binary Authorization Pipeline Blueprint](./gke/binauthz), [Storage API](./gke/binauthz/image), [Multi-cluster mesh on GKE (fleet API)](./gke/multi-cluster-mesh-gke-fleet-api), [GKE Multitenant Blueprint](./gke/multitenant-fleet), [Shared VPC with GKE support](./networking/shared-vpc-gke/)
- **networking** - [Decentralized firewall management](./networking/decentralized-firewall), [Decentralized firewall validator](./networking/decentralized-firewall/validator), [Network filtering with Squid](./networking/filtering-proxy), [Network filtering with Squid with isolated VPCs using Private Service Connect](./networking/filtering-proxy-psc), [HTTP Load Balancer with Cloud Armor](./networking/glb-and-armor), [Hub and Spoke via VPN](./networking/hub-and-spoke-vpn), [Hub and Spoke via VPC Peering](./networking/hub-and-spoke-peering), [Internal Load Balancer as Next Hop](./networking/ilb-next-hop), On-prem DNS and Google Private Access, [Calling a private Cloud Function from On-premises](./networking/private-cloud-function-from-onprem), [Hybrid connectivity to on-premise services through PSC](./networking/psc-hybrid), [PSC Producer](./networking/psc-hybrid/psc-producer), [PSC Consumer](./networking/psc-hybrid/psc-consumer), [Shared VPC with optional GKE cluster](./networking/shared-vpc-gke)

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,251 +0,0 @@
{
"name": "apigee",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"dependencies": {
"superagent-debugger": "^1.2.9"
}
},
"node_modules/ansi-regex": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-2.1.1.tgz",
"integrity": "sha512-TIGnTpdo+E3+pCyAluZvtED5p5wCqLdezCyhPZzKPcxvFplEt4i+W7OONCKgeZFT3+y5NZZfOOS/Bdcanm1MYA==",
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/ansi-styles": {
"version": "2.2.1",
"resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-2.2.1.tgz",
"integrity": "sha512-kmCevFghRiWM7HB5zTPULl4r9bVFSWjz62MhqizDGUrq2NWuNMQyuv4tHHoKJHs69M/MF64lEcHdYIocrdWQYA==",
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/chalk": {
"version": "1.1.3",
"resolved": "https://registry.npmjs.org/chalk/-/chalk-1.1.3.tgz",
"integrity": "sha512-U3lRVLMSlsCfjqYPbLyVv11M9CPW4I728d6TCKMAOJueEeB9/8o+eSsMnxPJD+Q+K909sdESg7C+tIkoH6on1A==",
"dependencies": {
"ansi-styles": "^2.2.1",
"escape-string-regexp": "^1.0.2",
"has-ansi": "^2.0.0",
"strip-ansi": "^3.0.0",
"supports-color": "^2.0.0"
},
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"dependencies": {
"ms": "2.0.0"
}
},
"node_modules/escape-string-regexp": {
"version": "1.0.5",
"resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz",
"integrity": "sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg==",
"engines": {
"node": ">=0.8.0"
}
},
"node_modules/has-ansi": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/has-ansi/-/has-ansi-2.0.0.tgz",
"integrity": "sha512-C8vBJ8DwUCx19vhm7urhTuUsr4/IyP6l4VzNQDv+ryHQObW3TTTp9yB68WpYgRe2bbaGuZ/se74IqFeVnMnLZg==",
"dependencies": {
"ansi-regex": "^2.0.0"
},
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/lodash": {
"version": "4.17.21",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
"integrity": "sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg=="
},
"node_modules/moment": {
"version": "2.29.4",
"resolved": "https://registry.npmjs.org/moment/-/moment-2.29.4.tgz",
"integrity": "sha512-5LC9SOxjSc2HF6vO2CyuTDNivEdoz2IvyJJGj6X8DJ0eFyfszE0QiEd+iXmBvUP3WHxSjFH/vIsA0EN00cgr8w==",
"engines": {
"node": "*"
}
},
"node_modules/ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A=="
},
"node_modules/object-assign": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz",
"integrity": "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg==",
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/query-string": {
"version": "4.3.4",
"resolved": "https://registry.npmjs.org/query-string/-/query-string-4.3.4.tgz",
"integrity": "sha512-O2XLNDBIg1DnTOa+2XrIwSiXEV8h2KImXUnjhhn2+UsvZ+Es2uyd5CCRTNQlDGbzUQOW3aYCBx9rVA6dzsiY7Q==",
"dependencies": {
"object-assign": "^4.1.0",
"strict-uri-encode": "^1.0.0"
},
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/strict-uri-encode": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/strict-uri-encode/-/strict-uri-encode-1.1.0.tgz",
"integrity": "sha512-R3f198pcvnB+5IpnBlRkphuE9n46WyVl8I39W/ZUTZLz4nqSP/oLYUrcnJrw462Ds8he4YKMov2efsTIw1BDGQ==",
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/strip-ansi": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-3.0.1.tgz",
"integrity": "sha512-VhumSSbBqDTP8p2ZLKj40UjBCV4+v8bUSEpUb4KjRgWk9pbqGF4REFj6KEagidb2f/M6AzC0EmFyDNGaw9OCzg==",
"dependencies": {
"ansi-regex": "^2.0.0"
},
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/superagent-debugger": {
"version": "1.2.9",
"resolved": "https://registry.npmjs.org/superagent-debugger/-/superagent-debugger-1.2.9.tgz",
"integrity": "sha512-iH4NvJl1utorgRbrsYoOM8yoeTbS7YWLoDkAwRy2rgB6aP5Lr36XxmpE8GbgvmUY6R4QmYr+4R4IdAGMPmwR9g==",
"dependencies": {
"chalk": "^1.1.3",
"debug": "^2.6.0",
"lodash": "^4.17.4",
"moment": "^2.17.1",
"query-string": "^4.3.1"
}
},
"node_modules/supports-color": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-2.0.0.tgz",
"integrity": "sha512-KKNVtd6pCYgPIKU4cp2733HWYCpplQhddZLBUryaAHou723x+FRzQ5Df824Fj+IyyuiQTRoub4SnIFfIcrp70g==",
"engines": {
"node": ">=0.8.0"
}
}
},
"dependencies": {
"ansi-regex": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-2.1.1.tgz",
"integrity": "sha512-TIGnTpdo+E3+pCyAluZvtED5p5wCqLdezCyhPZzKPcxvFplEt4i+W7OONCKgeZFT3+y5NZZfOOS/Bdcanm1MYA=="
},
"ansi-styles": {
"version": "2.2.1",
"resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-2.2.1.tgz",
"integrity": "sha512-kmCevFghRiWM7HB5zTPULl4r9bVFSWjz62MhqizDGUrq2NWuNMQyuv4tHHoKJHs69M/MF64lEcHdYIocrdWQYA=="
},
"chalk": {
"version": "1.1.3",
"resolved": "https://registry.npmjs.org/chalk/-/chalk-1.1.3.tgz",
"integrity": "sha512-U3lRVLMSlsCfjqYPbLyVv11M9CPW4I728d6TCKMAOJueEeB9/8o+eSsMnxPJD+Q+K909sdESg7C+tIkoH6on1A==",
"requires": {
"ansi-styles": "^2.2.1",
"escape-string-regexp": "^1.0.2",
"has-ansi": "^2.0.0",
"strip-ansi": "^3.0.0",
"supports-color": "^2.0.0"
}
},
"debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"requires": {
"ms": "2.0.0"
}
},
"escape-string-regexp": {
"version": "1.0.5",
"resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz",
"integrity": "sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg=="
},
"has-ansi": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/has-ansi/-/has-ansi-2.0.0.tgz",
"integrity": "sha512-C8vBJ8DwUCx19vhm7urhTuUsr4/IyP6l4VzNQDv+ryHQObW3TTTp9yB68WpYgRe2bbaGuZ/se74IqFeVnMnLZg==",
"requires": {
"ansi-regex": "^2.0.0"
}
},
"lodash": {
"version": "4.17.21",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
"integrity": "sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg=="
},
"moment": {
"version": "2.29.4",
"resolved": "https://registry.npmjs.org/moment/-/moment-2.29.4.tgz",
"integrity": "sha512-5LC9SOxjSc2HF6vO2CyuTDNivEdoz2IvyJJGj6X8DJ0eFyfszE0QiEd+iXmBvUP3WHxSjFH/vIsA0EN00cgr8w=="
},
"ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A=="
},
"object-assign": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz",
"integrity": "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg=="
},
"query-string": {
"version": "4.3.4",
"resolved": "https://registry.npmjs.org/query-string/-/query-string-4.3.4.tgz",
"integrity": "sha512-O2XLNDBIg1DnTOa+2XrIwSiXEV8h2KImXUnjhhn2+UsvZ+Es2uyd5CCRTNQlDGbzUQOW3aYCBx9rVA6dzsiY7Q==",
"requires": {
"object-assign": "^4.1.0",
"strict-uri-encode": "^1.0.0"
}
},
"strict-uri-encode": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/strict-uri-encode/-/strict-uri-encode-1.1.0.tgz",
"integrity": "sha512-R3f198pcvnB+5IpnBlRkphuE9n46WyVl8I39W/ZUTZLz4nqSP/oLYUrcnJrw462Ds8he4YKMov2efsTIw1BDGQ=="
},
"strip-ansi": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-3.0.1.tgz",
"integrity": "sha512-VhumSSbBqDTP8p2ZLKj40UjBCV4+v8bUSEpUb4KjRgWk9pbqGF4REFj6KEagidb2f/M6AzC0EmFyDNGaw9OCzg==",
"requires": {
"ansi-regex": "^2.0.0"
}
},
"superagent-debugger": {
"version": "1.2.9",
"resolved": "https://registry.npmjs.org/superagent-debugger/-/superagent-debugger-1.2.9.tgz",
"integrity": "sha512-iH4NvJl1utorgRbrsYoOM8yoeTbS7YWLoDkAwRy2rgB6aP5Lr36XxmpE8GbgvmUY6R4QmYr+4R4IdAGMPmwR9g==",
"requires": {
"chalk": "^1.1.3",
"debug": "^2.6.0",
"lodash": "^4.17.4",
"moment": "^2.17.1",
"query-string": "^4.3.1"
}
},
"supports-color": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-2.0.0.tgz",
"integrity": "sha512-KKNVtd6pCYgPIKU4cp2733HWYCpplQhddZLBUryaAHou723x+FRzQ5Df824Fj+IyyuiQTRoub4SnIFfIcrp70g=="
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}

Binary file not shown.

Binary file not shown.


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -52,6 +52,20 @@ running on a VPC with a private IP and a dedicated Service Account. A GCS bucket
### SQL Server Always On Availability Groups
<a href="./sqlserver-alwayson/" title="SQL Server Always On Availability Groups"><img src="https://cloud.google.com/compute/images/sqlserver-ag-architecture.svg" align="left" width="280px"></a>
This [blueprint](./data-platform-foundations/) implements SQL Server Always On Availability Groups using Fabric modules. It builds a two-node cluster with a fileshare witness instance in an existing VPC and adds the necessary firewalling. The actual setup process (apart from Active Directory operations) has been scripted, so that the least amount of manual work needs to be performed.
This [blueprint](./sqlserver-alwayson/) implements SQL Server Always On Availability Groups using Fabric modules. It builds a two-node cluster with a fileshare witness instance in an existing VPC and adds the necessary firewalling. The actual setup process (apart from Active Directory operations) has been scripted, so that the least amount of manual work needs to be performed.
<br clear="left">
### MLOps with Vertex AI
<a href="./vertex-mlops/" title="MLOps with Vertex AI"><img src="./vertex-mlops/images/mlops_projects.png" align="left" width="280px"></a>
This [blueprint](./vertex-mlops/) implements the infrastructure required to have a fully functional MLOPs environment using Vertex AI: required GCP services activation, Vertex Workbench, GCS buckets to host Vertex AI and Cloud Build artifacts, Artifact Registry docker repository to host custom images, required service accounts, networking and Workload Identity Federation Provider for Github integration (optional).
<br clear="left">
### Shielded Folder
<a href="./shielded-folder/" title="Shielded Folder"><img src="./shielded-folder/images/overview_diagram.png" align="left" width="280px"></a>
This [blueprint](./shielded-folder/) implements an opinionated folder configuration according to GCP best practices. Workloads hosted under the folder benefit from this configuration, since they inherit the constraints set at the folder level.
<br clear="left">


@@ -1,8 +1,8 @@
# GCE and GCS CMEK via centralized Cloud KMS
This example creates a sample centralized [Cloud KMS](https://cloud.google.com/kms?hl=it) configuration, and uses it to implement CMEK for [Cloud Storage](https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys) and [Compute Engine](https://cloud.google.com/compute/docs/disks/customer-managed-encryption) in a separate project.
This example creates a sample centralized [Cloud KMS](https://cloud.google.com/kms?hl=it) configuration, and uses it to implement CMEK for [Cloud Storage](https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys) and [Compute Engine](https://cloud.google.com/compute/docs/disks/customer-managed-encryption) in a service project.
The example is designed to match real-world use cases with a minimum amount of resources, and be used as a starting point for scenarios where application projects implement CMEK using keys managed by a central team. It also includes the IAM wiring needed to make such scenarios work.
The example is designed to match real-world use cases with a minimum amount of resources, and to serve as a starting point for scenarios where application projects implement CMEK using keys managed by a central team. It also includes the IAM wiring needed to make such scenarios work. Regional resources are used in this example, but the same logic applies to dual-region, multi-region, or global resources.
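The IAM wiring mentioned above boils down to granting the relevant service agents encrypt/decrypt rights on the centrally managed key. A minimal standalone sketch of that pattern follows; project ids, key ring name, and location are placeholders for illustration, not the blueprint's actual values (the blueprint itself handles this via the project module's `service_encryption_key_ids`):

```hcl
# Hypothetical project ids, for illustration only.
data "google_project" "service" {
  project_id = "my-project-service-001"
}

# The GCS service agent of the service project.
data "google_storage_project_service_account" "gcs_sa" {
  project = data.google_project.service.project_id
}

resource "google_kms_key_ring" "keyring" {
  project  = "my-project-kms-001"
  name     = "my-keyring"
  location = "europe-west1"
}

resource "google_kms_crypto_key" "key_gcs" {
  name     = "key-gcs"
  key_ring = google_kms_key_ring.keyring.id
}

# Allow the GCS service agent to use the key, so buckets in the
# service project can be configured with it as their CMEK key.
resource "google_kms_crypto_key_iam_member" "gcs" {
  crypto_key_id = google_kms_crypto_key.key_gcs.id
  role          = "roles/cloudkms.cryptoKeyEncrypterDecrypter"
  member        = "serviceAccount:${data.google_storage_project_service_account.gcs_sa.email_address}"
}
```

The Compute Engine service agent (`service-PROJECT_NUMBER@compute-system.iam.gserviceaccount.com`) follows the same pattern for CMEK on disks.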
This is the high level diagram:
@@ -35,12 +35,10 @@ This sample creates several distinct groups of resources:
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [billing_account](variables.tf#L16) | Billing account id used as default for new projects. | <code>string</code> | ✓ | |
| [root_node](variables.tf#L45) | The resource name of the parent Folder or Organization. Must be of the form folders/folder_id or organizations/org_id. | <code>string</code> | ✓ | |
| [location](variables.tf#L21) | The location where resources will be deployed. | <code>string</code> | | <code>&#34;europe&#34;</code> |
| [project_kms_name](variables.tf#L27) | Name for the new KMS Project. | <code>string</code> | | <code>&#34;my-project-kms-001&#34;</code> |
| [project_service_name](variables.tf#L33) | Name for the new Service Project. | <code>string</code> | | <code>&#34;my-project-service-001&#34;</code> |
| [region](variables.tf#L39) | The region where resources will be deployed. | <code>string</code> | | <code>&#34;europe-west1&#34;</code> |
| [prefix](variables.tf#L21) | Optional prefix used to generate resource names. | <code>string</code> | ✓ | |
| [project_config](variables.tf#L27) | Provide 'billing_account_id' and 'parent' values if project creation is needed, uses existing 'projects_id' if null. Parent is in 'folders/nnn' or 'organizations/nnn' format. | <code title="object&#40;&#123;&#10; billing_account_id &#61; optional&#40;string, null&#41;&#10; parent &#61; optional&#40;string, null&#41;&#10; project_ids &#61; optional&#40;object&#40;&#123;&#10; encryption &#61; string&#10; service &#61; string&#10; &#125;&#41;, &#123;&#10; encryption &#61; &#34;encryption&#34;,&#10; service &#61; &#34;service&#34;&#10; &#125;&#10; &#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | ✓ | |
| [location](variables.tf#L15) | The location where resources will be deployed. | <code>string</code> | | <code>&#34;europe&#34;</code> |
| [region](variables.tf#L44) | The region where resources will be deployed. | <code>string</code> | | <code>&#34;europe-west1&#34;</code> |
| [vpc_ip_cidr_range](variables.tf#L50) | IP range used in the subnet deployed in the Service Project. | <code>string</code> | | <code>&#34;10.0.0.0&#47;20&#34;</code> |
| [vpc_name](variables.tf#L56) | Name of the VPC created in the Service Project. | <code>string</code> | | <code>&#34;local&#34;</code> |
| [vpc_subnet_name](variables.tf#L62) | Name of the subnet created in the Service Project. | <code>string</code> | | <code>&#34;subnet&#34;</code> |
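The new `project_config` variable bundles what used to be separate project variables. Based on the table above, a minimal `terraform.tfvars` for the project-creation path might look like this (all ids are placeholders):

```hcl
prefix = "test"

project_config = {
  billing_account_id = "012345-012345-012345"
  parent             = "folders/1234567890"
  # project_ids is optional and defaults to
  # { encryption = "encryption", service = "service" }
}
```

Leaving `billing_account_id` null instead switches the example to reusing the existing projects named in `project_ids`.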


@@ -12,33 +12,61 @@
# See the License for the specific language governing permissions and
# limitations under the License.
locals {
# Needed when you create KMS keys and encrypted resources in the same terraform state but different projects.
kms_keys = {
gce = "projects/${module.project-kms.project_id}/locations/${var.region}/keyRings/${var.prefix}-${var.region}/cryptoKeys/key-gce"
gcs = "projects/${module.project-kms.project_id}/locations/${var.region}/keyRings/${var.prefix}-${var.region}/cryptoKeys/key-gcs"
}
}
###############################################################################
# Projects #
###############################################################################
module "project-service" {
source = "../../../modules/project"
name = var.project_service_name
parent = var.root_node
billing_account = var.billing_account
name = var.project_config.project_ids.service
parent = var.project_config.parent
billing_account = var.project_config.billing_account_id
project_create = var.project_config.billing_account_id != null
prefix = var.project_config.billing_account_id == null ? null : var.prefix
services = [
"compute.googleapis.com",
"servicenetworking.googleapis.com",
"storage-component.googleapis.com"
"storage.googleapis.com",
"storage-component.googleapis.com",
]
service_encryption_key_ids = {
compute = [
local.kms_keys.gce
]
storage = [
local.kms_keys.gcs
]
}
service_config = {
disable_on_destroy = false, disable_dependent_services = false
}
depends_on = [
module.kms
]
oslogin = true
}
module "project-kms" {
source = "../../../modules/project"
name = var.project_kms_name
parent = var.root_node
billing_account = var.billing_account
name = var.project_config.project_ids.encryption
parent = var.project_config.parent
billing_account = var.project_config.billing_account_id
project_create = var.project_config.billing_account_id != null
prefix = var.project_config.billing_account_id == null ? null : var.prefix
services = [
"cloudkms.googleapis.com",
"servicenetworking.googleapis.com"
]
oslogin = true
service_config = {
disable_on_destroy = false, disable_dependent_services = false
}
}
###############################################################################
@ -48,11 +76,11 @@ module "project-kms" {
module "vpc" {
source = "../../../modules/net-vpc"
project_id = module.project-service.project_id
name = var.vpc_name
name = "${var.prefix}-vpc"
subnets = [
{
ip_cidr_range = var.vpc_ip_cidr_range
name = var.vpc_subnet_name
ip_cidr_range = "10.0.0.0/20"
name = "${var.prefix}-${var.region}"
region = var.region
}
]
@ -63,7 +91,7 @@ module "vpc-firewall" {
project_id = module.project-service.project_id
network = module.vpc.name
default_rules_config = {
admin_ranges = [var.vpc_ip_cidr_range]
admin_ranges = ["10.0.0.0/20"]
}
}
@ -75,22 +103,10 @@ module "kms" {
source = "../../../modules/kms"
project_id = module.project-kms.project_id
keyring = {
name = "my-keyring",
location = var.location
name = "${var.prefix}-${var.region}",
location = var.region
}
keys = { key-gce = null, key-gcs = null }
key_iam = {
key-gce = {
"roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
"serviceAccount:${module.project-service.service_accounts.robots.compute}",
]
},
key-gcs = {
"roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
"serviceAccount:${module.project-service.service_accounts.robots.storage}",
]
}
}
}
###############################################################################
@ -101,10 +117,10 @@ module "vm_example" {
source = "../../../modules/compute-vm"
project_id = module.project-service.project_id
zone = "${var.region}-b"
name = "kms-vm"
name = "${var.prefix}-vm"
network_interfaces = [{
network = module.vpc.self_link,
subnetwork = module.vpc.subnet_self_links["${var.region}/subnet"],
subnetwork = module.vpc.subnet_self_links["${var.region}/${var.prefix}-${var.region}"],
nat = false,
addresses = null
}]
@ -127,7 +143,7 @@ module "vm_example" {
encryption = {
encrypt_boot = true
disk_encryption_key_raw = null
kms_key_self_link = module.kms.key_ids.key-gce
kms_key_self_link = local.kms_keys.gce
}
}
@ -138,7 +154,9 @@ module "vm_example" {
module "kms-gcs" {
source = "../../../modules/gcs"
project_id = module.project-service.project_id
prefix = "my-bucket-001"
name = "kms-gcs"
encryption_key = module.kms.keys.key-gcs.id
prefix = var.prefix
name = "${var.prefix}-bucket"
location = var.region
storage_class = "REGIONAL"
encryption_key = local.kms_keys.gcs
}

View File

@ -12,28 +12,33 @@
# See the License for the specific language governing permissions and
# limitations under the License.
variable "billing_account" {
description = "Billing account id used as default for new projects."
type = string
}
variable "location" {
description = "The location where resources will be deployed."
type = string
default = "europe"
}
variable "project_kms_name" {
description = "Name for the new KMS Project."
variable "prefix" {
description = "Optional prefix used to generate resources names."
type = string
default = "my-project-kms-001"
nullable = false
}
variable "project_service_name" {
description = "Name for the new Service Project."
type = string
default = "my-project-service-001"
variable "project_config" {
description = "Provide 'billing_account_id' and 'parent' values if project creation is needed; uses existing 'project_ids' if null. Parent is in 'folders/nnn' or 'organizations/nnn' format."
type = object({
billing_account_id = optional(string, null)
parent = optional(string, null)
project_ids = optional(object({
encryption = string
service = string
}), {
encryption = "encryption",
service = "service"
}
)
})
nullable = false
}
variable "region" {
@ -42,11 +47,6 @@ variable "region" {
default = "europe-west1"
}
variable "root_node" {
description = "The resource name of the parent Folder or Organization. Must be of the form folders/folder_id or organizations/org_id."
type = string
}
variable "vpc_ip_cidr_range" {
description = "IP range used in the subnet deployed in the Service Project."
type = string

View File

@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}

View File

@ -40,6 +40,7 @@ locals {
LOD_SA_DF = module.load-sa-df-0.email
ORC_PRJ = module.orch-project.project_id
ORC_GCS = module.orch-cs-0.url
ORC_GCS_TMP_DF = module.orch-cs-df-template.url
TRF_PRJ = module.transf-project.project_id
TRF_GCS_STAGING = module.transf-cs-df-0.url
TRF_NET_VPC = local.transf_vpc

View File

@ -25,6 +25,11 @@ locals {
? var.network_config.network_self_link
: module.orch-vpc.0.self_link
)
# Note: This formatting is needed for output purposes since the fabric artifact registry
# module doesn't yet expose the docker usage path of a registry folder in the needed format.
orch_docker_path = format("%s-docker.pkg.dev/%s/%s",
var.region, module.orch-project.project_id, module.orch-artifact-reg.name)
}
module "orch-project" {
@ -44,6 +49,8 @@ module "orch-project" {
"roles/iam.serviceAccountUser",
"roles/storage.objectAdmin",
"roles/storage.admin",
"roles/artifactregistry.admin",
"roles/serviceusage.serviceUsageConsumer",
]
}
iam = {
@ -65,7 +72,15 @@ module "orch-project" {
]
"roles/storage.objectAdmin" = [
module.orch-sa-cmp-0.iam_email,
module.orch-sa-df-build.iam_email,
"serviceAccount:${module.orch-project.service_accounts.robots.composer}",
"serviceAccount:${module.orch-project.service_accounts.robots.cloudbuild}",
]
"roles/artifactregistry.reader" = [
module.load-sa-df-0.iam_email,
]
"roles/cloudbuild.serviceAgent" = [
module.orch-sa-df-build.iam_email,
]
"roles/storage.objectViewer" = [module.load-sa-df-0.iam_email]
}
@ -81,6 +96,7 @@ module "orch-project" {
"compute.googleapis.com",
"container.googleapis.com",
"containerregistry.googleapis.com",
"artifactregistry.googleapis.com",
"dataflow.googleapis.com",
"orgpolicy.googleapis.com",
"pubsub.googleapis.com",
@ -148,3 +164,46 @@ module "orch-nat" {
region = var.region
router_network = module.orch-vpc.0.name
}
module "orch-artifact-reg" {
source = "../../../modules/artifact-registry"
project_id = module.orch-project.project_id
id = "${var.prefix}-app-images"
location = var.region
format = "DOCKER"
description = "Docker repository storing application images (e.g. Dataflow, Cloud Run)."
}
module "orch-cs-df-template" {
source = "../../../modules/gcs"
project_id = module.orch-project.project_id
prefix = var.prefix
name = "orc-cs-df-template"
location = var.region
storage_class = "REGIONAL"
encryption_key = try(local.service_encryption_keys.storage, null)
}
module "orch-cs-build-staging" {
source = "../../../modules/gcs"
project_id = module.orch-project.project_id
prefix = var.prefix
name = "orc-cs-build-staging"
location = var.region
storage_class = "REGIONAL"
encryption_key = try(local.service_encryption_keys.storage, null)
}
module "orch-sa-df-build" {
source = "../../../modules/iam-service-account"
project_id = module.orch-project.project_id
prefix = var.prefix
name = "orc-sa-df-build"
display_name = "Data platform Dataflow build service account"
# Note: the values below should identify the systems / groups / users who are
# allowed to invoke the build via this service account
iam = {
"roles/iam.serviceAccountTokenCreator" = [local.groups_iam.data-engineers]
"roles/iam.serviceAccountUser" = [local.groups_iam.data-engineers]
}
}

View File

@ -71,11 +71,13 @@ Legend: <code>+</code> additive, <code>•</code> conditional.
| members | roles |
|---|---|
|<b>gcp-data-engineers</b><br><small><i>group</i></small>|[roles/bigquery.dataEditor](https://cloud.google.com/iam/docs/understanding-roles#bigquery.dataEditor) <br>[roles/bigquery.jobUser](https://cloud.google.com/iam/docs/understanding-roles#bigquery.jobUser) <br>[roles/cloudbuild.builds.editor](https://cloud.google.com/iam/docs/understanding-roles#cloudbuild.builds.editor) <br>[roles/composer.admin](https://cloud.google.com/iam/docs/understanding-roles#composer.admin) <br>[roles/composer.environmentAndStorageObjectAdmin](https://cloud.google.com/iam/docs/understanding-roles#composer.environmentAndStorageObjectAdmin) <br>[roles/iam.serviceAccountUser](https://cloud.google.com/iam/docs/understanding-roles#iam.serviceAccountUser) <br>[roles/iap.httpsResourceAccessor](https://cloud.google.com/iam/docs/understanding-roles#iap.httpsResourceAccessor) <br>[roles/storage.admin](https://cloud.google.com/iam/docs/understanding-roles#storage.admin) <br>[roles/storage.objectAdmin](https://cloud.google.com/iam/docs/understanding-roles#storage.objectAdmin) |
|<b>gcp-data-engineers</b><br><small><i>group</i></small>|[roles/artifactregistry.admin](https://cloud.google.com/iam/docs/understanding-roles#artifactregistry.admin) <br>[roles/bigquery.dataEditor](https://cloud.google.com/iam/docs/understanding-roles#bigquery.dataEditor) <br>[roles/bigquery.jobUser](https://cloud.google.com/iam/docs/understanding-roles#bigquery.jobUser) <br>[roles/cloudbuild.builds.editor](https://cloud.google.com/iam/docs/understanding-roles#cloudbuild.builds.editor) <br>[roles/composer.admin](https://cloud.google.com/iam/docs/understanding-roles#composer.admin) <br>[roles/composer.environmentAndStorageObjectAdmin](https://cloud.google.com/iam/docs/understanding-roles#composer.environmentAndStorageObjectAdmin) <br>[roles/iam.serviceAccountUser](https://cloud.google.com/iam/docs/understanding-roles#iam.serviceAccountUser) <br>[roles/iap.httpsResourceAccessor](https://cloud.google.com/iam/docs/understanding-roles#iap.httpsResourceAccessor) <br>[roles/serviceusage.serviceUsageConsumer](https://cloud.google.com/iam/docs/understanding-roles#serviceusage.serviceUsageConsumer) <br>[roles/storage.admin](https://cloud.google.com/iam/docs/understanding-roles#storage.admin) <br>[roles/storage.objectAdmin](https://cloud.google.com/iam/docs/understanding-roles#storage.objectAdmin) |
|<b>SERVICE_IDENTITY_cloudcomposer-accounts</b><br><small><i>serviceAccount</i></small>|[roles/composer.ServiceAgentV2Ext](https://cloud.google.com/iam/docs/understanding-roles#composer.ServiceAgentV2Ext) <br>[roles/storage.objectAdmin](https://cloud.google.com/iam/docs/understanding-roles#storage.objectAdmin) |
|<b>SERVICE_IDENTITY_gcp-sa-cloudbuild</b><br><small><i>serviceAccount</i></small>|[roles/storage.objectAdmin](https://cloud.google.com/iam/docs/understanding-roles#storage.objectAdmin) |
|<b>SERVICE_IDENTITY_service-networking</b><br><small><i>serviceAccount</i></small>|[roles/servicenetworking.serviceAgent](https://cloud.google.com/iam/docs/understanding-roles#servicenetworking.serviceAgent) <code>+</code>|
|<b>load-df-0</b><br><small><i>serviceAccount</i></small>|[roles/bigquery.dataEditor](https://cloud.google.com/iam/docs/understanding-roles#bigquery.dataEditor) <br>[roles/storage.objectViewer](https://cloud.google.com/iam/docs/understanding-roles#storage.objectViewer) |
|<b>load-df-0</b><br><small><i>serviceAccount</i></small>|[roles/artifactregistry.reader](https://cloud.google.com/iam/docs/understanding-roles#artifactregistry.reader) <br>[roles/bigquery.dataEditor](https://cloud.google.com/iam/docs/understanding-roles#bigquery.dataEditor) <br>[roles/storage.objectViewer](https://cloud.google.com/iam/docs/understanding-roles#storage.objectViewer) |
|<b>orc-cmp-0</b><br><small><i>serviceAccount</i></small>|[roles/bigquery.jobUser](https://cloud.google.com/iam/docs/understanding-roles#bigquery.jobUser) <br>[roles/composer.worker](https://cloud.google.com/iam/docs/understanding-roles#composer.worker) <br>[roles/iam.serviceAccountUser](https://cloud.google.com/iam/docs/understanding-roles#iam.serviceAccountUser) <br>[roles/storage.objectAdmin](https://cloud.google.com/iam/docs/understanding-roles#storage.objectAdmin) |
|<b>orc-sa-df-build</b><br><small><i>serviceAccount</i></small>|[roles/cloudbuild.serviceAgent](https://cloud.google.com/iam/docs/understanding-roles#cloudbuild.serviceAgent) <br>[roles/storage.objectAdmin](https://cloud.google.com/iam/docs/understanding-roles#storage.objectAdmin) |
|<b>trf-df-0</b><br><small><i>serviceAccount</i></small>|[roles/bigquery.dataEditor](https://cloud.google.com/iam/docs/understanding-roles#bigquery.dataEditor) |
## Project <i>trf</i>

View File

@ -21,7 +21,7 @@ The approach adapts to different high-level requirements:
- least privilege principle
- rely on service account impersonation
The code in this blueprint doesn't address Organization-level configurations (Organization policy, VPC-SC, centralized logs). We expect those elements to be managed by automation stages external to this script like those in [FAST](../../../fast) and this blueprint deployed on top of them as one of the [stages](../../../fast/stages/03-data-platform/dev/README.md).
The code in this blueprint doesn't address Organization-level configurations (Organization policy, VPC-SC, centralized logs). We expect those elements to be managed by automation stages external to this script like those in [FAST](../../../fast) and this blueprint deployed on top of them as one of the [stages](../../../fast/stages/3-data-platform/dev/README.md).
### Project structure
@ -219,7 +219,7 @@ module "data-platform" {
prefix = "myprefix"
}
# tftest modules=39 resources=287
# tftest modules=43 resources=297
```
## Customizations
@ -263,13 +263,14 @@ You can find examples in the `[demo](./demo)` folder.
| name | description | sensitive |
|---|---|:---:|
| [bigquery-datasets](outputs.tf#L17) | BigQuery datasets. | |
| [demo_commands](outputs.tf#L27) | Demo commands. Relevant only if Composer is deployed. | |
| [gcs-buckets](outputs.tf#L40) | GCS buckets. | |
| [kms_keys](outputs.tf#L53) | Cloud MKS keys. | |
| [projects](outputs.tf#L58) | GCP Projects informations. | |
| [vpc_network](outputs.tf#L84) | VPC network. | |
| [vpc_subnet](outputs.tf#L93) | VPC subnetworks. | |
| [bigquery-datasets](outputs.tf#L16) | BigQuery datasets. | |
| [demo_commands](outputs.tf#L26) | Demo commands. Relevant only if Composer is deployed. | |
| [df_template](outputs.tf#L49) | Dataflow template image and template details. | |
| [gcs-buckets](outputs.tf#L58) | GCS buckets. | |
| [kms_keys](outputs.tf#L71) | Cloud KMS keys. | |
| [projects](outputs.tf#L76) | GCP projects information. | |
| [vpc_network](outputs.tf#L102) | VPC network. | |
| [vpc_subnet](outputs.tf#L111) | VPC subnetworks. | |
<!-- END TFDOC -->
## TODOs

View File

@ -23,10 +23,11 @@ Below you can find a description of each example:
## Running the demo
To run the demo examples, follow these steps:
- 01: copy sample data to the `drop off` Cloud Storage bucket impersonating the `load` service account.
- 02: copy sample data structure definition in the `orchestration` Cloud Storage bucket impersonating the `orchestration` service account.
- 03: copy the Cloud Composer DAG to the Cloud Composer Storage bucket impersonating the `orchestration` service account.
- 04: Open the Cloud Composer Airflow UI and run the imported DAG.
- 05: Run the BigQuery query to see results.
- 01: Copy sample data to the `drop off` Cloud Storage bucket impersonating the `load` service account.
- 02: Copy sample data structure definition in the `orchestration` Cloud Storage bucket impersonating the `orchestration` service account.
- 03: Copy the Cloud Composer DAG to the Cloud Composer Storage bucket impersonating the `orchestration` service account.
- 04: Build the Dataflow Flex template and image via a Cloud Build pipeline.
- 05: Open the Cloud Composer Airflow UI and run the imported DAG.
- 06: Run the BigQuery query to see results.
You can find pre-computed commands in the `demo_commands` output variable of the deployed terraform [data pipeline](../).

View File

@ -0,0 +1,160 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

View File

@ -0,0 +1,29 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM gcr.io/dataflow-templates-base/python39-template-launcher-base
ENV FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE="/template/requirements.txt"
ENV FLEX_TEMPLATE_PYTHON_PY_FILE="/template/csv2bq.py"
COPY ./src/ /template
RUN apt-get update \
&& apt-get install -y libffi-dev git \
&& rm -rf /var/lib/apt/lists/* \
&& pip install --no-cache-dir --upgrade pip \
&& pip install --no-cache-dir -r $FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE \
&& pip download --no-cache-dir --dest /tmp/dataflow-requirements-cache -r $FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE
ENV PIP_NO_DEPS=True

View File

@ -0,0 +1,63 @@
## Pipeline summary
This demo serves as a simple example of building and launching a Flex Template Dataflow pipeline. The pipeline reads a CSV file as its main input and a JSON schema file as a side input, parses both, and writes the data to the target BigQuery table while applying the schema passed as input.
![Dataflow pipeline overview](../../images/df_demo_pipeline.png "Dataflow pipeline overview")
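Stripped of the Beam plumbing, the core of the parsing step (a minimal sketch of what the `ParseRow` DoFn in `src/csv2bq.py` does, with inline sample data standing in for the CSV and JSON schema inputs read from GCS) looks like this:

```python
# Sketch of the ParseRow logic from src/csv2bq.py, run outside Beam.
# The schema dict mirrors the JSON side input; the sample row is illustrative.
schema = {
    "BigQuery Schema": [
        {"name": "id", "type": "INTEGER"},
        {"name": "name", "type": "STRING"},
    ]
}

def parse_row(row: str, table_fields: dict, delimiter: str = ",") -> dict:
    """Split a CSV row and map each value to its schema field name."""
    values = row.split(delimiter)
    return {
        field["name"]: values[i]
        for i, field in enumerate(table_fields["BigQuery Schema"])
    }

print(parse_row("1,alice", schema))  # {'id': '1', 'name': 'alice'}
```

In the actual pipeline the schema dict is delivered as a Beam side input (`beam.pvalue.AsDict`) so every worker can map rows without re-reading the schema file.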
## Example build run
Below is an example of triggering the Dataflow Flex Template build pipeline defined in `cloudbuild.yaml`. The Terraform output also provides a ready-to-run example, with parameter values filled in from the resources generated in the data platform.
```
GCP_PROJECT="[ORCHESTRATION-PROJECT]"
TEMPLATE_IMAGE="[REGION]-docker.pkg.dev/[ORCHESTRATION-PROJECT]/[REPOSITORY]/csv2bq:latest"
TEMPLATE_PATH="gs://[DATAFLOW-TEMPLATE-BUCKET]/csv2bq.json"
STAGING_PATH="gs://[ORCHESTRATION-STAGING-BUCKET]/build"
LOG_PATH="gs://[ORCHESTRATION-LOGS-BUCKET]/logs"
REGION="[REGION]"
BUILD_SERVICE_ACCOUNT=orc-sa-df-build@[SERVICE_PROJECT_ID].iam.gserviceaccount.com
gcloud builds submit \
--config=cloudbuild.yaml \
--project=$GCP_PROJECT \
--region=$REGION \
--gcs-log-dir=$LOG_PATH \
--gcs-source-staging-dir=$STAGING_PATH \
--substitutions=_TEMPLATE_IMAGE=$TEMPLATE_IMAGE,_TEMPLATE_PATH=$TEMPLATE_PATH,_DOCKER_DIR="." \
--impersonate-service-account=$BUILD_SERVICE_ACCOUNT
```
**Note:** For the scope of the demo, this build is launched manually; in production it would be launched by a configured Cloud Build trigger when changes are merged into the Dataflow template's code branch.
## Example Dataflow pipeline launch in bash (from flex template)
Below is an example of manually launching a Dataflow pipeline from the built template. When launched manually, the pipeline runs via the orchestration service account, the same account the Airflow DAG uses in this demo.
**Note:** In the data platform demo, the launch of this Dataflow pipeline is handled by the airflow operator (DataflowStartFlexTemplateOperator).
```
#!/bin/bash
PROJECT_ID=[LOAD-PROJECT]
REGION=[REGION]
ORCH_SERVICE_ACCOUNT=orchestrator@[SERVICE_PROJECT_ID].iam.gserviceaccount.com
SUBNET=[SUBNET-NAME]
PIPELINE_STAGING_PATH="gs://[LOAD-STAGING-BUCKET]/build"
CSV_FILE=gs://[DROP-ZONE-BUCKET]/customers.csv
JSON_SCHEMA=gs://[ORCHESTRATION-BUCKET]/customers_schema.json
OUTPUT_TABLE=[DESTINATION-PROJ].[DESTINATION-DATASET].customers
TEMPLATE_PATH=gs://[ORCHESTRATION-DF-GCS]/csv2bq.json
gcloud dataflow flex-template run "csv2bq-`date +%Y%m%d-%H%M%S`" \
--template-file-gcs-location $TEMPLATE_PATH \
--parameters temp_location="$PIPELINE_STAGING_PATH/tmp" \
--parameters staging_location="$PIPELINE_STAGING_PATH/stage" \
--parameters csv_file=$CSV_FILE \
--parameters json_schema=$JSON_SCHEMA \
--parameters output_table=$OUTPUT_TABLE \
--region $REGION \
--project $PROJECT_ID \
--subnetwork="regions/$REGION/subnetworks/$SUBNET" \
--service-account-email=$ORCH_SERVICE_ACCOUNT
```

View File

@ -0,0 +1,30 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
steps:
- name: gcr.io/cloud-builders/gcloud
id: "Build docker image"
args: ['builds', 'submit', '--tag', '$_TEMPLATE_IMAGE', '.']
dir: '$_DOCKER_DIR'
waitFor: ['-']
- name: gcr.io/cloud-builders/gcloud
id: "Build template"
args: ['dataflow',
'flex-template',
'build',
'$_TEMPLATE_PATH',
'--image=$_TEMPLATE_IMAGE',
'--sdk-language=PYTHON'
]
waitFor: ['Build docker image']

View File

@ -0,0 +1,79 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import apache_beam as beam
from apache_beam.io import ReadFromText, Read, WriteToBigQuery, BigQueryDisposition
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions
from apache_beam.io.filesystems import FileSystems
import json
import argparse
class ParseRow(beam.DoFn):
"""
Splits a given CSV row by a separator and returns a dict
structure compatible with the BigQuery transform
"""
def process(self, element: str, table_fields: list, delimiter: str):
split_row = element.split(delimiter)
parsed_row = {}
for i, field in enumerate(table_fields['BigQuery Schema']):
parsed_row[field['name']] = split_row[i]
yield parsed_row
def run(argv=None, save_main_session=True):
parser = argparse.ArgumentParser()
parser.add_argument('--csv_file',
type=str,
required=True,
help='Path to the CSV file')
parser.add_argument('--json_schema',
type=str,
required=True,
help='Path to the JSON schema')
parser.add_argument('--output_table',
type=str,
required=True,
help='BigQuery path for the output table')
args, pipeline_args = parser.parse_known_args(argv)
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(
SetupOptions).save_main_session = save_main_session
with beam.Pipeline(options=pipeline_options) as p:
def get_table_schema(table_path, table_schema):
return {'fields': table_schema['BigQuery Schema']}
csv_input = p | 'Read CSV' >> ReadFromText(args.csv_file)
schema_input = p | 'Load Schema' >> beam.Create(
json.loads(FileSystems.open(args.json_schema).read()))
table_fields = beam.pvalue.AsDict(schema_input)
parsed = csv_input | 'Parse and validate rows' >> beam.ParDo(
ParseRow(), table_fields, ',')
parsed | 'Write to BigQuery' >> WriteToBigQuery(
args.output_table,
schema=get_table_schema,
create_disposition=BigQueryDisposition.CREATE_IF_NEEDED,
write_disposition=BigQueryDisposition.WRITE_TRUNCATE,
schema_side_inputs=(table_fields, ))
if __name__ == "__main__":
run()

View File

@ -0,0 +1,461 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------------
# Load The Dependencies
# --------------------------------------------------------------------------------
import datetime
import json
import os
import time
from airflow import models
from airflow.operators import dummy
from airflow.providers.google.cloud.operators.dataflow import DataflowStartFlexTemplateOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator, BigQueryUpsertTableOperator, BigQueryUpdateTableSchemaOperator
from airflow.utils.task_group import TaskGroup
# --------------------------------------------------------------------------------
# Set variables - Needed for the DEMO
# --------------------------------------------------------------------------------
BQ_LOCATION = os.environ.get("BQ_LOCATION")
DATA_CAT_TAGS = json.loads(os.environ.get("DATA_CAT_TAGS"))
DWH_LAND_PRJ = os.environ.get("DWH_LAND_PRJ")
DWH_LAND_BQ_DATASET = os.environ.get("DWH_LAND_BQ_DATASET")
DWH_LAND_GCS = os.environ.get("DWH_LAND_GCS")
DWH_CURATED_PRJ = os.environ.get("DWH_CURATED_PRJ")
DWH_CURATED_BQ_DATASET = os.environ.get("DWH_CURATED_BQ_DATASET")
DWH_CURATED_GCS = os.environ.get("DWH_CURATED_GCS")
DWH_CONFIDENTIAL_PRJ = os.environ.get("DWH_CONFIDENTIAL_PRJ")
DWH_CONFIDENTIAL_BQ_DATASET = os.environ.get("DWH_CONFIDENTIAL_BQ_DATASET")
DWH_CONFIDENTIAL_GCS = os.environ.get("DWH_CONFIDENTIAL_GCS")
DWH_PLG_PRJ = os.environ.get("DWH_PLG_PRJ")
DWH_PLG_BQ_DATASET = os.environ.get("DWH_PLG_BQ_DATASET")
DWH_PLG_GCS = os.environ.get("DWH_PLG_GCS")
GCP_REGION = os.environ.get("GCP_REGION")
DRP_PRJ = os.environ.get("DRP_PRJ")
DRP_BQ = os.environ.get("DRP_BQ")
DRP_GCS = os.environ.get("DRP_GCS")
DRP_PS = os.environ.get("DRP_PS")
LOD_PRJ = os.environ.get("LOD_PRJ")
LOD_GCS_STAGING = os.environ.get("LOD_GCS_STAGING")
LOD_NET_VPC = os.environ.get("LOD_NET_VPC")
LOD_NET_SUBNET = os.environ.get("LOD_NET_SUBNET")
LOD_SA_DF = os.environ.get("LOD_SA_DF")
ORC_PRJ = os.environ.get("ORC_PRJ")
ORC_GCS = os.environ.get("ORC_GCS")
ORC_GCS_TMP_DF = os.environ.get("ORC_GCS_TMP_DF")
TRF_PRJ = os.environ.get("TRF_PRJ")
TRF_GCS_STAGING = os.environ.get("TRF_GCS_STAGING")
TRF_NET_VPC = os.environ.get("TRF_NET_VPC")
TRF_NET_SUBNET = os.environ.get("TRF_NET_SUBNET")
TRF_SA_DF = os.environ.get("TRF_SA_DF")
TRF_SA_BQ = os.environ.get("TRF_SA_BQ")
DF_KMS_KEY = os.environ.get("DF_KMS_KEY", "")
DF_REGION = os.environ.get("GCP_REGION")
DF_ZONE = os.environ.get("GCP_REGION") + "-b"
# --------------------------------------------------------------------------------
# Set default arguments
# --------------------------------------------------------------------------------
# If you are running Airflow in more than one time zone
# see https://airflow.apache.org/docs/apache-airflow/stable/timezone.html
# for best practices
yesterday = datetime.datetime.now() - datetime.timedelta(days=1)
default_args = {
'owner': 'airflow',
'start_date': yesterday,
'depends_on_past': False,
'email': [''],
'email_on_failure': False,
'email_on_retry': False,
'retries': 1,
'retry_delay': datetime.timedelta(minutes=5),
}
dataflow_environment = {
'serviceAccountEmail': LOD_SA_DF,
'workerZone': DF_ZONE,
'stagingLocation': f'{LOD_GCS_STAGING}/staging',
'tempLocation': f'{LOD_GCS_STAGING}/tmp',
'subnetwork': LOD_NET_SUBNET,
'kmsKeyName': DF_KMS_KEY,
'ipConfiguration': 'WORKER_IP_PRIVATE'
}
# --------------------------------------------------------------------------------
# Main DAG
# --------------------------------------------------------------------------------
with models.DAG('data_pipeline_dc_tags_dag_flex',
default_args=default_args,
schedule_interval=None) as dag:
start = dummy.DummyOperator(task_id='start', trigger_rule='all_success')
end = dummy.DummyOperator(task_id='end', trigger_rule='all_success')
# BigQuery tables are created here for demo purposes.
# Consider a dedicated pipeline or tool for a real-life scenario.
with TaskGroup('upsert_table') as upsert_table:
upsert_table_customers = BigQueryUpsertTableOperator(
task_id="upsert_table_customers",
project_id=DWH_LAND_PRJ,
dataset_id=DWH_LAND_BQ_DATASET,
impersonation_chain=[TRF_SA_DF],
table_resource={
"tableReference": {
"tableId": "customers"
},
},
)
upsert_table_purchases = BigQueryUpsertTableOperator(
task_id="upsert_table_purchases",
project_id=DWH_LAND_PRJ,
dataset_id=DWH_LAND_BQ_DATASET,
impersonation_chain=[TRF_SA_BQ],
table_resource={"tableReference": {
"tableId": "purchases"
}},
)
upsert_table_customer_purchase_curated = BigQueryUpsertTableOperator(
task_id="upsert_table_customer_purchase_curated",
project_id=DWH_CURATED_PRJ,
dataset_id=DWH_CURATED_BQ_DATASET,
impersonation_chain=[TRF_SA_BQ],
table_resource={
"tableReference": {
"tableId": "customer_purchase"
}
},
)
upsert_table_customer_purchase_confidential = BigQueryUpsertTableOperator(
task_id="upsert_table_customer_purchase_confidential",
project_id=DWH_CONFIDENTIAL_PRJ,
dataset_id=DWH_CONFIDENTIAL_BQ_DATASET,
impersonation_chain=[TRF_SA_BQ],
table_resource={
"tableReference": {
"tableId": "customer_purchase"
}
},
)
  # BigQuery table schemas are defined here for demo purposes.
  # Consider a dedicated pipeline or tool for a real-life scenario.
with TaskGroup('update_schema_table') as update_schema_table:
update_table_schema_customers = BigQueryUpdateTableSchemaOperator(
task_id="update_table_schema_customers",
project_id=DWH_LAND_PRJ,
dataset_id=DWH_LAND_BQ_DATASET,
table_id="customers",
impersonation_chain=[TRF_SA_BQ],
include_policy_tags=True,
schema_fields_updates=[{
"mode": "REQUIRED",
"name": "id",
"type": "INTEGER",
"description": "ID"
}, {
"mode": "REQUIRED",
"name": "name",
"type": "STRING",
"description": "Name",
"policyTags": {
"names": [DATA_CAT_TAGS.get('2_Private', None)]
}
}, {
"mode": "REQUIRED",
"name": "surname",
"type": "STRING",
"description": "Surname",
"policyTags": {
"names": [DATA_CAT_TAGS.get('2_Private', None)]
}
}, {
"mode": "REQUIRED",
"name": "timestamp",
"type": "TIMESTAMP",
"description": "Timestamp"
}])
update_table_schema_purchases = BigQueryUpdateTableSchemaOperator(
task_id="update_table_schema_purchases",
project_id=DWH_LAND_PRJ,
dataset_id=DWH_LAND_BQ_DATASET,
table_id="purchases",
impersonation_chain=[TRF_SA_BQ],
include_policy_tags=True,
schema_fields_updates=[{
"mode": "REQUIRED",
"name": "id",
"type": "INTEGER",
"description": "ID"
}, {
"mode": "REQUIRED",
"name": "customer_id",
"type": "INTEGER",
"description": "ID"
}, {
"mode": "REQUIRED",
"name": "item",
"type": "STRING",
"description": "Item Name"
}, {
"mode": "REQUIRED",
"name": "price",
"type": "FLOAT",
"description": "Item Price"
}, {
"mode": "REQUIRED",
"name": "timestamp",
"type": "TIMESTAMP",
"description": "Timestamp"
}])
update_table_schema_customer_purchase_curated = BigQueryUpdateTableSchemaOperator(
task_id="update_table_schema_customer_purchase_curated",
project_id=DWH_CURATED_PRJ,
dataset_id=DWH_CURATED_BQ_DATASET,
table_id="customer_purchase",
impersonation_chain=[TRF_SA_BQ],
include_policy_tags=True,
schema_fields_updates=[{
"mode": "REQUIRED",
"name": "customer_id",
"type": "INTEGER",
"description": "ID"
}, {
"mode": "REQUIRED",
"name": "purchase_id",
"type": "INTEGER",
"description": "ID"
}, {
"mode": "REQUIRED",
"name": "name",
"type": "STRING",
"description": "Name",
"policyTags": {
"names": [DATA_CAT_TAGS.get('2_Private', None)]
}
}, {
"mode": "REQUIRED",
"name": "surname",
"type": "STRING",
"description": "Surname",
"policyTags": {
"names": [DATA_CAT_TAGS.get('2_Private', None)]
}
}, {
"mode": "REQUIRED",
"name": "item",
"type": "STRING",
"description": "Item Name"
}, {
"mode": "REQUIRED",
"name": "price",
"type": "FLOAT",
"description": "Item Price"
}, {
"mode": "REQUIRED",
"name": "timestamp",
"type": "TIMESTAMP",
"description": "Timestamp"
}])
update_table_schema_customer_purchase_confidential = BigQueryUpdateTableSchemaOperator(
task_id="update_table_schema_customer_purchase_confidential",
project_id=DWH_CONFIDENTIAL_PRJ,
dataset_id=DWH_CONFIDENTIAL_BQ_DATASET,
table_id="customer_purchase",
impersonation_chain=[TRF_SA_BQ],
include_policy_tags=True,
schema_fields_updates=[{
"mode": "REQUIRED",
"name": "customer_id",
"type": "INTEGER",
"description": "ID"
}, {
"mode": "REQUIRED",
"name": "purchase_id",
"type": "INTEGER",
"description": "ID"
}, {
"mode": "REQUIRED",
"name": "name",
"type": "STRING",
"description": "Name",
"policyTags": {
"names": [DATA_CAT_TAGS.get('2_Private', None)]
}
}, {
"mode": "REQUIRED",
"name": "surname",
"type": "STRING",
"description": "Surname",
"policyTags": {
"names": [DATA_CAT_TAGS.get('2_Private', None)]
}
}, {
"mode": "REQUIRED",
"name": "item",
"type": "STRING",
"description": "Item Name"
}, {
"mode": "REQUIRED",
"name": "price",
"type": "FLOAT",
"description": "Item Price"
}, {
"mode": "REQUIRED",
"name": "timestamp",
"type": "TIMESTAMP",
"description": "Timestamp"
}])
customers_import = DataflowStartFlexTemplateOperator(
task_id='dataflow_customers_import',
project_id=LOD_PRJ,
location=DF_REGION,
body={
'launchParameter': {
'jobName': f'dataflow-customers-import-{round(time.time())}',
'containerSpecGcsPath': f'{ORC_GCS_TMP_DF}/csv2bq.json',
              'environment': dataflow_environment,
'parameters': {
'csv_file':
f'{DRP_GCS}/customers.csv',
'json_schema':
f'{ORC_GCS}/customers_schema.json',
'output_table':
f'{DWH_LAND_PRJ}:{DWH_LAND_BQ_DATASET}.customers',
}
}
})
purchases_import = DataflowStartFlexTemplateOperator(
task_id='dataflow_purchases_import',
project_id=LOD_PRJ,
location=DF_REGION,
body={
'launchParameter': {
'jobName': f'dataflow-purchases-import-{round(time.time())}',
'containerSpecGcsPath': f'{ORC_GCS_TMP_DF}/csv2bq.json',
              'environment': dataflow_environment,
'parameters': {
'csv_file':
f'{DRP_GCS}/purchases.csv',
'json_schema':
f'{ORC_GCS}/purchases_schema.json',
'output_table':
f'{DWH_LAND_PRJ}:{DWH_LAND_BQ_DATASET}.purchases',
}
}
})
join_customer_purchase = BigQueryInsertJobOperator(
task_id='bq_join_customer_purchase',
gcp_conn_id='bigquery_default',
project_id=TRF_PRJ,
location=BQ_LOCATION,
configuration={
'jobType': 'QUERY',
'query': {
'query':
"""SELECT
c.id as customer_id,
p.id as purchase_id,
c.name as name,
c.surname as surname,
p.item as item,
p.price as price,
p.timestamp as timestamp
FROM `{dwh_0_prj}.{dwh_0_dataset}.customers` c
JOIN `{dwh_0_prj}.{dwh_0_dataset}.purchases` p ON c.id = p.customer_id
""".format(
dwh_0_prj=DWH_LAND_PRJ,
dwh_0_dataset=DWH_LAND_BQ_DATASET,
),
'destinationTable': {
'projectId': DWH_CURATED_PRJ,
'datasetId': DWH_CURATED_BQ_DATASET,
'tableId': 'customer_purchase'
},
'writeDisposition':
'WRITE_TRUNCATE',
"useLegacySql":
False
}
},
impersonation_chain=[TRF_SA_BQ])
confidential_customer_purchase = BigQueryInsertJobOperator(
task_id='bq_confidential_customer_purchase',
gcp_conn_id='bigquery_default',
project_id=TRF_PRJ,
location=BQ_LOCATION,
configuration={
'jobType': 'QUERY',
'query': {
'query':
"""SELECT
customer_id,
purchase_id,
name,
surname,
item,
price,
timestamp
FROM `{dwh_cur_prj}.{dwh_cur_dataset}.customer_purchase`
""".format(
dwh_cur_prj=DWH_CURATED_PRJ,
dwh_cur_dataset=DWH_CURATED_BQ_DATASET,
),
'destinationTable': {
'projectId': DWH_CONFIDENTIAL_PRJ,
'datasetId': DWH_CONFIDENTIAL_BQ_DATASET,
'tableId': 'customer_purchase'
},
'writeDisposition':
'WRITE_TRUNCATE',
"useLegacySql":
False
}
},
impersonation_chain=[TRF_SA_BQ])
start >> upsert_table >> update_schema_table >> [
customers_import, purchases_import
] >> join_customer_purchase >> confidential_customer_purchase >> end


@ -0,0 +1,225 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------------
# Load The Dependencies
# --------------------------------------------------------------------------------
import datetime
import json
import os
import time
from airflow import models
from airflow.providers.google.cloud.operators.dataflow import DataflowStartFlexTemplateOperator
from airflow.operators import dummy
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
# --------------------------------------------------------------------------------
# Set variables - Needed for the DEMO
# --------------------------------------------------------------------------------
BQ_LOCATION = os.environ.get("BQ_LOCATION")
DATA_CAT_TAGS = json.loads(os.environ.get("DATA_CAT_TAGS", "{}"))
DWH_LAND_PRJ = os.environ.get("DWH_LAND_PRJ")
DWH_LAND_BQ_DATASET = os.environ.get("DWH_LAND_BQ_DATASET")
DWH_LAND_GCS = os.environ.get("DWH_LAND_GCS")
DWH_CURATED_PRJ = os.environ.get("DWH_CURATED_PRJ")
DWH_CURATED_BQ_DATASET = os.environ.get("DWH_CURATED_BQ_DATASET")
DWH_CURATED_GCS = os.environ.get("DWH_CURATED_GCS")
DWH_CONFIDENTIAL_PRJ = os.environ.get("DWH_CONFIDENTIAL_PRJ")
DWH_CONFIDENTIAL_BQ_DATASET = os.environ.get("DWH_CONFIDENTIAL_BQ_DATASET")
DWH_CONFIDENTIAL_GCS = os.environ.get("DWH_CONFIDENTIAL_GCS")
DWH_PLG_PRJ = os.environ.get("DWH_PLG_PRJ")
DWH_PLG_BQ_DATASET = os.environ.get("DWH_PLG_BQ_DATASET")
DWH_PLG_GCS = os.environ.get("DWH_PLG_GCS")
GCP_REGION = os.environ.get("GCP_REGION")
DRP_PRJ = os.environ.get("DRP_PRJ")
DRP_BQ = os.environ.get("DRP_BQ")
DRP_GCS = os.environ.get("DRP_GCS")
DRP_PS = os.environ.get("DRP_PS")
LOD_PRJ = os.environ.get("LOD_PRJ")
LOD_GCS_STAGING = os.environ.get("LOD_GCS_STAGING")
LOD_NET_VPC = os.environ.get("LOD_NET_VPC")
LOD_NET_SUBNET = os.environ.get("LOD_NET_SUBNET")
LOD_SA_DF = os.environ.get("LOD_SA_DF")
ORC_PRJ = os.environ.get("ORC_PRJ")
ORC_GCS = os.environ.get("ORC_GCS")
ORC_GCS_TMP_DF = os.environ.get("ORC_GCS_TMP_DF")
TRF_PRJ = os.environ.get("TRF_PRJ")
TRF_GCS_STAGING = os.environ.get("TRF_GCS_STAGING")
TRF_NET_VPC = os.environ.get("TRF_NET_VPC")
TRF_NET_SUBNET = os.environ.get("TRF_NET_SUBNET")
TRF_SA_DF = os.environ.get("TRF_SA_DF")
TRF_SA_BQ = os.environ.get("TRF_SA_BQ")
DF_KMS_KEY = os.environ.get("DF_KMS_KEY", "")
DF_REGION = os.environ.get("GCP_REGION")
DF_ZONE = os.environ.get("GCP_REGION") + "-b"
# --------------------------------------------------------------------------------
# Set default arguments
# --------------------------------------------------------------------------------
# If you are running Airflow in more than one time zone
# see https://airflow.apache.org/docs/apache-airflow/stable/timezone.html
# for best practices
yesterday = datetime.datetime.now() - datetime.timedelta(days=1)
default_args = {
'owner': 'airflow',
'start_date': yesterday,
'depends_on_past': False,
'email': [''],
'email_on_failure': False,
'email_on_retry': False,
'retries': 1,
'retry_delay': datetime.timedelta(minutes=5),
}
dataflow_environment = {
'serviceAccountEmail': LOD_SA_DF,
'workerZone': DF_ZONE,
'stagingLocation': f'{LOD_GCS_STAGING}/staging',
'tempLocation': f'{LOD_GCS_STAGING}/tmp',
'subnetwork': LOD_NET_SUBNET,
'kmsKeyName': DF_KMS_KEY,
'ipConfiguration': 'WORKER_IP_PRIVATE'
}
# --------------------------------------------------------------------------------
# Main DAG
# --------------------------------------------------------------------------------
with models.DAG('data_pipeline_dag_flex',
default_args=default_args,
schedule_interval=None) as dag:
start = dummy.DummyOperator(task_id='start', trigger_rule='all_success')
end = dummy.DummyOperator(task_id='end', trigger_rule='all_success')
  # BigQuery tables are created automatically for demo purposes.
  # Consider a dedicated pipeline or tool for a real-life scenario.
customers_import = DataflowStartFlexTemplateOperator(
task_id='dataflow_customers_import',
project_id=LOD_PRJ,
location=DF_REGION,
body={
'launchParameter': {
'jobName': f'dataflow-customers-import-{round(time.time())}',
'containerSpecGcsPath': f'{ORC_GCS_TMP_DF}/csv2bq.json',
'environment': dataflow_environment,
'parameters': {
'csv_file':
f'{DRP_GCS}/customers.csv',
'json_schema':
f'{ORC_GCS}/customers_schema.json',
'output_table':
f'{DWH_LAND_PRJ}:{DWH_LAND_BQ_DATASET}.customers',
}
}
})
purchases_import = DataflowStartFlexTemplateOperator(
task_id='dataflow_purchases_import',
project_id=LOD_PRJ,
location=DF_REGION,
body={
'launchParameter': {
'jobName': f'dataflow-purchases-import-{round(time.time())}',
'containerSpecGcsPath': f'{ORC_GCS_TMP_DF}/csv2bq.json',
'environment': dataflow_environment,
'parameters': {
'csv_file':
f'{DRP_GCS}/purchases.csv',
'json_schema':
f'{ORC_GCS}/purchases_schema.json',
'output_table':
f'{DWH_LAND_PRJ}:{DWH_LAND_BQ_DATASET}.purchases',
}
}
})
join_customer_purchase = BigQueryInsertJobOperator(
task_id='bq_join_customer_purchase',
gcp_conn_id='bigquery_default',
project_id=TRF_PRJ,
location=BQ_LOCATION,
configuration={
'jobType': 'QUERY',
'query': {
'query':
"""SELECT
c.id as customer_id,
p.id as purchase_id,
p.item as item,
p.price as price,
p.timestamp as timestamp
FROM `{dwh_0_prj}.{dwh_0_dataset}.customers` c
JOIN `{dwh_0_prj}.{dwh_0_dataset}.purchases` p ON c.id = p.customer_id
""".format(
dwh_0_prj=DWH_LAND_PRJ,
dwh_0_dataset=DWH_LAND_BQ_DATASET,
),
'destinationTable': {
'projectId': DWH_CURATED_PRJ,
'datasetId': DWH_CURATED_BQ_DATASET,
'tableId': 'customer_purchase'
},
'writeDisposition':
'WRITE_TRUNCATE',
"useLegacySql":
False
}
},
impersonation_chain=[TRF_SA_BQ])
confidential_customer_purchase = BigQueryInsertJobOperator(
task_id='bq_confidential_customer_purchase',
gcp_conn_id='bigquery_default',
project_id=TRF_PRJ,
location=BQ_LOCATION,
configuration={
'jobType': 'QUERY',
'query': {
'query':
"""SELECT
c.id as customer_id,
p.id as purchase_id,
c.name as name,
c.surname as surname,
p.item as item,
p.price as price,
p.timestamp as timestamp
FROM `{dwh_0_prj}.{dwh_0_dataset}.customers` c
JOIN `{dwh_0_prj}.{dwh_0_dataset}.purchases` p ON c.id = p.customer_id
""".format(
dwh_0_prj=DWH_LAND_PRJ,
dwh_0_dataset=DWH_LAND_BQ_DATASET,
),
'destinationTable': {
'projectId': DWH_CONFIDENTIAL_PRJ,
'datasetId': DWH_CONFIDENTIAL_BQ_DATASET,
'tableId': 'customer_purchase'
},
'writeDisposition':
'WRITE_TRUNCATE',
"useLegacySql":
False
}
},
impersonation_chain=[TRF_SA_BQ])
start >> [
customers_import, purchases_import
] >> join_customer_purchase >> confidential_customer_purchase >> end

Binary file not shown.



@ -13,7 +13,6 @@
# limitations under the License.
# tfdoc:file:description Output variables.
output "bigquery-datasets" {
description = "BigQuery datasets."
value = {
@ -30,13 +29,32 @@ output "demo_commands" {
01 = "gsutil -i ${module.drop-sa-cs-0.email} cp demo/data/*.csv gs://${module.drop-cs-0.name}"
02 = try("gsutil -i ${module.orch-sa-cmp-0.email} cp demo/data/*.j* gs://${module.orch-cs-0.name}", "Composer not deployed.")
03 = try("gsutil -i ${module.orch-sa-cmp-0.email} cp demo/*.py ${google_composer_environment.orch-cmp-0[0].config[0].dag_gcs_prefix}/", "Composer not deployed")
04 = try("Open ${google_composer_environment.orch-cmp-0[0].config.0.airflow_uri} and run uploaded DAG.", "Composer not deployed")
05 = <<EOT
04 = <<EOT
gcloud builds submit \
--config=./demo/dataflow-csv2bq/cloudbuild.yaml \
--project=${module.orch-project.project_id} \
--region="${var.region}" \
--gcs-log-dir=gs://${module.orch-cs-build-staging.name}/log \
--gcs-source-staging-dir=gs://${module.orch-cs-build-staging.name}/staging \
--impersonate-service-account=${module.orch-sa-df-build.email} \
--substitutions=_TEMPLATE_IMAGE="${local.orch_docker_path}/csv2bq:latest",_TEMPLATE_PATH="gs://${module.orch-cs-df-template.name}/csv2bq.json",_DOCKER_DIR="./demo/dataflow-csv2bq"
EOT
05 = try("Open ${google_composer_environment.orch-cmp-0[0].config.0.airflow_uri} and run uploaded DAG.", "Composer not deployed")
06 = <<EOT
    bq query --project_id=${module.dwh-conf-project.project_id} --use_legacy_sql=false 'SELECT * EXCEPT (name, surname) FROM `${module.dwh-conf-project.project_id}.${module.dwh-conf-bq-0.dataset_id}.customer_purchase` LIMIT 1000'
EOT
}
}
output "df_template" {
description = "Dataflow template image and template details."
value = {
df_template_img = "${local.orch_docker_path}/[image-name]:[version]"
df_template_cs = "gs://${module.orch-cs-df-template.name}"
build_staging_cs = "gs://${module.orch-cs-build-staging.name}"
}
}
output "gcs-buckets" {
description = "GCS buckets."
value = {
@ -98,3 +116,4 @@ output "vpc_subnet" {
transformation = local.transf_subnet
}
}


@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@ -0,0 +1,180 @@
# Shielded folder
This blueprint implements an opinionated folder configuration according to GCP best practices. Configurations are set at the folder level so that the workloads hosted within it inherit the folder's constraints.
In this blueprint, a folder will be created with the following features configured:
- Organizational policies
- Hierarchical firewall rules
- Cloud KMS
- VPC-SC
Within the folder, the following projects will be created:
- `audit-logs`, where the audit log sinks will be created
- `sec-core`, where Cloud KMS and Cloud Secret Manager will be configured
The following diagram is a high-level reference of the resources created and managed here:
![Shielded architecture overview](./images/overview_diagram.png "Shielded architecture overview")
## Design overview and choices
Despite its simplicity, this blueprint implements the basics of a design that we've seen work well for various customers.
The approach adapts to different high-level requirements:
- IAM roles inheritance
- Organizational policies
- Audit log sink
- VPC Service Control
- Cloud KMS
## Project structure
The Shielded Folder blueprint is designed to rely on several projects:
- `audit-log`: to host audit logging buckets and audit log sinks exporting to GCS, BigQuery or Pub/Sub
- `sec-core`: to host security-related resources such as Cloud KMS and Cloud Secrets Manager
This separation into projects allows adhering to the least-privilege principle by using project-level roles.
## User groups
User groups provide a stable frame of reference that allows decoupling the final set of permissions from the stage where entities and resources are created, and their IAM bindings are defined.
We use groups to control access to resources:
- `data-engineers`: They handle and run workloads in the `workload` subfolder. They have editor access to all resources in the `workload` folder in order to troubleshoot possible issues within the workload. This team can also impersonate any service account in the workload folder.
- `data-security`: They handle security configurations for the shielded folder. They have owner access to the `audit-log` and `sec-core` projects.
## Encryption
The blueprint supports configuring an instance of Cloud KMS to handle encryption of resources. Encryption is disabled by default; you can enable it by configuring the `enable_features.encryption` variable.
The script will create keys to encrypt log sink buckets/datasets/topics in the specified regions. By configuring the `kms_keys` variable, you can create additional KMS keys needed by your workload.
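As a sketch, enabling encryption and adding a workload key could look like the following tfvars fragment (the `compute` key name and location are illustrative; the attributes follow the `kms_keys` variable schema described below):

```tfvars
enable_features = {
  encryption = true
}

kms_keys = {
  # illustrative key name for a workload-managed CMEK key
  compute = {
    locations       = ["europe-west1"]
    rotation_period = "7776000s"
  }
}
```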
## Customizations
### Organization policy
You can configure the organization policies enforced on the folder by editing the YAML files in the [org-policies](./data/org-policies/) folder. An opinionated list of policies that we suggest enforcing is provided.
Some additional Organization policy constraints you may want to evaluate adding:
- `constraints/gcp.resourceLocations`: to define the locations where location-based GCP resources can be created.
- `constraints/gcp.restrictCmekCryptoKeyProjects`: to define which projects may be used to supply Customer-Managed Encryption Keys (CMEK) when creating resources.
### VPC Service Control
VPC Service Control is configured with a perimeter containing all projects within the folder. Additional projects you may add to the folder won't be automatically added to the perimeter; a new `terraform apply` is needed to include them.
The VPC SC configuration is set to dry-run mode, but switching to enforced mode is a simple operation involving modifying a few lines of code highlighted by ad-hoc comments.
Access level rules are not defined. Before moving the configuration to enforced mode, configure access policies to continue accessing resources from outside of the perimeter.
An access level based on the network range you are using to reach the console (e.g. Proxy IP, Internet connection, ...) is suggested. Example:
```tfvars
vpc_sc_access_levels = {
  users = {
    conditions = [
      { ip_subnetworks = ["101.101.101.0/24"] }
    ]
  }
}
```
Alternatively, you can configure an access level based on the identity that needs to reach resources from outside the perimeter.
```tfvars
vpc_sc_access_levels = {
  users = {
    conditions = [
      { members = ["user:user1@example.com"] }
    ]
  }
}
```
## How to run this script
To deploy this blueprint in your GCP organization, you will need:
- a folder or organization where resources will be created
- a billing account that will be associated with the new projects
The Shielded Folder blueprint is meant to be executed by a Service Account (or a regular user) having this minimal set of permissions:
- Billing account
- `roles/billing.user`
- Folder level
- `roles/resourcemanager.folderAdmin`
- `roles/resourcemanager.projectCreator`
The Shielded Folder blueprint assumes the [groups described](#user-groups) above are created in your GCP organization.
### Variable configuration
There are several sets of variables you will need to fill in:
```tfvars
access_policy_config = {
access_policy_create = {
parent = "organizations/1234567890123"
title = "ShieldedMVP"
}
}
folder_config = {
folder_create = {
display_name = "ShieldedMVP"
parent = "organizations/1234567890123"
}
}
organization = {
domain = "example.com"
id = "1122334455"
}
prefix = "prefix"
project_config = {
billing_account_id = "123456-123456-123456"
}
```
### Deploying the blueprint
Once the configuration is complete, deploy the blueprint by running
```bash
terraform init
terraform apply
```
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [access_policy_config](variables.tf#L17) | Provide 'access_policy_create' values if a folder scoped Access Policy creation is needed, uses existing 'policy_name' otherwise. Parent is in 'organizations/123456' format. Policy will be created scoped to the folder. | <code title="object&#40;&#123;&#10; policy_name &#61; optional&#40;string, null&#41;&#10; access_policy_create &#61; optional&#40;object&#40;&#123;&#10; parent &#61; string&#10; title &#61; string&#10; &#125;&#41;, null&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | ✓ | |
| [folder_config](variables.tf#L49) | Provide 'folder_create' values if folder creation is needed, uses existing 'folder_id' otherwise. Parent is in 'folders/nnn' or 'organizations/nnn' format. | <code title="object&#40;&#123;&#10; folder_id &#61; optional&#40;string, null&#41;&#10; folder_create &#61; optional&#40;object&#40;&#123;&#10; display_name &#61; string&#10; parent &#61; string&#10; &#125;&#41;, null&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | ✓ | |
| [organization](variables.tf#L128) | Organization details. | <code title="object&#40;&#123;&#10; domain &#61; string&#10; id &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | ✓ | |
| [prefix](variables.tf#L136) | Prefix used for resources that need unique names. | <code>string</code> | ✓ | |
| [project_config](variables.tf#L141) | Provide 'billing_account_id' value if project creation is needed, uses existing 'project_ids' if null. Parent is in 'folders/nnn' or 'organizations/nnn' format. | <code title="object&#40;&#123;&#10; billing_account_id &#61; optional&#40;string, null&#41;&#10; project_ids &#61; optional&#40;object&#40;&#123;&#10; sec-core &#61; string&#10; audit-logs &#61; string&#10; &#125;&#41;, &#123;&#10; sec-core &#61; &#34;sec-core&#34;&#10; audit-logs &#61; &#34;audit-logs&#34;&#10; &#125;&#10; &#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | ✓ | |
| [data_dir](variables.tf#L29) | Relative path for the folder storing configuration data. | <code>string</code> | | <code>&#34;data&#34;</code> |
| [enable_features](variables.tf#L35) | Flag to enable features on the solution. | <code title="object&#40;&#123;&#10; encryption &#61; optional&#40;bool, false&#41;&#10; log_sink &#61; optional&#40;bool, true&#41;&#10; vpc_sc &#61; optional&#40;bool, true&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code title="&#123;&#10; encryption &#61; false&#10; log_sink &#61; true&#10; vpc_sc &#61; true&#10;&#125;">&#123;&#8230;&#125;</code> |
| [groups](variables.tf#L65) | User groups. | <code title="object&#40;&#123;&#10; workload-engineers &#61; optional&#40;string, &#34;gcp-data-engineers&#34;&#41;&#10; workload-security &#61; optional&#40;string, &#34;gcp-data-security&#34;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>&#123;&#125;</code> |
| [kms_keys](variables.tf#L75) | KMS keys to create, keyed by name. | <code title="map&#40;object&#40;&#123;&#10; iam &#61; optional&#40;map&#40;list&#40;string&#41;&#41;, &#123;&#125;&#41;&#10; labels &#61; optional&#40;map&#40;string&#41;, &#123;&#125;&#41;&#10; locations &#61; optional&#40;list&#40;string&#41;, &#91;&#34;global&#34;, &#34;europe&#34;, &#34;europe-west1&#34;&#93;&#41;&#10; rotation_period &#61; optional&#40;string, &#34;7776000s&#34;&#41;&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [log_locations](variables.tf#L86) | Optional locations for GCS, BigQuery, and logging buckets created here. | <code title="object&#40;&#123;&#10; bq &#61; optional&#40;string, &#34;europe&#34;&#41;&#10; storage &#61; optional&#40;string, &#34;europe&#34;&#41;&#10; logging &#61; optional&#40;string, &#34;global&#34;&#41;&#10; pubsub &#61; optional&#40;string, &#34;global&#34;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code title="&#123;&#10; bq &#61; &#34;europe&#34;&#10; storage &#61; &#34;europe&#34;&#10; logging &#61; &#34;global&#34;&#10; pubsub &#61; null&#10;&#125;">&#123;&#8230;&#125;</code> |
| [log_sinks](variables.tf#L103) | Org-level log sinks, in name => {type, filter} format. | <code title="map&#40;object&#40;&#123;&#10; filter &#61; string&#10; type &#61; string&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code title="&#123;&#10; audit-logs &#61; &#123;&#10; filter &#61; &#34;logName:&#92;&#34;&#47;logs&#47;cloudaudit.googleapis.com&#37;2Factivity&#92;&#34; OR logName:&#92;&#34;&#47;logs&#47;cloudaudit.googleapis.com&#37;2Fsystem_event&#92;&#34;&#34;&#10; type &#61; &#34;bigquery&#34;&#10; &#125;&#10; vpc-sc &#61; &#123;&#10; filter &#61; &#34;protoPayload.metadata.&#64;type&#61;&#92;&#34;type.googleapis.com&#47;google.cloud.audit.VpcServiceControlAuditMetadata&#92;&#34;&#34;&#10; type &#61; &#34;bigquery&#34;&#10; &#125;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [vpc_sc_access_levels](variables.tf#L161) | VPC SC access level definitions. | <code title="map&#40;object&#40;&#123;&#10; combining_function &#61; optional&#40;string&#41;&#10; conditions &#61; optional&#40;list&#40;object&#40;&#123;&#10; device_policy &#61; optional&#40;object&#40;&#123;&#10; allowed_device_management_levels &#61; optional&#40;list&#40;string&#41;&#41;&#10; allowed_encryption_statuses &#61; optional&#40;list&#40;string&#41;&#41;&#10; require_admin_approval &#61; bool&#10; require_corp_owned &#61; bool&#10; require_screen_lock &#61; optional&#40;bool&#41;&#10; os_constraints &#61; optional&#40;list&#40;object&#40;&#123;&#10; os_type &#61; string&#10; minimum_version &#61; optional&#40;string&#41;&#10; require_verified_chrome_os &#61; optional&#40;bool&#41;&#10; &#125;&#41;&#41;&#41;&#10; &#125;&#41;&#41;&#10; ip_subnetworks &#61; optional&#40;list&#40;string&#41;, &#91;&#93;&#41;&#10; members &#61; optional&#40;list&#40;string&#41;, &#91;&#93;&#41;&#10; negate &#61; optional&#40;bool&#41;&#10; regions &#61; optional&#40;list&#40;string&#41;, &#91;&#93;&#41;&#10; required_access_levels &#61; optional&#40;list&#40;string&#41;, &#91;&#93;&#41;&#10; &#125;&#41;&#41;, &#91;&#93;&#41;&#10; description &#61; optional&#40;string&#41;&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [vpc_sc_egress_policies](variables.tf#L190) | VPC SC egress policy definitions. | <code title="map&#40;object&#40;&#123;&#10; from &#61; object&#40;&#123;&#10; identity_type &#61; optional&#40;string, &#34;ANY_IDENTITY&#34;&#41;&#10; identities &#61; optional&#40;list&#40;string&#41;&#41;&#10; &#125;&#41;&#10; to &#61; object&#40;&#123;&#10; operations &#61; optional&#40;list&#40;object&#40;&#123;&#10; method_selectors &#61; optional&#40;list&#40;string&#41;&#41;&#10; service_name &#61; string&#10; &#125;&#41;&#41;, &#91;&#93;&#41;&#10; resources &#61; optional&#40;list&#40;string&#41;&#41;&#10; resource_type_external &#61; optional&#40;bool, false&#41;&#10; &#125;&#41;&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [vpc_sc_ingress_policies](variables.tf#L210) | VPC SC ingress policy definitions. | <code title="map&#40;object&#40;&#123;&#10; from &#61; object&#40;&#123;&#10; access_levels &#61; optional&#40;list&#40;string&#41;, &#91;&#93;&#41;&#10; identity_type &#61; optional&#40;string&#41;&#10; identities &#61; optional&#40;list&#40;string&#41;&#41;&#10; resources &#61; optional&#40;list&#40;string&#41;, &#91;&#93;&#41;&#10; &#125;&#41;&#10; to &#61; object&#40;&#123;&#10; operations &#61; optional&#40;list&#40;object&#40;&#123;&#10; method_selectors &#61; optional&#40;list&#40;string&#41;&#41;&#10; service_name &#61; string&#10; &#125;&#41;&#41;, &#91;&#93;&#41;&#10; resources &#61; optional&#40;list&#40;string&#41;&#41;&#10; &#125;&#41;&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [folders](outputs.tf#L15) | Folders id. | |
| [folders_sink_writer_identities](outputs.tf#L23) | Folders sink writer identities. | |
<!-- END TFDOC -->


@ -0,0 +1,15 @@
# skip boilerplate check
healthchecks:
- 35.191.0.0/16
- 130.211.0.0/22
- 209.85.152.0/22
- 209.85.204.0/22
rfc1918:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
onprem_probes:
- 10.255.255.254/32


@ -0,0 +1,50 @@
# skip boilerplate check
allow-admins:
description: Access from the admin subnet to all subnets
direction: INGRESS
action: allow
priority: 1000
ranges:
- $rfc1918
ports:
all: []
target_resources: null
enable_logging: false
allow-healthchecks:
description: Enable HTTP and HTTPS healthchecks
direction: INGRESS
action: allow
priority: 1001
ranges:
- $healthchecks
ports:
tcp: ["80", "443"]
target_resources: null
enable_logging: false
allow-ssh-from-iap:
description: Enable SSH from IAP
direction: INGRESS
action: allow
priority: 1002
ranges:
- 35.235.240.0/20
ports:
tcp: ["22"]
target_resources: null
enable_logging: false
allow-icmp:
description: Enable ICMP
direction: INGRESS
action: allow
priority: 1003
ranges:
- 0.0.0.0/0
ports:
icmp: []
target_resources: null
enable_logging: false


@ -0,0 +1,119 @@
# skip boilerplate check
- accessapproval.googleapis.com
- adsdatahub.googleapis.com
- aiplatform.googleapis.com
- alloydb.googleapis.com
- alpha-documentai.googleapis.com
- analyticshub.googleapis.com
- apigee.googleapis.com
- apigeeconnect.googleapis.com
- artifactregistry.googleapis.com
- assuredworkloads.googleapis.com
- automl.googleapis.com
- baremetalsolution.googleapis.com
- batch.googleapis.com
- beyondcorp.googleapis.com
- bigquery.googleapis.com
- bigquerydatapolicy.googleapis.com
- bigquerydatatransfer.googleapis.com
- bigquerymigration.googleapis.com
- bigqueryreservation.googleapis.com
- bigtable.googleapis.com
- binaryauthorization.googleapis.com
- cloudasset.googleapis.com
- cloudbuild.googleapis.com
- clouddebugger.googleapis.com
- clouderrorreporting.googleapis.com
- cloudfunctions.googleapis.com
- cloudkms.googleapis.com
- cloudprofiler.googleapis.com
- cloudresourcemanager.googleapis.com
- cloudsearch.googleapis.com
- cloudtrace.googleapis.com
- composer.googleapis.com
- compute.googleapis.com
- connectgateway.googleapis.com
- contactcenterinsights.googleapis.com
- container.googleapis.com
- containeranalysis.googleapis.com
- containerfilesystem.googleapis.com
- containerregistry.googleapis.com
- containerthreatdetection.googleapis.com
- contentwarehouse.googleapis.com
- datacatalog.googleapis.com
- dataflow.googleapis.com
- datafusion.googleapis.com
- datalineage.googleapis.com
- datamigration.googleapis.com
- datapipelines.googleapis.com
- dataplex.googleapis.com
- dataproc.googleapis.com
- datastream.googleapis.com
- dialogflow.googleapis.com
- dlp.googleapis.com
- dns.googleapis.com
- documentai.googleapis.com
- domains.googleapis.com
- essentialcontacts.googleapis.com
- eventarc.googleapis.com
- file.googleapis.com
- firebaseappcheck.googleapis.com
- firebaserules.googleapis.com
- firestore.googleapis.com
- gameservices.googleapis.com
- gkebackup.googleapis.com
- gkeconnect.googleapis.com
- gkehub.googleapis.com
- gkemulticloud.googleapis.com
- healthcare.googleapis.com
- iam.googleapis.com
- iamcredentials.googleapis.com
- iaptunnel.googleapis.com
- ids.googleapis.com
- integrations.googleapis.com
- language.googleapis.com
- lifesciences.googleapis.com
- logging.googleapis.com
- managedidentities.googleapis.com
- memcache.googleapis.com
- meshca.googleapis.com
- metastore.googleapis.com
- ml.googleapis.com
- monitoring.googleapis.com
- networkconnectivity.googleapis.com
- networkmanagement.googleapis.com
- networksecurity.googleapis.com
- networkservices.googleapis.com
- notebooks.googleapis.com
- opsconfigmonitoring.googleapis.com
- osconfig.googleapis.com
- oslogin.googleapis.com
- policytroubleshooter.googleapis.com
- privateca.googleapis.com
- pubsub.googleapis.com
- pubsublite.googleapis.com
- recaptchaenterprise.googleapis.com
- recommender.googleapis.com
- redis.googleapis.com
- retail.googleapis.com
- run.googleapis.com
- secretmanager.googleapis.com
- servicecontrol.googleapis.com
- servicedirectory.googleapis.com
- spanner.googleapis.com
- speakerid.googleapis.com
- speech.googleapis.com
- sqladmin.googleapis.com
- storage.googleapis.com
- storagetransfer.googleapis.com
- texttospeech.googleapis.com
- tpu.googleapis.com
- trafficdirector.googleapis.com
- transcoder.googleapis.com
- translate.googleapis.com
- videointelligence.googleapis.com
- vision.googleapis.com
- visionai.googleapis.com
- vpcaccess.googleapis.com
- workstations.googleapis.com


@ -0,0 +1,119 @@
# skip boilerplate check
- accessapproval.googleapis.com
- adsdatahub.googleapis.com
- aiplatform.googleapis.com
- alloydb.googleapis.com
- alpha-documentai.googleapis.com
- analyticshub.googleapis.com
- apigee.googleapis.com
- apigeeconnect.googleapis.com
- artifactregistry.googleapis.com
- assuredworkloads.googleapis.com
- automl.googleapis.com
- baremetalsolution.googleapis.com
- batch.googleapis.com
- beyondcorp.googleapis.com
- bigquery.googleapis.com
- bigquerydatapolicy.googleapis.com
- bigquerydatatransfer.googleapis.com
- bigquerymigration.googleapis.com
- bigqueryreservation.googleapis.com
- bigtable.googleapis.com
- binaryauthorization.googleapis.com
- cloudasset.googleapis.com
- cloudbuild.googleapis.com
- clouddebugger.googleapis.com
- clouderrorreporting.googleapis.com
- cloudfunctions.googleapis.com
- cloudkms.googleapis.com
- cloudprofiler.googleapis.com
- cloudresourcemanager.googleapis.com
- cloudsearch.googleapis.com
- cloudtrace.googleapis.com
- composer.googleapis.com
- compute.googleapis.com
- connectgateway.googleapis.com
- contactcenterinsights.googleapis.com
- container.googleapis.com
- containeranalysis.googleapis.com
- containerfilesystem.googleapis.com
- containerregistry.googleapis.com
- containerthreatdetection.googleapis.com
- contentwarehouse.googleapis.com
- datacatalog.googleapis.com
- dataflow.googleapis.com
- datafusion.googleapis.com
- datalineage.googleapis.com
- datamigration.googleapis.com
- datapipelines.googleapis.com
- dataplex.googleapis.com
- dataproc.googleapis.com
- datastream.googleapis.com
- dialogflow.googleapis.com
- dlp.googleapis.com
- dns.googleapis.com
- documentai.googleapis.com
- domains.googleapis.com
- essentialcontacts.googleapis.com
- eventarc.googleapis.com
- file.googleapis.com
- firebaseappcheck.googleapis.com
- firebaserules.googleapis.com
- firestore.googleapis.com
- gameservices.googleapis.com
- gkebackup.googleapis.com
- gkeconnect.googleapis.com
- gkehub.googleapis.com
- gkemulticloud.googleapis.com
- healthcare.googleapis.com
- iam.googleapis.com
- iamcredentials.googleapis.com
- iaptunnel.googleapis.com
- ids.googleapis.com
- integrations.googleapis.com
- language.googleapis.com
- lifesciences.googleapis.com
- logging.googleapis.com
- managedidentities.googleapis.com
- memcache.googleapis.com
- meshca.googleapis.com
- metastore.googleapis.com
- ml.googleapis.com
- monitoring.googleapis.com
- networkconnectivity.googleapis.com
- networkmanagement.googleapis.com
- networksecurity.googleapis.com
- networkservices.googleapis.com
- notebooks.googleapis.com
- opsconfigmonitoring.googleapis.com
- osconfig.googleapis.com
- oslogin.googleapis.com
- policytroubleshooter.googleapis.com
- privateca.googleapis.com
- pubsub.googleapis.com
- pubsublite.googleapis.com
- recaptchaenterprise.googleapis.com
- recommender.googleapis.com
- redis.googleapis.com
- retail.googleapis.com
- run.googleapis.com
- secretmanager.googleapis.com
- servicecontrol.googleapis.com
- servicedirectory.googleapis.com
- spanner.googleapis.com
- speakerid.googleapis.com
- speech.googleapis.com
- sqladmin.googleapis.com
- storage.googleapis.com
- storagetransfer.googleapis.com
- texttospeech.googleapis.com
- tpu.googleapis.com
- trafficdirector.googleapis.com
- transcoder.googleapis.com
- translate.googleapis.com
- videointelligence.googleapis.com
- vision.googleapis.com
- visionai.googleapis.com
- vpcaccess.googleapis.com
- workstations.googleapis.com

Binary file not shown.



@ -0,0 +1,102 @@
/**
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
# tfdoc:file:description Security project, Cloud KMS and Secret Manager resources.
locals {
kms_locations = distinct(flatten([
for k, v in var.kms_keys : v.locations
]))
kms_locations_keys = {
for loc in local.kms_locations : loc => {
for k, v in var.kms_keys : k => v if contains(v.locations, loc)
}
}
kms_log_locations = distinct(flatten([
for k, v in local.kms_log_sink_keys : compact(v.locations)
]))
# Log sink keys
kms_log_sink_keys = {
"storage" = {
labels = {}
locations = [var.log_locations.storage]
rotation_period = "7776000s"
}
"bq" = {
labels = {}
locations = [var.log_locations.bq]
rotation_period = "7776000s"
}
"pubsub" = {
labels = {}
locations = [var.log_locations.pubsub]
rotation_period = "7776000s"
}
}
kms_log_locations_keys = {
for loc in local.kms_log_locations : loc => {
for k, v in local.kms_log_sink_keys : k => v if contains(v.locations, loc)
}
}
}
module "sec-project" {
count = var.enable_features.encryption ? 1 : 0
source = "../../../modules/project"
name = var.project_config.project_ids["sec-core"]
parent = module.folder.id
billing_account = var.project_config.billing_account_id
project_create = var.project_config.billing_account_id != null && var.enable_features.encryption
prefix = var.project_config.billing_account_id == null ? null : var.prefix
group_iam = {
(local.groups.workload-security) = [
"roles/editor"
]
}
services = [
"cloudkms.googleapis.com",
"secretmanager.googleapis.com",
"stackdriver.googleapis.com"
]
}
module "sec-kms" {
for_each = var.enable_features.encryption ? toset(local.kms_locations) : toset([])
source = "../../../modules/kms"
project_id = module.sec-project[0].project_id
keyring = {
location = each.key
name = each.key
}
# rename to `key_iam` to switch to authoritative bindings
key_iam_additive = {
for k, v in local.kms_locations_keys[each.key] : k => v.iam
}
keys = local.kms_locations_keys[each.key]
}
module "log-kms" {
for_each = var.enable_features.encryption ? toset(local.kms_log_locations) : toset([])
source = "../../../modules/kms"
project_id = module.sec-project[0].project_id
keyring = {
location = each.key
name = each.key
}
keys = local.kms_log_locations_keys[each.key]
}


@ -0,0 +1,107 @@
/**
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
# tfdoc:file:description Audit log project and sink.
locals {
gcs_storage_class = (
length(split("-", var.log_locations.storage)) < 2
? "MULTI_REGIONAL"
: "REGIONAL"
)
log_types = toset([for k, v in var.log_sinks : v.type])
_log_keys = var.enable_features.encryption ? {
bq = var.enable_features.log_sink ? ["projects/${module.sec-project.0.project_id}/locations/${var.log_locations.bq}/keyRings/${var.log_locations.bq}/cryptoKeys/bq"] : null
pubsub = var.enable_features.log_sink ? ["projects/${module.sec-project.0.project_id}/locations/${var.log_locations.pubsub}/keyRings/${var.log_locations.pubsub}/cryptoKeys/pubsub"] : null
storage = var.enable_features.log_sink ? ["projects/${module.sec-project.0.project_id}/locations/${var.log_locations.storage}/keyRings/${var.log_locations.storage}/cryptoKeys/storage"] : null
} : {}
log_keys = {
for service, key in local._log_keys : service => key if key != null
}
}
module "log-export-project" {
count = var.enable_features.log_sink ? 1 : 0
source = "../../../modules/project"
name = var.project_config.project_ids["audit-logs"]
parent = module.folder.id
billing_account = var.project_config.billing_account_id
project_create = var.project_config.billing_account_id != null
prefix = var.project_config.billing_account_id == null ? null : var.prefix
group_iam = {
(local.groups.workload-security) = [
"roles/editor"
]
}
iam = {
# "roles/owner" = [module.automation-tf-bootstrap-sa.iam_email]
}
services = [
"bigquery.googleapis.com",
"pubsub.googleapis.com",
"storage.googleapis.com",
"stackdriver.googleapis.com"
]
service_encryption_key_ids = var.enable_features.encryption ? local.log_keys : {}
depends_on = [
module.log-kms
]
}
# one log export per type, with conditionals to skip those not needed
module "log-export-dataset" {
source = "../../../modules/bigquery-dataset"
count = var.enable_features.log_sink && contains(local.log_types, "bigquery") ? 1 : 0
project_id = module.log-export-project[0].project_id
id = "${var.prefix}_audit_export"
friendly_name = "Audit logs export."
location = replace(var.log_locations.bq, "europe", "EU")
encryption_key = var.enable_features.encryption ? module.log-kms[var.log_locations.bq].keys["bq"].id : null
}
module "log-export-gcs" {
source = "../../../modules/gcs"
count = var.enable_features.log_sink && contains(local.log_types, "storage") ? 1 : 0
project_id = module.log-export-project[0].project_id
name = "audit-logs"
prefix = var.prefix
location = replace(var.log_locations.storage, "europe", "EU")
storage_class = local.gcs_storage_class
encryption_key = var.enable_features.encryption ? module.log-kms[var.log_locations.storage].keys["storage"].id : null
}
module "log-export-logbucket" {
source = "../../../modules/logging-bucket"
for_each = var.enable_features.log_sink ? toset([for k, v in var.log_sinks : k if v.type == "logging"]) : toset([])
parent_type = "project"
parent = module.log-export-project[0].project_id
id = "audit-logs-${each.key}"
location = var.log_locations.logging
# TODO: check whether logging buckets support encryption.
}
module "log-export-pubsub" {
source = "../../../modules/pubsub"
for_each = toset([for k, v in var.log_sinks : k if v.type == "pubsub" && var.enable_features.log_sink])
project_id = module.log-export-project[0].project_id
name = "audit-logs-${each.key}"
regions = [var.log_locations.pubsub]
kms_key = var.enable_features.encryption ? module.log-kms[var.log_locations.pubsub].keys["pubsub"].id : null
}


@ -0,0 +1,141 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tfdoc:file:description Folder resources.
locals {
# Create Log sink ingress policies
_sink_ingress_policies = var.enable_features.log_sink ? {
log_sink = {
from = {
access_levels = ["*"]
identities = values(module.folder.sink_writer_identities)
}
to = {
resources = ["projects/${module.log-export-project.0.number}"]
operations = [{ service_name = "*" }]
} }
} : null
_vpc_sc_vpc_accessible_services = var.data_dir != null ? yamldecode(
file("${var.data_dir}/vpc-sc/restricted-services.yaml")
) : null
_vpc_sc_restricted_services = var.data_dir != null ? yamldecode(
file("${var.data_dir}/vpc-sc/restricted-services.yaml")
) : null
access_policy_create = var.access_policy_config.access_policy_create != null ? {
parent = "organizations/${var.organization.id}"
title = "shielded-folder"
scopes = [module.folder.id]
} : null
groups = {
for k, v in var.groups : k => "${v}@${var.organization.domain}"
}
groups_iam = {
for k, v in local.groups : k => "group:${v}"
}
group_iam = {
(local.groups.workload-engineers) = [
"roles/editor",
"roles/iam.serviceAccountTokenCreator"
]
}
vpc_sc_resources = [
for k, v in data.google_projects.folder-projects.projects : format("projects/%s", v.number)
]
log_sink_destinations = var.enable_features.log_sink ? merge(
# use the same dataset for all sinks with `bigquery` as destination
{ for k, v in var.log_sinks : k => module.log-export-dataset.0 if v.type == "bigquery" },
# use the same gcs bucket for all sinks with `storage` as destination
{ for k, v in var.log_sinks : k => module.log-export-gcs.0 if v.type == "storage" },
# use separate pubsub topics and logging buckets for sinks with
# destination `pubsub` and `logging`
module.log-export-pubsub,
module.log-export-logbucket
) : null
}
module "folder" {
source = "../../../modules/folder"
folder_create = var.folder_config.folder_create != null
parent = try(var.folder_config.folder_create.parent, null)
name = try(var.folder_config.folder_create.display_name, null)
id = var.folder_config.folder_create != null ? null : var.folder_config.folder_id
group_iam = local.group_iam
org_policies_data_path = var.data_dir != null ? "${var.data_dir}/org-policies" : null
firewall_policy_factory = var.data_dir != null ? {
cidr_file = "${var.data_dir}/firewall-policies/cidrs.yaml"
policy_name = "${var.prefix}-fw-policy"
rules_file = "${var.data_dir}/firewall-policies/hierarchical-policy-rules.yaml"
} : null
logging_sinks = var.enable_features.log_sink ? {
for name, attrs in var.log_sinks : name => {
bq_partitioned_table = attrs.type == "bigquery"
destination = local.log_sink_destinations[name].id
filter = attrs.filter
type = attrs.type
}
} : null
}
module "folder-workload" {
source = "../../../modules/folder"
parent = module.folder.id
name = "${var.prefix}-workload"
}
# TODO VPC-SC: access levels
data "google_projects" "folder-projects" {
filter = "parent.id:${split("/", module.folder.id)[1]}"
depends_on = [
module.sec-project,
module.log-export-project
]
}
module "vpc-sc" {
count = var.enable_features.vpc_sc ? 1 : 0
source = "../../../modules/vpc-sc"
access_policy = try(var.access_policy_config.policy_name, null)
access_policy_create = local.access_policy_create
access_levels = var.vpc_sc_access_levels
egress_policies = var.vpc_sc_egress_policies
ingress_policies = merge(var.vpc_sc_ingress_policies, local._sink_ingress_policies)
service_perimeters_regular = {
shielded = {
# Move the `spec` definition to `status` and comment out the `use_explicit_dry_run_spec` attribute to enforce the VPC-SC configuration.
# Before enforcing the configuration, check the logs and create Access Levels and ingress/egress policies as needed.
status = null
spec = {
access_levels = keys(var.vpc_sc_access_levels)
resources = local.vpc_sc_resources
restricted_services = local._vpc_sc_restricted_services
egress_policies = keys(var.vpc_sc_egress_policies)
ingress_policies = keys(merge(var.vpc_sc_ingress_policies, local._sink_ingress_policies))
vpc_accessible_services = {
allowed_services = local._vpc_sc_vpc_accessible_services
enable_restriction = true
}
}
use_explicit_dry_run_spec = true
}
}
}


@ -0,0 +1,30 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
output "folders" {
description = "Folder ids."
value = {
shielded-folder = module.folder.id
workload-folder = module.folder-workload.id
}
}
output "folders_sink_writer_identities" {
description = "Folders sink writer identities."
value = {
shielded-folder = module.folder.sink_writer_identities
workload-folder = module.folder-workload.sink_writer_identities
}
}


@ -0,0 +1,229 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tfdoc:file:description Variables definition.
variable "access_policy_config" {
description = "Provide 'access_policy_create' values if a folder scoped Access Policy creation is needed, uses existing 'policy_name' otherwise. Parent is in 'organizations/123456' format. Policy will be created scoped to the folder."
type = object({
policy_name = optional(string, null)
access_policy_create = optional(object({
parent = string
title = string
}), null)
})
nullable = false
}
variable "data_dir" {
description = "Relative path for the folder storing configuration data."
type = string
default = "data"
}
variable "enable_features" {
description = "Flag to enable features on the solution."
type = object({
encryption = optional(bool, false)
log_sink = optional(bool, true)
vpc_sc = optional(bool, true)
})
default = {
encryption = false
log_sink = true
vpc_sc = true
}
}
variable "folder_config" {
description = "Provide 'folder_create' values if folder creation is needed, uses existing 'folder_id' otherwise. Parent is in 'folders/nnn' or 'organizations/nnn' format."
type = object({
folder_id = optional(string, null)
folder_create = optional(object({
display_name = string
parent = string
}), null)
})
validation {
condition = var.folder_config.folder_id != null || var.folder_config.folder_create != null
error_message = "At least one attribute should be set."
}
nullable = false
}
variable "groups" {
description = "User groups."
type = object({
workload-engineers = optional(string, "gcp-data-engineers")
workload-security = optional(string, "gcp-data-security")
})
default = {}
nullable = false
}
variable "kms_keys" {
description = "KMS keys to create, keyed by name."
type = map(object({
iam = optional(map(list(string)), {})
labels = optional(map(string), {})
locations = optional(list(string), ["global", "europe", "europe-west1"])
rotation_period = optional(string, "7776000s")
}))
default = {}
}
variable "log_locations" {
description = "Optional locations for GCS, BigQuery, and logging buckets created here."
type = object({
bq = optional(string, "europe")
storage = optional(string, "europe")
logging = optional(string, "global")
pubsub = optional(string, "global")
})
default = {
bq = "europe"
storage = "europe"
logging = "global"
pubsub = null
}
nullable = false
}
variable "log_sinks" {
description = "Org-level log sinks, in name => {type, filter} format."
type = map(object({
filter = string
type = string
}))
default = {
audit-logs = {
filter = "logName:\"/logs/cloudaudit.googleapis.com%2Factivity\" OR logName:\"/logs/cloudaudit.googleapis.com%2Fsystem_event\""
type = "bigquery"
}
vpc-sc = {
filter = "protoPayload.metadata.@type=\"type.googleapis.com/google.cloud.audit.VpcServiceControlAuditMetadata\""
type = "bigquery"
}
}
validation {
condition = alltrue([
for k, v in var.log_sinks :
contains(["bigquery", "logging", "pubsub", "storage"], v.type)
])
error_message = "Type must be one of 'bigquery', 'logging', 'pubsub', 'storage'."
}
}
variable "organization" {
description = "Organization details."
type = object({
domain = string
id = string
})
}
variable "prefix" {
description = "Prefix used for resources that need unique names."
type = string
}
variable "project_config" {
description = "Provide 'billing_account_id' value if project creation is needed, uses existing 'project_ids' if null. Parent is in 'folders/nnn' or 'organizations/nnn' format."
type = object({
billing_account_id = optional(string, null)
project_ids = optional(object({
sec-core = string
audit-logs = string
}), {
sec-core = "sec-core"
audit-logs = "audit-logs"
}
)
})
nullable = false
validation {
condition = var.project_config.billing_account_id != null || var.project_config.project_ids != null
error_message = "At least one attribute should be set."
}
}
variable "vpc_sc_access_levels" {
description = "VPC SC access level definitions."
type = map(object({
combining_function = optional(string)
conditions = optional(list(object({
device_policy = optional(object({
allowed_device_management_levels = optional(list(string))
allowed_encryption_statuses = optional(list(string))
require_admin_approval = bool
require_corp_owned = bool
require_screen_lock = optional(bool)
os_constraints = optional(list(object({
os_type = string
minimum_version = optional(string)
require_verified_chrome_os = optional(bool)
})))
}))
ip_subnetworks = optional(list(string), [])
members = optional(list(string), [])
negate = optional(bool)
regions = optional(list(string), [])
required_access_levels = optional(list(string), [])
})), [])
description = optional(string)
}))
default = {}
nullable = false
}
variable "vpc_sc_egress_policies" {
description = "VPC SC egress policy definitions."
type = map(object({
from = object({
identity_type = optional(string, "ANY_IDENTITY")
identities = optional(list(string))
})
to = object({
operations = optional(list(object({
method_selectors = optional(list(string))
service_name = string
})), [])
resources = optional(list(string))
resource_type_external = optional(bool, false)
})
}))
default = {}
nullable = false
}
variable "vpc_sc_ingress_policies" {
description = "VPC SC ingress policy definitions."
type = map(object({
from = object({
access_levels = optional(list(string), [])
identity_type = optional(string)
identities = optional(list(string))
resources = optional(list(string), [])
})
to = object({
operations = optional(list(object({
method_selectors = optional(list(string))
service_name = string
})), [])
resources = optional(list(string))
})
}))
default = {}
nullable = false
}


@ -0,0 +1,79 @@
# MLOps with Vertex AI
## Introduction
This example implements the infrastructure required to deploy an end-to-end [MLOps process](https://services.google.com/fh/files/misc/practitioners_guide_to_mlops_whitepaper.pdf) using the [Vertex AI](https://cloud.google.com/vertex-ai) platform.
## GCP resources
The blueprint will deploy all the resources required for a fully functional MLOps environment, containing:
- Vertex Workbench (for the experimentation environment)
- GCP Project (optional) to host all the resources
- Isolated VPC network and a subnet to be used by Vertex AI and Dataflow. Alternatively, an external Shared VPC can be configured using the `network_config` variable.
- Firewall rule to allow the internal subnet communication required by Dataflow
- Cloud NAT required to reach the internet from the different computing resources (Vertex and Dataflow)
- GCS buckets to host Vertex AI and Cloud Build artifacts. By default the buckets are regional and should match the Vertex AI region used for the different resources (e.g. Vertex managed datasets) and processes (e.g. Vertex training).
- BigQuery Dataset where the training data will be stored. This is optional, since the training data could be already hosted in an existing BigQuery dataset.
- Artifact Registry Docker repository to host the custom images.
- Service account (`mlops-[env]@`) with the minimum permissions required by Vertex AI and Dataflow (if this service is used inside of the Vertex AI Pipeline).
- Service account (`github@`) to be used by Workload Identity Federation to federate the GitHub identity (optional).
- Secret to store the GitHub SSH key used to access the CI/CD code repo.
![MLOps project description](./images/mlops_projects.png "MLOps project description")
## Pre-requirements
### User groups
Assigning roles to user groups is a way to decouple the final set of permissions from the stage where entities and resources are created and their IAM bindings are defined. You can configure the group names through the `groups` variable. These groups should be created before running Terraform.
We use the following groups to control access to resources:
- *Data Scientists* (gcp-ml-ds@<company.org>). They manage notebooks and create ML pipelines.
- *ML Engineers* (gcp-ml-eng@<company.org>). They manage the different Vertex resources.
- *ML Viewers* (gcp-ml-viewer@<company.org>). Group with viewer permissions on the different resources.
Please note that these groups are not suitable for production-grade environments. Roles can be customized in the `main.tf` file.
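As an illustrative sketch, the `groups` variable can be set in `terraform.tfvars` as shown below. The domain and group names are placeholders; the groups must already exist in your Cloud Identity directory.

```hcl
# Placeholder group emails; replace with groups that exist in your domain.
groups = {
  gcp-ml-ds     = "gcp-ml-ds@example.com"
  gcp-ml-eng    = "gcp-ml-eng@example.com"
  gcp-ml-viewer = "gcp-ml-viewer@example.com"
}
```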
## Instructions
### Deploy the experimentation environment
- Create a `terraform.tfvars` file and specify the variables to match your desired configuration. You can use the provided `terraform.tfvars.sample` as reference.
- Run `terraform init` and `terraform apply`
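A minimal `terraform.tfvars` sketch, assuming project creation and a single notebook, could look like the following. All values are placeholders to adapt to your environment; see the provided `terraform.tfvars.sample` for the full set of options.

```hcl
# Placeholder values; adjust project id, prefix, billing account,
# parent, owner, and subnet to match your environment.
project_id = "mlops-dev-example"
prefix     = "myco"
project_create = {
  billing_account_id = "000000-111111-222222"
  parent             = "folders/1234567890"
}
notebooks = {
  "ds-workbench" = {
    owner  = "user@example.com"
    region = "europe-west4"
    subnet = "subnet-01"
  }
}
```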
## What's next?
This blueprint can be used as a building block for setting up an end-to-end MLOps solution. As a next step, you can follow this [guide](https://cloud.google.com/architecture/architecture-for-mlops-using-tfx-kubeflow-pipelines-and-cloud-build) to set up a Vertex AI pipeline and run it on the deployed infrastructure.
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [project_id](variables.tf#L101) | Project id, references existing project if `project_create` is null. | <code>string</code> | ✓ | |
| [bucket_name](variables.tf#L18) | GCS bucket name to store the Vertex AI artifacts. | <code>string</code> | | <code>null</code> |
| [dataset_name](variables.tf#L24) | BigQuery Dataset to store the training data. | <code>string</code> | | <code>null</code> |
| [groups](variables.tf#L30) | Name of the groups (name@domain.org) to apply opinionated IAM permissions. | <code title="object&#40;&#123;&#10; gcp-ml-ds &#61; string&#10; gcp-ml-eng &#61; string&#10; gcp-ml-viewer &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code title="&#123;&#10; gcp-ml-ds &#61; null&#10; gcp-ml-eng &#61; null&#10; gcp-ml-viewer &#61; null&#10;&#125;">&#123;&#8230;&#125;</code> |
| [identity_pool_claims](variables.tf#L45) | Claims to be used by Workload Identity Federation (e.g. attribute.repository/ORGANIZATION/REPO). If a non-null value is provided, the google_iam_workload_identity_pool resource will be created. | <code>string</code> | | <code>null</code> |
| [labels](variables.tf#L51) | Labels to be assigned at project level. | <code>map&#40;string&#41;</code> | | <code>&#123;&#125;</code> |
| [location](variables.tf#L57) | Location used for multi-regional resources. | <code>string</code> | | <code>&#34;eu&#34;</code> |
| [network_config](variables.tf#L63) | Shared VPC network configurations to use. If null networks will be created in projects with preconfigured values. | <code title="object&#40;&#123;&#10; host_project &#61; string&#10; network_self_link &#61; string&#10; subnet_self_link &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [notebooks](variables.tf#L73) | Vertex AI workbenches to be deployed. | <code title="map&#40;object&#40;&#123;&#10; owner &#61; string&#10; region &#61; string&#10; subnet &#61; string&#10; internal_ip_only &#61; optional&#40;bool, false&#41;&#10; idle_shutdown &#61; optional&#40;bool&#41;&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [prefix](variables.tf#L86) | Prefix used for the project id. | <code>string</code> | | <code>null</code> |
| [project_create](variables.tf#L92) | Provide values if project creation is needed, uses existing project if null. Parent is in 'folders/nnn' or 'organizations/nnn' format. | <code title="object&#40;&#123;&#10; billing_account_id &#61; string&#10; parent &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [project_services](variables.tf#L106) | List of core services enabled on all projects. | <code>list&#40;string&#41;</code> | | <code title="&#91;&#10; &#34;aiplatform.googleapis.com&#34;,&#10; &#34;artifactregistry.googleapis.com&#34;,&#10; &#34;bigquery.googleapis.com&#34;,&#10; &#34;cloudbuild.googleapis.com&#34;,&#10; &#34;compute.googleapis.com&#34;,&#10; &#34;datacatalog.googleapis.com&#34;,&#10; &#34;dataflow.googleapis.com&#34;,&#10; &#34;iam.googleapis.com&#34;,&#10; &#34;monitoring.googleapis.com&#34;,&#10; &#34;notebooks.googleapis.com&#34;,&#10; &#34;secretmanager.googleapis.com&#34;,&#10; &#34;servicenetworking.googleapis.com&#34;,&#10; &#34;serviceusage.googleapis.com&#34;&#10;&#93;">&#91;&#8230;&#93;</code> |
| [region](variables.tf#L126) | Region used for regional resources. | <code>string</code> | | <code>&#34;europe-west4&#34;</code> |
| [repo_name](variables.tf#L132) | Cloud Source Repository name. Set to null to avoid creating it. | <code>string</code> | | <code>null</code> |
| [sa_mlops_name](variables.tf#L138) | Name for the MLOPs Service Account. | <code>string</code> | | <code>&#34;sa-mlops&#34;</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [github](outputs.tf#L33) | Github Configuration. | |
| [notebook](outputs.tf#L39) | Vertex AI managed notebook details. | |
| [project](outputs.tf#L44) | The project resource as returned by the `project` module. | |
| [project_id](outputs.tf#L49) | Project ID. | |
<!-- END TFDOC -->
# TODO
- Add support for User-Managed Notebooks, an SA permission option, and a non-default SA for Single User mode.
- Improve default naming for the local VPC and Cloud NAT.


@ -0,0 +1,74 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
resource "google_iam_workload_identity_pool" "github_pool" {
count = var.identity_pool_claims == null ? 0 : 1
project = module.project.project_id
workload_identity_pool_id = "gh-pool"
display_name = "Github Actions Identity Pool"
description = "Identity pool for Github Actions"
}
resource "google_iam_workload_identity_pool_provider" "github_provider" {
count = var.identity_pool_claims == null ? 0 : 1
project = module.project.project_id
workload_identity_pool_id = google_iam_workload_identity_pool.github_pool[0].workload_identity_pool_id
workload_identity_pool_provider_id = "gh-provider"
display_name = "Github Actions provider"
description = "OIDC provider for Github Actions"
attribute_mapping = {
"google.subject" = "assertion.sub"
"attribute.repository" = "assertion.repository"
}
oidc {
issuer_uri = "https://token.actions.githubusercontent.com"
}
}
module "artifact_registry" {
source = "../../../modules/artifact-registry"
id = "docker-repo"
project_id = module.project.project_id
location = var.region
format = "DOCKER"
# iam = {
# "roles/artifactregistry.admin" = ["group:cicd@example.com"]
# }
}
module "service-account-github" {
source = "../../../modules/iam-service-account"
name = "sa-github"
project_id = module.project.project_id
iam = var.identity_pool_claims == null ? {} : { "roles/iam.workloadIdentityUser" = ["principalSet://iam.googleapis.com/${google_iam_workload_identity_pool.github_pool[0].name}/${var.identity_pool_claims}"] }
}
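The `iam` expression above builds a `principalSet` member from the identity pool name and the configured claims. As an illustration only (the pool name and project number below are hypothetical), the resulting member string has this shape:

```hcl
# Illustrative values only: the pool name is assigned by Google Cloud and the
# claims come from var.identity_pool_claims.
locals {
  example_pool_name = "projects/123456789012/locations/global/workloadIdentityPools/gh-pool"
  example_claims    = "attribute.repository/ORGANIZATION/REPO"
  # "principalSet://iam.googleapis.com/projects/123456789012/locations/global/workloadIdentityPools/gh-pool/attribute.repository/ORGANIZATION/REPO"
  example_member = "principalSet://iam.googleapis.com/${local.example_pool_name}/${local.example_claims}"
}
```

This grants `roles/iam.workloadIdentityUser` only to federated identities whose mapped `attribute.repository` matches the configured repository.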
# NOTE: Secret manager module at the moment does not support CMEK
module "secret-manager" {
project_id = module.project.project_id
source = "../../../modules/secret-manager"
secrets = {
github-key = [var.region]
}
iam = {
github-key = {
"roles/secretmanager.secretAccessor" = [
"serviceAccount:${module.project.service_accounts.robots.cloudbuild}",
module.service-account-mlops.iam_email
]
}
}
}
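As a usage sketch (not part of this blueprint), a consumer granted `secretAccessor` above could read the stored key through the standard secret version data source; the secret name `github-key` matches the one created here:

```hcl
# Reads the latest enabled version of the github-key secret.
data "google_secret_manager_secret_version" "github_key" {
  project = module.project.project_id
  secret  = "github-key"
}
```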

Binary image file (32 KiB) not shown.


@ -0,0 +1,278 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
locals {
group_iam = merge(
var.groups.gcp-ml-viewer == null ? {} : {
(var.groups.gcp-ml-viewer) = [
"roles/aiplatform.viewer",
"roles/artifactregistry.reader",
"roles/dataflow.viewer",
"roles/logging.viewer",
"roles/storage.objectViewer"
]
},
var.groups.gcp-ml-ds == null ? {} : {
(var.groups.gcp-ml-ds) = [
"roles/aiplatform.admin",
"roles/artifactregistry.admin",
"roles/bigquery.dataEditor",
"roles/bigquery.jobUser",
"roles/bigquery.user",
"roles/cloudbuild.builds.editor",
"roles/cloudfunctions.developer",
"roles/dataflow.developer",
"roles/dataflow.worker",
"roles/iam.serviceAccountUser",
"roles/logging.logWriter",
"roles/logging.viewer",
"roles/notebooks.admin",
"roles/pubsub.editor",
"roles/serviceusage.serviceUsageConsumer",
"roles/storage.admin"
]
},
var.groups.gcp-ml-eng == null ? {} : {
(var.groups.gcp-ml-eng) = [
"roles/aiplatform.admin",
"roles/artifactregistry.admin",
"roles/bigquery.dataEditor",
"roles/bigquery.jobUser",
"roles/bigquery.user",
"roles/dataflow.developer",
"roles/dataflow.worker",
"roles/iam.serviceAccountUser",
"roles/logging.logWriter",
"roles/logging.viewer",
"roles/serviceusage.serviceUsageConsumer",
"roles/storage.admin"
]
}
)
service_encryption_keys = var.service_encryption_keys
shared_vpc_project = try(var.network_config.host_project, null)
subnet = (
local.use_shared_vpc
? var.network_config.subnet_self_link
    : values(module.vpc-local[0].subnet_self_links)[0]
)
vpc = (
local.use_shared_vpc
? var.network_config.network_self_link
    : module.vpc-local[0].self_link
)
use_shared_vpc = var.network_config != null
shared_vpc_bindings = {
"roles/compute.networkUser" = [
"robot-df", "notebooks"
]
}
shared_vpc_role_members = {
robot-df = "serviceAccount:${module.project.service_accounts.robots.dataflow}"
notebooks = "serviceAccount:${module.project.service_accounts.robots.notebooks}"
}
# reassemble in a format suitable for for_each
shared_vpc_bindings_map = {
for binding in flatten([
for role, members in local.shared_vpc_bindings : [
for member in members : { role = role, member = member }
]
]) : "${binding.role}-${binding.member}" => binding
}
}
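The `shared_vpc_bindings_map` local flattens the role-to-members map into one entry per (role, member) pair so it can drive a `for_each`. A minimal sketch of a hypothetical consumer (the actual resources live elsewhere in the blueprint) would look like:

```hcl
# Hypothetical sketch: grants each robot service account its network role on
# the Shared VPC host project, one google_project_iam_member per map entry.
resource "google_project_iam_member" "shared_vpc_sketch" {
  for_each = local.shared_vpc_bindings_map
  project  = var.network_config.host_project
  role     = each.value.role
  member   = local.shared_vpc_role_members[each.value.member]
}
```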
module "gcs-bucket" {
count = var.bucket_name == null ? 0 : 1
source = "../../../modules/gcs"
project_id = module.project.project_id
name = var.bucket_name
prefix = var.prefix
location = var.region
storage_class = "REGIONAL"
versioning = false
encryption_key = try(local.service_encryption_keys.storage, null)
}
# Default bucket for Cloud Build to prevent error: "'us' violates constraint constraints/gcp.resourceLocations"
# https://stackoverflow.com/questions/53206667/cloud-build-fails-with-resource-location-constraint
module "gcs-bucket-cloudbuild" {
source = "../../../modules/gcs"
project_id = module.project.project_id
name = "${var.project_id}_cloudbuild"
prefix = var.prefix
location = var.region
storage_class = "REGIONAL"
versioning = false
encryption_key = try(local.service_encryption_keys.storage, null)
}
module "bq-dataset" {
count = var.dataset_name == null ? 0 : 1
source = "../../../modules/bigquery-dataset"
project_id = module.project.project_id
id = var.dataset_name
location = var.region
encryption_key = try(local.service_encryption_keys.bq, null)
}
module "vpc-local" {
count = local.use_shared_vpc ? 0 : 1
source = "../../../modules/net-vpc"
project_id = module.project.project_id
name = "default"
subnets = [
{
      name               = "default"
      region             = var.region
      ip_cidr_range      = "10.4.0.0/24"
      secondary_ip_range = null
}
]
psa_config = {
ranges = {
"vertex" : "10.13.0.0/18"
}
routes = null
}
}
module "firewall" {
count = local.use_shared_vpc ? 0 : 1
source = "../../../modules/net-vpc-firewall"
project_id = module.project.project_id
network = module.vpc-local[0].name
default_rules_config = {
disabled = true
}
ingress_rules = {
dataflow-ingress = {
description = "Dataflow service."
direction = "INGRESS"
action = "allow"
sources = ["dataflow"]
targets = ["dataflow"]
ranges = []
use_service_accounts = false
rules = [{ protocol = "tcp", ports = ["12345-12346"] }]
extra_attributes = {}
}
}
}
module "cloudnat" {
count = local.use_shared_vpc ? 0 : 1
source = "../../../modules/net-cloudnat"
project_id = module.project.project_id
region = var.region
name = "default"
router_network = module.vpc-local[0].self_link
}
module "project" {
source = "../../../modules/project"
name = var.project_id
parent = try(var.project_create.parent, null)
billing_account = try(var.project_create.billing_account_id, null)
project_create = var.project_create != null
prefix = var.prefix
group_iam = local.group_iam
iam = {
"roles/aiplatform.user" = [module.service-account-mlops.iam_email]
"roles/artifactregistry.reader" = [module.service-account-mlops.iam_email]
"roles/artifactregistry.writer" = [module.service-account-github.iam_email]
"roles/bigquery.dataEditor" = [module.service-account-mlops.iam_email]
"roles/bigquery.jobUser" = [module.service-account-mlops.iam_email]
"roles/bigquery.user" = [module.service-account-mlops.iam_email]
"roles/cloudbuild.builds.editor" = [
module.service-account-mlops.iam_email,
module.service-account-github.iam_email
]
"roles/cloudfunctions.invoker" = [module.service-account-mlops.iam_email]
"roles/dataflow.developer" = [module.service-account-mlops.iam_email]
"roles/dataflow.worker" = [module.service-account-mlops.iam_email]
"roles/iam.serviceAccountUser" = [
module.service-account-mlops.iam_email,
"serviceAccount:${module.project.service_accounts.robots.cloudbuild}"
]
"roles/monitoring.metricWriter" = [module.service-account-mlops.iam_email]
"roles/run.invoker" = [module.service-account-mlops.iam_email]
"roles/serviceusage.serviceUsageConsumer" = [
module.service-account-mlops.iam_email,
module.service-account-github.iam_email
]
"roles/storage.admin" = [
module.service-account-mlops.iam_email,
module.service-account-github.iam_email
]
}
labels = var.labels
org_policies = {
# Example of applying a project wide policy
# "constraints/compute.requireOsLogin" = {
# enforce = false
# }
}
service_encryption_key_ids = {
bq = [try(local.service_encryption_keys.bq, null)]
compute = [try(local.service_encryption_keys.compute, null)]
cloudbuild = [try(local.service_encryption_keys.storage, null)]
notebooks = [try(local.service_encryption_keys.compute, null)]
storage = [try(local.service_encryption_keys.storage, null)]
}
services = var.project_services
shared_vpc_service_config = local.shared_vpc_project == null ? null : {
attach = true
host_project = local.shared_vpc_project
}
}
module "service-account-mlops" {
source = "../../../modules/iam-service-account"
name = var.sa_mlops_name
project_id = module.project.project_id
iam = {
"roles/iam.serviceAccountUser" = [module.service-account-github.iam_email]
}
}
resource "google_project_iam_member" "shared_vpc" {
count = local.use_shared_vpc ? 1 : 0
project = var.network_config.host_project
role = "roles/compute.networkUser"
member = "serviceAccount:${module.project.service_accounts.robots.notebooks}"
}
resource "google_sourcerepo_repository" "code-repo" {
count = var.repo_name == null ? 0 : 1
name = var.repo_name
project = module.project.project_id
}


@ -0,0 +1,60 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
resource "google_notebooks_runtime" "runtime" {
for_each = var.notebooks
name = each.key
project = module.project.project_id
location = var.notebooks[each.key].region
access_config {
access_type = "SINGLE_USER"
runtime_owner = var.notebooks[each.key].owner
}
software_config {
enable_health_monitoring = true
idle_shutdown = var.notebooks[each.key].idle_shutdown
idle_shutdown_timeout = 1800
}
virtual_machine {
virtual_machine_config {
machine_type = "n1-standard-4"
network = local.vpc
subnet = local.subnet
internal_ip_only = var.notebooks[each.key].internal_ip_only
dynamic "encryption_config" {
for_each = try(local.service_encryption_keys.compute, null) == null ? [] : [1]
content {
kms_key = local.service_encryption_keys.compute
}
}
metadata = {
notebook-disable-nbconvert = "false"
notebook-disable-downloads = "false"
notebook-disable-terminal = "false"
#notebook-disable-root = "true"
#notebook-upgrade-schedule = "48 4 * * MON"
}
data_disk {
initialize_params {
disk_size_gb = "100"
disk_type = "PD_STANDARD"
}
}
}
}
}


@ -0,0 +1,52 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
# TODO(): proper outputs
locals {
docker_split = try(split("/", module.artifact_registry.id), null)
docker_repo = try("${local.docker_split[3]}-docker.pkg.dev/${local.docker_split[1]}/${local.docker_split[5]}", null)
gh_config = {
WORKLOAD_ID_PROVIDER = try(google_iam_workload_identity_pool_provider.github_provider[0].name, null)
SERVICE_ACCOUNT = try(module.service-account-github.email, null)
PROJECT_ID = module.project.project_id
DOCKER_REPO = local.docker_repo
SA_MLOPS = module.service-account-mlops.email
SUBNETWORK = local.subnet
}
}
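The `docker_repo` local rebuilds the registry hostname from the resource id. As an illustration with a hypothetical id, `split` produces a six-element list where indices 1, 3 and 5 are the project, location and repository name:

```hcl
# Illustrative only; the real id comes from module.artifact_registry.id.
locals {
  example_id    = "projects/my-prj/locations/europe-west4/repositories/docker-repo"
  example_split = split("/", local.example_id)
  # ["projects", "my-prj", "locations", "europe-west4", "repositories", "docker-repo"]
  example_repo = "${local.example_split[3]}-docker.pkg.dev/${local.example_split[1]}/${local.example_split[5]}"
  # "europe-west4-docker.pkg.dev/my-prj/docker-repo"
}
```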
output "github" {
description = "Github Configuration."
value = local.gh_config
}
output "notebook" {
description = "Vertex AI managed notebook details."
value = { for k, v in resource.google_notebooks_runtime.runtime : k => v.id }
}
output "project" {
description = "The project resource as returned by the `project` module."
value = module.project
}
output "project_id" {
description = "Project ID."
value = module.project.project_id
}


@ -0,0 +1,20 @@
bucket_name = "creditcards-dev"
dataset_name = "creditcards"
identity_pool_claims = "attribute.repository/ORGANIZATION/REPO"
labels = {
"env" : "dev",
"team" : "ml"
}
notebooks = {
"myworkbench" : {
"owner" : "user@example.com",
"region" : "europe-west4",
"subnet" : "default",
}
}
prefix = "pref"
project_id = "creditcards-dev"
project_create = {
billing_account_id = "000000-123456-123456"
parent = "folders/111111111111"
}


@ -0,0 +1,152 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "bucket_name" {
description = "GCS bucket name to store the Vertex AI artifacts."
type = string
default = null
}
variable "dataset_name" {
description = "BigQuery Dataset to store the training data."
type = string
default = null
}
variable "groups" {
description = "Names of the groups (in name@domain.org format) to apply opinionated IAM permissions."
type = object({
gcp-ml-ds = string
gcp-ml-eng = string
gcp-ml-viewer = string
})
default = {
gcp-ml-ds = null
gcp-ml-eng = null
gcp-ml-viewer = null
}
nullable = false
}
variable "identity_pool_claims" {
description = "Claims to be used by Workload Identity Federation (e.g. attribute.repository/ORGANIZATION/REPO). If a non-null value is provided, the google_iam_workload_identity_pool resource will be created."
type = string
default = null
}
variable "labels" {
description = "Labels to be assigned at project level."
type = map(string)
default = {}
}
variable "location" {
description = "Location used for multi-regional resources."
type = string
default = "eu"
}
variable "network_config" {
description = "Shared VPC network configurations to use. If null, networks will be created in projects with preconfigured values."
type = object({
host_project = string
network_self_link = string
subnet_self_link = string
})
default = null
}
variable "notebooks" {
description = "Vertex AI Workbench instances to be deployed."
type = map(object({
owner = string
region = string
subnet = string
internal_ip_only = optional(bool, false)
idle_shutdown = optional(bool)
}))
default = {}
nullable = false
}
variable "prefix" {
description = "Prefix used for the project id."
type = string
default = null
}
variable "project_create" {
description = "Provide values if project creation is needed, uses existing project if null. Parent is in 'folders/nnn' or 'organizations/nnn' format."
type = object({
billing_account_id = string
parent = string
})
default = null
}
variable "project_id" {
description = "Project id, references existing project if `project_create` is null."
type = string
}
variable "project_services" {
description = "List of core services enabled on all projects."
type = list(string)
default = [
"aiplatform.googleapis.com",
"artifactregistry.googleapis.com",
"bigquery.googleapis.com",
"cloudbuild.googleapis.com",
"compute.googleapis.com",
"datacatalog.googleapis.com",
"dataflow.googleapis.com",
"iam.googleapis.com",
"monitoring.googleapis.com",
"notebooks.googleapis.com",
"secretmanager.googleapis.com",
"servicenetworking.googleapis.com",
"serviceusage.googleapis.com"
]
}
variable "region" {
description = "Region used for regional resources."
type = string
default = "europe-west4"
}
variable "repo_name" {
description = "Cloud Source Repository name. Set to null to avoid creating it."
type = string
default = null
}
variable "sa_mlops_name" {
description = "Name for the MLOPs Service Account."
type = string
default = "sa-mlops"
}
variable "service_encryption_keys" { # service encryption keys
description = "Cloud KMS to use to encrypt different services. Key location should match service region."
type = object({
bq = string
compute = string
storage = string
})
default = null
}


@ -1,36 +1,18 @@
# Google Cloud BQ Factory
This module allows creation and management of BigQuery datasets and views as well as tables by defining them in well formatted `yaml` files.
This module allows creation and management of BigQuery datasets, tables and views by defining them in well-formatted YAML files. The YAML abstraction for BQ can simplify user onboarding and also makes creation of tables easier compared to HCL.
Yaml abstraction for BQ can simplify users onboarding and also makes creation of tables easier compared to HCL.
This factory is based on the [BQ dataset module](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/tree/master/modules/bigquery-dataset) which currently only supports tables and views. As soon as external table and materialized view support is added, this factory will be enhanced accordingly.
Subfolders distinguish between views and tables and ensure easier navigation for users.
This factory is based on the [BQ dataset module](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/tree/master/modules/bigquery-dataset) which currently only supports tables and views. As soon as external table and materialized view support is added, factory will be enhanced accordingly.
You can create as many files as you like, the code will loop through it and create the required variables in order to execute everything accordingly.
You can create as many files as you like; the code will loop through them and create everything accordingly.
## Example
### Terraform code
```hcl
module "bq" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric/modules/bigquery-dataset"
for_each = local.output
project_id = var.project_id
id = each.key
views = try(each.value.views, null)
tables = try(each.value.tables, null)
}
# tftest skip
```
### Configuration Structure
In this section we show how to create tables and views from a file structure similar to the one shown below.
```bash
base_folder
bigquery
├── tables
│ ├── table_a.yaml
@ -40,32 +22,43 @@ base_folder
│ ├── view_b.yaml
```
## YAML structure and definition formatting
### Tables
Table definition to be placed in a set of yaml files in the corresponding subfolder. Structure should look as following:
First we create the table definition in `bigquery/tables/countries.yaml`.
```yaml
dataset: # required name of the dataset the table is to be placed in
table: # required descriptive name of the table
schema: # required schema in JSON FORMAT Example: [{name: "test", type: "STRING"},{name: "test2", type: "INT64"}]
labels: # not required, defaults to {}, Example: {"a":"thisislabela","b":"thisislabelb"}
use_legacy_sql: boolean # not required, defaults to false
deletion_protection: boolean # not required, defaults to false
# tftest-file id=table path=bigquery/tables/countries.yaml
dataset: my_dataset
table: countries
deletion_protection: true
labels:
env: prod
schema:
- name: country
type: STRING
- name: population
type: INT64
```
### Views
View definition to be placed in a set of yaml files in the corresponding subfolder. Structure should look as following:
And a view in `bigquery/views/population.yaml`.
```yaml
dataset: # required, name of the dataset the view is to be placed in
view: # required, descriptive name of the view
query: # required, SQL Query for the view in quotes
labels: # not required, defaults to {}, Example: {"a":"thisislabela","b":"thisislabelb"}
use_legacy_sql: bool # not required, defaults to false
deletion_protection: bool # not required, defaults to false
# tftest-file id=view path=bigquery/views/population.yaml
dataset: my_dataset
view: department
query: SELECT SUM(population) from my_dataset.countries
labels:
env: prod
```
With this file structure, we can use the factory as follows:
```hcl
module "bq" {
source = "./fabric/blueprints/factories/bigquery-factory"
project_id = var.project_id
tables_path = "bigquery/tables"
views_path = "bigquery/views"
}
# tftest modules=2 resources=3 files=table,view inventory=simple.yaml
```
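Internally the factory groups the YAML files by dataset before invoking the BQ dataset module. For the `countries` and `population` examples above, the assembled map has roughly this shape (a sketch only; the factory locals normalize further attributes such as labels and deletion protection):

```hcl
# Approximate shape of the per-dataset map derived from the YAML files.
locals {
  datasets_sketch = {
    my_dataset = {
      tables = {
        countries = {
          schema = [
            { name = "country", type = "STRING" },
            { name = "population", type = "INT64" },
          ]
        }
      }
      views = {
        population = {
          query = "SELECT SUM(population) from my_dataset.countries"
        }
      }
    }
  }
}
```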
<!-- BEGIN TFDOC -->
@ -74,8 +67,8 @@ deletion_protection: bool # not required, defaults to false
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [project_id](variables.tf#L17) | Project ID. | <code>string</code> | ✓ | |
| [tables_dir](variables.tf#L22) | Relative path for the folder storing table data. | <code>string</code> | ✓ | |
| [views_dir](variables.tf#L27) | Relative path for the folder storing view data. | <code>string</code> | ✓ | |
| [tables_path](variables.tf#L22) | Relative path for the folder storing table data. | <code>string</code> | ✓ | |
| [views_path](variables.tf#L27) | Relative path for the folder storing view data. | <code>string</code> | ✓ | |
<!-- END TFDOC -->
## TODO


@ -1,5 +1,5 @@
/**
* Copyright 2022 Google LLC
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@ -16,17 +16,22 @@
locals {
views = {
for f in fileset("${var.views_dir}", "**/*.yaml") :
trimsuffix(f, ".yaml") => yamldecode(file("${var.views_dir}/${f}"))
for f in fileset(var.views_path, "**/*.yaml") :
trimsuffix(f, ".yaml") => yamldecode(file("${var.views_path}/${f}"))
}
tables = {
for f in fileset("${var.tables_dir}", "**/*.yaml") :
trimsuffix(f, ".yaml") => yamldecode(file("${var.tables_dir}/${f}"))
for f in fileset(var.tables_path, "**/*.yaml") :
trimsuffix(f, ".yaml") => yamldecode(file("${var.tables_path}/${f}"))
}
output = {
for dataset in distinct([for v in values(merge(local.views, local.tables)) : v.dataset]) :
all_datasets = distinct(concat(
[for x in values(local.tables) : x.dataset],
[for x in values(local.views) : x.dataset]
))
datasets = {
for dataset in local.all_datasets :
dataset => {
"views" = {
for k, v in local.views :
@ -57,9 +62,8 @@ locals {
}
module "bq" {
source = "../../../modules/bigquery-dataset"
for_each = local.output
source = "../../../modules/bigquery-dataset"
for_each = local.datasets
project_id = var.project_id
id = each.key
views = try(each.value.views, null)


@ -1,5 +1,5 @@
/**
* Copyright 2022 Google LLC
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@ -19,12 +19,12 @@ variable "project_id" {
type = string
}
variable "tables_dir" {
variable "tables_path" {
description = "Relative path for the folder storing table data."
type = string
}
variable "views_dir" {
variable "views_path" {
description = "Relative path for the folder storing view data."
type = string
}


@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@ -59,6 +59,7 @@ module "projects" {
for_each = local.projects
defaults = local.defaults
project_id = each.key
descriptive_name = try(each.value.descriptive_name, null)
billing_account_id = try(each.value.billing_account_id, null)
billing_alert = try(each.value.billing_alert, null)
dns_zones = try(each.value.dns_zones, [])
@ -222,28 +223,29 @@ vpc:
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [billing_account_id](variables.tf#L17) | Billing account id. | <code>string</code> | ✓ | |
| [prefix](variables.tf#L151) | Prefix used for resource names. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L160) | Project id. | <code>string</code> | ✓ | |
| [prefix](variables.tf#L157) | Prefix used for resource names. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L166) | Project id. | <code>string</code> | ✓ | |
| [billing_alert](variables.tf#L22) | Billing alert configuration. | <code title="object&#40;&#123;&#10; amount &#61; number&#10; thresholds &#61; object&#40;&#123;&#10; current &#61; list&#40;number&#41;&#10; forecasted &#61; list&#40;number&#41;&#10; &#125;&#41;&#10; credit_treatment &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [defaults](variables.tf#L35) | Project factory default values. | <code title="object&#40;&#123;&#10; billing_account_id &#61; string&#10; billing_alert &#61; object&#40;&#123;&#10; amount &#61; number&#10; thresholds &#61; object&#40;&#123;&#10; current &#61; list&#40;number&#41;&#10; forecasted &#61; list&#40;number&#41;&#10; &#125;&#41;&#10; credit_treatment &#61; string&#10; &#125;&#41;&#10; environment_dns_zone &#61; string&#10; essential_contacts &#61; list&#40;string&#41;&#10; labels &#61; map&#40;string&#41;&#10; notification_channels &#61; list&#40;string&#41;&#10; shared_vpc_self_link &#61; string&#10; vpc_host_project &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [dns_zones](variables.tf#L57) | DNS private zones to create as child of var.defaults.environment_dns_zone. | <code>list&#40;string&#41;</code> | | <code>&#91;&#93;</code> |
| [essential_contacts](variables.tf#L63) | Email contacts to be used for billing and GCP notifications. | <code>list&#40;string&#41;</code> | | <code>&#91;&#93;</code> |
| [folder_id](variables.tf#L69) | Folder ID for the folder where the project will be created. | <code>string</code> | | <code>null</code> |
| [group_iam](variables.tf#L75) | Custom IAM settings in group => [role] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [group_iam_additive](variables.tf#L81) | Custom additive IAM settings in group => [role] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [iam](variables.tf#L87) | Custom IAM settings in role => [principal] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [iam_additive](variables.tf#L93) | Custom additive IAM settings in role => [principal] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [kms_service_agents](variables.tf#L99) | KMS IAM configuration in as service => [key]. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [labels](variables.tf#L105) | Labels to be assigned at project level. | <code>map&#40;string&#41;</code> | | <code>&#123;&#125;</code> |
| [org_policies](variables.tf#L111) | Org-policy overrides at project level. | <code title="map&#40;object&#40;&#123;&#10; inherit_from_parent &#61; optional&#40;bool&#41; &#35; for list policies only.&#10; reset &#61; optional&#40;bool&#41;&#10; allow &#61; optional&#40;object&#40;&#123;&#10; all &#61; optional&#40;bool&#41;&#10; values &#61; optional&#40;list&#40;string&#41;&#41;&#10; &#125;&#41;&#41;&#10; deny &#61; optional&#40;object&#40;&#123;&#10; all &#61; optional&#40;bool&#41;&#10; values &#61; optional&#40;list&#40;string&#41;&#41;&#10; &#125;&#41;&#41;&#10; enforce &#61; optional&#40;bool, true&#41; &#35; for boolean policies only.&#10; rules &#61; optional&#40;list&#40;object&#40;&#123;&#10; allow &#61; optional&#40;object&#40;&#123;&#10; all &#61; optional&#40;bool&#41;&#10; values &#61; optional&#40;list&#40;string&#41;&#41;&#10; &#125;&#41;&#41;&#10; deny &#61; optional&#40;object&#40;&#123;&#10; all &#61; optional&#40;bool&#41;&#10; values &#61; optional&#40;list&#40;string&#41;&#41;&#10; &#125;&#41;&#41;&#10; enforce &#61; optional&#40;bool, true&#41; &#35; for boolean policies only.&#10; condition &#61; object&#40;&#123;&#10; description &#61; optional&#40;string&#41;&#10; expression &#61; optional&#40;string&#41;&#10; location &#61; optional&#40;string&#41;&#10; title &#61; optional&#40;string&#41;&#10; &#125;&#41;&#10; &#125;&#41;&#41;, &#91;&#93;&#41;&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_accounts](variables.tf#L165) | Service accounts to be created, and roles assigned them on the project. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_accounts_additive](variables.tf#L171) | Service accounts to be created, and roles assigned them on the project additively. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_accounts_iam](variables.tf#L177) | IAM bindings on service account resources. Format is KEY => {ROLE => [MEMBERS]}. | <code>map&#40;map&#40;list&#40;string&#41;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_accounts_iam_additive](variables.tf#L184) | IAM additive bindings on service account resources. Format is KEY => {ROLE => [MEMBERS]}. | <code>map&#40;map&#40;list&#40;string&#41;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_identities_iam](variables.tf#L191) | Custom IAM settings for service identities in service => [role] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_identities_iam_additive](variables.tf#L198) | Custom additive IAM settings for service identities in service => [role] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [services](variables.tf#L205) | Services to be enabled for the project. | <code>list&#40;string&#41;</code> | | <code>&#91;&#93;</code> |
| [vpc](variables.tf#L212) | VPC configuration for the project. | <code title="object&#40;&#123;&#10; host_project &#61; string&#10; gke_setup &#61; object&#40;&#123;&#10; enable_security_admin &#61; bool&#10; enable_host_service_agent &#61; bool&#10; &#125;&#41;&#10; subnets_iam &#61; map&#40;list&#40;string&#41;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [descriptive_name](variables.tf#L57) | Human-readable name of the project. Used as the project display name instead of the `name` variable. | <code>string</code> | | <code>null</code> |
| [dns_zones](variables.tf#L63) | DNS private zones to create as child of var.defaults.environment_dns_zone. | <code>list&#40;string&#41;</code> | | <code>&#91;&#93;</code> |
| [essential_contacts](variables.tf#L69) | Email contacts to be used for billing and GCP notifications. | <code>list&#40;string&#41;</code> | | <code>&#91;&#93;</code> |
| [folder_id](variables.tf#L75) | Folder ID for the folder where the project will be created. | <code>string</code> | | <code>null</code> |
| [group_iam](variables.tf#L81) | Custom IAM settings in group => [role] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [group_iam_additive](variables.tf#L87) | Custom additive IAM settings in group => [role] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [iam](variables.tf#L93) | Custom IAM settings in role => [principal] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [iam_additive](variables.tf#L99) | Custom additive IAM settings in role => [principal] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [kms_service_agents](variables.tf#L105) | KMS IAM configuration in service => [key] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [labels](variables.tf#L111) | Labels to be assigned at project level. | <code>map&#40;string&#41;</code> | | <code>&#123;&#125;</code> |
| [org_policies](variables.tf#L117) | Org-policy overrides at project level. | <code title="map&#40;object&#40;&#123;&#10; inherit_from_parent &#61; optional&#40;bool&#41; &#35; for list policies only.&#10; reset &#61; optional&#40;bool&#41;&#10; allow &#61; optional&#40;object&#40;&#123;&#10; all &#61; optional&#40;bool&#41;&#10; values &#61; optional&#40;list&#40;string&#41;&#41;&#10; &#125;&#41;&#41;&#10; deny &#61; optional&#40;object&#40;&#123;&#10; all &#61; optional&#40;bool&#41;&#10; values &#61; optional&#40;list&#40;string&#41;&#41;&#10; &#125;&#41;&#41;&#10; enforce &#61; optional&#40;bool, true&#41; &#35; for boolean policies only.&#10; rules &#61; optional&#40;list&#40;object&#40;&#123;&#10; allow &#61; optional&#40;object&#40;&#123;&#10; all &#61; optional&#40;bool&#41;&#10; values &#61; optional&#40;list&#40;string&#41;&#41;&#10; &#125;&#41;&#41;&#10; deny &#61; optional&#40;object&#40;&#123;&#10; all &#61; optional&#40;bool&#41;&#10; values &#61; optional&#40;list&#40;string&#41;&#41;&#10; &#125;&#41;&#41;&#10; enforce &#61; optional&#40;bool, true&#41; &#35; for boolean policies only.&#10; condition &#61; object&#40;&#123;&#10; description &#61; optional&#40;string&#41;&#10; expression &#61; optional&#40;string&#41;&#10; location &#61; optional&#40;string&#41;&#10; title &#61; optional&#40;string&#41;&#10; &#125;&#41;&#10; &#125;&#41;&#41;, &#91;&#93;&#41;&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_accounts](variables.tf#L171) | Service accounts to be created, and roles assigned them on the project. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_accounts_additive](variables.tf#L177) | Service accounts to be created, and roles assigned them on the project additively. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_accounts_iam](variables.tf#L183) | IAM bindings on service account resources. Format is KEY => {ROLE => [MEMBERS]}. | <code>map&#40;map&#40;list&#40;string&#41;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_accounts_iam_additive](variables.tf#L190) | IAM additive bindings on service account resources. Format is KEY => {ROLE => [MEMBERS]}. | <code>map&#40;map&#40;list&#40;string&#41;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_identities_iam](variables.tf#L197) | Custom IAM settings for service identities in service => [role] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [service_identities_iam_additive](variables.tf#L204) | Custom additive IAM settings for service identities in service => [role] format. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>&#123;&#125;</code> |
| [services](variables.tf#L211) | Services to be enabled for the project. | <code>list&#40;string&#41;</code> | | <code>&#91;&#93;</code> |
| [vpc](variables.tf#L218) | VPC configuration for the project. | <code title="object&#40;&#123;&#10; host_project &#61; string&#10; gke_setup &#61; object&#40;&#123;&#10; enable_security_admin &#61; bool&#10; enable_host_service_agent &#61; bool&#10; &#125;&#41;&#10; subnets_iam &#61; map&#40;list&#40;string&#41;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
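Putting a few of the variables above together, a minimal instantiation of the project module might look as follows. This is a hypothetical sketch: the module path, project name, billing account, folder, and principals are all invented for illustration and do not come from this commit.

```hcl
# Hypothetical example: every name and value below is illustrative.
module "project" {
  source          = "./modules/project"
  name            = "sandbox-prj"
  billing_account = "012345-ABCDEF-012345"
  folder_id       = "folders/1234567890"
  services        = ["compute.googleapis.com", "storage.googleapis.com"]

  # iam uses the role => [principal] format described in the table above
  iam = {
    "roles/viewer" = ["group:devops@example.org"]
  }

  # org_policies: a boolean policy override enforced at project level
  org_policies = {
    "compute.disableSerialPortAccess" = { enforce = true }
  }
}
```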
## Outputs


@@ -180,6 +180,7 @@ module "project" {
source = "../../../modules/project"
billing_account = local.billing_account_id
name = var.project_id
descriptive_name = var.descriptive_name
prefix = var.prefix
contacts = { for c in local.essential_contacts : c => ["ALL"] }
iam = local.iam


@@ -1,5 +1,5 @@
/**
* Copyright 2022 Google LLC
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -54,6 +54,12 @@ variable "defaults" {
default = null
}
variable "descriptive_name" {
description = "Human-readable name of the project. Used as the project display name instead of the `name` variable."
type = string
default = null
}
variable "dns_zones" {
description = "DNS private zones to create as child of var.defaults.environment_dns_zone."
type = list(string)


@@ -4,7 +4,7 @@ This blueprint presents an opinionated architecture to handle multiple homogeneo
The pattern used in this design is useful, for example, in cases where multiple clusters host/support the same workloads, such as in the case of a multi-regional deployment. Furthermore, combined with Anthos Config Sync and proper RBAC, this architecture can be used to host multiple tenants (e.g. teams, applications) sharing the clusters.
This blueprint is used as part of the [FAST GKE stage](../../../fast/stages/03-gke-multitenant/) but it can also be used independently if desired.
This blueprint is used as part of the [FAST GKE stage](../../../fast/stages/3-gke-multitenant/) but it can also be used independently if desired.
<p align="center">
<img src="diagram.png" alt="GKE multitenant">


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -7,7 +7,7 @@ A few additional features are also shown:
- [custom BGP advertisements](https://cloud.google.com/router/docs/how-to/advertising-overview) to implement transitivity between spokes
- [VPC Global Routing](https://cloud.google.com/network-connectivity/docs/router/how-to/configuring-routing-mode) to leverage a regional set of VPN gateways in different regions as next hops (used here for illustrative/study purposes, not usually done in real life)
The blueprint has been purposefully kept simple to show how to use and wire the VPC and VPN-HA modules together, and so that it can be used as a basis for experimentation. For a more complex scenario that better reflects real-life usage, including [Shared VPC](https://cloud.google.com/vpc/docs/shared-vpc) and [DNS cross-project binding](https://cloud.google.com/dns/docs/zones/cross-project-binding), please refer to the [FAST network stage](../../../fast/stages/02-networking-vpn/).
The blueprint has been purposefully kept simple to show how to use and wire the VPC and VPN-HA modules together, and so that it can be used as a basis for experimentation. For a more complex scenario that better reflects real-life usage, including [Shared VPC](https://cloud.google.com/vpc/docs/shared-vpc) and [DNS cross-project binding](https://cloud.google.com/dns/docs/zones/cross-project-binding), please refer to the [FAST network stage](../../../fast/stages/2-networking-b-vpn/).
This is the high level diagram of this blueprint:


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}


@@ -17,11 +17,11 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.48.0" # tftest
version = ">= 4.50.0" # tftest
}
}
}

diagram.svg Normal file

@@ -0,0 +1,293 @@
<svg fill="none" viewBox="0 0 800 400" width="800" height="400" xmlns="http://www.w3.org/2000/svg">
<foreignObject width="100%" height="100%">
<div xmlns="http://www.w3.org/1999/xhtml">
<style>
.edgePaths path {
stroke: #bebebe !important;
}
.mermaidExternal > rect {
fill: #f6f6f6 !important;
stroke-dasharray: 5,5;
stroke: #bebebe !important;
}
.mermaidOrg > rect {
fill: #F6F6F6 !important;
}
.mermaidFolder > rect {
fill: #F1F8E9 !important;
stroke: #abd57b !important;
}
</style>
<style>@import url("https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.2.1/css/all.min.css");</style>
<style>#graph-div{font-family:"trebuchet ms",verdana,arial,sans-serif;font-size:16px;fill:#333;}#graph-div .error-icon{fill:hsl(25.3846153846, 86.6666666667%, 99.1176470588%);}#graph-div .error-text{fill:rgb(0.3, 2.5500000001, 4.2000000001);stroke:rgb(0.3, 2.5500000001, 4.2000000001);}#graph-div .edge-thickness-normal{stroke-width:2px;}#graph-div .edge-thickness-thick{stroke-width:3.5px;}#graph-div .edge-pattern-solid{stroke-dasharray:0;}#graph-div .edge-pattern-dashed{stroke-dasharray:3;}#graph-div .edge-pattern-dotted{stroke-dasharray:2;}#graph-div .marker{fill:#0b0b0b;stroke:#0b0b0b;}#graph-div .marker.cross{stroke:#0b0b0b;}#graph-div svg{font-family:"trebuchet ms",verdana,arial,sans-serif;font-size:16px;}#graph-div g.classGroup text{fill:hsl(205.3846153846, 46.6666666667%, 84.1176470588%);fill:#333;stroke:none;font-family:"trebuchet ms",verdana,arial,sans-serif;font-size:10px;}#graph-div g.classGroup text .title{font-weight:bolder;}#graph-div .nodeLabel,#graph-div .edgeLabel{color:#333;}#graph-div .edgeLabel .label rect{fill:#E3F2FD;}#graph-div .label text{fill:#333;}#graph-div .edgeLabel .label span{background:#E3F2FD;}#graph-div .classTitle{font-weight:bolder;}#graph-div .node rect,#graph-div .node circle,#graph-div .node ellipse,#graph-div .node polygon,#graph-div .node path{fill:#E3F2FD;stroke:hsl(205.3846153846, 46.6666666667%, 84.1176470588%);stroke-width:1px;}#graph-div .divider{stroke:hsl(205.3846153846, 46.6666666667%, 84.1176470588%);stroke:1;}#graph-div g.clickable{cursor:pointer;}#graph-div g.classGroup rect{fill:#E3F2FD;stroke:hsl(205.3846153846, 46.6666666667%, 84.1176470588%);}#graph-div g.classGroup line{stroke:hsl(205.3846153846, 46.6666666667%, 84.1176470588%);stroke-width:1;}#graph-div .classLabel .box{stroke:none;stroke-width:0;fill:#E3F2FD;opacity:0.5;}#graph-div .classLabel .label{fill:hsl(205.3846153846, 46.6666666667%, 84.1176470588%);font-size:10px;}#graph-div .relation{stroke:#0b0b0b;stroke-width:1;fill:none;}#graph-div 
.dashed-line{stroke-dasharray:3;}#graph-div .dotted-line{stroke-dasharray:1 2;}#graph-div #compositionStart,#graph-div .composition{fill:#0b0b0b!important;stroke:#0b0b0b!important;stroke-width:1;}#graph-div #compositionEnd,#graph-div .composition{fill:#0b0b0b!important;stroke:#0b0b0b!important;stroke-width:1;}#graph-div #dependencyStart,#graph-div .dependency{fill:#0b0b0b!important;stroke:#0b0b0b!important;stroke-width:1;}#graph-div #dependencyStart,#graph-div .dependency{fill:#0b0b0b!important;stroke:#0b0b0b!important;stroke-width:1;}#graph-div #extensionStart,#graph-div .extension{fill:#E3F2FD!important;stroke:#0b0b0b!important;stroke-width:1;}#graph-div #extensionEnd,#graph-div .extension{fill:#E3F2FD!important;stroke:#0b0b0b!important;stroke-width:1;}#graph-div #aggregationStart,#graph-div .aggregation{fill:#E3F2FD!important;stroke:#0b0b0b!important;stroke-width:1;}#graph-div #aggregationEnd,#graph-div .aggregation{fill:#E3F2FD!important;stroke:#0b0b0b!important;stroke-width:1;}#graph-div #lollipopStart,#graph-div .lollipop{fill:#E3F2FD!important;stroke:#0b0b0b!important;stroke-width:1;}#graph-div #lollipopEnd,#graph-div .lollipop{fill:#E3F2FD!important;stroke:#0b0b0b!important;stroke-width:1;}#graph-div .edgeTerminals{font-size:11px;}#graph-div .classTitleText{text-anchor:middle;font-size:18px;fill:#333;}#graph-div :root{--mermaid-font-family:"trebuchet ms",verdana,arial,sans-serif;}</style>
<svg aria-roledescription="classDiagram" viewBox="0 0 500.0283203125 640.9853515625" style="max-width: 100%;" xmlns="http://www.w3.org/2000/svg" width="100%" id="graph-div" height="100%" xmlns:xlink="http://www.w3.org/1999/xlink">
<g>
<defs>
<marker orient="auto" markerHeight="240" markerWidth="190" refY="7" refX="0" class="marker aggregation classDiagram" id="classDiagram-aggregationStart">
<path d="M 18,7 L9,13 L1,7 L9,1 Z"></path>
</marker>
</defs>
<defs>
<marker orient="auto" markerHeight="28" markerWidth="20" refY="7" refX="19" class="marker aggregation classDiagram" id="classDiagram-aggregationEnd">
<path d="M 18,7 L9,13 L1,7 L9,1 Z"></path>
</marker>
</defs>
<defs>
<marker orient="auto" markerHeight="240" markerWidth="190" refY="7" refX="0" class="marker extension classDiagram" id="classDiagram-extensionStart">
<path d="M 1,7 L18,13 V 1 Z"></path>
</marker>
</defs>
<defs>
<marker orient="auto" markerHeight="28" markerWidth="20" refY="7" refX="19" class="marker extension classDiagram" id="classDiagram-extensionEnd">
<path d="M 1,1 V 13 L18,7 Z"></path>
</marker>
</defs>
<defs>
<marker orient="auto" markerHeight="240" markerWidth="190" refY="7" refX="0" class="marker composition classDiagram" id="classDiagram-compositionStart">
<path d="M 18,7 L9,13 L1,7 L9,1 Z"></path>
</marker>
</defs>
<defs>
<marker orient="auto" markerHeight="28" markerWidth="20" refY="7" refX="19" class="marker composition classDiagram" id="classDiagram-compositionEnd">
<path d="M 18,7 L9,13 L1,7 L9,1 Z"></path>
</marker>
</defs>
<defs>
<marker orient="auto" markerHeight="240" markerWidth="190" refY="7" refX="0" class="marker dependency classDiagram" id="classDiagram-dependencyStart">
<path d="M 5,7 L9,13 L1,7 L9,1 Z"></path>
</marker>
</defs>
<defs>
<marker orient="auto" markerHeight="28" markerWidth="20" refY="7" refX="19" class="marker dependency classDiagram" id="classDiagram-dependencyEnd">
<path d="M 18,7 L9,13 L14,7 L9,1 Z"></path>
</marker>
</defs>
<defs>
<marker orient="auto" markerHeight="240" markerWidth="190" refY="7" refX="0" class="marker lollipop classDiagram" id="classDiagram-lollipopStart">
<circle r="6" cy="7" cx="6" fill="white" stroke="black"></circle>
</marker>
</defs>
<g class="root">
<g class="clusters"></g>
<g class="edgePaths">
<path style="fill:none" class="edge-pattern-solid relation" id="id1" d="M162.64356231689453,145.95778602350617L154.95414861043295,151.63083476812363C147.26473490397134,157.3038835127411,131.8859074910482,168.64998100197602,124.1964937845866,178.48969641326016C116.507080078125,188.32941182454428,116.507080078125,196.6627451578776,116.507080078125,200.82941182454428L116.507080078125,204.99607849121094"></path>
<path style="fill:none" class="edge-pattern-solid relation" id="id2" d="M337.38475799560547,145.95778602350617L345.0741717020671,151.63083476812363C352.7635854085286,157.3038835127411,368.14241282145184,168.64998100197602,375.8318265279134,178.48969641326016C383.521240234375,188.32941182454428,383.521240234375,196.6627451578776,383.521240234375,200.82941182454428L383.521240234375,204.99607849121094"></path>
<path style="fill:none" class="edge-pattern-solid relation" id="id3" d="M116.507080078125,379.99119567871094L116.507080078125,384.1578623453776C116.507080078125,388.32452901204425,116.507080078125,396.6578623453776,116.507080078125,404.99119567871094C116.507080078125,413.32452901204425,116.507080078125,421.6578623453776,116.507080078125,425.82452901204425L116.507080078125,429.99119567871094"></path>
<path style="fill:none" class="edge-pattern-solid relation" id="id4" d="M383.521240234375,379.99119567871094L383.521240234375,384.1578623453776C383.521240234375,388.32452901204425,383.521240234375,396.6578623453776,383.521240234375,404.99119567871094C383.521240234375,413.32452901204425,383.521240234375,421.6578623453776,383.521240234375,425.82452901204425L383.521240234375,429.99119567871094"></path>
</g>
<g class="edgeLabels">
<g class="edgeLabel">
<g transform="translate(0, 0)" class="label">
<foreignObject height="0" width="0">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="edgeLabel"></span>
</div>
</foreignObject>
</g>
</g>
<g class="edgeLabel">
<g transform="translate(0, 0)" class="label">
<foreignObject height="0" width="0">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="edgeLabel"></span>
</div>
</foreignObject>
</g>
</g>
<g class="edgeLabel">
<g transform="translate(0, 0)" class="label">
<foreignObject height="0" width="0">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="edgeLabel"></span>
</div>
</foreignObject>
</g>
</g>
<g class="edgeLabel">
<g transform="translate(0, 0)" class="label">
<foreignObject height="0" width="0">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="edgeLabel"></span>
</div>
</foreignObject>
</g>
</g>
</g>
<g class="nodes">
<g transform="translate(250.01416015625, 81.49803924560547)" id="classid-Organization-10" class="node default mermaidExternal">
<rect height="146.99608612060547" width="174.74119567871094" y="-73.49804306030273" x="-87.37059783935547" class="outer title-state"></rect>
<line y2="-37.49902153015137" y1="-37.49902153015137" x2="87.37059783935547" x1="-87.37059783935547" class="divider"></line>
<line y2="6.5" y1="6.5" x2="87.37059783935547" x1="-87.37059783935547" class="divider"></line>
<g class="label">
<foreignObject height="0" width="0">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel"></span>
</div>
</foreignObject>
<foreignObject transform="translate( -47.705074310302734, -65.99804306030273)" height="23.999021530151367" width="95.41014862060547" class="classTitle">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">Organization</span>
</div>
</foreignObject>
<foreignObject transform="translate( -79.87059783935547, -25.999021530151367)" height="23.999021530151367" width="129.84617614746094">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">tag value [tenant]</span>
</div>
</foreignObject>
<foreignObject transform="translate( -79.87059783935547, 14)" height="23.999021530151367" width="100.70800018310547">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">IAM bindings()</span>
</div>
</foreignObject>
<foreignObject transform="translate( -79.87059783935547, 41.99902153015137)" height="23.999021530151367" width="159.74119567871094">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">organization policies()</span>
</div>
</foreignObject>
</g>
</g>
<g transform="translate(116.507080078125, 292.49363708496094)" id="classid-Tenant0-11" class="node default mermaidFolder">
<rect height="174.99510765075684" width="174.74119567871094" y="-87.49755382537842" x="-87.37059783935547" class="outer title-state"></rect>
<line y2="-23.499510765075684" y1="-23.499510765075684" x2="87.37059783935547" x1="-87.37059783935547" class="divider"></line>
<line y2="-7.499510765075684" y1="-7.499510765075684" x2="87.37059783935547" x1="-87.37059783935547" class="divider"></line>
<g class="label">
<foreignObject transform="translate( -29.913328170776367, -79.99755382537842)" height="23.999021530151367" width="59.826656341552734">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">«folder»</span>
</div>
</foreignObject>
<foreignObject transform="translate( -30.157468795776367, -51.99853229522705)" height="23.999021530151367" width="60.314937591552734" class="classTitle">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">Tenant0</span>
</div>
</foreignObject>
<foreignObject transform="translate( -79.87059783935547, 0.0004892349243164062)" height="23.999021530151367" width="100.70800018310547">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">IAM bindings()</span>
</div>
</foreignObject>
<foreignObject transform="translate( -79.87059783935547, 27.999510765075684)" height="23.999021530151367" width="159.74119567871094">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">organization policies()</span>
</div>
</foreignObject>
<foreignObject transform="translate( -79.87059783935547, 55.99853229522705)" height="23.999021530151367" width="98.25438690185547">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">tag bindings()</span>
</div>
</foreignObject>
</g>
</g>
<g transform="translate(383.521240234375, 292.49363708496094)" id="classid-Tenant1-12" class="node default mermaidFolder">
<rect height="174.99510765075684" width="174.74119567871094" y="-87.49755382537842" x="-87.37059783935547" class="outer title-state"></rect>
<line y2="-23.499510765075684" y1="-23.499510765075684" x2="87.37059783935547" x1="-87.37059783935547" class="divider"></line>
<line y2="-7.499510765075684" y1="-7.499510765075684" x2="87.37059783935547" x1="-87.37059783935547" class="divider"></line>
<g class="label">
<foreignObject transform="translate( -29.913328170776367, -79.99755382537842)" height="23.999021530151367" width="59.826656341552734">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">«folder»</span>
</div>
</foreignObject>
<foreignObject transform="translate( -30.157468795776367, -51.99853229522705)" height="23.999021530151367" width="60.314937591552734" class="classTitle">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">Tenant1</span>
</div>
</foreignObject>
<foreignObject transform="translate( -79.87059783935547, 0.0004892349243164062)" height="23.999021530151367" width="100.70800018310547">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">IAM bindings()</span>
</div>
</foreignObject>
<foreignObject transform="translate( -79.87059783935547, 27.999510765075684)" height="23.999021530151367" width="159.74119567871094">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">organization policies()</span>
</div>
</foreignObject>
<foreignObject transform="translate( -79.87059783935547, 55.99853229522705)" height="23.999021530151367" width="98.25438690185547">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">tag bindings()</span>
</div>
</foreignObject>
</g>
</g>
<g transform="translate(116.507080078125, 531.4882507324219)" id="classid-Tenant0_IaC-13" class="node default">
<rect height="202.9941291809082" width="217.01414489746094" y="-101.4970645904541" x="-108.50707244873047" class="outer title-state"></rect>
<line y2="-37.49902153015137" y1="-37.49902153015137" x2="108.50707244873047" x1="-108.50707244873047" class="divider"></line>
<line y2="62.498043060302734" y1="62.498043060302734" x2="108.50707244873047" x1="-108.50707244873047" class="divider"></line>
<g class="label">
<foreignObject transform="translate( -34.661861419677734, -93.9970645904541)" height="23.999021530151367" width="69.32372283935547">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">«project»</span>
</div>
</foreignObject>
<foreignObject transform="translate( -46.221920013427734, -65.99804306030273)" height="23.999021530151367" width="92.44384002685547" class="classTitle">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">Tenant0_IaC</span>
</div>
</foreignObject>
<foreignObject transform="translate( -101.00707244873047, -25.999021530151367)" height="23.999021530151367" width="202.01414489746094">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">service accounts [all stages]</span>
</div>
</foreignObject>
<foreignObject transform="translate( -101.00707244873047, 2)" height="23.999021530151367" width="197.25340270996094">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">storage buckets [stage 0+1]</span>
</div>
</foreignObject>
<foreignObject transform="translate( -101.00707244873047, 29.999021530151367)" height="23.999021530151367" width="189.92918395996094">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">optional CI/CD [stage 0+1]</span>
</div>
</foreignObject>
<foreignObject transform="translate( -101.00707244873047, 69.99804306030273)" height="23.999021530151367" width="100.70800018310547">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">IAM bindings()</span>
</div>
</foreignObject>
</g>
</g>
<g transform="translate(383.521240234375, 531.4882507324219)" id="classid-Tenant1_IaC-14" class="node default">
<rect height="202.9941291809082" width="217.01414489746094" y="-101.4970645904541" x="-108.50707244873047" class="outer title-state"></rect>
<line y2="-37.49902153015137" y1="-37.49902153015137" x2="108.50707244873047" x1="-108.50707244873047" class="divider"></line>
<line y2="62.498043060302734" y1="62.498043060302734" x2="108.50707244873047" x1="-108.50707244873047" class="divider"></line>
<g class="label">
<foreignObject transform="translate( -34.661861419677734, -93.9970645904541)" height="23.999021530151367" width="69.32372283935547">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">«project»</span>
</div>
</foreignObject>
<foreignObject transform="translate( -46.221920013427734, -65.99804306030273)" height="23.999021530151367" width="92.44384002685547" class="classTitle">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">Tenant1_IaC</span>
</div>
</foreignObject>
<foreignObject transform="translate( -101.00707244873047, -25.999021530151367)" height="23.999021530151367" width="202.01414489746094">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">service accounts [all stages]</span>
</div>
</foreignObject>
<foreignObject transform="translate( -101.00707244873047, 2)" height="23.999021530151367" width="197.25340270996094">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">storage buckets [stage 0+1]</span>
</div>
</foreignObject>
<foreignObject transform="translate( -101.00707244873047, 29.999021530151367)" height="23.999021530151367" width="189.92918395996094">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">optional CI/CD [stage 0+1]</span>
</div>
</foreignObject>
<foreignObject transform="translate( -101.00707244873047, 69.99804306030273)" height="23.999021530151367" width="100.70800018310547">
<div style="display: inline-block; white-space: nowrap;" xmlns="http://www.w3.org/1999/xhtml">
<span class="nodeLabel">IAM bindings()</span>
</div>
</foreignObject>
</g>
</g>
</g>
</g>
</g>
</svg>
</div>
</foreignObject>
</svg>


View File

@ -12,7 +12,7 @@ Fabric FAST was initially conceived to help enterprises quickly set up a GCP org
### Contracts and stages
FAST uses the concept of stages, which individually perform precise tasks but, taken together, build a functional, ready-to-use GCP organization. More importantly, stages are modeled around the security boundaries that typically appear in mature organizations. This arrangement allows delegating ownership of each stage to the team responsible for the types of resources it manages. For example, as its name suggests, the networking stage sets up all the networking elements and is usually the responsibility of a dedicated networking team within the organization.
FAST uses the concept of stages, which individually perform precise tasks but taken together build a functional, ready-to-use GCP organization. More importantly, stages are modeled around the security boundaries that typically appear in mature organizations. This arrangement allows delegating ownership of each stage to the team responsible for the types of resources it manages. For example, as its name suggests, the networking stage sets up all the networking elements and is usually the responsibility of a dedicated networking team within the organization.
From the perspective of FAST's overall design, stages also work as contracts or interfaces, defining a set of prerequisites and inputs required to perform their designed task and generating outputs needed by other stages lower in the chain. The diagram below shows the relationships between stages.
@ -20,7 +20,7 @@ From the perspective of FAST's overall design, stages also work as contacts or i
<img src="stages.svg" alt="Stages diagram">
</p>
Please refer to the [stages](./stages/) section for further details on each stage.
Please refer to the [stages](./stages/) section for further details on each stage. For details on tenant-level stages, which introduce a deeper level of autonomy via nested FAST setups rooted in a top-level folder, refer to the [multitenant stages](#multitenant-organizations) section below.
### Security-first design
@ -32,11 +32,21 @@ FAST also aims to minimize the number of permissions granted to principals accor
A resource factory consumes a simple representation of a resource (e.g., in YAML) and deploys it (e.g., using Terraform). Used correctly, factories can help decrease the management overhead of large-scale infrastructure deployments. See "[Resource Factories: A descriptive approach to Terraform](https://medium.com/google-cloud/resource-factories-a-descriptive-approach-to-terraform-581b3ebb59c)" for more details and the rationale behind factories.
FAST uses YAML-based factories to deploy subnets and firewall rules and, as its name suggests, in the [project factory](./stages/03-project-factory/) stage.
FAST uses YAML-based factories to deploy subnets and firewall rules and, as its name suggests, in the [project factory](./stages/3-project-factory/) stage.
### CI/CD
One of our objectives with FAST is to provide a lightweight reference design for the IaC repositories, and a built-in implementation for running our code in automated pipelines. Our CI/CD approach leverages [Workload Identity Federation](https://cloud.google.com/iam/docs/workload-identity-federation), and provides sample workflow configurations for several major providers. Refer to the [CI/CD section in the bootstrap stage](stages/00-bootstrap/README.md#cicd) for more details. We also provide separate [optional small stages](./extras/) to help you configure your CI/CD provider.
One of our objectives with FAST is to provide a lightweight reference design for the IaC repositories, and a built-in implementation for running our code in automated pipelines. Our CI/CD approach leverages [Workload Identity Federation](https://cloud.google.com/iam/docs/workload-identity-federation), and provides sample workflow configurations for several major providers. Refer to the [CI/CD section in the bootstrap stage](./stages/0-bootstrap/README.md#cicd) for more details. We also provide separate [optional small stages](./extras/) to help you configure your CI/CD provider.
### Multitenant organizations
FAST has built-in support for complex multitenant organizations, where each tenant has complete control over a separate hierarchy rooted in a top-level folder. This approach is particularly suited for large enterprises or governments, where country-level subsidiaries or government agencies have a wide degree of autonomy within a shared GCP organization managed by a central entity.
FAST implements multitenancy via [dedicated stages](stages-multitenant) for tenant-level bootstrap and resource management, which configure separate hierarchies within the organization rooted in top-level folders, so that subsequent FAST stages (networking, security, data, etc.) can be used directly for each tenant. The diagram below shows the relationships between organization-level and tenant-level stages.
<p align="center">
<img src="stages-multitenant/stages.svg" alt="Stages diagram">
</p>
## Implementation
@ -57,9 +67,9 @@ Those familiar with Python will note that FAST follows many of the maxims in the
## Roadmap
Besides the features already described, FAST roadmap includes:
Besides the features already described, FAST also includes:
- Stage to deploy environment-specific multitenant GKE clusters following Google's best practices
- Stage to deploy a fully featured data platform
- Reference implementation to use FAST in CI/CD pipelines (in progress)
- Static policy enforcement
- Reference implementation to use FAST in CI/CD pipelines
- Static policy enforcement (planned)

View File

@ -0,0 +1,139 @@
# FAST GitHub repository management
This small extra stage allows creating and populating GitHub repositories used to host FAST stage code, including rewriting of module sources and management of the secrets used for private modules repository access.
It is designed for use in a GitHub organization, and is only meant as a one-shot solution with perishable state, especially when used for initial population: you don't want Terraform to keep overwriting your changes with the initial versions of files.
Initial population is only meant to be used with actual stage repositories, while the modules repository should be populated by hand to avoid hitting the GitHub hourly limit for their API.
Once initial population is done, you need to manually push to the repository
- the `.tfvars` file with custom variable values for your stages
- the workflow configuration file generated by FAST stages
## GitHub provider credentials
A [GitHub token](https://github.com/settings/tokens) is needed to authenticate against their API. The token needs organization-level permissions, as shown in this screenshot:
<p align="center">
<img src="github_token.png" alt="GitHub token scopes.">
</p>
Once a token is available, set it in the `GITHUB_TOKEN` environment variable before running Terraform.
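For example, in a POSIX shell (the token value below is a placeholder; generate a real one in the GitHub UI):

```shell
# Export the token so the Terraform GitHub provider can pick it up.
# The value shown is a placeholder, not a real token.
export GITHUB_TOKEN="ghp_examplePlaceholderToken"

# Verify it is visible to child processes such as terraform.
env | grep -q '^GITHUB_TOKEN=' && echo "token set"
```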
## Variable configuration
The `organization` required variable sets the GitHub organization where repositories will be created, and is used to configure the Terraform provider.
### Modules repository and sources
The `modules_config` variable controls creation and management of the key and secret used to access the private modules repository, and indirectly controls population of initial files: if `modules_config` is not specified, no modules repository is known to the code, so module source paths cannot be replaced and initial population of files cannot happen. If the variable is specified, an optional `source_ref` attribute can be set to the reference used to pin module versions.
This is an example that configures the modules repository name and an optional reference, enabling initial population of repositories where the feature has been turned on:
```hcl
modules_config = {
repository_name = "GoogleCloudPlatform/cloud-foundation-fabric"
source_ref = "v19.0.0"
}
# tftest skip
```
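Based on the replacement rule in this stage's `main.tf`, initial population would then rewrite relative module sources in stage files to point at the configured repository and ref; a sketch of the result for a hypothetical `organization` module call:

```hcl
# Before population (as committed in the FAST stage sources):
#   source = "../../../modules/organization"
# After population, with the modules_config shown above:
module "organization" {
  source = "git@github.com:GoogleCloudPlatform/cloud-foundation-fabric.git//organization?ref=v19.0.0"
}
```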
In the above example no key options are set, so it's assumed modules will be fetched from a public repository. If modules repository authentication is needed, the `key_config` attribute also needs to be set.
If no keypair path is specified, an internally generated key will be stored as an access key in the modules repository, and as secrets in the stage repositories:
```hcl
modules_config = {
repository_name = "GoogleCloudPlatform/cloud-foundation-fabric"
key_config = {
create_key = true
create_secrets = true
}
}
# tftest skip
```
To use an existing keypair, pass the path to the private key; the public key is assumed to have the same name with a `.pub` suffix. This is useful in cases where the access key has already been set in the modules repository, and new repositories need to be created and their corresponding secrets set:
```hcl
modules_config = {
repository_name = "GoogleCloudPlatform/cloud-foundation-fabric"
key_config = {
create_secrets = true
keypair_path = "~/modules-repository-key"
}
}
# tftest skip
```
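A keypair in the expected layout can be generated with `ssh-keygen`; ED25519 matches the key type this stage generates internally via `tls_private_key`. The path and comment below are illustrative, and the path must match `keypair_path`:

```shell
# Generate an ED25519 keypair with no passphrase; this produces
# ~/modules-repository-key and ~/modules-repository-key.pub.
ssh-keygen -t ed25519 -N '' -C 'modules-repository-access' -f ~/modules-repository-key
```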
### Repositories
The `repositories` variable is where you configure which repositories to create and whether initial population of files is desired.
This is an example that creates repositories for stages 0 and 1, and populates initial files for stages 0, 1, and 2:
```tfvars
repositories = {
fast_00_bootstrap = {
create_options = {
description = "FAST bootstrap."
features = {
issues = true
}
}
populate_from = "../../stages/0-bootstrap"
}
fast_01_resman = {
create_options = {
description = "FAST resource management."
features = {
issues = true
}
}
populate_from = "../../stages/1-resman"
}
fast_02_networking = {
populate_from = "../../stages/2-networking-peering"
}
}
# tftest skip
```
The `create_options` repository attribute controls creation: if the attribute is not present, the repository is assumed to already exist.
Initial population depends on a modules repository being configured in the `modules_config` variable described in the preceding section, and on the `populate_from` attribute of each repository where population is required, which points to the folder holding the files to be committed.
### Commit configuration
Finally, the optional `commit_config` variable can be used to configure the author, email, and message used in commits for initial population of files; its defaults are probably fine for most use cases.
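For reference, a fully specified value mirrors the defaults shown in the variables table below (note the variable name is spelled `commmit_config` in the current `variables.tf`):

```tfvars
commmit_config = {
  author  = "FAST loader"
  email   = "fast-loader@fast.gcp.tf"
  message = "FAST initial loading"
}
```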
<!-- TFDOC OPTS files:1 -->
<!-- BEGIN TFDOC -->
## Files
| name | description | resources |
|---|---|---|
| [cicd-versions.tf](./cicd-versions.tf) | Provider version. | |
| [main.tf](./main.tf) | Module-level locals and resources. | <code>github_actions_secret</code> · <code>github_repository</code> · <code>github_repository_deploy_key</code> · <code>github_repository_file</code> · <code>tls_private_key</code> |
| [outputs.tf](./outputs.tf) | Module outputs. | |
| [providers.tf](./providers.tf) | Provider configuration. | |
| [variables.tf](./variables.tf) | Module variables. | |
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [organization](variables.tf#L50) | GitHub organization. | <code>string</code> | ✓ | |
| [commmit_config](variables.tf#L17) | Configure commit metadata. | <code title="object&#40;&#123;&#10; author &#61; optional&#40;string, &#34;FAST loader&#34;&#41;&#10; email &#61; optional&#40;string, &#34;fast-loader&#64;fast.gcp.tf&#34;&#41;&#10; message &#61; optional&#40;string, &#34;FAST initial loading&#34;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>&#123;&#125;</code> |
| [modules_config](variables.tf#L28) | Configure access to repository module via key, and replacement for modules sources in stage repositories. | <code title="object&#40;&#123;&#10; repository_name &#61; string&#10; source_ref &#61; optional&#40;string&#41;&#10; key_config &#61; optional&#40;object&#40;&#123;&#10; create_key &#61; optional&#40;bool, false&#41;&#10; create_secrets &#61; optional&#40;bool, false&#41;&#10; keypair_path &#61; optional&#40;string&#41;&#10; &#125;&#41;, &#123;&#125;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [repositories](variables.tf#L55) | Repositories to create. | <code title="map&#40;object&#40;&#123;&#10; create_options &#61; optional&#40;object&#40;&#123;&#10; allow &#61; optional&#40;object&#40;&#123;&#10; auto_merge &#61; optional&#40;bool&#41;&#10; merge_commit &#61; optional&#40;bool&#41;&#10; rebase_merge &#61; optional&#40;bool&#41;&#10; squash_merge &#61; optional&#40;bool&#41;&#10; &#125;&#41;&#41;&#10; auto_init &#61; optional&#40;bool&#41;&#10; description &#61; optional&#40;string&#41;&#10; features &#61; optional&#40;object&#40;&#123;&#10; issues &#61; optional&#40;bool&#41;&#10; projects &#61; optional&#40;bool&#41;&#10; wiki &#61; optional&#40;bool&#41;&#10; &#125;&#41;&#41;&#10; templates &#61; optional&#40;object&#40;&#123;&#10; gitignore &#61; optional&#40;string, &#34;Terraform&#34;&#41;&#10; license &#61; optional&#40;string&#41;&#10; repository &#61; optional&#40;object&#40;&#123;&#10; name &#61; string&#10; owner &#61; string&#10; &#125;&#41;&#41;&#10; &#125;&#41;, &#123;&#125;&#41;&#10; visibility &#61; optional&#40;string, &#34;private&#34;&#41;&#10; &#125;&#41;&#41;&#10; populate_from &#61; optional&#40;string&#41;&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [clone](outputs.tf#L17) | Clone repository commands. | |
<!-- END TFDOC -->

View File

@ -1,5 +1,5 @@
/**
* Copyright 2022 Google LLC
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.

View File

@ -1,5 +1,5 @@
/**
* Copyright 2022 Google LLC
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@ -15,9 +15,6 @@
*/
locals {
_modules_repository = [
for k, v in var.repositories : local.repositories[k] if v.has_modules
]
_repository_files = flatten([
for k, v in var.repositories : [
for f in concat(
@ -30,12 +27,12 @@ locals {
}
] if v.populate_from != null
])
modules_ref = var.modules_ref == null ? "" : "?ref=${var.modules_ref}"
modules_repository = (
length(local._modules_repository) > 0
? local._modules_repository.0
: null
modules_ref = (
try(var.modules_config.source_ref, null) == null
? ""
: "?ref=${var.modules_config.source_ref}"
)
modules_repo = try(var.modules_config.repository_name, null)
repositories = {
for k, v in var.repositories :
k => v.create_options == null ? k : github_repository.default[k].name
@ -56,6 +53,15 @@ locals {
name = "templates/providers.tf.tpl"
}
if v.populate_from != null
},
{
for k, v in var.repositories :
"${k}/templates/workflow-github.yaml" => {
repository = k
file = "../../assets/templates/workflow-github.yaml"
name = "templates/workflow-github.yaml"
}
if v.populate_from != null
}
)
}
@ -96,41 +102,49 @@ resource "github_repository" "default" {
}
resource "tls_private_key" "default" {
count = local.modules_repository != null ? 1 : 0
algorithm = "ED25519"
}
resource "github_repository_deploy_key" "default" {
count = local.modules_repository == null ? 0 : 1
count = (
try(var.modules_config.key_config.create_key, null) == true ? 1 : 0
)
title = "Modules repository access"
repository = local.modules_repository
key = tls_private_key.default.0.public_key_openssh
read_only = true
repository = local.modules_repo
key = (
try(var.modules_config.key_config.keypair_path, null) == null
? tls_private_key.default.public_key_openssh
: file(pathexpand("${var.modules_config.key_config.keypair_path}.pub"))
)
read_only = true
}
resource "github_actions_secret" "default" {
for_each = local.modules_repository == null ? {} : {
for k, v in local.repositories :
k => v if k != local.modules_repository
}
repository = each.key
secret_name = "CICD_MODULES_KEY"
plaintext_value = tls_private_key.default.0.private_key_openssh
for_each = (
try(var.modules_config.key_config.create_secrets, null) == true
? local.repositories
: {}
)
repository = each.key
secret_name = "CICD_MODULES_KEY"
plaintext_value = (
try(var.modules_config.key_config.keypair_path, null) == null
? tls_private_key.default.private_key_openssh
: file(pathexpand("${var.modules_config.key_config.keypair_path}"))
)
}
resource "github_repository_file" "default" {
for_each = (
local.modules_repository == null ? {} : local.repository_files
)
for_each = local.modules_repo == null ? {} : local.repository_files
repository = local.repositories[each.value.repository]
branch = "main"
file = each.value.name
content = (
endswith(each.value.name, ".tf") && local.modules_repository != null
endswith(each.value.name, ".tf") && local.modules_repo != null
? replace(
file(each.value.file),
"/source\\s*=\\s*\"../../../modules/([^/\"]+)\"/",
"source = \"git@github.com:${var.organization}/${local.modules_repository}.git//$1${local.modules_ref}\"" # "
"source = \"git@github.com:${local.modules_repo}.git//$1${local.modules_ref}\"" # "
)
: file(each.value.file)
)

View File

@ -1,5 +1,5 @@
/**
* Copyright 2022 Google LLC
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.

View File

@ -1,5 +1,5 @@
/**
* Copyright 2022 Google LLC
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.

View File

@ -1,5 +1,5 @@
/**
* Copyright 2022 Google LLC
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@ -25,10 +25,26 @@ variable "commmit_config" {
nullable = false
}
variable "modules_ref" {
description = "Optional git ref used in module sources."
type = string
default = null
variable "modules_config" {
description = "Configure access to repository module via key, and replacement for modules sources in stage repositories."
type = object({
repository_name = string
source_ref = optional(string)
key_config = optional(object({
create_key = optional(bool, false)
create_secrets = optional(bool, false)
keypair_path = optional(string)
}), {})
})
default = null
validation {
condition = (
var.modules_config == null
||
try(var.modules_config.repository_name, null) != null
)
error_message = "Modules configuration requires a modules repository name."
}
}
variable "organization" {
@ -63,7 +79,6 @@ variable "repositories" {
}), {})
visibility = optional(string, "private")
}))
has_modules = optional(bool, false)
populate_from = optional(string)
}))
default = {}

View File

@ -1,105 +0,0 @@
# FAST GitHub repository management
This small extra stage allows creation and management of GitHub repositories used to host FAST stage code, including initial population of files and rewriting of module sources.
This stage is designed for quick repository creation in a GitHub organization, and is not suited for medium or long-term repository management especially if you enable initial population of files.
## Initial population caveats
Initial file population of repositories is controlled via the `populate_from` attribute, and needs a bit of care:
- never run this stage with the same variables used for population once the repository starts being used, as **Terraform will manage file state and revert any changes at each apply**, which is probably not what you want.
- initial population of the modules repository is discouraged, as the number of resulting files Terraform needs to manage is very close to the GitHub hourly limit for their API, it's much easier to populate modules via regular git commands
The scenario for which this stage has been designed is one-shot creation and/or population of stage repositories, running it multiple times with different variables and Terraform states if incremental creation is needed for subsequent FAST stages (e.g. GKE, data platform, etc.).
Once initial population is done, you need to manually push to the repository
- the `.tfvars` file with custom variable values for your stages
- the workflow configuration file generated by FAST stages
## GitHub provider credentials
A [GitHub token](https://github.com/settings/tokens) is needed to authenticate against their API. The token needs organization-level permissions, like shown in this screenshot:
<p align="center">
<img src="github_token.png" alt="GitHub token scopes.">
</p>
## Variable configuration
The `organization` required variable sets the GitHub organization where repositories will be created, and is used to configure the Terraform provider.
The `repositories` variable is where you configure which repositories to create, whether initial population of files is desired, and which repository is used to host modules.
This is an example that creates repositories for stages 00 and 01, defines an existing repositories as the source for modules, and populates initial files for stages 00, 01, and 02:
```tfvars
organization = "ludomagno"
repositories = {
fast_00_bootstrap = {
create_options = {
description = "FAST bootstrap."
features = {
issues = true
}
}
populate_from = "../../stages/00-bootstrap"
}
fast_01_resman = {
create_options = {
description = "FAST resource management."
features = {
issues = true
}
}
populate_from = "../../stages/01-resman"
}
fast_02_networking = {
populate_from = "../../stages/02-networking-peering"
}
fast_modules = {
has_modules = true
}
}
```
The `create_options` repository attribute controls creation: if the attribute is not present, the repository is assumed to be already existing.
Initial population depends on a modules repository being configured, identified by the `has_modules` attribute, and on `populate_from` attributes in each repository where population is required, pointing to the folder holding the files to be committed.
Finally, a `commit_config` variable is optional: it can be used to configure author, email and message used in commits for initial population of files, its defaults are probably fine for most use cases.
## Modules secret
When initial population is configured for a repository, this stage also adds a secret with the private key used to authenticate against the modules repository. This matches the configuration of the GitHub workflow files created for each FAST stage when CI/CD is enabled.
<!-- TFDOC OPTS files:1 -->
<!-- BEGIN TFDOC -->
## Files
| name | description | resources |
|---|---|---|
| [cicd-versions.tf](./cicd-versions.tf) | Provider version. | |
| [main.tf](./main.tf) | Module-level locals and resources. | <code>github_actions_secret</code> · <code>github_repository</code> · <code>github_repository_deploy_key</code> · <code>github_repository_file</code> · <code>tls_private_key</code> |
| [outputs.tf](./outputs.tf) | Module outputs. | |
| [providers.tf](./providers.tf) | Provider configuration. | |
| [variables.tf](./variables.tf) | Module variables. | |
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [organization](variables.tf#L34) | GitHub organization. | <code>string</code> | ✓ | |
| [commmit_config](variables.tf#L17) | Configure commit metadata. | <code title="object&#40;&#123;&#10; author &#61; optional&#40;string, &#34;FAST loader&#34;&#41;&#10; email &#61; optional&#40;string, &#34;fast-loader&#64;fast.gcp.tf&#34;&#41;&#10; message &#61; optional&#40;string, &#34;FAST initial loading&#34;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>&#123;&#125;</code> |
| [modules_ref](variables.tf#L28) | Optional git ref used in module sources. | <code>string</code> | | <code>null</code> |
| [repositories](variables.tf#L39) | Repositories to create. | <code title="map&#40;object&#40;&#123;&#10; create_options &#61; optional&#40;object&#40;&#123;&#10; allow &#61; optional&#40;object&#40;&#123;&#10; auto_merge &#61; optional&#40;bool&#41;&#10; merge_commit &#61; optional&#40;bool&#41;&#10; rebase_merge &#61; optional&#40;bool&#41;&#10; squash_merge &#61; optional&#40;bool&#41;&#10; &#125;&#41;&#41;&#10; auto_init &#61; optional&#40;bool&#41;&#10; description &#61; optional&#40;string&#41;&#10; features &#61; optional&#40;object&#40;&#123;&#10; issues &#61; optional&#40;bool&#41;&#10; projects &#61; optional&#40;bool&#41;&#10; wiki &#61; optional&#40;bool&#41;&#10; &#125;&#41;&#41;&#10; templates &#61; optional&#40;object&#40;&#123;&#10; gitignore &#61; optional&#40;string, &#34;Terraform&#34;&#41;&#10; license &#61; optional&#40;string&#41;&#10; repository &#61; optional&#40;object&#40;&#123;&#10; name &#61; string&#10; owner &#61; string&#10; &#125;&#41;&#41;&#10; &#125;&#41;, &#123;&#125;&#41;&#10; visibility &#61; optional&#40;string, &#34;private&#34;&#41;&#10; &#125;&#41;&#41;&#10; has_modules &#61; optional&#40;bool, false&#41;&#10; populate_from &#61; optional&#40;string&#41;&#10;&#125;&#41;&#41;">map&#40;object&#40;&#123;&#8230;&#125;&#41;&#41;</code> | | <code>&#123;&#125;</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [clone](outputs.tf#L17) | Clone repository commands. | |
<!-- END TFDOC -->

View File

@ -2,4 +2,4 @@
This folder contains additional helper stages for FAST, which can be used to simplify specific operational tasks:
- [GitHub repository management](./00-cicd-github/)
- [GitHub repository management](./0-cicd-github/)

114
fast/stage-links.sh Executable file
View File

@ -0,0 +1,114 @@
#!/bin/bash
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if [ $# -eq 0 ]; then
echo "Error: no folder or GCS bucket specified. Use -h or --help for usage."
exit 1
fi
if [[ "$1" == "-h" || "$1" == "--help" ]]; then
cat <<END
Create commands to initialize stage provider and tfvars files.
Usage with GCS output files bucket:
stage-links.sh GCS_BUCKET_URI
Usage with local output files folder:
stage-links.sh FOLDER_PATH
END
exit 0
fi
if [[ "$1" == "gs://"* ]]; then
CMD="gcloud alpha storage cp $1"
CP_CMD=$CMD
elif [ ! -d "$1" ]; then
echo "folder $1 not found"
exit 1
else
CMD="ln -s $1"
CP_CMD="cp $1"
fi
GLOBALS="tfvars/globals.auto.tfvars.json"
PROVIDER_CMD=$CMD
STAGE_NAME=$(basename "$(pwd)")
case $STAGE_NAME in
"0-bootstrap")
unset GLOBALS
PROVIDER="providers/0-bootstrap-providers.tf"
TFVARS=""
;;
"0-bootstrap-tenant")
MESSAGE="remember to set the prefix in the provider file"
PROVIDER_CMD=$CP_CMD
PROVIDER="providers/0-bootstrap-tenant-providers.tf"
TFVARS="tfvars/0-bootstrap.auto.tfvars.json
tfvars/1-resman.auto.tfvars.json"
;;
"1-resman")
PROVIDER="providers/${STAGE_NAME}-providers.tf"
TFVARS="tfvars/0-bootstrap.auto.tfvars.json"
;;
"1-resman-tenant")
if [[ -z "$TENANT" ]]; then
echo "Please set a \$TENANT variable with the tenant shortname"
exit 1
fi
unset GLOBALS
PROVIDER="providers/1-resman-tenant-providers.tf"
TFVARS="tfvars/0-bootstrap-tenant.auto.tfvars.json"
;;
"2-networking"*)
PROVIDER="providers/2-networking-providers.tf"
TFVARS="tfvars/0-bootstrap.auto.tfvars.json
tfvars/1-resman.auto.tfvars.json"
;;
*)
# check for a "dev" stage 3
echo "no stage found, trying for parent stage 3..."
STAGE_NAME=$(basename $(dirname "$(pwd)"))
if [[ "$STAGE_NAME" == "3-"* ]]; then
PROVIDER="providers/${STAGE_NAME}-providers.tf"
TFVARS="tfvars/0-bootstrap.auto.tfvars.json
tfvars/1-resman.auto.tfvars.json
tfvars/2-networking.auto.tfvars.json
tfvars/2-security.auto.tfvars.json"
else
echo "stage '$STAGE_NAME' not found"
fi
;;
esac
echo -e "# copy and paste the following commands for '$STAGE_NAME'\n"
echo "$PROVIDER_CMD/$PROVIDER ./"
# if [[ -v GLOBALS ]]; then
# OSX uses an old bash version
if [[ ! -z ${GLOBALS+x} ]]; then
echo "$CMD/$GLOBALS ./"
fi
for f in $TFVARS; do
echo "$CMD/$f ./"
done
if [[ ! -z ${MESSAGE+x} ]]; then
echo -e "\n# ---> $MESSAGE <---"
fi
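The commands the script prints hinge on the form of its first argument; the dispatch between GCS copy and local symlink can be sketched in isolation as follows (bucket name and provider file are illustrative):

```shell
target="gs://prod-iac-core-outputs-0"  # or a local folder path
case "$target" in
gs://*) CMD="gcloud alpha storage cp $target" ;;
*) CMD="ln -s $target" ;;
esac
# prints: gcloud alpha storage cp gs://prod-iac-core-outputs-0/providers/1-resman-providers.tf ./
echo "$CMD/providers/1-resman-providers.tf ./"
```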

View File

@ -0,0 +1,49 @@
# IAM bindings reference
Legend: <code>+</code> additive, <code>•</code> conditional.
## Organization <i>[org_id #0]</i>
| members | roles |
|---|---|
|<b>tn0-admins</b><br><small><i>group</i></small>|[roles/orgpolicy.policyAdmin](https://cloud.google.com/iam/docs/understanding-roles#orgpolicy.policyAdmin) <code>+</code><code>•</code><br>[roles/resourcemanager.organizationViewer](https://cloud.google.com/iam/docs/understanding-roles#resourcemanager.organizationViewer) <code>+</code>|
|<b>tn0-gke-dev-0</b><br><small><i>serviceAccount</i></small>|[roles/orgpolicy.policyAdmin](https://cloud.google.com/iam/docs/understanding-roles#orgpolicy.policyAdmin) <code>+</code><code>•</code>|
|<b>tn0-gke-prod-0</b><br><small><i>serviceAccount</i></small>|[roles/orgpolicy.policyAdmin](https://cloud.google.com/iam/docs/understanding-roles#orgpolicy.policyAdmin) <code>+</code><code>•</code>|
|<b>tn0-networking-0</b><br><small><i>serviceAccount</i></small>|[roles/orgpolicy.policyAdmin](https://cloud.google.com/iam/docs/understanding-roles#orgpolicy.policyAdmin) <code>+</code><code>•</code>|
|<b>tn0-pf-dev-0</b><br><small><i>serviceAccount</i></small>|[roles/orgpolicy.policyAdmin](https://cloud.google.com/iam/docs/understanding-roles#orgpolicy.policyAdmin) <code>+</code><code>•</code>|
|<b>tn0-pf-prod-0</b><br><small><i>serviceAccount</i></small>|[roles/orgpolicy.policyAdmin](https://cloud.google.com/iam/docs/understanding-roles#orgpolicy.policyAdmin) <code>+</code><code>•</code>|
|<b>tn0-resman-0</b><br><small><i>serviceAccount</i></small>|[roles/orgpolicy.policyAdmin](https://cloud.google.com/iam/docs/understanding-roles#orgpolicy.policyAdmin) <code>+</code><code>•</code>|
|<b>tn0-sandbox-0</b><br><small><i>serviceAccount</i></small>|[roles/orgpolicy.policyAdmin](https://cloud.google.com/iam/docs/understanding-roles#orgpolicy.policyAdmin) <code>+</code><code>•</code>|
|<b>tn0-security-0</b><br><small><i>serviceAccount</i></small>|[roles/orgpolicy.policyAdmin](https://cloud.google.com/iam/docs/understanding-roles#orgpolicy.policyAdmin) <code>+</code><code>•</code>|
|<b>tn0-teams-0</b><br><small><i>serviceAccount</i></small>|[roles/orgpolicy.policyAdmin](https://cloud.google.com/iam/docs/understanding-roles#orgpolicy.policyAdmin) <code>+</code><code>•</code>|
## Folder <i>test tenant 0 [#1]</i>
| members | roles |
|---|---|
|<b>tn0-admins</b><br><small><i>group</i></small>|[roles/compute.xpnAdmin](https://cloud.google.com/iam/docs/understanding-roles#compute.xpnAdmin) <br>[roles/logging.admin](https://cloud.google.com/iam/docs/understanding-roles#logging.admin) <br>[roles/owner](https://cloud.google.com/iam/docs/understanding-roles#owner) <br>[roles/resourcemanager.folderAdmin](https://cloud.google.com/iam/docs/understanding-roles#resourcemanager.folderAdmin) <br>[roles/resourcemanager.projectCreator](https://cloud.google.com/iam/docs/understanding-roles#resourcemanager.projectCreator) |
|<b>tn0-networking-0</b><br><small><i>serviceAccount</i></small>|[roles/compute.xpnAdmin](https://cloud.google.com/iam/docs/understanding-roles#compute.xpnAdmin) |
|<b>tn0-resman-0</b><br><small><i>serviceAccount</i></small>|[roles/compute.xpnAdmin](https://cloud.google.com/iam/docs/understanding-roles#compute.xpnAdmin) <br>[roles/logging.admin](https://cloud.google.com/iam/docs/understanding-roles#logging.admin) <br>[roles/owner](https://cloud.google.com/iam/docs/understanding-roles#owner) <br>[roles/resourcemanager.folderAdmin](https://cloud.google.com/iam/docs/understanding-roles#resourcemanager.folderAdmin) <br>[roles/resourcemanager.projectCreator](https://cloud.google.com/iam/docs/understanding-roles#resourcemanager.projectCreator) |
## Project <i>prod-iac-core-0</i>
| members | roles |
|---|---|
|<b>tn0-bootstrap-1</b><br><small><i>serviceAccount</i></small>|[roles/logging.logWriter](https://cloud.google.com/iam/docs/understanding-roles#logging.logWriter) <code>+</code>|
## Project <i>tn0-audit-logs-0</i>
| members | roles |
|---|---|
|<b>f260055713332-284719</b><br><small><i>serviceAccount</i></small>|[roles/logging.bucketWriter](https://cloud.google.com/iam/docs/understanding-roles#logging.bucketWriter) <code>+</code><code>•</code>|
|<b>prod-resman-0</b><br><small><i>serviceAccount</i></small>|[roles/owner](https://cloud.google.com/iam/docs/understanding-roles#owner) |
|<b>tn0-resman-0</b><br><small><i>serviceAccount</i></small>|[roles/owner](https://cloud.google.com/iam/docs/understanding-roles#owner) |
## Project <i>tn0-iac-core-0</i>
| members | roles |
|---|---|
|<b>tn0-admins</b><br><small><i>group</i></small>|[roles/iam.serviceAccountTokenCreator](https://cloud.google.com/iam/docs/understanding-roles#iam.serviceAccountTokenCreator) <br>[roles/iam.workloadIdentityPoolAdmin](https://cloud.google.com/iam/docs/understanding-roles#iam.workloadIdentityPoolAdmin) |
|<b>SERVICE_IDENTITY_service-networking</b><br><small><i>serviceAccount</i></small>|[roles/servicenetworking.serviceAgent](https://cloud.google.com/iam/docs/understanding-roles#servicenetworking.serviceAgent) <code>+</code>|
|<b>prod-resman-0</b><br><small><i>serviceAccount</i></small>|[roles/owner](https://cloud.google.com/iam/docs/understanding-roles#owner) |
|<b>tn0-resman-0</b><br><small><i>serviceAccount</i></small>|[roles/cloudbuild.builds.editor](https://cloud.google.com/iam/docs/understanding-roles#cloudbuild.builds.editor) <br>[roles/iam.serviceAccountAdmin](https://cloud.google.com/iam/docs/understanding-roles#iam.serviceAccountAdmin) <br>[roles/iam.workloadIdentityPoolAdmin](https://cloud.google.com/iam/docs/understanding-roles#iam.workloadIdentityPoolAdmin) <br>[roles/owner](https://cloud.google.com/iam/docs/understanding-roles#owner) <br>[roles/source.admin](https://cloud.google.com/iam/docs/understanding-roles#source.admin) <br>[roles/storage.admin](https://cloud.google.com/iam/docs/understanding-roles#storage.admin) |
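In these generated tables, `+` marks an additive grant: the role is attached for that member alone, without taking over the full member list of the role on the resource. In Terraform terms this maps to member-level IAM resources rather than authoritative bindings. A minimal sketch, assuming a hypothetical organization id and service account email (the real values come from the stage's variables and outputs):

```hcl
# Additive org-level grant: adds this one member to the role without
# replacing any other members already holding it. The org id and the
# service account email below are placeholders, not values from this repo.
resource "google_organization_iam_member" "tn0_resman_orgpolicy" {
  org_id = "123456789012"
  role   = "roles/orgpolicy.policyAdmin"
  member = "serviceAccount:tn0-resman-0@tn0-iac-core-0.iam.gserviceaccount.com"
}
```

An authoritative `google_organization_iam_binding` for the same role would instead overwrite every other member of `roles/orgpolicy.policyAdmin`, which is why additive grants are used when several tenant service accounts share a role.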
