commit 10b9c9e2a6

CHANGELOG.md

@@ -4,19 +4,61 @@ All notable changes to this project will be documented in this file.
<!-- markdownlint-disable MD024 -->

## [Unreleased]

<!-- None < 2023-08-09 17:02:13+00:00 -->
<!-- None < 2023-09-18 07:03:09+00:00 -->

## [26.0.0] - 2023-09-18

<!-- 2023-09-18 07:03:09+00:00 < 2023-08-09 17:02:13+00:00 -->

### BLUEPRINTS

- [[#1684](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1684)] **incompatible change:** Update resource-level IAM interface for kms and pubsub modules ([juliocc](https://github.com/juliocc)) <!-- 2023-09-17 08:48:09+00:00 -->
- [[#1682](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1682)] GKE cluster modules: add optional kube state metrics ([olliefr](https://github.com/olliefr)) <!-- 2023-09-15 11:18:45+00:00 -->
- [[#1681](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1681)] **incompatible change:** Embed subnet-level IAM in the variables controlling creation of subnets ([juliocc](https://github.com/juliocc)) <!-- 2023-09-15 06:42:24+00:00 -->
- [[#1680](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1680)] Upgrades to `monitoring_config` in `gke-cluster-*`, docs update, and cosmetics fixes to GKE cluster modules ([olliefr](https://github.com/olliefr)) <!-- 2023-09-14 22:25:57+00:00 -->
- [[#1679](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1679)] Add lineage on Minimal Data Platform blueprint ([lcaggio](https://github.com/lcaggio)) <!-- 2023-09-14 15:52:20+00:00 -->
- [[#1678](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1678)] Allow only one of `secondary_range_blocks` or `secondary_range_names` when creating GKE clusters. ([juliocc](https://github.com/juliocc)) <!-- 2023-09-14 11:29:08+00:00 -->
- [[#1671](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1671)] **incompatible change:** Fixed, added back environments to each instance, that way we can also… ([apichick](https://github.com/apichick)) <!-- 2023-09-13 14:58:04+00:00 -->
- [[#1662](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1662)] merge labels from data_merges in project factory ([Tutuchan](https://github.com/Tutuchan)) <!-- 2023-09-08 10:27:46+00:00 -->
- [[#1651](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1651)] add AIRFLOW_VAR_ prefix to environment variables in data-platform blueprints ([Tutuchan](https://github.com/Tutuchan)) <!-- 2023-09-08 07:38:29+00:00 -->
- [[#1642](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1642)] New phpIPAM serverless third parties solution in blueprints ([simonebruzzechesse](https://github.com/simonebruzzechesse)) <!-- 2023-09-07 13:30:23+00:00 -->
- [[#1654](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1654)] Fix project factory blueprint and fast stage ([LucaPrete](https://github.com/LucaPrete)) <!-- 2023-09-07 12:48:39+00:00 -->
- [[#1647](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1647)] Bump provider version to 4.80.0 ([juliocc](https://github.com/juliocc)) <!-- 2023-09-05 10:06:19+00:00 -->
- [[#1638](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1638)] gke-cluster-standard: change logging configuration ([olliefr](https://github.com/olliefr)) <!-- 2023-08-31 11:49:15+00:00 -->
- [[#1636](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1636)] Delete api gateway blueprint ([juliodiez](https://github.com/juliodiez)) <!-- 2023-08-29 11:32:40+00:00 -->
- [[#1607](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1607)] Trap requests timeout error in quota sync ([ludoo](https://github.com/ludoo)) <!-- 2023-08-21 16:37:55+00:00 -->
- [[#1595](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1595)] **incompatible change:** IAM interface refactor ([ludoo](https://github.com/ludoo)) <!-- 2023-08-20 07:44:20+00:00 -->
- [[#1601](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1601)] [Data Platform] Update README.md ([lcaggio](https://github.com/lcaggio)) <!-- 2023-08-18 16:27:43+00:00 -->

### DOCUMENTATION

- [[#1687](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1687)] Add IAM variables template to ADR ([juliocc](https://github.com/juliocc)) <!-- 2023-09-17 09:08:11+00:00 -->
- [[#1686](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1686)] CONTRIBUTING guide: fix broken links and update "running tests for specific examples" section ([olliefr](https://github.com/olliefr)) <!-- 2023-09-16 19:46:46+00:00 -->
- [[#1658](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1658)] **incompatible change:** Change type of `iam_bindings` variable to allow multiple conditional bindings ([ludoo](https://github.com/ludoo)) <!-- 2023-09-08 06:56:31+00:00 -->
- [[#1642](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1642)] New phpIPAM serverless third parties solution in blueprints ([simonebruzzechesse](https://github.com/simonebruzzechesse)) <!-- 2023-09-07 13:30:23+00:00 -->
- [[#1640](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1640)] Simplify linting output in workflow ([juliocc](https://github.com/juliocc)) <!-- 2023-08-31 09:16:37+00:00 -->
- [[#1636](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1636)] Delete api gateway blueprint ([juliodiez](https://github.com/juliodiez)) <!-- 2023-08-29 11:32:40+00:00 -->
- [[#1595](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1595)] **incompatible change:** IAM interface refactor ([ludoo](https://github.com/ludoo)) <!-- 2023-08-20 07:44:20+00:00 -->

### FAST

- [[#1684](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1684)] **incompatible change:** Update resource-level IAM interface for kms and pubsub modules ([juliocc](https://github.com/juliocc)) <!-- 2023-09-17 08:48:09+00:00 -->
- [[#1685](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1685)] Fix psa routing variable in FAST net stages ([ludoo](https://github.com/ludoo)) <!-- 2023-09-16 08:31:03+00:00 -->
- [[#1682](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1682)] GKE cluster modules: add optional kube state metrics ([olliefr](https://github.com/olliefr)) <!-- 2023-09-15 11:18:45+00:00 -->
- [[#1681](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1681)] **incompatible change:** Embed subnet-level IAM in the variables controlling creation of subnets ([juliocc](https://github.com/juliocc)) <!-- 2023-09-15 06:42:24+00:00 -->
- [[#1680](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1680)] Upgrades to `monitoring_config` in `gke-cluster-*`, docs update, and cosmetics fixes to GKE cluster modules ([olliefr](https://github.com/olliefr)) <!-- 2023-09-14 22:25:57+00:00 -->
- [[#1678](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1678)] Allow only one of `secondary_range_blocks` or `secondary_range_names` when creating GKE clusters. ([juliocc](https://github.com/juliocc)) <!-- 2023-09-14 11:29:08+00:00 -->
- [[#1664](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1664)] Align pf stage sample data to new format ([ludoo](https://github.com/ludoo)) <!-- 2023-09-09 08:04:19+00:00 -->
- [[#1663](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1663)] [#1661] Make FAST stage 1 resman tf destroy more reliable ([LucaPrete](https://github.com/LucaPrete)) <!-- 2023-09-08 10:09:31+00:00 -->
- [[#1659](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1659)] Link project factory documentation from FAST stage ([ludoo](https://github.com/ludoo)) <!-- 2023-09-08 07:14:16+00:00 -->
- [[#1658](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1658)] **incompatible change:** Change type of `iam_bindings` variable to allow multiple conditional bindings ([ludoo](https://github.com/ludoo)) <!-- 2023-09-08 06:56:31+00:00 -->
- [[#1654](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1654)] Fix project factory blueprint and fast stage ([LucaPrete](https://github.com/LucaPrete)) <!-- 2023-09-07 12:48:39+00:00 -->
- [[#1638](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1638)] gke-cluster-standard: change logging configuration ([olliefr](https://github.com/olliefr)) <!-- 2023-08-31 11:49:15+00:00 -->
- [[#1634](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1634)] [revert(revert(patch))] Remove unused ASN numbers for CloudNAT in FAST ([LucaPrete](https://github.com/LucaPrete)) <!-- 2023-08-28 15:32:30+00:00 -->
- [[#1631](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1631)] Allow single hfw policy association in folder and organization modules ([juliocc](https://github.com/juliocc)) <!-- 2023-08-28 14:46:05+00:00 -->
- [[#1626](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1626)] Revert "Remove unused ASN numbers from CloudNAT to avoid provider errors" ([LucaPrete](https://github.com/LucaPrete)) <!-- 2023-08-28 07:33:53+00:00 -->
- [[#1623](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1623)] Fix role name for delegated grants in FAST bootstrap ([juliocc](https://github.com/juliocc)) <!-- 2023-08-25 06:43:20+00:00 -->
- [[#1612](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1612)] Fix: align stage-2-e-nva-bgp to the latest APIs ([LucaPrete](https://github.com/LucaPrete)) <!-- 2023-08-23 11:34:11+00:00 -->
- [[#1610](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1610)] Fix: use existing variable to optionally name fw policies ([LucaPrete](https://github.com/LucaPrete)) <!-- 2023-08-22 06:55:56+00:00 -->
- [[#1595](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1595)] **incompatible change:** IAM interface refactor ([ludoo](https://github.com/ludoo)) <!-- 2023-08-20 07:44:20+00:00 -->
- [[#1597](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1597)] fix null object exception in bootstrap output when using cloudsource ([sm3142](https://github.com/sm3142)) <!-- 2023-08-17 09:03:23+00:00 -->
- [[#1593](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1593)] Fix FAST CI/CD for Gitlab ([ludoo](https://github.com/ludoo)) <!-- 2023-08-15 10:59:31+00:00 -->
@@ -24,6 +66,41 @@ All notable changes to this project will be documented in this file.

### MODULES

- [[#1684](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1684)] **incompatible change:** Update resource-level IAM interface for kms and pubsub modules ([juliocc](https://github.com/juliocc)) <!-- 2023-09-17 08:48:09+00:00 -->
- [[#1683](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1683)] Fix subnet iam_bindings to use arbitrary keys ([juliocc](https://github.com/juliocc)) <!-- 2023-09-15 13:15:59+00:00 -->
- [[#1682](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1682)] GKE cluster modules: add optional kube state metrics ([olliefr](https://github.com/olliefr)) <!-- 2023-09-15 11:18:45+00:00 -->
- [[#1681](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1681)] **incompatible change:** Embed subnet-level IAM in the variables controlling creation of subnets ([juliocc](https://github.com/juliocc)) <!-- 2023-09-15 06:42:24+00:00 -->
- [[#1680](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1680)] Upgrades to `monitoring_config` in `gke-cluster-*`, docs update, and cosmetics fixes to GKE cluster modules ([olliefr](https://github.com/olliefr)) <!-- 2023-09-14 22:25:57+00:00 -->
- [[#1678](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1678)] Allow only one of `secondary_range_blocks` or `secondary_range_names` when creating GKE clusters. ([juliocc](https://github.com/juliocc)) <!-- 2023-09-14 11:29:08+00:00 -->
- [[#1675](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1675)] GKE Autopilot module: add network tags ([olliefr](https://github.com/olliefr)) <!-- 2023-09-14 09:34:51+00:00 -->
- [[#1676](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1676)] fixed up nit from PR 1666 ([dgulli](https://github.com/dgulli)) <!-- 2023-09-14 05:23:20+00:00 -->
- [[#1672](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1672)] Added possibility to use gcs push endpoint on pubsub subscription ([luigi-bitonti](https://github.com/luigi-bitonti)) <!-- 2023-09-13 19:42:43+00:00 -->
- [[#1671](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1671)] **incompatible change:** Fixed, added back environments to each instance, that way we can also… ([apichick](https://github.com/apichick)) <!-- 2023-09-13 14:58:04+00:00 -->
- [[#1666](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1666)] added support for global proxy only subnets ([dgulli](https://github.com/dgulli)) <!-- 2023-09-13 08:46:09+00:00 -->
- [[#1669](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1669)] Fix for partner interconnect ([apichick](https://github.com/apichick)) <!-- 2023-09-12 13:29:35+00:00 -->
- [[#1668](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1668)] fix(compute-mig): add correct type optionality for metrics in autosca… ([NotArpit](https://github.com/NotArpit)) <!-- 2023-09-12 11:58:09+00:00 -->
- [[#1667](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1667)] fix(compute-mig): add mode property to compute_region_autoscaler ([NotArpit](https://github.com/NotArpit)) <!-- 2023-09-11 11:25:32+00:00 -->
- [[#1658](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1658)] **incompatible change:** Change type of `iam_bindings` variable to allow multiple conditional bindings ([ludoo](https://github.com/ludoo)) <!-- 2023-09-08 06:56:31+00:00 -->
- [[#1653](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1653)] Fixes to the apigee module ([juliocc](https://github.com/juliocc)) <!-- 2023-09-07 15:02:56+00:00 -->
- [[#1642](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1642)] New phpIPAM serverless third parties solution in blueprints ([simonebruzzechesse](https://github.com/simonebruzzechesse)) <!-- 2023-09-07 13:30:23+00:00 -->
- [[#1650](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1650)] Make net-vpc variables non-nullable ([juliocc](https://github.com/juliocc)) <!-- 2023-09-06 08:52:29+00:00 -->
- [[#1647](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1647)] Bump provider version to 4.80.0 ([juliocc](https://github.com/juliocc)) <!-- 2023-09-05 10:06:19+00:00 -->
- [[#1646](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1646)] gke-cluster-autopilot: add monitoring configuration ([olliefr](https://github.com/olliefr)) <!-- 2023-09-04 15:43:59+00:00 -->
- [[#1645](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1645)] gke-cluster-autopilot: add validation for release_channel input variable ([olliefr](https://github.com/olliefr)) <!-- 2023-09-03 00:37:50+00:00 -->
- [[#1638](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1638)] gke-cluster-standard: change logging configuration ([olliefr](https://github.com/olliefr)) <!-- 2023-08-31 11:49:15+00:00 -->
- [[#1625](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1625)] gke-cluster-autopilot: add logging configuration ([olliefr](https://github.com/olliefr)) <!-- 2023-08-31 11:06:57+00:00 -->
- [[#1637](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1637)] GRPC variable is misnamed "GRCP" in `modules/cloud-run/variables.tf`, causing liveness probe and startup probe to fail ([zacharysmithdatatonic](https://github.com/zacharysmithdatatonic)) <!-- 2023-08-30 11:47:05+00:00 -->
- [[#1632](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1632)] Vpc sc allow null for identity type ([LudovicEmo](https://github.com/LudovicEmo)) <!-- 2023-08-29 02:28:58+00:00 -->
- [[#1633](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1633)] Do not set default ASN number ([LucaPrete](https://github.com/LucaPrete)) <!-- 2023-08-28 15:06:32+00:00 -->
- [[#1631](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1631)] Allow single hfw policy association in folder and organization modules ([juliocc](https://github.com/juliocc)) <!-- 2023-08-28 14:46:05+00:00 -->
- [[#1630](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1630)] [Fix] Add explicit dependency between CR peers and NCC RA spoke creation ([LucaPrete](https://github.com/LucaPrete)) <!-- 2023-08-28 13:50:46+00:00 -->
- [[#1613](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1613)] Cloud SQL activation policy selectable ([cmvalla](https://github.com/cmvalla)) <!-- 2023-08-25 10:12:08+00:00 -->
- [[#1619](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1619)] Adding support for NAT in Apigee ([billabongrob](https://github.com/billabongrob)) <!-- 2023-08-24 18:25:54+00:00 -->
- [[#1620](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1620)] Remove net-firewall-policy match variable validation ([richard-olson](https://github.com/richard-olson)) <!-- 2023-08-24 17:45:32+00:00 -->
- [[#1614](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1614)] Fix net-firewall-policy factory name and action ([richard-olson](https://github.com/richard-olson)) <!-- 2023-08-23 14:06:00+00:00 -->
- [[#1584](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1584)] add support for object upload to gcs module ([ehorning](https://github.com/ehorning)) <!-- 2023-08-22 17:01:19+00:00 -->
- [[#1609](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1609)] **incompatible change:** Use cloud run bindings for cf v2 invoker role, refactor iam handling in cf v2 and cloud run ([ludoo](https://github.com/ludoo)) <!-- 2023-08-22 07:23:49+00:00 -->
- [[#1590](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1590)] GCVE module first release ([eliamaldini](https://github.com/eliamaldini)) <!-- 2023-08-21 07:05:45+00:00 -->
- [[#1595](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1595)] **incompatible change:** IAM interface refactor ([ludoo](https://github.com/ludoo)) <!-- 2023-08-20 07:44:20+00:00 -->
- [[#1600](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1600)] fix(cloud-run): move cpu boost annotation to revision ([LiuVII](https://github.com/LiuVII)) <!-- 2023-08-18 14:46:25+00:00 -->
- [[#1599](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1599)] Fixing some typos ([bluPhy](https://github.com/bluPhy)) <!-- 2023-08-18 08:29:26+00:00 -->
@@ -38,6 +115,9 @@ All notable changes to this project will be documented in this file.

### TOOLS

- [[#1641](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1641)] Lint script ([juliocc](https://github.com/juliocc)) <!-- 2023-08-31 09:38:09+00:00 -->
- [[#1640](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1640)] Simplify linting output in workflow ([juliocc](https://github.com/juliocc)) <!-- 2023-08-31 09:16:37+00:00 -->
- [[#1635](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1635)] Silence FAST tests warnings ([juliocc](https://github.com/juliocc)) <!-- 2023-08-29 05:26:58+00:00 -->
- [[#1595](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1595)] **incompatible change:** IAM interface refactor ([ludoo](https://github.com/ludoo)) <!-- 2023-08-20 07:44:20+00:00 -->
- [[#1585](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1585)] Print inventory path when a test fails ([juliocc](https://github.com/juliocc)) <!-- 2023-08-11 10:28:08+00:00 -->
@@ -1483,7 +1563,8 @@ All notable changes to this project will be documented in this file.

- merge development branch with suite of new modules and end-to-end examples

<!-- markdown-link-check-disable -->
[Unreleased]: https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/compare/v25.0.0...HEAD
[Unreleased]: https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/compare/v26.0.0...HEAD
[26.0.0]: https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/compare/v25.0.0...v26.0.0
[25.0.0]: https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/compare/v24.0.0...v25.0.0
[24.0.0]: https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/compare/v23.0.0...v24.0.0
[23.0.0]: https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/compare/v22.0.0...v23.0.0
@@ -686,8 +686,8 @@ Writing `pytest` unit tests to check plan results is really easy, but since wrap

In the following sections we describe the three testing approaches we currently have:

- [Example-based tests](#testing-via-readmemd-example-blocks): this is perhaps the easiest and most common way to test either a module or a blueprint. You simply have to provide an example call to your module and a few metadata values in the module's README.md.
- [tfvars-based tests](#testing-via-tfvars-and-yaml): allows you to test a module or blueprint by providing variables via tfvars files and an expected plan result in the form of an inventory. This type of test is useful, for example, for FAST stages that don't have any examples within their READMEs.
- [Python-based (legacy) tests](#writing-tests-in-python--legacy-approach-): in some situations you might still want to interact directly with `tftest` via Python; if that's the case, use this method to write custom Python logic to test your module in any way you see fit.
- [tfvars-based tests](#testing-via-tfvars-and-yaml-aka-tftest-based-tests): allows you to test a module or blueprint by providing variables via tfvars files and an expected plan result in the form of an inventory. This type of test is useful, for example, for FAST stages that don't have any examples within their READMEs.
- [Python-based (legacy) tests](#writing-tests-in-python-legacy-approach): in some situations you might still want to interact directly with `tftest` via Python; if that's the case, use this method to write custom Python logic to test your module in any way you see fit.

### Testing via README.md example blocks
@@ -818,27 +818,47 @@ Example-based test are named based on the section within the README.md that contains them

Here we show a few commonly used selection commands:

- Run all examples:
  - `pytest tests/examples/`
- Run all examples for modules:
  - `pytest -k modules/ tests/examples`
  - `pytest tests/examples`
- Run all examples for blueprints only:
  - `pytest -k blueprints tests/examples`
- Run all examples for modules only:
  - `pytest -k modules tests/examples`
- Run all examples for the `net-vpc` module:
  - `pytest -k 'net and vpc' tests/examples`
- Run a specific example in module `net-vpc`:
  - `pytest -k 'modules and dns and private'`
  - `pytest -v 'tests/examples/test_plan.py::test_example[modules/dns:Private Zone]'`
  - `pytest -k 'modules and net-vpc:' tests/examples`
- Run a specific example (identified by a substring match on its name) from the `net-vpc` module:
  - `pytest -k 'modules and net-vpc: and ipv6' tests/examples`
- Run a specific example (identified by its full name) from the `net-vpc` module:
  - `pytest -v 'tests/examples/test_plan.py::test_example[modules/net-vpc:IPv6:1]'`
- Run tests for all blueprints except those under the gke directory:
  - `pytest -k 'blueprints and not gke'`
  - `pytest -k 'blueprints and not gke' tests/examples`

Tip: you can use `pytest --collect-only` to fine tune your selection query without actually running the tests. Once you find the expression matching your desired tests, remove the `--collect-only` flag.

> [!NOTE]
> The colon symbol (`:`) in `pytest` keyword expression `'modules and net-vpc:'` makes sure that `net-vpc` is matched but `net-vpc-firewall` or `net-vpc-peering` are not.
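To see why the trailing colon matters, here is a rough, self-contained Python sketch of how a `-k`-style keyword expression selects test IDs. This is **not** pytest's actual implementation (the real matcher is more sophisticated, e.g. it supports `or` and parentheses); it only handles `and`/`not` over plain substring matches, which is enough to illustrate the examples above:

```python
def k_matches(expression: str, test_id: str) -> bool:
    """Rough sketch of pytest-style -k matching, supporting only
    'and' and 'not' (no 'or', no parentheses)."""
    result = True
    negate = False
    for token in expression.split():
        if token == "and":
            continue
        if token == "not":
            negate = True
            continue
        hit = token in test_id  # plain substring match
        if negate:
            hit, negate = not hit, False
        result = result and hit
    return result


ids = [
    "tests/examples/test_plan.py::test_example[modules/net-vpc:IPv6:1]",
    "tests/examples/test_plan.py::test_example[modules/net-vpc-firewall:Rules:1]",
    "tests/examples/test_plan.py::test_example[blueprints/gke/demo:Basic:1]",
]
# 'net-vpc:' matches the first ID only: in the second ID the substring
# after 'net-vpc' is '-firewall', so the trailing colon never matches.
print([i for i in ids if k_matches("modules and net-vpc:", i)])
```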
Tip: to list all tests matched by your keyword expression (`-k ...`) without actually running them, you can use the `--collect-only` flag.

The following command executes a dry run that *lists* all example-based tests for the `gke-cluster-autopilot` module:

```bash
pytest -k 'modules and gke-cluster-autopilot:' tests/examples --collect-only
```

Once you find the expression matching your desired test(s), remove the `--collect-only` flag.

The next command executes an example-based test found in the *Monitoring Configuration* section of the README file for the `gke-cluster-autopilot` module. That section actually has two tests, so the `:2` part selects the second test only:

```bash
pytest -k 'modules and gke-cluster-autopilot: and monitoring and :2' tests/examples
```

#### Generating the inventory automatically

Building an inventory file by hand is difficult. To simplify this task, the default test runner for examples prints the inventory for the full plan if it succeeds. Therefore, you can start without an inventory and then run a test to get the full plan and extract the pieces you want to build the inventory file.

Suppose you want to generate the inventory for the last DNS example above (the one creating the recordsets from a YAML file). Assuming that example is under the "Private Zone" section in the README for the `dns` module, you can run the following command to build the inventory:

Suppose you want to generate the inventory for the last DNS example above (the one creating the recordsets from a YAML file). Assuming that example is the first code block under the "Private Zone" section in the README for the `dns` module, you can run the following command to build the inventory:

```bash
pytest -s 'tests/examples/test_plan.py::test_example[modules/dns:Private Zone]'
pytest -s 'tests/examples/test_plan.py::test_example[modules/dns:Private Zone:1]'
```

which will generate an output similar to this:
@@ -53,14 +53,13 @@ Do the following to verify that everything works as expected.

4. Every day at 4am (UTC) Cloud Scheduler runs and exports the analytics to the BigQuery table. Double-check that they are there.
<!-- BEGIN TFDOC -->

## Variables

| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [envgroups](variables.tf#L24) | Environment groups (NAME => [HOSTNAMES]). | <code>map(list(string))</code> | ✓ |  |
| [environments](variables.tf#L30) | Environments. | <code title="map(object({ display_name = optional(string) description = optional(string) node_config = optional(object({ min_node_count = optional(number) max_node_count = optional(number) })) iam = optional(map(list(string))) envgroups = optional(list(string)) regions = optional(list(string)) }))">map(object({…}))</code> | ✓ |  |
| [instances](variables.tf#L46) | Instance. | <code title="map(object({ display_name = optional(string) description = optional(string) runtime_ip_cidr_range = string troubleshooting_ip_cidr_range = string disk_encryption_key = optional(string) consumer_accept_list = optional(list(string)) }))">map(object({…}))</code> | ✓ |  |
| [environments](variables.tf#L30) | Environments. | <code title="map(object({ display_name = optional(string) description = optional(string) node_config = optional(object({ min_node_count = optional(number) max_node_count = optional(number) })) iam = optional(map(list(string))) envgroups = optional(list(string)) }))">map(object({…}))</code> | ✓ |  |
| [instances](variables.tf#L45) | Instance. | <code title="map(object({ display_name = optional(string) description = optional(string) runtime_ip_cidr_range = string troubleshooting_ip_cidr_range = string disk_encryption_key = optional(string) consumer_accept_list = optional(list(string)) environments = optional(list(string)) }))">map(object({…}))</code> | ✓ |  |
| [project_id](variables.tf#L91) | Project ID. | <code>string</code> | ✓ |  |
| [psc_config](variables.tf#L97) | PSC configuration. | <code>map(string)</code> | ✓ |  |
| [datastore_name](variables.tf#L17) | Datastore. | <code>string</code> |  | <code>"gcs"</code> |

@@ -74,7 +73,6 @@ Do the following to verify that everything works as expected.

| name | description | sensitive |
|---|---|:---:|
| [ip_address](outputs.tf#L17) | IP address. |  |

<!-- END TFDOC -->

## Test
@@ -92,13 +90,13 @@ module "test" {
  environments = {
    apis-test = {
      envgroups = ["test"]
      regions   = ["europe-west1"]
    }
  }
  instances = {
    europe-west1 = {
      runtime_ip_cidr_range         = "10.0.4.0/22"
      troubleshooting_ip_cidr_range = "10.1.0.0/28"
      environments                  = ["apis-test"]
    }
  }
  psc_config = {
@@ -38,7 +38,6 @@ variable "environments" {
    }))
    iam       = optional(map(list(string)))
    envgroups = optional(list(string))
    regions   = optional(list(string))
  }))
  nullable = false
}

@@ -52,6 +51,7 @@ variable "instances" {
    troubleshooting_ip_cidr_range = string
    disk_encryption_key           = optional(string)
    consumer_accept_list          = optional(list(string))
    environments                  = optional(list(string))
  }))
  nullable = false
}
@ -1,4 +1,4 @@
|
|||
# Copyright 2022 Google LLC
|
||||
# Copyright 2023 Google LLC
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
|
@ -17,11 +17,11 @@ terraform {
|
|||
required_providers {
|
||||
google = {
|
||||
source = "hashicorp/google"
|
||||
version = ">= 4.80.0" # tftest
|
||||
version = ">= 4.82.0" # tftest
|
||||
}
|
||||
google-beta = {
|
||||
source = "hashicorp/google-beta"
|
||||
version = ">= 4.80.0" # tftest
|
||||
version = ">= 4.82.0" # tftest
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@@ -20,12 +20,9 @@ module "cluster" {
  name     = "cluster"
  location = var.region
  vpc_config = {
    network    = module.vpc.self_link
    subnetwork = module.vpc.subnet_self_links["${var.region}/subnet-apigee"]
    secondary_range_names = {
      pods     = "pods"
      services = "services"
    }
    network                  = module.vpc.self_link
    subnetwork               = module.vpc.subnet_self_links["${var.region}/subnet-apigee"]
    secondary_range_names    = {}
    master_authorized_ranges = var.cluster_network_config.master_authorized_cidr_blocks
    master_ipv4_cidr_block   = var.cluster_network_config.master_cidr_block
  }

@@ -79,4 +76,4 @@ module "apigee-runtime-nodepool" {
    create = true
  }
    tags = ["node"]
  }
}
@@ -76,11 +76,11 @@ module "apigee" {
   environments = {
     (local.environment) = {
       envgroups = [local.envgroup]
       regions = [var.region]
     }
   }
   instances = {
     (var.region) = {
       environments = [local.environment]
       runtime_ip_cidr_range = var.apigee_runtime_ip_cidr_range
       troubleshooting_ip_cidr_range = var.apigee_troubleshooting_ip_cidr_range
     }

@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }

@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }

@@ -55,10 +55,12 @@ module "vpc" {
 }

 module "pubsub" {
   source = "../../../modules/pubsub"
   project_id = module.project.project_id
   name = var.name
-  subscriptions = { "${var.name}-default" = null }
+  subscriptions = {
+    "${var.name}-default" = {}
+  }
   iam = {
     "roles/pubsub.publisher" = [
       "serviceAccount:${module.project.service_accounts.robots.cloudasset}"

@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }

@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }

@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }

@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }

@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }

@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }

@@ -39,7 +39,7 @@ module "pubsub" {
   project_id = module.project.project_id
   name = var.name
   subscriptions = {
-    "${var.name}-default" = null
+    "${var.name}-default" = {}
   }
   # the Cloud Scheduler robot service account already has pubsub.topics.publish
   # at the project level via roles/cloudscheduler.serviceAgent

@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }

@@ -63,7 +63,7 @@ module "pubsub" {
   project_id = module.project.project_id
   name = var.name
   subscriptions = {
-    "${var.name}-default" = null
+    "${var.name}-default" = {}
   }
   # the Cloud Scheduler robot service account already has pubsub.topics.publish
   # at the project level via roles/cloudscheduler.serviceAgent

@@ -74,7 +74,7 @@ module "pubsub_file" {
   project_id = module.project.project_id
   name = var.name_cffile
   subscriptions = {
-    "${var.name_cffile}-default" = null
+    "${var.name_cffile}-default" = {}
   }
   # the Cloud Scheduler robot service account already has pubsub.topics.publish
   # at the project level via roles/cloudscheduler.serviceAgent

@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }

@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }

@@ -179,5 +179,5 @@ module "test" {
   }
   prefix = "prefix"
 }
-# tftest modules=9 resources=43
+# tftest modules=9 resources=44
 ```

@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -106,7 +106,10 @@ module "kms" {
     name = "${var.prefix}-${var.region}",
     location = var.region
   }
-  keys = { key-gce = null, key-gcs = null }
+  keys = {
+    key-gce = {}
+    key-gcs = {}
+  }
 }

 ###############################################################################

@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }

@@ -139,5 +139,5 @@ module "test" {
   }
   prefix = "prefix"
 }
-# tftest modules=5 resources=28
+# tftest modules=5 resources=29
 ```

@@ -15,7 +15,7 @@
 # tfdoc:file:description Orchestration Cloud Composer definition.

 locals {
-  env_variables = {
+  _env_variables = {
     BQ_LOCATION = var.location
     DATA_CAT_TAGS = try(jsonencode(module.common-datacatalog.tags), "{}")
     DF_KMS_KEY = try(var.service_encryption_keys.dataflow, "")

@@ -48,6 +48,12 @@ locals {
     TRF_SA_DF = module.transf-sa-df-0.email
     TRF_SA_BQ = module.transf-sa-bq-0.email
   }
+  env_variables = {
+    for k, v in merge(
+      try(var.composer_config.software_config.env_variables, null),
+      local._env_variables
+    ) : "AIRFLOW_VAR_${k}" => v
+  }
 }
 module "orch-sa-cmp-0" {
   source = "../../../modules/iam-service-account"

@@ -70,7 +76,7 @@ resource "google_composer_environment" "orch-cmp-0" {
   software_config {
     airflow_config_overrides = try(var.composer_config.software_config.airflow_config_overrides, null)
     pypi_packages = try(var.composer_config.software_config.pypi_packages, null)
-    env_variables = merge(try(var.composer_config.software_config.env_variables, null), local.env_variables)
+    env_variables = local.env_variables
     image_version = try(var.composer_config.software_config.image_version, null)
   }
   dynamic "workloads_config" {

@@ -16,57 +16,52 @@
 # Load The Dependencies
 # --------------------------------------------------------------------------------

 import csv
 import datetime
 import io
 import json
 import logging
 import os

 from airflow import models
+from airflow.models.variable import Variable
 from airflow.providers.google.cloud.operators.dataflow import DataflowTemplatedJobStartOperator
-from airflow.operators import dummy
-from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator, BigQueryUpsertTableOperator, BigQueryUpdateTableSchemaOperator
 from airflow.utils.task_group import TaskGroup
+from airflow.operators import empty
+from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

 # --------------------------------------------------------------------------------
 # Set variables - Needed for the DEMO
 # --------------------------------------------------------------------------------
-BQ_LOCATION = os.environ.get("BQ_LOCATION")
-DATA_CAT_TAGS = json.loads(os.environ.get("DATA_CAT_TAGS"))
-DWH_LAND_PRJ = os.environ.get("DWH_LAND_PRJ")
-DWH_LAND_BQ_DATASET = os.environ.get("DWH_LAND_BQ_DATASET")
-DWH_LAND_GCS = os.environ.get("DWH_LAND_GCS")
-DWH_CURATED_PRJ = os.environ.get("DWH_CURATED_PRJ")
-DWH_CURATED_BQ_DATASET = os.environ.get("DWH_CURATED_BQ_DATASET")
-DWH_CURATED_GCS = os.environ.get("DWH_CURATED_GCS")
-DWH_CONFIDENTIAL_PRJ = os.environ.get("DWH_CONFIDENTIAL_PRJ")
-DWH_CONFIDENTIAL_BQ_DATASET = os.environ.get("DWH_CONFIDENTIAL_BQ_DATASET")
-DWH_CONFIDENTIAL_GCS = os.environ.get("DWH_CONFIDENTIAL_GCS")
-DWH_PLG_PRJ = os.environ.get("DWH_PLG_PRJ")
-DWH_PLG_BQ_DATASET = os.environ.get("DWH_PLG_BQ_DATASET")
-DWH_PLG_GCS = os.environ.get("DWH_PLG_GCS")
-GCP_REGION = os.environ.get("GCP_REGION")
-DRP_PRJ = os.environ.get("DRP_PRJ")
-DRP_BQ = os.environ.get("DRP_BQ")
-DRP_GCS = os.environ.get("DRP_GCS")
-DRP_PS = os.environ.get("DRP_PS")
-LOD_PRJ = os.environ.get("LOD_PRJ")
-LOD_GCS_STAGING = os.environ.get("LOD_GCS_STAGING")
-LOD_NET_VPC = os.environ.get("LOD_NET_VPC")
-LOD_NET_SUBNET = os.environ.get("LOD_NET_SUBNET")
-LOD_SA_DF = os.environ.get("LOD_SA_DF")
-ORC_PRJ = os.environ.get("ORC_PRJ")
-ORC_GCS = os.environ.get("ORC_GCS")
-TRF_PRJ = os.environ.get("TRF_PRJ")
-TRF_GCS_STAGING = os.environ.get("TRF_GCS_STAGING")
-TRF_NET_VPC = os.environ.get("TRF_NET_VPC")
-TRF_NET_SUBNET = os.environ.get("TRF_NET_SUBNET")
-TRF_SA_DF = os.environ.get("TRF_SA_DF")
-TRF_SA_BQ = os.environ.get("TRF_SA_BQ")
-DF_KMS_KEY = os.environ.get("DF_KMS_KEY", "")
-DF_REGION = os.environ.get("GCP_REGION")
-DF_ZONE = os.environ.get("GCP_REGION") + "-b"
+BQ_LOCATION = Variable.get("BQ_LOCATION")
+DATA_CAT_TAGS = Variable.get("DATA_CAT_TAGS", deserialize_json=True)
+DWH_LAND_PRJ = Variable.get("DWH_LAND_PRJ")
+DWH_LAND_BQ_DATASET = Variable.get("DWH_LAND_BQ_DATASET")
+DWH_LAND_GCS = Variable.get("DWH_LAND_GCS")
+DWH_CURATED_PRJ = Variable.get("DWH_CURATED_PRJ")
+DWH_CURATED_BQ_DATASET = Variable.get("DWH_CURATED_BQ_DATASET")
+DWH_CURATED_GCS = Variable.get("DWH_CURATED_GCS")
+DWH_CONFIDENTIAL_PRJ = Variable.get("DWH_CONFIDENTIAL_PRJ")
+DWH_CONFIDENTIAL_BQ_DATASET = Variable.get("DWH_CONFIDENTIAL_BQ_DATASET")
+DWH_CONFIDENTIAL_GCS = Variable.get("DWH_CONFIDENTIAL_GCS")
+DWH_PLG_PRJ = Variable.get("DWH_PLG_PRJ")
+DWH_PLG_BQ_DATASET = Variable.get("DWH_PLG_BQ_DATASET")
+DWH_PLG_GCS = Variable.get("DWH_PLG_GCS")
+GCP_REGION = Variable.get("GCP_REGION")
+DRP_PRJ = Variable.get("DRP_PRJ")
+DRP_BQ = Variable.get("DRP_BQ")
+DRP_GCS = Variable.get("DRP_GCS")
+DRP_PS = Variable.get("DRP_PS")
+LOD_PRJ = Variable.get("LOD_PRJ")
+LOD_GCS_STAGING = Variable.get("LOD_GCS_STAGING")
+LOD_NET_VPC = Variable.get("LOD_NET_VPC")
+LOD_NET_SUBNET = Variable.get("LOD_NET_SUBNET")
+LOD_SA_DF = Variable.get("LOD_SA_DF")
+ORC_PRJ = Variable.get("ORC_PRJ")
+ORC_GCS = Variable.get("ORC_GCS")
+TRF_PRJ = Variable.get("TRF_PRJ")
+TRF_GCS_STAGING = Variable.get("TRF_GCS_STAGING")
+TRF_NET_VPC = Variable.get("TRF_NET_VPC")
+TRF_NET_SUBNET = Variable.get("TRF_NET_SUBNET")
+TRF_SA_DF = Variable.get("TRF_SA_DF")
+TRF_SA_BQ = Variable.get("TRF_SA_BQ")
+DF_KMS_KEY = Variable.get("DF_KMS_KEY", "")
+DF_REGION = Variable.get("GCP_REGION")
+DF_ZONE = Variable.get("GCP_REGION") + "-b"

 # --------------------------------------------------------------------------------
 # Set default arguments

@@ -106,12 +101,12 @@ with models.DAG(
     'data_pipeline_dag',
     default_args=default_args,
     schedule_interval=None) as dag:
-  start = dummy.DummyOperator(
+  start = empty.EmptyOperator(
     task_id='start',
     trigger_rule='all_success'
   )

-  end = dummy.DummyOperator(
+  end = empty.EmptyOperator(
     task_id='end',
     trigger_rule='all_success'
   )

@@ -16,57 +16,53 @@
 # Load The Dependencies
 # --------------------------------------------------------------------------------

 import csv
 import datetime
 import io
 import json
 import logging
 import os

 from airflow import models
+from airflow.models.variable import Variable
 from airflow.providers.google.cloud.operators.dataflow import DataflowTemplatedJobStartOperator
-from airflow.operators import dummy
+from airflow.operators import empty
 from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator, BigQueryUpsertTableOperator, BigQueryUpdateTableSchemaOperator
 from airflow.utils.task_group import TaskGroup

 # --------------------------------------------------------------------------------
 # Set variables - Needed for the DEMO
 # --------------------------------------------------------------------------------
-BQ_LOCATION = os.environ.get("BQ_LOCATION")
-DATA_CAT_TAGS = json.loads(os.environ.get("DATA_CAT_TAGS"))
-DWH_LAND_PRJ = os.environ.get("DWH_LAND_PRJ")
-DWH_LAND_BQ_DATASET = os.environ.get("DWH_LAND_BQ_DATASET")
-DWH_LAND_GCS = os.environ.get("DWH_LAND_GCS")
-DWH_CURATED_PRJ = os.environ.get("DWH_CURATED_PRJ")
-DWH_CURATED_BQ_DATASET = os.environ.get("DWH_CURATED_BQ_DATASET")
-DWH_CURATED_GCS = os.environ.get("DWH_CURATED_GCS")
-DWH_CONFIDENTIAL_PRJ = os.environ.get("DWH_CONFIDENTIAL_PRJ")
-DWH_CONFIDENTIAL_BQ_DATASET = os.environ.get("DWH_CONFIDENTIAL_BQ_DATASET")
-DWH_CONFIDENTIAL_GCS = os.environ.get("DWH_CONFIDENTIAL_GCS")
-DWH_PLG_PRJ = os.environ.get("DWH_PLG_PRJ")
-DWH_PLG_BQ_DATASET = os.environ.get("DWH_PLG_BQ_DATASET")
-DWH_PLG_GCS = os.environ.get("DWH_PLG_GCS")
-GCP_REGION = os.environ.get("GCP_REGION")
-DRP_PRJ = os.environ.get("DRP_PRJ")
-DRP_BQ = os.environ.get("DRP_BQ")
-DRP_GCS = os.environ.get("DRP_GCS")
-DRP_PS = os.environ.get("DRP_PS")
-LOD_PRJ = os.environ.get("LOD_PRJ")
-LOD_GCS_STAGING = os.environ.get("LOD_GCS_STAGING")
-LOD_NET_VPC = os.environ.get("LOD_NET_VPC")
-LOD_NET_SUBNET = os.environ.get("LOD_NET_SUBNET")
-LOD_SA_DF = os.environ.get("LOD_SA_DF")
-ORC_PRJ = os.environ.get("ORC_PRJ")
-ORC_GCS = os.environ.get("ORC_GCS")
-TRF_PRJ = os.environ.get("TRF_PRJ")
-TRF_GCS_STAGING = os.environ.get("TRF_GCS_STAGING")
-TRF_NET_VPC = os.environ.get("TRF_NET_VPC")
-TRF_NET_SUBNET = os.environ.get("TRF_NET_SUBNET")
-TRF_SA_DF = os.environ.get("TRF_SA_DF")
-TRF_SA_BQ = os.environ.get("TRF_SA_BQ")
-DF_KMS_KEY = os.environ.get("DF_KMS_KEY", "")
-DF_REGION = os.environ.get("GCP_REGION")
-DF_ZONE = os.environ.get("GCP_REGION") + "-b"
+BQ_LOCATION = Variable.get("BQ_LOCATION")
+DATA_CAT_TAGS = Variable.get("DATA_CAT_TAGS", deserialize_json=True)
+DWH_LAND_PRJ = Variable.get("DWH_LAND_PRJ")
+DWH_LAND_BQ_DATASET = Variable.get("DWH_LAND_BQ_DATASET")
+DWH_LAND_GCS = Variable.get("DWH_LAND_GCS")
+DWH_CURATED_PRJ = Variable.get("DWH_CURATED_PRJ")
+DWH_CURATED_BQ_DATASET = Variable.get("DWH_CURATED_BQ_DATASET")
+DWH_CURATED_GCS = Variable.get("DWH_CURATED_GCS")
+DWH_CONFIDENTIAL_PRJ = Variable.get("DWH_CONFIDENTIAL_PRJ")
+DWH_CONFIDENTIAL_BQ_DATASET = Variable.get("DWH_CONFIDENTIAL_BQ_DATASET")
+DWH_CONFIDENTIAL_GCS = Variable.get("DWH_CONFIDENTIAL_GCS")
+DWH_PLG_PRJ = Variable.get("DWH_PLG_PRJ")
+DWH_PLG_BQ_DATASET = Variable.get("DWH_PLG_BQ_DATASET")
+DWH_PLG_GCS = Variable.get("DWH_PLG_GCS")
+GCP_REGION = Variable.get("GCP_REGION")
+DRP_PRJ = Variable.get("DRP_PRJ")
+DRP_BQ = Variable.get("DRP_BQ")
+DRP_GCS = Variable.get("DRP_GCS")
+DRP_PS = Variable.get("DRP_PS")
+LOD_PRJ = Variable.get("LOD_PRJ")
+LOD_GCS_STAGING = Variable.get("LOD_GCS_STAGING")
+LOD_NET_VPC = Variable.get("LOD_NET_VPC")
+LOD_NET_SUBNET = Variable.get("LOD_NET_SUBNET")
+LOD_SA_DF = Variable.get("LOD_SA_DF")
+ORC_PRJ = Variable.get("ORC_PRJ")
+ORC_GCS = Variable.get("ORC_GCS")
+TRF_PRJ = Variable.get("TRF_PRJ")
+TRF_GCS_STAGING = Variable.get("TRF_GCS_STAGING")
+TRF_NET_VPC = Variable.get("TRF_NET_VPC")
+TRF_NET_SUBNET = Variable.get("TRF_NET_SUBNET")
+TRF_SA_DF = Variable.get("TRF_SA_DF")
+TRF_SA_BQ = Variable.get("TRF_SA_BQ")
+DF_KMS_KEY = Variable.get("DF_KMS_KEY", "")
+DF_REGION = Variable.get("GCP_REGION")
+DF_ZONE = Variable.get("GCP_REGION") + "-b"

 # --------------------------------------------------------------------------------
 # Set default arguments

@@ -106,12 +102,12 @@ with models.DAG(
     'data_pipeline_dc_tags_dag',
     default_args=default_args,
     schedule_interval=None) as dag:
-  start = dummy.DummyOperator(
+  start = empty.EmptyOperator(
     task_id='start',
     trigger_rule='all_success'
   )

-  end = dummy.DummyOperator(
+  end = empty.EmptyOperator(
     task_id='end',
     trigger_rule='all_success'
   )

@@ -17,12 +17,11 @@
 # --------------------------------------------------------------------------------

 import datetime
 import json
 import os
 import time

 from airflow import models
-from airflow.operators import dummy
+from airflow.models.variable import Variable
+from airflow.operators import empty
 from airflow.providers.google.cloud.operators.dataflow import DataflowStartFlexTemplateOperator
 from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator, BigQueryUpsertTableOperator, BigQueryUpdateTableSchemaOperator
 from airflow.utils.task_group import TaskGroup

@@ -30,42 +29,42 @@ from airflow.utils.task_group import TaskGroup
 # --------------------------------------------------------------------------------
 # Set variables - Needed for the DEMO
 # --------------------------------------------------------------------------------
-BQ_LOCATION = os.environ.get("BQ_LOCATION")
-DATA_CAT_TAGS = json.loads(os.environ.get("DATA_CAT_TAGS"))
-DWH_LAND_PRJ = os.environ.get("DWH_LAND_PRJ")
-DWH_LAND_BQ_DATASET = os.environ.get("DWH_LAND_BQ_DATASET")
-DWH_LAND_GCS = os.environ.get("DWH_LAND_GCS")
-DWH_CURATED_PRJ = os.environ.get("DWH_CURATED_PRJ")
-DWH_CURATED_BQ_DATASET = os.environ.get("DWH_CURATED_BQ_DATASET")
-DWH_CURATED_GCS = os.environ.get("DWH_CURATED_GCS")
-DWH_CONFIDENTIAL_PRJ = os.environ.get("DWH_CONFIDENTIAL_PRJ")
-DWH_CONFIDENTIAL_BQ_DATASET = os.environ.get("DWH_CONFIDENTIAL_BQ_DATASET")
-DWH_CONFIDENTIAL_GCS = os.environ.get("DWH_CONFIDENTIAL_GCS")
-DWH_PLG_PRJ = os.environ.get("DWH_PLG_PRJ")
-DWH_PLG_BQ_DATASET = os.environ.get("DWH_PLG_BQ_DATASET")
-DWH_PLG_GCS = os.environ.get("DWH_PLG_GCS")
-GCP_REGION = os.environ.get("GCP_REGION")
-DRP_PRJ = os.environ.get("DRP_PRJ")
-DRP_BQ = os.environ.get("DRP_BQ")
-DRP_GCS = os.environ.get("DRP_GCS")
-DRP_PS = os.environ.get("DRP_PS")
-LOD_PRJ = os.environ.get("LOD_PRJ")
-LOD_GCS_STAGING = os.environ.get("LOD_GCS_STAGING")
-LOD_NET_VPC = os.environ.get("LOD_NET_VPC")
-LOD_NET_SUBNET = os.environ.get("LOD_NET_SUBNET")
-LOD_SA_DF = os.environ.get("LOD_SA_DF")
-ORC_PRJ = os.environ.get("ORC_PRJ")
-ORC_GCS = os.environ.get("ORC_GCS")
-ORC_GCS_TMP_DF = os.environ.get("ORC_GCS_TMP_DF")
-TRF_PRJ = os.environ.get("TRF_PRJ")
-TRF_GCS_STAGING = os.environ.get("TRF_GCS_STAGING")
-TRF_NET_VPC = os.environ.get("TRF_NET_VPC")
-TRF_NET_SUBNET = os.environ.get("TRF_NET_SUBNET")
-TRF_SA_DF = os.environ.get("TRF_SA_DF")
-TRF_SA_BQ = os.environ.get("TRF_SA_BQ")
-DF_KMS_KEY = os.environ.get("DF_KMS_KEY", "")
-DF_REGION = os.environ.get("GCP_REGION")
-DF_ZONE = os.environ.get("GCP_REGION") + "-b"
+BQ_LOCATION = Variable.get("BQ_LOCATION")
+DATA_CAT_TAGS = Variable.get("DATA_CAT_TAGS", deserialize_json=True)
+DWH_LAND_PRJ = Variable.get("DWH_LAND_PRJ")
+DWH_LAND_BQ_DATASET = Variable.get("DWH_LAND_BQ_DATASET")
+DWH_LAND_GCS = Variable.get("DWH_LAND_GCS")
+DWH_CURATED_PRJ = Variable.get("DWH_CURATED_PRJ")
+DWH_CURATED_BQ_DATASET = Variable.get("DWH_CURATED_BQ_DATASET")
+DWH_CURATED_GCS = Variable.get("DWH_CURATED_GCS")
+DWH_CONFIDENTIAL_PRJ = Variable.get("DWH_CONFIDENTIAL_PRJ")
+DWH_CONFIDENTIAL_BQ_DATASET = Variable.get("DWH_CONFIDENTIAL_BQ_DATASET")
+DWH_CONFIDENTIAL_GCS = Variable.get("DWH_CONFIDENTIAL_GCS")
+DWH_PLG_PRJ = Variable.get("DWH_PLG_PRJ")
+DWH_PLG_BQ_DATASET = Variable.get("DWH_PLG_BQ_DATASET")
+DWH_PLG_GCS = Variable.get("DWH_PLG_GCS")
+GCP_REGION = Variable.get("GCP_REGION")
+DRP_PRJ = Variable.get("DRP_PRJ")
+DRP_BQ = Variable.get("DRP_BQ")
+DRP_GCS = Variable.get("DRP_GCS")
+DRP_PS = Variable.get("DRP_PS")
+LOD_PRJ = Variable.get("LOD_PRJ")
+LOD_GCS_STAGING = Variable.get("LOD_GCS_STAGING")
+LOD_NET_VPC = Variable.get("LOD_NET_VPC")
+LOD_NET_SUBNET = Variable.get("LOD_NET_SUBNET")
+LOD_SA_DF = Variable.get("LOD_SA_DF")
+ORC_PRJ = Variable.get("ORC_PRJ")
+ORC_GCS = Variable.get("ORC_GCS")
+ORC_GCS_TMP_DF = Variable.get("ORC_GCS_TMP_DF")
+TRF_PRJ = Variable.get("TRF_PRJ")
+TRF_GCS_STAGING = Variable.get("TRF_GCS_STAGING")
+TRF_NET_VPC = Variable.get("TRF_NET_VPC")
+TRF_NET_SUBNET = Variable.get("TRF_NET_SUBNET")
+TRF_SA_DF = Variable.get("TRF_SA_DF")
+TRF_SA_BQ = Variable.get("TRF_SA_BQ")
+DF_KMS_KEY = Variable.get("DF_KMS_KEY", "")
+DF_REGION = Variable.get("GCP_REGION")
+DF_ZONE = Variable.get("GCP_REGION") + "-b"

 # --------------------------------------------------------------------------------
 # Set default arguments

@@ -104,9 +103,9 @@ dataflow_environment = {
 with models.DAG('data_pipeline_dc_tags_dag_flex',
                 default_args=default_args,
                 schedule_interval=None) as dag:
-  start = dummy.DummyOperator(task_id='start', trigger_rule='all_success')
+  start = empty.EmptyOperator(task_id='start', trigger_rule='all_success')

-  end = dummy.DummyOperator(task_id='end', trigger_rule='all_success')
+  end = empty.EmptyOperator(task_id='end', trigger_rule='all_success')

   # Bigquery Tables created here for demo porpuse.
   # Consider a dedicated pipeline or tool for a real life scenario.

@@ -17,54 +17,53 @@
 # --------------------------------------------------------------------------------

 import datetime
 import json
 import os
 import time

 from airflow import models
+from airflow.models.variable import Variable
 from airflow.providers.google.cloud.operators.dataflow import DataflowStartFlexTemplateOperator
-from airflow.operators import dummy
+from airflow.operators import empty
 from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

 # --------------------------------------------------------------------------------
 # Set variables - Needed for the DEMO
 # --------------------------------------------------------------------------------
-BQ_LOCATION = os.environ.get("BQ_LOCATION")
-DATA_CAT_TAGS = json.loads(os.environ.get("DATA_CAT_TAGS"))
-DWH_LAND_PRJ = os.environ.get("DWH_LAND_PRJ")
-DWH_LAND_BQ_DATASET = os.environ.get("DWH_LAND_BQ_DATASET")
-DWH_LAND_GCS = os.environ.get("DWH_LAND_GCS")
-DWH_CURATED_PRJ = os.environ.get("DWH_CURATED_PRJ")
-DWH_CURATED_BQ_DATASET = os.environ.get("DWH_CURATED_BQ_DATASET")
-DWH_CURATED_GCS = os.environ.get("DWH_CURATED_GCS")
-DWH_CONFIDENTIAL_PRJ = os.environ.get("DWH_CONFIDENTIAL_PRJ")
-DWH_CONFIDENTIAL_BQ_DATASET = os.environ.get("DWH_CONFIDENTIAL_BQ_DATASET")
-DWH_CONFIDENTIAL_GCS = os.environ.get("DWH_CONFIDENTIAL_GCS")
-DWH_PLG_PRJ = os.environ.get("DWH_PLG_PRJ")
-DWH_PLG_BQ_DATASET = os.environ.get("DWH_PLG_BQ_DATASET")
-DWH_PLG_GCS = os.environ.get("DWH_PLG_GCS")
-GCP_REGION = os.environ.get("GCP_REGION")
-DRP_PRJ = os.environ.get("DRP_PRJ")
-DRP_BQ = os.environ.get("DRP_BQ")
-DRP_GCS = os.environ.get("DRP_GCS")
-DRP_PS = os.environ.get("DRP_PS")
-LOD_PRJ = os.environ.get("LOD_PRJ")
-LOD_GCS_STAGING = os.environ.get("LOD_GCS_STAGING")
-LOD_NET_VPC = os.environ.get("LOD_NET_VPC")
-LOD_NET_SUBNET = os.environ.get("LOD_NET_SUBNET")
-LOD_SA_DF = os.environ.get("LOD_SA_DF")
-ORC_PRJ = os.environ.get("ORC_PRJ")
-ORC_GCS = os.environ.get("ORC_GCS")
-ORC_GCS_TMP_DF = os.environ.get("ORC_GCS_TMP_DF")
-TRF_PRJ = os.environ.get("TRF_PRJ")
-TRF_GCS_STAGING = os.environ.get("TRF_GCS_STAGING")
-TRF_NET_VPC = os.environ.get("TRF_NET_VPC")
-TRF_NET_SUBNET = os.environ.get("TRF_NET_SUBNET")
-TRF_SA_DF = os.environ.get("TRF_SA_DF")
-TRF_SA_BQ = os.environ.get("TRF_SA_BQ")
-DF_KMS_KEY = os.environ.get("DF_KMS_KEY", "")
-DF_REGION = os.environ.get("GCP_REGION")
-DF_ZONE = os.environ.get("GCP_REGION") + "-b"
+BQ_LOCATION = Variable.get("BQ_LOCATION")
+DATA_CAT_TAGS = Variable.get("DATA_CAT_TAGS", deserialize_json=True)
+DWH_LAND_PRJ = Variable.get("DWH_LAND_PRJ")
+DWH_LAND_BQ_DATASET = Variable.get("DWH_LAND_BQ_DATASET")
+DWH_LAND_GCS = Variable.get("DWH_LAND_GCS")
+DWH_CURATED_PRJ = Variable.get("DWH_CURATED_PRJ")
+DWH_CURATED_BQ_DATASET = Variable.get("DWH_CURATED_BQ_DATASET")
+DWH_CURATED_GCS = Variable.get("DWH_CURATED_GCS")
+DWH_CONFIDENTIAL_PRJ = Variable.get("DWH_CONFIDENTIAL_PRJ")
+DWH_CONFIDENTIAL_BQ_DATASET = Variable.get("DWH_CONFIDENTIAL_BQ_DATASET")
+DWH_CONFIDENTIAL_GCS = Variable.get("DWH_CONFIDENTIAL_GCS")
+DWH_PLG_PRJ = Variable.get("DWH_PLG_PRJ")
+DWH_PLG_BQ_DATASET = Variable.get("DWH_PLG_BQ_DATASET")
+DWH_PLG_GCS = Variable.get("DWH_PLG_GCS")
+GCP_REGION = Variable.get("GCP_REGION")
+DRP_PRJ = Variable.get("DRP_PRJ")
+DRP_BQ = Variable.get("DRP_BQ")
+DRP_GCS = Variable.get("DRP_GCS")
+DRP_PS = Variable.get("DRP_PS")
+LOD_PRJ = Variable.get("LOD_PRJ")
+LOD_GCS_STAGING = Variable.get("LOD_GCS_STAGING")
+LOD_NET_VPC = Variable.get("LOD_NET_VPC")
+LOD_NET_SUBNET = Variable.get("LOD_NET_SUBNET")
+LOD_SA_DF = Variable.get("LOD_SA_DF")
+ORC_PRJ = Variable.get("ORC_PRJ")
+ORC_GCS = Variable.get("ORC_GCS")
+ORC_GCS_TMP_DF = Variable.get("ORC_GCS_TMP_DF")
+TRF_PRJ = Variable.get("TRF_PRJ")
+TRF_GCS_STAGING = Variable.get("TRF_GCS_STAGING")
+TRF_NET_VPC = Variable.get("TRF_NET_VPC")
+TRF_NET_SUBNET = Variable.get("TRF_NET_SUBNET")
+TRF_SA_DF = Variable.get("TRF_SA_DF")
+TRF_SA_BQ = Variable.get("TRF_SA_BQ")
+DF_KMS_KEY = Variable.get("DF_KMS_KEY", "")
+DF_REGION = Variable.get("GCP_REGION")
+DF_ZONE = Variable.get("GCP_REGION") + "-b"

 # --------------------------------------------------------------------------------
 # Set default arguments

@@ -104,9 +103,9 @@ with models.DAG('data_pipeline_dag_flex',
                 default_args=default_args,
                 schedule_interval=None) as dag:

-  start = dummy.DummyOperator(task_id='start', trigger_rule='all_success')
+  start = empty.EmptyOperator(task_id='start', trigger_rule='all_success')

-  end = dummy.DummyOperator(task_id='end', trigger_rule='all_success')
+  end = empty.EmptyOperator(task_id='end', trigger_rule='all_success')

   # Bigquery Tables automatically created for demo purposes.
   # Consider a dedicated pipeline or tool for a real life scenario.

@@ -24,49 +24,49 @@ import logging
 import os
 
 from airflow import models
 from airflow.providers.google.cloud.operators.dataflow import DataflowTemplatedJobStartOperator
-from airflow.operators import dummy
+from airflow.models.variable import Variable
+from airflow.operators import empty
 from airflow.providers.google.cloud.operators.bigquery import BigQueryDeleteTableOperator
 from airflow.utils.task_group import TaskGroup
 
 # --------------------------------------------------------------------------------
 # Set variables - Needed for the DEMO
 # --------------------------------------------------------------------------------
-BQ_LOCATION = os.environ.get("BQ_LOCATION")
-DATA_CAT_TAGS = json.loads(os.environ.get("DATA_CAT_TAGS"))
-DWH_LAND_PRJ = os.environ.get("DWH_LAND_PRJ")
-DWH_LAND_BQ_DATASET = os.environ.get("DWH_LAND_BQ_DATASET")
-DWH_LAND_GCS = os.environ.get("DWH_LAND_GCS")
-DWH_CURATED_PRJ = os.environ.get("DWH_CURATED_PRJ")
-DWH_CURATED_BQ_DATASET = os.environ.get("DWH_CURATED_BQ_DATASET")
-DWH_CURATED_GCS = os.environ.get("DWH_CURATED_GCS")
-DWH_CONFIDENTIAL_PRJ = os.environ.get("DWH_CONFIDENTIAL_PRJ")
-DWH_CONFIDENTIAL_BQ_DATASET = os.environ.get("DWH_CONFIDENTIAL_BQ_DATASET")
-DWH_CONFIDENTIAL_GCS = os.environ.get("DWH_CONFIDENTIAL_GCS")
-DWH_PLG_PRJ = os.environ.get("DWH_PLG_PRJ")
-DWH_PLG_BQ_DATASET = os.environ.get("DWH_PLG_BQ_DATASET")
-DWH_PLG_GCS = os.environ.get("DWH_PLG_GCS")
-GCP_REGION = os.environ.get("GCP_REGION")
-DRP_PRJ = os.environ.get("DRP_PRJ")
-DRP_BQ = os.environ.get("DRP_BQ")
-DRP_GCS = os.environ.get("DRP_GCS")
-DRP_PS = os.environ.get("DRP_PS")
-LOD_PRJ = os.environ.get("LOD_PRJ")
-LOD_GCS_STAGING = os.environ.get("LOD_GCS_STAGING")
-LOD_NET_VPC = os.environ.get("LOD_NET_VPC")
-LOD_NET_SUBNET = os.environ.get("LOD_NET_SUBNET")
-LOD_SA_DF = os.environ.get("LOD_SA_DF")
-ORC_PRJ = os.environ.get("ORC_PRJ")
-ORC_GCS = os.environ.get("ORC_GCS")
-TRF_PRJ = os.environ.get("TRF_PRJ")
-TRF_GCS_STAGING = os.environ.get("TRF_GCS_STAGING")
-TRF_NET_VPC = os.environ.get("TRF_NET_VPC")
-TRF_NET_SUBNET = os.environ.get("TRF_NET_SUBNET")
-TRF_SA_DF = os.environ.get("TRF_SA_DF")
-TRF_SA_BQ = os.environ.get("TRF_SA_BQ")
-DF_KMS_KEY = os.environ.get("DF_KMS_KEY", "")
-DF_REGION = os.environ.get("GCP_REGION")
-DF_ZONE = os.environ.get("GCP_REGION") + "-b"
+BQ_LOCATION = Variable.get("BQ_LOCATION")
+DATA_CAT_TAGS = Variable.get("DATA_CAT_TAGS", deserialize_json=True)
+DWH_LAND_PRJ = Variable.get("DWH_LAND_PRJ")
+DWH_LAND_BQ_DATASET = Variable.get("DWH_LAND_BQ_DATASET")
+DWH_LAND_GCS = Variable.get("DWH_LAND_GCS")
+DWH_CURATED_PRJ = Variable.get("DWH_CURATED_PRJ")
+DWH_CURATED_BQ_DATASET = Variable.get("DWH_CURATED_BQ_DATASET")
+DWH_CURATED_GCS = Variable.get("DWH_CURATED_GCS")
+DWH_CONFIDENTIAL_PRJ = Variable.get("DWH_CONFIDENTIAL_PRJ")
+DWH_CONFIDENTIAL_BQ_DATASET = Variable.get("DWH_CONFIDENTIAL_BQ_DATASET")
+DWH_CONFIDENTIAL_GCS = Variable.get("DWH_CONFIDENTIAL_GCS")
+DWH_PLG_PRJ = Variable.get("DWH_PLG_PRJ")
+DWH_PLG_BQ_DATASET = Variable.get("DWH_PLG_BQ_DATASET")
+DWH_PLG_GCS = Variable.get("DWH_PLG_GCS")
+GCP_REGION = Variable.get("GCP_REGION")
+DRP_PRJ = Variable.get("DRP_PRJ")
+DRP_BQ = Variable.get("DRP_BQ")
+DRP_GCS = Variable.get("DRP_GCS")
+DRP_PS = Variable.get("DRP_PS")
+LOD_PRJ = Variable.get("LOD_PRJ")
+LOD_GCS_STAGING = Variable.get("LOD_GCS_STAGING")
+LOD_NET_VPC = Variable.get("LOD_NET_VPC")
+LOD_NET_SUBNET = Variable.get("LOD_NET_SUBNET")
+LOD_SA_DF = Variable.get("LOD_SA_DF")
+ORC_PRJ = Variable.get("ORC_PRJ")
+ORC_GCS = Variable.get("ORC_GCS")
+TRF_PRJ = Variable.get("TRF_PRJ")
+TRF_GCS_STAGING = Variable.get("TRF_GCS_STAGING")
+TRF_NET_VPC = Variable.get("TRF_NET_VPC")
+TRF_NET_SUBNET = Variable.get("TRF_NET_SUBNET")
+TRF_SA_DF = Variable.get("TRF_SA_DF")
+TRF_SA_BQ = Variable.get("TRF_SA_BQ")
+DF_KMS_KEY = Variable.get("DF_KMS_KEY", "")
+DF_REGION = Variable.get("GCP_REGION")
+DF_ZONE = Variable.get("GCP_REGION") + "-b"
 
 # --------------------------------------------------------------------------------
 # Set default arguments

@@ -106,19 +106,19 @@ with models.DAG(
 'delete_tables_dag',
 default_args=default_args,
 schedule_interval=None) as dag:
-start = dummy.DummyOperator(
+start = empty.EmptyOperator(
 task_id='start',
 trigger_rule='all_success'
 )
 
-end = dummy.DummyOperator(
+end = empty.EmptyOperator(
 task_id='end',
 trigger_rule='all_success'
 )
 
 # Bigquery Tables deleted here for demo porpuse.
 # Consider a dedicated pipeline or tool for a real life scenario.
-with TaskGroup('delete_table') as delte_table:
+with TaskGroup('delete_table') as delete_table:
 delete_table_customers = BigQueryDeleteTableOperator(
 task_id="delete_table_customers",
 deletion_dataset_table=DWH_LAND_PRJ+"."+DWH_LAND_BQ_DATASET+".customers",

@@ -143,4 +143,4 @@ with models.DAG(
 impersonation_chain=[TRF_SA_DF]
 )
 
-start >> delte_table >> end
+start >> delete_table >> end
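The recurring change in these DAGs swaps `os.environ.get(...)` (plus manual `json.loads`) for Airflow's `Variable.get(...)`, which can also deserialize JSON via `deserialize_json=True` and, in Airflow 2, falls back to environment variables prefixed `AIRFLOW_VAR_`. A minimal stand-in sketching that documented lookup behavior in plain Python, with no Airflow dependency (`get_variable` is a hypothetical helper, not part of the blueprint):

```python
import json
import os


def get_variable(key, default=None, deserialize_json=False):
    # Mimic Airflow's Variable.get fallback: look up AIRFLOW_VAR_<KEY>
    # in the environment, optionally decoding the value as JSON.
    raw = os.environ.get(f"AIRFLOW_VAR_{key.upper()}")
    if raw is None:
        if default is None:
            raise KeyError(key)
        return default
    return json.loads(raw) if deserialize_json else raw


# Simulate what Composer injects for the DAGs above.
os.environ["AIRFLOW_VAR_BQ_LOCATION"] = "EU"
os.environ["AIRFLOW_VAR_DATA_CAT_TAGS"] = '{"3_Confidential": {}}'

print(get_variable("BQ_LOCATION"))                           # EU
print(get_variable("DATA_CAT_TAGS", deserialize_json=True))  # {'3_Confidential': {}}
```

This mirrors how `DF_KMS_KEY = Variable.get("DF_KMS_KEY", "")` keeps an empty-string default when the variable is unset.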
@@ -64,6 +64,7 @@ module "land-project" {
 "bigquerystorage.googleapis.com",
 "cloudkms.googleapis.com",
 "cloudresourcemanager.googleapis.com",
+"datalineage.googleapis.com",
 "iam.googleapis.com",
 "serviceusage.googleapis.com",
 "stackdriver.googleapis.com",
@@ -15,7 +15,7 @@
 # tfdoc:file:description Cloud Composer resources.
 
 locals {
-  env_variables = {
+  _env_variables = {
 BQ_LOCATION = var.location
 CURATED_BQ_DATASET = module.cur-bq-0.dataset_id
 CURATED_GCS = module.cur-cs-0.url

@@ -31,6 +31,11 @@ locals {
 PROCESSING_SUBNET = local.processing_subnet
 PROCESSING_VPC = local.processing_vpc
 }
+  env_variables = {
+    for k, v in merge(
+      var.composer_config.software_config.env_variables, local._env_variables
+    ) : "AIRFLOW_VAR_${k}" => v
+  }
 }
 
 module "processing-sa-cmp-0" {

@@ -46,18 +51,20 @@ module "processing-sa-cmp-0" {
 }
 
 resource "google_composer_environment" "processing-cmp-0" {
-count = var.enable_services.composer == true ? 1 : 0
-project = module.processing-project.project_id
-name = "${var.prefix}-prc-cmp-0"
-region = var.region
+count = var.enable_services.composer == true ? 1 : 0
+provider = google-beta
+project = module.processing-project.project_id
+name = "${var.prefix}-prc-cmp-0"
+region = var.region
 config {
 software_config {
 airflow_config_overrides = var.composer_config.software_config.airflow_config_overrides
 pypi_packages = var.composer_config.software_config.pypi_packages
-env_variables = merge(
-var.composer_config.software_config.env_variables, local.env_variables
-)
-image_version = var.composer_config.software_config.image_version
+env_variables = local.env_variables
+image_version = var.composer_config.software_config.image_version
+cloud_data_lineage_integration {
+enabled = var.composer_config.software_config.cloud_data_lineage_integration
+}
 }
 workloads_config {
 scheduler {
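The new `env_variables` local prefixes every merged key with `AIRFLOW_VAR_`, so Composer exposes each value as an Airflow Variable instead of a bare environment variable. The same transformation sketched in plain Python (`to_airflow_env` is illustrative only; HCL's `merge()` lets later arguments win on key collisions, mirrored here by dict unpacking):

```python
def to_airflow_env(user_vars, platform_vars):
    # merge(a, b): keys in b override keys in a, like {**a, **b}.
    merged = {**user_vars, **platform_vars}
    # Prefix each key so Airflow picks it up as a Variable.
    return {f"AIRFLOW_VAR_{k}": v for k, v in merged.items()}


env = to_airflow_env(
    # var.composer_config.software_config.env_variables (user-supplied)
    {"BQ_LOCATION": "US", "MY_FLAG": "1"},
    # local._env_variables (platform-managed, wins on collision)
    {"BQ_LOCATION": "EU", "CURATED_GCS": "gs://cur"},
)
print(env)
```

Note the collision behavior: the platform-managed `BQ_LOCATION = "EU"` overrides the user-supplied value, matching the argument order of the `merge()` call in the diff.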
@@ -118,6 +118,7 @@ module "processing-project" {
 "compute.googleapis.com",
 "container.googleapis.com",
 "dataflow.googleapis.com",
+"datalineage.googleapis.com",
 "dataproc.googleapis.com",
 "iam.googleapis.com",
 "servicenetworking.googleapis.com",
@@ -22,6 +22,7 @@ locals {
 "cloudkms.googleapis.com",
 "cloudresourcemanager.googleapis.com",
 "compute.googleapis.com",
+"datalineage.googleapis.com",
 "iam.googleapis.com",
 "servicenetworking.googleapis.com",
 "serviceusage.googleapis.com",
@@ -229,7 +229,7 @@ module "data-platform" {
 prefix = "myprefix"
 }
 
-# tftest modules=23 resources=135
+# tftest modules=23 resources=138
 ```
 
 ## Customizations
@@ -302,19 +302,19 @@ The application layer is out of scope of this script. As a demo purpuse only, on
 
 | name | description | type | required | default |
 |---|---|:---:|:---:|:---:|
-| [organization_domain](variables.tf#L122) | Organization domain. | <code>string</code> | ✓ |  |
-| [prefix](variables.tf#L127) | Prefix used for resource names. | <code>string</code> | ✓ |  |
-| [project_config](variables.tf#L136) | Provide 'billing_account_id' value if project creation is needed, uses existing 'project_ids' if null. Parent is in 'folders/nnn' or 'organizations/nnn' format. | <code title="object({ billing_account_id = optional(string, null) parent = string project_ids = optional(object({ landing = string processing = string curated = string common = string }), { landing = "lnd" processing = "prc" curated = "cur" common = "cmn" } ) })">object({…})</code> | ✓ |  |
-| [composer_config](variables.tf#L17) | Cloud Composer config. | <code title="object({ environment_size = optional(string, "ENVIRONMENT_SIZE_SMALL") software_config = optional(object({ airflow_config_overrides = optional(map(string), {}) pypi_packages = optional(map(string), {}) env_variables = optional(map(string), {}) image_version = optional(string, "composer-2-airflow-2") }), {}) web_server_access_control = optional(map(string), {}) workloads_config = optional(object({ scheduler = optional(object({ cpu = optional(number, 0.5) memory_gb = optional(number, 1.875) storage_gb = optional(number, 1) count = optional(number, 1) } ), {}) web_server = optional(object({ cpu = optional(number, 0.5) memory_gb = optional(number, 1.875) storage_gb = optional(number, 1) }), {}) worker = optional(object({ cpu = optional(number, 0.5) memory_gb = optional(number, 1.875) storage_gb = optional(number, 1) min_count = optional(number, 1) max_count = optional(number, 3) } ), {}) }), {}) })">object({…})</code> |  | <code>{}</code> |
-| [data_catalog_tags](variables.tf#L55) | List of Data Catalog Policy tags to be created with optional IAM binging configuration in {tag => {ROLE => [MEMBERS]}} format. | <code title="map(object({ description = optional(string) iam = optional(map(list(string)), {}) }))">map(object({…}))</code> |  | <code title="{ "3_Confidential" = {} "2_Private" = {} "1_Sensitive" = {} }">{…}</code> |
-| [data_force_destroy](variables.tf#L69) | Flag to set 'force_destroy' on data services like BiguQery or Cloud Storage. | <code>bool</code> |  | <code>false</code> |
-| [enable_services](variables.tf#L75) | Flag to enable or disable services in the Data Platform. | <code title="object({ composer = optional(bool, true) dataproc_history_server = optional(bool, true) })">object({…})</code> |  | <code>{}</code> |
-| [groups](variables.tf#L84) | User groups. | <code>map(string)</code> |  | <code title="{ data-analysts = "gcp-data-analysts" data-engineers = "gcp-data-engineers" data-security = "gcp-data-security" }">{…}</code> |
-| [location](variables.tf#L94) | Location used for multi-regional resources. | <code>string</code> |  | <code>"eu"</code> |
-| [network_config](variables.tf#L100) | Shared VPC network configurations to use. If null networks will be created in projects. | <code title="object({ host_project = optional(string) network_self_link = optional(string) subnet_self_link = optional(string) composer_ip_ranges = optional(object({ connection_subnetwork = optional(string) cloud_sql = optional(string, "10.20.10.0/24") gke_master = optional(string, "10.20.11.0/28") pods_range_name = optional(string, "pods") services_range_name = optional(string, "services") }), {}) })">object({…})</code> |  | <code>{}</code> |
-| [project_suffix](variables.tf#L160) | Suffix used only for project ids. | <code>string</code> |  | <code>null</code> |
-| [region](variables.tf#L166) | Region used for regional resources. | <code>string</code> |  | <code>"europe-west1"</code> |
-| [service_encryption_keys](variables.tf#L172) | Cloud KMS to use to encrypt different services. Key location should match service region. | <code title="object({ bq = optional(string) composer = optional(string) compute = optional(string) storage = optional(string) })">object({…})</code> |  | <code>{}</code> |
+| [organization_domain](variables.tf#L123) | Organization domain. | <code>string</code> | ✓ |  |
+| [prefix](variables.tf#L128) | Prefix used for resource names. | <code>string</code> | ✓ |  |
+| [project_config](variables.tf#L137) | Provide 'billing_account_id' value if project creation is needed, uses existing 'project_ids' if null. Parent is in 'folders/nnn' or 'organizations/nnn' format. | <code title="object({ billing_account_id = optional(string, null) parent = string project_ids = optional(object({ landing = string processing = string curated = string common = string }), { landing = "lnd" processing = "prc" curated = "cur" common = "cmn" } ) })">object({…})</code> | ✓ |  |
+| [composer_config](variables.tf#L17) | Cloud Composer config. | <code title="object({ environment_size = optional(string, "ENVIRONMENT_SIZE_SMALL") software_config = optional(object({ airflow_config_overrides = optional(map(string), {}) pypi_packages = optional(map(string), {}) env_variables = optional(map(string), {}) image_version = optional(string, "composer-2-airflow-2") cloud_data_lineage_integration = optional(bool, true) }), {}) web_server_access_control = optional(map(string), {}) workloads_config = optional(object({ scheduler = optional(object({ cpu = optional(number, 0.5) memory_gb = optional(number, 1.875) storage_gb = optional(number, 1) count = optional(number, 1) } ), {}) web_server = optional(object({ cpu = optional(number, 0.5) memory_gb = optional(number, 1.875) storage_gb = optional(number, 1) }), {}) worker = optional(object({ cpu = optional(number, 0.5) memory_gb = optional(number, 1.875) storage_gb = optional(number, 1) min_count = optional(number, 1) max_count = optional(number, 3) } ), {}) }), {}) })">object({…})</code> |  | <code>{}</code> |
+| [data_catalog_tags](variables.tf#L56) | List of Data Catalog Policy tags to be created with optional IAM binging configuration in {tag => {ROLE => [MEMBERS]}} format. | <code title="map(object({ description = optional(string) iam = optional(map(list(string)), {}) }))">map(object({…}))</code> |  | <code title="{ "3_Confidential" = {} "2_Private" = {} "1_Sensitive" = {} }">{…}</code> |
+| [data_force_destroy](variables.tf#L70) | Flag to set 'force_destroy' on data services like BiguQery or Cloud Storage. | <code>bool</code> |  | <code>false</code> |
+| [enable_services](variables.tf#L76) | Flag to enable or disable services in the Data Platform. | <code title="object({ composer = optional(bool, true) dataproc_history_server = optional(bool, true) })">object({…})</code> |  | <code>{}</code> |
+| [groups](variables.tf#L85) | User groups. | <code>map(string)</code> |  | <code title="{ data-analysts = "gcp-data-analysts" data-engineers = "gcp-data-engineers" data-security = "gcp-data-security" }">{…}</code> |
+| [location](variables.tf#L95) | Location used for multi-regional resources. | <code>string</code> |  | <code>"eu"</code> |
+| [network_config](variables.tf#L101) | Shared VPC network configurations to use. If null networks will be created in projects. | <code title="object({ host_project = optional(string) network_self_link = optional(string) subnet_self_link = optional(string) composer_ip_ranges = optional(object({ connection_subnetwork = optional(string) cloud_sql = optional(string, "10.20.10.0/24") gke_master = optional(string, "10.20.11.0/28") pods_range_name = optional(string, "pods") services_range_name = optional(string, "services") }), {}) })">object({…})</code> |  | <code>{}</code> |
+| [project_suffix](variables.tf#L161) | Suffix used only for project ids. | <code>string</code> |  | <code>null</code> |
+| [region](variables.tf#L167) | Region used for regional resources. | <code>string</code> |  | <code>"europe-west1"</code> |
+| [service_encryption_keys](variables.tf#L173) | Cloud KMS to use to encrypt different services. Key location should match service region. | <code title="object({ bq = optional(string) composer = optional(string) compute = optional(string) storage = optional(string) })">object({…})</code> |  | <code>{}</code> |
 
 ## Outputs
 

@@ -54,5 +54,5 @@ source ./env.sh
 gsutil -i $LND_SA cp demo/data/*.csv gs://$LND_GCS
 gsutil -i $CMP_SA cp demo/data/*.j* gs://$PRC_GCS
 gsutil -i $CMP_SA cp demo/pyspark_* gs://$PRC_GCS
-gsutil -i $CMP_SA cp demo/dag_*.py $CMP_GCS
+gsutil -i $CMP_SA cp demo/dag_*.py gs://$CMP_GCS/dags
 ```
@@ -16,34 +16,30 @@
 # Load The Dependencies
 # --------------------------------------------------------------------------------
 
 import csv
 import datetime
 import io
 import json
 import logging
 import os
 
 from airflow import models
-from airflow.operators import dummy
+from airflow.models.variable import Variable
+from airflow.operators import empty
 from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
 
 # --------------------------------------------------------------------------------
 # Set variables - Needed for the DEMO
 # --------------------------------------------------------------------------------
-BQ_LOCATION = os.environ.get("BQ_LOCATION")
-CURATED_PRJ = os.environ.get("CURATED_PRJ")
-CURATED_BQ_DATASET = os.environ.get("CURATED_BQ_DATASET")
-CURATED_GCS = os.environ.get("CURATED_GCS")
-LAND_PRJ = os.environ.get("LAND_PRJ")
-LAND_GCS = os.environ.get("LAND_GCS")
-PROCESSING_GCS = os.environ.get("PROCESSING_GCS")
-PROCESSING_SA = os.environ.get("PROCESSING_SA")
-PROCESSING_PRJ = os.environ.get("PROCESSING_PRJ")
-PROCESSING_SUBNET = os.environ.get("PROCESSING_SUBNET")
-PROCESSING_VPC = os.environ.get("PROCESSING_VPC")
-DP_KMS_KEY = os.environ.get("DP_KMS_KEY", "")
-DP_REGION = os.environ.get("DP_REGION")
-DP_ZONE = os.environ.get("DP_REGION") + "-b"
+BQ_LOCATION = Variable.get("BQ_LOCATION")
+CURATED_PRJ = Variable.get("CURATED_PRJ")
+CURATED_BQ_DATASET = Variable.get("CURATED_BQ_DATASET")
+CURATED_GCS = Variable.get("CURATED_GCS")
+LAND_PRJ = Variable.get("LAND_PRJ")
+LAND_GCS = Variable.get("LAND_GCS")
+PROCESSING_GCS = Variable.get("PROCESSING_GCS")
+PROCESSING_SA = Variable.get("PROCESSING_SA")
+PROCESSING_PRJ = Variable.get("PROCESSING_PRJ")
+PROCESSING_SUBNET = Variable.get("PROCESSING_SUBNET")
+PROCESSING_VPC = Variable.get("PROCESSING_VPC")
+DP_KMS_KEY = Variable.get("DP_KMS_KEY", "")
+DP_REGION = Variable.get("DP_REGION")
+DP_ZONE = Variable.get("DP_REGION") + "-b"
 
 # --------------------------------------------------------------------------------
 # Set default arguments

@@ -73,12 +69,12 @@ with models.DAG(
 'bq_gcs2bq',
 default_args=default_args,
 schedule_interval=None) as dag:
-start = dummy.DummyOperator(
+start = empty.EmptyOperator(
 task_id='start',
 trigger_rule='all_success'
 )
 
-end = dummy.DummyOperator(
+end = empty.EmptyOperator(
 task_id='end',
 trigger_rule='all_success'
 )

@@ -96,7 +92,7 @@ with models.DAG(
 schema_update_options=['ALLOW_FIELD_RELAXATION', 'ALLOW_FIELD_ADDITION'],
 schema_object="customers.json",
 schema_object_bucket=PROCESSING_GCS[5:],
-project_id=PROCESSING_PRJ, # The process will continue to run on the dataset project until the Apache Airflow bug is fixed. https://github.com/apache/airflow/issues/32106
+project_id=PROCESSING_PRJ,
 impersonation_chain=[PROCESSING_SA]
 )
 
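In the hunk above, `schema_object_bucket=PROCESSING_GCS[5:]` strips the leading `gs://` from the bucket URL, since the operator expects a bare bucket name. A slightly safer version of that slice, as a plain-Python sketch (`bucket_name` is a hypothetical helper, not part of the blueprint):

```python
def bucket_name(gcs_url: str) -> str:
    # "gs://my-bucket" -> "my-bucket"; equivalent to the DAG's
    # PROCESSING_GCS[5:], but failing loudly on unexpected input.
    prefix = "gs://"
    if not gcs_url.startswith(prefix):
        raise ValueError(f"not a GCS url: {gcs_url!r}")
    return gcs_url[len(prefix):]


print(bucket_name("gs://prc-cs-0"))  # prc-cs-0
```

The magic number 5 in the DAG is simply `len("gs://")`; the helper makes that explicit.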
@@ -16,36 +16,30 @@
 # Load The Dependencies
 # --------------------------------------------------------------------------------
 
 import csv
 import datetime
 import io
 import json
 import logging
 import os
 
 from airflow import models
+from airflow.models.variable import Variable
+from airflow.operators import empty
 from airflow.providers.google.cloud.operators.dataflow import DataflowTemplatedJobStartOperator
-from airflow.operators import dummy
 from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator, BigQueryUpsertTableOperator, BigQueryUpdateTableSchemaOperator
 from airflow.utils.task_group import TaskGroup
 
 # --------------------------------------------------------------------------------
 # Set variables - Needed for the DEMO
 # --------------------------------------------------------------------------------
-BQ_LOCATION = os.environ.get("BQ_LOCATION")
-CURATED_PRJ = os.environ.get("CURATED_PRJ")
-CURATED_BQ_DATASET = os.environ.get("CURATED_BQ_DATASET")
-CURATED_GCS = os.environ.get("CURATED_GCS")
-LAND_PRJ = os.environ.get("LAND_PRJ")
-LAND_GCS = os.environ.get("LAND_GCS")
-PROCESSING_GCS = os.environ.get("PROCESSING_GCS")
-PROCESSING_SA = os.environ.get("PROCESSING_SA")
-PROCESSING_PRJ = os.environ.get("PROCESSING_PRJ")
-PROCESSING_SUBNET = os.environ.get("PROCESSING_SUBNET")
-PROCESSING_VPC = os.environ.get("PROCESSING_VPC")
-DP_KMS_KEY = os.environ.get("DP_KMS_KEY", "")
-DP_REGION = os.environ.get("DP_REGION")
-DP_ZONE = os.environ.get("DP_REGION") + "-b"
+BQ_LOCATION = Variable.get("BQ_LOCATION")
+CURATED_PRJ = Variable.get("CURATED_PRJ")
+CURATED_BQ_DATASET = Variable.get("CURATED_BQ_DATASET")
+CURATED_GCS = Variable.get("CURATED_GCS")
+LAND_PRJ = Variable.get("LAND_PRJ")
+LAND_GCS = Variable.get("LAND_GCS")
+PROCESSING_GCS = Variable.get("PROCESSING_GCS")
+PROCESSING_SA = Variable.get("PROCESSING_SA")
+PROCESSING_PRJ = Variable.get("PROCESSING_PRJ")
+PROCESSING_SUBNET = Variable.get("PROCESSING_SUBNET")
+PROCESSING_VPC = Variable.get("PROCESSING_VPC")
+DP_KMS_KEY = Variable.get("DP_KMS_KEY", "")
+DP_REGION = Variable.get("DP_REGION")
+DP_ZONE = Variable.get("DP_REGION") + "-b"
 
 # --------------------------------------------------------------------------------
 # Set default arguments

@@ -85,12 +79,12 @@ with models.DAG(
 'dataflow_gcs2bq',
 default_args=default_args,
 schedule_interval=None) as dag:
-start = dummy.DummyOperator(
+start = empty.EmptyOperator(
 task_id='start',
 trigger_rule='all_success'
 )
 
-end = dummy.DummyOperator(
+end = empty.EmptyOperator(
 task_id='end',
 trigger_rule='all_success'
 )
@@ -14,14 +14,13 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
 import datetime
 import time
 import os
 
 from airflow import models
-from airflow.operators import dummy
+from airflow.models.variable import Variable
+from airflow.operators import empty
 from airflow.providers.google.cloud.operators.dataproc import (
-DataprocCreateBatchOperator, DataprocDeleteBatchOperator, DataprocGetBatchOperator, DataprocListBatchesOperator
+DataprocCreateBatchOperator
 )
 from airflow.utils.dates import days_ago

@@ -29,22 +28,21 @@ from airflow.utils.dates import days_ago
 # --------------------------------------------------------------------------------
 # Get variables
 # --------------------------------------------------------------------------------
-BQ_LOCATION = os.environ.get("BQ_LOCATION")
-CURATED_BQ_DATASET = os.environ.get("CURATED_BQ_DATASET")
-CURATED_GCS = os.environ.get("CURATED_GCS")
-CURATED_PRJ = os.environ.get("CURATED_PRJ")
-DP_KMS_KEY = os.environ.get("DP_KMS_KEY", "")
-DP_REGION = os.environ.get("DP_REGION")
-GCP_REGION = os.environ.get("GCP_REGION")
-LAND_PRJ = os.environ.get("LAND_PRJ")
-LAND_BQ_DATASET = os.environ.get("LAND_BQ_DATASET")
-LAND_GCS = os.environ.get("LAND_GCS")
-PHS_CLUSTER_NAME = os.environ.get("PHS_CLUSTER_NAME")
-PROCESSING_GCS = os.environ.get("PROCESSING_GCS")
-PROCESSING_PRJ = os.environ.get("PROCESSING_PRJ")
-PROCESSING_SA = os.environ.get("PROCESSING_SA")
-PROCESSING_SUBNET = os.environ.get("PROCESSING_SUBNET")
-PROCESSING_VPC = os.environ.get("PROCESSING_VPC")
+BQ_LOCATION = Variable.get("BQ_LOCATION")
+CURATED_BQ_DATASET = Variable.get("CURATED_BQ_DATASET")
+CURATED_GCS = Variable.get("CURATED_GCS")
+CURATED_PRJ = Variable.get("CURATED_PRJ")
+DP_KMS_KEY = Variable.get("DP_KMS_KEY", "")
+DP_REGION = Variable.get("DP_REGION")
+LAND_PRJ = Variable.get("LAND_PRJ")
+LAND_BQ_DATASET = Variable.get("LAND_BQ_DATASET")
+LAND_GCS = Variable.get("LAND_GCS")
+PHS_CLUSTER_NAME = Variable.get("PHS_CLUSTER_NAME")
+PROCESSING_GCS = Variable.get("PROCESSING_GCS")
+PROCESSING_PRJ = Variable.get("PROCESSING_PRJ")
+PROCESSING_SA = Variable.get("PROCESSING_SA")
+PROCESSING_SUBNET = Variable.get("PROCESSING_SUBNET")
+PROCESSING_VPC = Variable.get("PROCESSING_VPC")
 
 PYTHON_FILE_LOCATION = PROCESSING_GCS+"/pyspark_gcs2bq.py"
 PHS_CLUSTER_PATH = "projects/"+PROCESSING_PRJ+"/regions/"+DP_REGION+"/clusters/"+PHS_CLUSTER_NAME

@@ -61,12 +59,12 @@ with models.DAG(
 default_args=default_args, # The interval with which to schedule the DAG
 schedule_interval=None, # Override to match your needs
 ) as dag:
-start = dummy.DummyOperator(
+start = empty.EmptyOperator(
 task_id='start',
 trigger_rule='all_success'
 )
 
-end = dummy.DummyOperator(
+end = empty.EmptyOperator(
 task_id='end',
 trigger_rule='all_success'
 )
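The Dataproc DAG above builds the Persistent History Server resource name by string concatenation (`PHS_CLUSTER_PATH`). The same name rendered as an f-string, for readability (a sketch only; `phs_cluster_path` is an illustrative helper, not part of the blueprint):

```python
def phs_cluster_path(project: str, region: str, cluster: str) -> str:
    # Same shape as the DAG's:
    # "projects/"+PROCESSING_PRJ+"/regions/"+DP_REGION+"/clusters/"+PHS_CLUSTER_NAME
    return f"projects/{project}/regions/{region}/clusters/{cluster}"


print(phs_cluster_path("prc-prj", "europe-west1", "phs-cluster"))
```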
@ -16,36 +16,31 @@
|
|||
# Load The Dependencies
|
||||
# --------------------------------------------------------------------------------
|
||||
|
||||
import csv
|
||||
import datetime
|
||||
import io
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
|
||||
from airflow import models
|
||||
from airflow.providers.google.cloud.operators.dataflow import DataflowTemplatedJobStartOperator
|
||||
from airflow.operators import dummy
|
||||
from airflow.models.variable import Variable
|
||||
from airflow.operators import empty
|
||||
from airflow.providers.google.cloud.operators.bigquery import BigQueryDeleteTableOperator
|
||||
from airflow.utils.task_group import TaskGroup
|
||||
|
||||
# --------------------------------------------------------------------------------
|
||||
# Set variables - Needed for the DEMO
|
||||
# --------------------------------------------------------------------------------
|
||||
BQ_LOCATION = os.environ.get("BQ_LOCATION")
|
||||
CURATED_PRJ = os.environ.get("CURATED_PRJ")
|
||||
CURATED_BQ_DATASET = os.environ.get("CURATED_BQ_DATASET")
|
||||
CURATED_GCS = os.environ.get("CURATED_GCS")
|
||||
LAND_PRJ = os.environ.get("LAND_PRJ")
|
||||
LAND_GCS = os.environ.get("LAND_GCS")
|
||||
PROCESSING_GCS = os.environ.get("PROCESSING_GCS")
|
||||
PROCESSING_SA = os.environ.get("PROCESSING_SA")
|
||||
PROCESSING_PRJ = os.environ.get("PROCESSING_PRJ")
|
||||
PROCESSING_SUBNET = os.environ.get("PROCESSING_SUBNET")
|
||||
PROCESSING_VPC = os.environ.get("PROCESSING_VPC")
|
||||
DP_KMS_KEY = os.environ.get("DP_KMS_KEY", "")
|
||||
DP_REGION = os.environ.get("DP_REGION")
|
||||
DP_ZONE = os.environ.get("DP_REGION") + "-b"
|
||||
BQ_LOCATION = Variable.get("BQ_LOCATION")
|
||||
CURATED_PRJ = Variable.get("CURATED_PRJ")
|
||||
CURATED_BQ_DATASET = Variable.get("CURATED_BQ_DATASET")
|
||||
CURATED_GCS = Variable.get("CURATED_GCS")
|
||||
LAND_PRJ = Variable.get("LAND_PRJ")
|
||||
LAND_GCS = Variable.get("LAND_GCS")
|
||||
PROCESSING_GCS = Variable.get("PROCESSING_GCS")
|
||||
PROCESSING_SA = Variable.get("PROCESSING_SA")
|
||||
PROCESSING_PRJ = Variable.get("PROCESSING_PRJ")
|
||||
PROCESSING_SUBNET = Variable.get("PROCESSING_SUBNET")
|
||||
PROCESSING_VPC = Variable.get("PROCESSING_VPC")
|
||||
DP_KMS_KEY = Variable.get("DP_KMS_KEY", "")
|
||||
DP_REGION = Variable.get("DP_REGION")
|
||||
DP_ZONE = Variable.get("DP_REGION") + "-b"
|
||||
|
||||
# --------------------------------------------------------------------------------
|
||||
# Set default arguments
|
||||
|
@@ -75,23 +70,23 @@ with models.DAG(
     'delete_tables_dag',
     default_args=default_args,
     schedule_interval=None) as dag:
-  start = dummy.DummyOperator(
+  start = empty.EmptyOperator(
     task_id='start',
     trigger_rule='all_success'
   )
 
-  end = dummy.DummyOperator(
+  end = empty.EmptyOperator(
     task_id='end',
     trigger_rule='all_success'
   )
 
   # BigQuery tables deleted here for demo purposes.
   # Consider a dedicated pipeline or tool for a real life scenario.
-  with TaskGroup('delete_table') as delte_table:
+  with TaskGroup('delete_table') as delete_table:
     delete_table_customers = BigQueryDeleteTableOperator(
       task_id="delete_table_customers",
       deletion_dataset_table=CURATED_PRJ+"."+CURATED_BQ_DATASET+".customers",
       impersonation_chain=[PROCESSING_SA]
     )
 
-  start >> delte_table >> end
+  start >> delete_table >> end
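The `start >> delete_table >> end` line relies on Airflow overloading the right-shift operator to declare task dependencies. A minimal model of how that chaining works — this is an illustrative stand-in class, not the real Airflow API:

```python
# Minimal model of Airflow-style `start >> task >> end` chaining:
# `>>` records a downstream dependency and returns the right operand,
# which is what makes the chained expression work left to right.
class Task:
    def __init__(self, task_id):
        self.task_id = task_id
        self.downstream = []

    def __rshift__(self, other):
        self.downstream.append(other)
        return other  # returning `other` enables a >> b >> c

start = Task("start")
delete_table = Task("delete_table")
end = Task("end")
start >> delete_table >> end
```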
@@ -14,41 +14,38 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
 import datetime
 import time
 import os
 
 from airflow import models
-from airflow.operators import dummy
+from airflow.models.variable import Variable
+from airflow.operators import empty
 from airflow.providers.google.cloud.operators.dataproc import (
-    DataprocCreateBatchOperator, DataprocDeleteBatchOperator, DataprocGetBatchOperator, DataprocListBatchesOperator
+    DataprocCreateBatchOperator
 )
 from airflow.utils.dates import days_ago
 
 # --------------------------------------------------------------------------------
 # Get variables
 # --------------------------------------------------------------------------------
-BQ_LOCATION = os.environ.get("BQ_LOCATION")
-CURATED_BQ_DATASET = os.environ.get("CURATED_BQ_DATASET")
-CURATED_GCS = os.environ.get("CURATED_GCS")
-CURATED_PRJ = os.environ.get("CURATED_PRJ")
-DP_KMS_KEY = os.environ.get("DP_KMS_KEY", "")
-DP_REGION = os.environ.get("DP_REGION")
-GCP_REGION = os.environ.get("GCP_REGION")
-LAND_PRJ = os.environ.get("LAND_PRJ")
-LAND_BQ_DATASET = os.environ.get("LAND_BQ_DATASET")
-LAND_GCS = os.environ.get("LAND_GCS")
-PHS_CLUSTER_NAME = os.environ.get("PHS_CLUSTER_NAME")
-PROCESSING_GCS = os.environ.get("PROCESSING_GCS")
-PROCESSING_PRJ = os.environ.get("PROCESSING_PRJ")
-PROCESSING_SA = os.environ.get("PROCESSING_SA")
-PROCESSING_SUBNET = os.environ.get("PROCESSING_SUBNET")
-PROCESSING_VPC = os.environ.get("PROCESSING_VPC")
+BQ_LOCATION = Variable.get("BQ_LOCATION")
+CURATED_BQ_DATASET = Variable.get("CURATED_BQ_DATASET")
+CURATED_GCS = Variable.get("CURATED_GCS")
+CURATED_PRJ = Variable.get("CURATED_PRJ")
+DP_KMS_KEY = Variable.get("DP_KMS_KEY", "")
+DP_REGION = Variable.get("DP_REGION")
+LAND_PRJ = Variable.get("LAND_PRJ")
+LAND_BQ_DATASET = Variable.get("LAND_BQ_DATASET")
+LAND_GCS = Variable.get("LAND_GCS")
+PHS_CLUSTER_NAME = Variable.get("PHS_CLUSTER_NAME")
+PROCESSING_GCS = Variable.get("PROCESSING_GCS")
+PROCESSING_PRJ = Variable.get("PROCESSING_PRJ")
+PROCESSING_SA = Variable.get("PROCESSING_SA")
+PROCESSING_SUBNET = Variable.get("PROCESSING_SUBNET")
+PROCESSING_VPC = Variable.get("PROCESSING_VPC")
 
-PYTHON_FILE_LOCATION = PROCESSING_GCS+"/pyspark_sort.py"
-PHS_CLUSTER_PATH = "projects/"+PROCESSING_PRJ+"/regions/"+DP_REGION+"/clusters/"+PHS_CLUSTER_NAME
-BATCH_ID = "batch-create-phs-"+str(int(time.time()))
+PYTHON_FILE_LOCATION = PROCESSING_GCS + "/pyspark_sort.py"
+PHS_CLUSTER_PATH = f"projects/{PROCESSING_PRJ}/regions/{DP_REGION}/clusters/{PHS_CLUSTER_NAME}"
+BATCH_ID = "batch-create-phs-" + str(int(time.time()))
 
 default_args = {
     # Tell airflow to start one day ago, so that it runs as soon as you upload it
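The hunk above replaces string concatenation with an f-string for the cluster path and keeps a Unix-timestamp suffix so each Dataproc batch ID is unique per run. A quick sketch of both idioms with illustrative placeholder values:

```python
# f-string path construction plus a time-based unique suffix, as in the
# diff above; project/region/cluster values here are placeholders.
import time

PROCESSING_PRJ = "my-prj"
DP_REGION = "europe-west1"
PHS_CLUSTER_NAME = "phs"

PHS_CLUSTER_PATH = f"projects/{PROCESSING_PRJ}/regions/{DP_REGION}/clusters/{PHS_CLUSTER_NAME}"
# int(time.time()) yields whole seconds, enough to keep one-off batch IDs unique.
BATCH_ID = "batch-create-phs-" + str(int(time.time()))
```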
@@ -60,12 +57,12 @@ with models.DAG(
     default_args=default_args,  # The interval with which to schedule the DAG
     schedule_interval=None,  # Override to match your needs
 ) as dag:
-  start = dummy.DummyOperator(
+  start = empty.EmptyOperator(
     task_id='start',
     trigger_rule='all_success'
   )
 
-  end = dummy.DummyOperator(
+  end = empty.EmptyOperator(
     task_id='end',
     trigger_rule='all_success'
   )
@@ -19,10 +19,11 @@ variable "composer_config" {
   type = object({
     environment_size = optional(string, "ENVIRONMENT_SIZE_SMALL")
     software_config = optional(object({
-      airflow_config_overrides = optional(map(string), {})
-      pypi_packages            = optional(map(string), {})
-      env_variables            = optional(map(string), {})
-      image_version            = optional(string, "composer-2-airflow-2")
+      airflow_config_overrides       = optional(map(string), {})
+      pypi_packages                  = optional(map(string), {})
+      env_variables                  = optional(map(string), {})
+      image_version                  = optional(string, "composer-2-airflow-2")
+      cloud_data_lineage_integration = optional(bool, true)
     }), {})
     web_server_access_control = optional(map(string), {})
     workloads_config = optional(object({
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
      source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -21,26 +21,27 @@ module "kms" {
     location = var.region
   }
   keys = {
-    key-df  = null
-    key-gcs = null
-    key-bq  = null
-  }
-  key_iam = {
-    key-gcs = {
-      "roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
-        "serviceAccount:${module.project.service_accounts.robots.storage}"
-      ]
-    },
-    key-bq = {
-      "roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
-        "serviceAccount:${module.project.service_accounts.robots.bq}"
-      ]
-    },
     key-df = {
-      "roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
-        "serviceAccount:${module.project.service_accounts.robots.dataflow}",
-        "serviceAccount:${module.project.service_accounts.robots.compute}",
-      ]
+      iam = {
+        "roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
+          "serviceAccount:${module.project.service_accounts.robots.dataflow}",
+          "serviceAccount:${module.project.service_accounts.robots.compute}",
+        ]
+      }
     }
+    key-gcs = {
+      iam = {
+        "roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
+          "serviceAccount:${module.project.service_accounts.robots.storage}"
+        ]
+      }
+    }
+    key-bq = {
+      iam = {
+        "roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
+          "serviceAccount:${module.project.service_accounts.robots.bq}"
+        ]
+      }
+    }
   }
 }
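The kms hunk above applies the interface change from PR #1684: the separate `key_iam` map is folded into each key's own definition under an `iam` attribute. The same shape change modeled with plain Python dicts — role and member strings here are shortened placeholders:

```python
# Shape change the kms hunk applies: fold a separate key_iam map into
# each key's definition under an `iam` attribute. Plain-dict model;
# member strings are shortened placeholders.
old_keys = {"key-df": None, "key-gcs": None, "key-bq": None}
old_key_iam = {
    "key-gcs": {"roles/cloudkms.cryptoKeyEncrypterDecrypter": ["sa:storage"]},
    "key-bq": {"roles/cloudkms.cryptoKeyEncrypterDecrypter": ["sa:bq"]},
}

# New interface: one map, IAM embedded per key; keys with no bindings
# get an empty iam map instead of a null entry.
new_keys = {name: {"iam": old_key_iam.get(name, {})} for name in old_keys}
```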
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }
@@ -159,18 +159,18 @@ terraform apply
 |---|---|:---:|:---:|:---:|
 | [access_policy_config](variables.tf#L17) | Provide 'access_policy_create' values if a folder scoped Access Policy creation is needed, uses existing 'policy_name' otherwise. Parent is in 'organizations/123456' format. Policy will be created scoped to the folder. | <code title="object({ policy_name = optional(string, null) access_policy_create = optional(object({ parent = string title = string }), null) })">object({…})</code> | ✓ | |
 | [folder_config](variables.tf#L49) | Provide 'folder_create' values if folder creation is needed, uses existing 'folder_id' otherwise. Parent is in 'folders/nnn' or 'organizations/nnn' format. | <code title="object({ folder_id = optional(string, null) folder_create = optional(object({ display_name = string parent = string }), null) })">object({…})</code> | ✓ | |
-| [organization](variables.tf#L129) | Organization details. | <code title="object({ domain = string id = string })">object({…})</code> | ✓ | |
-| [prefix](variables.tf#L137) | Prefix used for resources that need unique names. | <code>string</code> | ✓ | |
-| [project_config](variables.tf#L142) | Provide 'billing_account_id' value if project creation is needed, uses existing 'project_ids' if null. Parent is in 'folders/nnn' or 'organizations/nnn' format. | <code title="object({ billing_account_id = optional(string, null) project_ids = optional(object({ sec-core = string audit-logs = string }), { sec-core = "sec-core" audit-logs = "audit-logs" } ) })">object({…})</code> | ✓ | |
+| [organization](variables.tf#L148) | Organization details. | <code title="object({ domain = string id = string })">object({…})</code> | ✓ | |
+| [prefix](variables.tf#L156) | Prefix used for resources that need unique names. | <code>string</code> | ✓ | |
+| [project_config](variables.tf#L161) | Provide 'billing_account_id' value if project creation is needed, uses existing 'project_ids' if null. Parent is in 'folders/nnn' or 'organizations/nnn' format. | <code title="object({ billing_account_id = optional(string, null) project_ids = optional(object({ sec-core = string audit-logs = string }), { sec-core = "sec-core" audit-logs = "audit-logs" } ) })">object({…})</code> | ✓ | |
 | [data_dir](variables.tf#L29) | Relative path for the folder storing configuration data. | <code>string</code> | | <code>"data"</code> |
 | [enable_features](variables.tf#L35) | Flag to enable features on the solution. | <code title="object({ encryption = optional(bool, false) log_sink = optional(bool, true) vpc_sc = optional(bool, true) })">object({…})</code> | | <code title="{ encryption = false log_sink = true vpc_sc = true }">{…}</code> |
 | [groups](variables.tf#L65) | User groups. | <code title="object({ workload-engineers = optional(string, "gcp-data-engineers") workload-security = optional(string, "gcp-data-security") })">object({…})</code> | | <code>{}</code> |
-| [kms_keys](variables.tf#L75) | KMS keys to create, keyed by name. | <code title="map(object({ iam = optional(map(list(string)), {}) iam_bindings_additive = optional(map(map(any)), {}) labels = optional(map(string), {}) locations = optional(list(string), ["global", "europe", "europe-west1"]) rotation_period = optional(string, "7776000s") }))">map(object({…}))</code> | | <code>{}</code> |
-| [log_locations](variables.tf#L87) | Optional locations for GCS, BigQuery, and logging buckets created here. | <code title="object({ bq = optional(string, "europe") storage = optional(string, "europe") logging = optional(string, "global") pubsub = optional(string, "global") })">object({…})</code> | | <code title="{ bq = "europe" storage = "europe" logging = "global" pubsub = null }">{…}</code> |
-| [log_sinks](variables.tf#L104) | Org-level log sinks, in name => {type, filter} format. | <code title="map(object({ filter = string type = string }))">map(object({…}))</code> | | <code title="{ audit-logs = { filter = "logName:\"/logs/cloudaudit.googleapis.com%2Factivity\" OR logName:\"/logs/cloudaudit.googleapis.com%2Fsystem_event\"" type = "bigquery" } vpc-sc = { filter = "protoPayload.metadata.@type=\"type.googleapis.com/google.cloud.audit.VpcServiceControlAuditMetadata\"" type = "bigquery" } }">{…}</code> |
-| [vpc_sc_access_levels](variables.tf#L162) | VPC SC access level definitions. | <code title="map(object({ combining_function = optional(string) conditions = optional(list(object({ device_policy = optional(object({ allowed_device_management_levels = optional(list(string)) allowed_encryption_statuses = optional(list(string)) require_admin_approval = bool require_corp_owned = bool require_screen_lock = optional(bool) os_constraints = optional(list(object({ os_type = string minimum_version = optional(string) require_verified_chrome_os = optional(bool) }))) })) ip_subnetworks = optional(list(string), []) members = optional(list(string), []) negate = optional(bool) regions = optional(list(string), []) required_access_levels = optional(list(string), []) })), []) description = optional(string) }))">map(object({…}))</code> | | <code>{}</code> |
-| [vpc_sc_egress_policies](variables.tf#L191) | VPC SC egress policy definitions. | <code title="map(object({ from = object({ identity_type = optional(string, "ANY_IDENTITY") identities = optional(list(string)) }) to = object({ operations = optional(list(object({ method_selectors = optional(list(string)) service_name = string })), []) resources = optional(list(string)) resource_type_external = optional(bool, false) }) }))">map(object({…}))</code> | | <code>{}</code> |
-| [vpc_sc_ingress_policies](variables.tf#L211) | VPC SC ingress policy definitions. | <code title="map(object({ from = object({ access_levels = optional(list(string), []) identity_type = optional(string) identities = optional(list(string)) resources = optional(list(string), []) }) to = object({ operations = optional(list(object({ method_selectors = optional(list(string)) service_name = string })), []) resources = optional(list(string)) }) }))">map(object({…}))</code> | | <code>{}</code> |
+| [kms_keys](variables.tf#L75) | KMS keys to create, keyed by name. | <code title="map(object({ labels = optional(map(string)) locations = optional(list(string), ["global", "europe", "europe-west1"]) rotation_period = optional(string, "7776000s") purpose = optional(string, "ENCRYPT_DECRYPT") skip_initial_version_creation = optional(bool, false) version_template = optional(object({ algorithm = string protection_level = optional(string, "SOFTWARE") })) iam = optional(map(list(string)), {}) iam_bindings = optional(map(object({ members = list(string) condition = optional(object({ expression = string title = string description = optional(string) })) })), {}) iam_bindings_additive = optional(map(object({ member = string role = string condition = optional(object({ expression = string title = string description = optional(string) })) })), {}) }))">map(object({…}))</code> | | <code>{}</code> |
+| [log_locations](variables.tf#L111) | Optional locations for GCS, BigQuery, and logging buckets created here. | <code title="object({ bq = optional(string, "europe") storage = optional(string, "europe") logging = optional(string, "global") pubsub = optional(string, "global") })">object({…})</code> | | <code>{}</code> |
+| [log_sinks](variables.tf#L123) | Org-level log sinks, in name => {type, filter} format. | <code title="map(object({ filter = string type = string }))">map(object({…}))</code> | | <code title="{ audit-logs = { filter = "logName:\"/logs/cloudaudit.googleapis.com%2Factivity\" OR logName:\"/logs/cloudaudit.googleapis.com%2Fsystem_event\"" type = "bigquery" } vpc-sc = { filter = "protoPayload.metadata.@type=\"type.googleapis.com/google.cloud.audit.VpcServiceControlAuditMetadata\"" type = "bigquery" } }">{…}</code> |
+| [vpc_sc_access_levels](variables.tf#L181) | VPC SC access level definitions. | <code title="map(object({ combining_function = optional(string) conditions = optional(list(object({ device_policy = optional(object({ allowed_device_management_levels = optional(list(string)) allowed_encryption_statuses = optional(list(string)) require_admin_approval = bool require_corp_owned = bool require_screen_lock = optional(bool) os_constraints = optional(list(object({ os_type = string minimum_version = optional(string) require_verified_chrome_os = optional(bool) }))) })) ip_subnetworks = optional(list(string), []) members = optional(list(string), []) negate = optional(bool) regions = optional(list(string), []) required_access_levels = optional(list(string), []) })), []) description = optional(string) }))">map(object({…}))</code> | | <code>{}</code> |
+| [vpc_sc_egress_policies](variables.tf#L210) | VPC SC egress policy definitions. | <code title="map(object({ from = object({ identity_type = optional(string, "ANY_IDENTITY") identities = optional(list(string)) }) to = object({ operations = optional(list(object({ method_selectors = optional(list(string)) service_name = string })), []) resources = optional(list(string)) resource_type_external = optional(bool, false) }) }))">map(object({…}))</code> | | <code>{}</code> |
+| [vpc_sc_ingress_policies](variables.tf#L230) | VPC SC ingress policy definitions. | <code title="map(object({ from = object({ access_levels = optional(list(string), []) identity_type = optional(string) identities = optional(list(string)) resources = optional(list(string), []) }) to = object({ operations = optional(list(object({ method_selectors = optional(list(string)) service_name = string })), []) resources = optional(list(string)) }) }))">map(object({…}))</code> | | <code>{}</code> |
 
 ## Outputs
@@ -17,12 +17,17 @@
 # tfdoc:file:description Security project, Cloud KMS and Secret Manager resources.
 
 locals {
   # list of locations with keys
   kms_locations = distinct(flatten([
     for k, v in var.kms_keys : v.locations
   ]))
   # map { location -> { key_name -> key_details } }
   kms_locations_keys = {
-    for loc in local.kms_locations : loc => {
-      for k, v in var.kms_keys : k => v if contains(v.locations, loc)
+    for loc in local.kms_locations :
+    loc => {
+      for k, v in var.kms_keys :
+      k => v
+      if contains(v.locations, loc)
     }
   }
   kms_log_locations = distinct(flatten([
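The `kms_locations_keys` local groups keys by location: every key lists the locations it should exist in, and the comprehension inverts that into a `{ location -> { key_name -> key } }` map. A Python equivalent of the same grouping, with illustrative key data:

```python
# Python equivalent of the HCL comprehension reflowed above: invert
# per-key location lists into { location -> { key_name -> key } }.
kms_keys = {
    "key-a": {"locations": ["global", "europe"]},
    "key-b": {"locations": ["europe"]},
}

# distinct(flatten([...])) over all locations, then group keys per location.
kms_locations = sorted({loc for v in kms_keys.values() for loc in v["locations"]})
kms_locations_keys = {
    loc: {k: v for k, v in kms_keys.items() if loc in v["locations"]}
    for loc in kms_locations
}
```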
@@ -30,17 +35,14 @@ locals {
   ]))
   kms_log_sink_keys = {
     "storage" = {
-      labels          = {}
       locations       = [var.log_locations.storage]
       rotation_period = "7776000s"
     }
     "bq" = {
-      labels          = {}
       locations       = [var.log_locations.bq]
       rotation_period = "7776000s"
     }
     "pubsub" = {
-      labels          = {}
       locations       = [var.log_locations.pubsub]
       rotation_period = "7776000s"
     }
@@ -88,12 +90,6 @@ module "sec-kms" {
     location = each.key
     name     = "sec-${each.key}"
   }
-  key_iam = {
-    for k, v in local.kms_locations_keys[each.key] : k => v.iam
-  }
-  key_iam_bindings_additive = {
-    for k, v in local.kms_locations_keys[each.key] : k => v.iam_bindings_additive
-  }
   keys = local.kms_locations_keys[each.key]
 }
@@ -75,11 +75,35 @@ variable "groups" {
 variable "kms_keys" {
   description = "KMS keys to create, keyed by name."
   type = map(object({
-    iam                   = optional(map(list(string)), {})
-    iam_bindings_additive = optional(map(map(any)), {})
-    labels                = optional(map(string), {})
-    locations             = optional(list(string), ["global", "europe", "europe-west1"])
-    rotation_period       = optional(string, "7776000s")
+    labels                        = optional(map(string))
+    locations                     = optional(list(string), ["global", "europe", "europe-west1"])
+    rotation_period               = optional(string, "7776000s")
+    purpose                       = optional(string, "ENCRYPT_DECRYPT")
+    skip_initial_version_creation = optional(bool, false)
+    version_template = optional(object({
+      algorithm        = string
+      protection_level = optional(string, "SOFTWARE")
+    }))
+
+    iam = optional(map(list(string)), {})
+    iam_bindings = optional(map(object({
+      members = list(string)
+      condition = optional(object({
+        expression  = string
+        title       = string
+        description = optional(string)
+      }))
+    })), {})
+    iam_bindings_additive = optional(map(object({
+      member = string
+      role   = string
+      condition = optional(object({
+        expression  = string
+        title       = string
+        description = optional(string)
+      }))
+    })), {})
+
   }))
   default = {}
 }
@@ -92,12 +116,7 @@ variable "log_locations" {
     logging = optional(string, "global")
     pubsub  = optional(string, "global")
   })
-  default = {
-    bq      = "europe"
-    storage = "europe"
-    logging = "global"
-    pubsub  = null
-  }
+  default  = {}
   nullable = false
 }
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }
@@ -55,6 +55,7 @@ billing_account: 012345-67890A-BCDEF0
 labels:
   app: app-1
   team: foo
+parent: folders/12345678
 service_encryption_key_ids:
   compute:
   - projects/kms-central-prj/locations/europe-west3/keyRings/my-keyring/cryptoKeys/europe3-gce
@@ -71,6 +72,7 @@ service_accounts:
 labels:
   app: app-1
   team: foo
+parent: folders/12345678
 service_accounts:
   app-2-be: {}
@@ -81,10 +83,10 @@ service_accounts:
 
 | name | description | type | required | default |
 |---|---|:---:|:---:|:---:|
-| [factory_data](variables.tf#L83) | Project data from either YAML files or externally parsed data. | <code title="object({ data = optional(map(any)) data_path = optional(string) })">object({…})</code> | ✓ | |
-| [data_defaults](variables.tf#L17) | Optional default values used when corresponding project data from files are missing. | <code title="object({ billing_account = optional(string) contacts = optional(map(list(string)), {}) labels = optional(map(string), {}) metric_scopes = optional(list(string), []) prefix = optional(string) service_encryption_key_ids = optional(map(list(string)), {}) service_perimeter_bridges = optional(list(string), []) service_perimeter_standard = optional(string) services = optional(list(string), []) shared_vpc_service_config = optional(object({ host_project = string service_identity_iam = optional(map(list(string)), {}) service_iam_grants = optional(list(string), []) }), { host_project = null }) tag_bindings = optional(map(string), {}) service_accounts = optional(map(object({ default_roles = optional(bool, true) })), {}) })">object({…})</code> | | <code>{}</code> |
-| [data_merges](variables.tf#L44) | Optional values that will be merged with corresponding data from files. Combines with `data_defaults`, file data, and `data_overrides`. | <code title="object({ contacts = optional(map(list(string)), {}) labels = optional(map(string), {}) metric_scopes = optional(list(string), []) service_encryption_key_ids = optional(map(list(string)), {}) service_perimeter_bridges = optional(list(string), []) services = optional(list(string), []) tag_bindings = optional(map(string), {}) service_accounts = optional(map(object({ default_roles = optional(bool, true) })), {}) })">object({…})</code> | | <code>{}</code> |
-| [data_overrides](variables.tf#L63) | Optional values that override corresponding data from files. Takes precedence over file data and `data_defaults`. | <code title="object({ billing_account = optional(string) contacts = optional(map(list(string))) prefix = optional(string) service_encryption_key_ids = optional(map(list(string))) service_perimeter_bridges = optional(list(string)) service_perimeter_standard = optional(string) tag_bindings = optional(map(string)) services = optional(list(string)) service_accounts = optional(map(object({ default_roles = optional(bool, true) }))) })">object({…})</code> | | <code>{}</code> |
+| [factory_data](variables.tf#L85) | Project data from either YAML files or externally parsed data. | <code title="object({ data = optional(map(any)) data_path = optional(string) })">object({…})</code> | ✓ | |
+| [data_defaults](variables.tf#L17) | Optional default values used when corresponding project data from files are missing. | <code title="object({ billing_account = optional(string) contacts = optional(map(list(string)), {}) labels = optional(map(string), {}) metric_scopes = optional(list(string), []) parent = optional(string) prefix = optional(string) service_encryption_key_ids = optional(map(list(string)), {}) service_perimeter_bridges = optional(list(string), []) service_perimeter_standard = optional(string) services = optional(list(string), []) shared_vpc_service_config = optional(object({ host_project = string service_identity_iam = optional(map(list(string)), {}) service_iam_grants = optional(list(string), []) }), { host_project = null }) tag_bindings = optional(map(string), {}) service_accounts = optional(map(object({ default_roles = optional(bool, true) })), {}) })">object({…})</code> | | <code>{}</code> |
+| [data_merges](variables.tf#L45) | Optional values that will be merged with corresponding data from files. Combines with `data_defaults`, file data, and `data_overrides`. | <code title="object({ contacts = optional(map(list(string)), {}) labels = optional(map(string), {}) metric_scopes = optional(list(string), []) service_encryption_key_ids = optional(map(list(string)), {}) service_perimeter_bridges = optional(list(string), []) services = optional(list(string), []) tag_bindings = optional(map(string), {}) service_accounts = optional(map(object({ default_roles = optional(bool, true) })), {}) })">object({…})</code> | | <code>{}</code> |
+| [data_overrides](variables.tf#L64) | Optional values that override corresponding data from files. Takes precedence over file data and `data_defaults`. | <code title="object({ billing_account = optional(string) contacts = optional(map(list(string))) parent = optional(string) prefix = optional(string) service_encryption_key_ids = optional(map(list(string))) service_perimeter_bridges = optional(list(string)) service_perimeter_standard = optional(string) tag_bindings = optional(map(string)) services = optional(list(string)) service_accounts = optional(map(object({ default_roles = optional(bool, true) }))) })">object({…})</code> | | <code>{}</code> |
 
 ## Outputs
@@ -28,11 +28,11 @@ locals {
   )
   projects = {
     for k, v in local._data : k => merge(v, {
-      billing_account = coalesce(
+      billing_account = try(coalesce(
         var.data_overrides.billing_account,
         try(v.billing_account, null),
         var.data_defaults.billing_account
-      )
+      ), null)
       contacts = coalesce(
         var.data_overrides.contacts,
         try(v.contacts, null),
@@ -46,6 +46,11 @@ locals {
         try(v.metric_scopes, null),
         var.data_defaults.metric_scopes
       )
+      parent = coalesce(
+        var.data_overrides.parent,
+        try(v.parent, null),
+        var.data_defaults.parent
+      )
       prefix = coalesce(
         var.data_overrides.prefix,
         try(v.prefix, null),
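These locals implement a three-level precedence: per-field overrides win, then the project's own file data, then the defaults; wrapping `coalesce(...)` in `try(..., null)` lets the all-null case resolve to null instead of erroring. A small Python sketch of the same resolution, with a hypothetical `resolve` helper and placeholder billing-account values:

```python
# Precedence implemented by the locals above: override, then file value,
# then default; the try(coalesce(...), null) wrapper maps "everything is
# null" to None rather than an error. Hypothetical helper, placeholder data.
def resolve(override, file_value, default):
    for candidate in (override, file_value, default):
        if candidate is not None:  # coalesce: first non-null wins
            return candidate
    return None  # the try(..., null) fallback

top = resolve("000000-AAAAAA", "111111-BBBBBB", None)   # override wins
mid = resolve(None, "111111-BBBBBB", "222222-CCCCCC")   # file value wins
low = resolve(None, None, None)                         # all null -> None
```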
@@ -33,11 +33,13 @@ module "projects" {
   iam                   = try(each.value.iam, {})
   iam_bindings          = try(each.value.iam_bindings, {})
   iam_bindings_additive = try(each.value.iam_bindings_additive, {})
-  labels                = each.value.labels
+  labels = merge(
+    each.value.labels, var.data_merges.labels
+  )
   lien_reason         = try(each.value.lien_reason, null)
   logging_data_access = try(each.value.logging_data_access, {})
   logging_exclusions  = try(each.value.logging_exclusions, {})
   logging_sinks       = try(each.value.logging_sinks, {})
   metric_scopes = distinct(concat(
     each.value.metric_scopes, var.data_merges.metric_scopes
   ))
@@ -21,6 +21,7 @@ variable "data_defaults" {
   contacts      = optional(map(list(string)), {})
   labels        = optional(map(string), {})
   metric_scopes = optional(list(string), [])
+  parent        = optional(string)
   prefix        = optional(string)
   service_encryption_key_ids = optional(map(list(string)), {})
   service_perimeter_bridges  = optional(list(string), [])
@@ -65,6 +66,7 @@ variable "data_overrides" {
   type = object({
     billing_account = optional(string)
     contacts        = optional(map(list(string)))
+    parent          = optional(string)
    prefix          = optional(string)
     service_encryption_key_ids = optional(map(list(string)))
     service_perimeter_bridges  = optional(list(string))
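The `merge()` and `distinct(concat())` calls above define the merge semantics for additive data: `data_merges` labels are layered over each project's own labels (later arguments win on a key clash), and metric scopes are concatenated with duplicates dropped. The same semantics in Python, with illustrative values:

```python
# Merge semantics of the hunk above, modeled with plain Python values.
# Label and scope values are illustrative placeholders.
project_labels = {"app": "app-1", "team": "foo"}
merge_labels = {"env": "dev", "team": "platform"}

# merge(a, b): like {**a, **b}, the right-hand map wins on a key clash.
labels = {**project_labels, **merge_labels}

metric_scopes = ["prj-a", "prj-b"]
merge_scopes = ["prj-b", "prj-c"]
# distinct(concat(a, b)): concatenate, drop duplicates, keep first-seen order.
merged_scopes = list(dict.fromkeys(metric_scopes + merge_scopes))
```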
@@ -20,12 +20,9 @@ module "cluster" {
   name     = "cluster"
   location = var.region
   vpc_config = {
-    network    = module.vpc.self_link
-    subnetwork = module.vpc.subnet_self_links["${var.region}/subnet-cluster"]
-    secondary_range_names = {
-      pods     = "pods"
-      services = "services"
-    }
+    network               = module.vpc.self_link
+    subnetwork            = module.vpc.subnet_self_links["${var.region}/subnet-cluster"]
+    secondary_range_names = {}
     master_authorized_ranges = var.cluster_network_config.master_authorized_cidr_blocks
     master_ipv4_cidr_block   = var.cluster_network_config.master_cidr_block
   }
@@ -33,8 +30,17 @@
   # autopilot = true
   # }
   # monitoring_config = {
   #   enable_components  = ["SYSTEM_COMPONENTS"]
   #   managed_prometheus = true
+  #   # (Optional) control plane metrics
+  #   enable_api_server_metrics         = true
+  #   enable_controller_manager_metrics = true
+  #   enable_scheduler_metrics          = true
+  #   # (Optional) kube state metrics
+  #   enable_daemonset_metrics   = true
+  #   enable_deployment_metrics  = true
+  #   enable_hpa_metrics         = true
+  #   enable_pod_metrics         = true
+  #   enable_statefulset_metrics = true
+  #   enable_storage_metrics     = true
 # }
 # cluster_autoscaling = {
 #   auto_provisioning_defaults = {
@@ -51,4 +57,4 @@ module "node_sa" {
   source     = "../../../modules/iam-service-account"
   project_id = module.project.project_id
   name       = "sa-node"
 }
@ -1,5 +1,5 @@
|
|||
/**
|
||||
* Copyright 2022 Google LLC
|
||||
* Copyright 2023 Google LLC
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
|
@@ -115,20 +115,16 @@ module "kms" {
   project_id     = module.project.project_id
   keyring        = { location = var.region, name = "test-keyring" }
   keyring_create = true
-  keys           = { test-key = null }
-  key_purpose = {
+  keys = {
     test-key = {
       purpose = "ASYMMETRIC_SIGN"
       version_template = {
-        algorithm        = "RSA_SIGN_PKCS1_4096_SHA512"
-        protection_level = null
+        algorithm = "RSA_SIGN_PKCS1_4096_SHA512"
       }
+      iam = {
+        "roles/cloudkms.publicKeyViewer" = [module.image_cb_sa.iam_email]
+        "roles/cloudkms.signer"          = [module.image_cb_sa.iam_email]
+      }
     }
   }
-  key_iam = {
-    test-key = {
-      "roles/cloudkms.publicKeyViewer" = [module.image_cb_sa.iam_email]
-      "roles/cloudkms.signer"          = [module.image_cb_sa.iam_email]
-    }
-  }
 }
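Read as plain Terraform, the new interface in the hunk above embeds key-level IAM directly in each `keys` entry, replacing the separate `key_iam` variable. A minimal caller sketch under that interface (the module `source` path and the `module.image_cb_sa` service-account reference are illustrative, taken from the hunk):

```hcl
# Hypothetical caller of the kms module with the new keys-embedded IAM.
module "kms" {
  source         = "../../../modules/kms"
  project_id     = module.project.project_id
  keyring        = { location = var.region, name = "test-keyring" }
  keyring_create = true
  keys = {
    test-key = {
      purpose = "ASYMMETRIC_SIGN"
      # IAM bindings now live on the key definition itself.
      iam = {
        "roles/cloudkms.signer" = [module.image_cb_sa.iam_email]
      }
    }
  }
}
```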
@@ -244,21 +244,21 @@ module "gke" {
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [billing_account_id](variables.tf#L17) | Billing account id. | <code>string</code> | ✓ | |
| [folder_id](variables.tf#L138) | Folder used for the GKE project in folders/nnnnnnnnnnn format. | <code>string</code> | ✓ | |
| [prefix](variables.tf#L189) | Prefix used for resource names. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L198) | ID of the project that will contain all the clusters. | <code>string</code> | ✓ | |
| [vpc_config](variables.tf#L210) | Shared VPC project and VPC details. | <code title="object({ host_project_id = string vpc_self_link = string })">object({…})</code> | ✓ | |
| [clusters](variables.tf#L22) | Clusters configuration. Refer to the gke-cluster module for type details. | <code title="map(object({ cluster_autoscaling = optional(any) description = optional(string) enable_addons = optional(any, { horizontal_pod_autoscaling = true, http_load_balancing = true }) enable_features = optional(any, { workload_identity = true }) issue_client_certificate = optional(bool, false) labels = optional(map(string)) location = string logging_config = optional(object({ enable_system_logs = optional(bool, true) enable_workloads_logs = optional(bool, true) enable_api_server_logs = optional(bool, false) enable_scheduler_logs = optional(bool, false) enable_controller_manager_logs = optional(bool, false) }), {}) maintenance_config = optional(any, { daily_window_start_time = "03:00" recurring_window = null maintenance_exclusion = [] }) max_pods_per_node = optional(number, 110) min_master_version = optional(string) monitoring_config = optional(object({ enable_components = optional(list(string), ["SYSTEM_COMPONENTS"]) managed_prometheus = optional(bool) })) node_locations = optional(list(string)) private_cluster_config = optional(any) release_channel = optional(string) vpc_config = object({ subnetwork = string network = optional(string) secondary_range_blocks = optional(object({ pods = string services = string })) secondary_range_names = optional(object({ pods = string services = string }), { pods = "pods", services = "services" }) master_authorized_ranges = optional(map(string)) master_ipv4_cidr_block = optional(string) }) }))">map(object({…}))</code> | | <code>{}</code> |
| [fleet_configmanagement_clusters](variables.tf#L76) | Config management features enabled on specific sets of member clusters, in config name => [cluster name] format. | <code>map(list(string))</code> | | <code>{}</code> |
| [fleet_configmanagement_templates](variables.tf#L83) | Sets of config management configurations that can be applied to member clusters, in config name => {options} format. | <code title="map(object({ binauthz = bool config_sync = object({ git = object({ gcp_service_account_email = string https_proxy = string policy_dir = string secret_type = string sync_branch = string sync_repo = string sync_rev = string sync_wait_secs = number }) prevent_drift = string source_format = string }) hierarchy_controller = object({ enable_hierarchical_resource_quota = bool enable_pod_tree_labels = bool }) policy_controller = object({ audit_interval_seconds = number exemptable_namespaces = list(string) log_denies_enabled = bool referential_rules_enabled = bool template_library_installed = bool }) version = string }))">map(object({…}))</code> | | <code>{}</code> |
| [fleet_features](variables.tf#L118) | Enable and configure fleet features. Set to null to disable GKE Hub if fleet workload identity is not used. | <code title="object({ appdevexperience = bool configmanagement = bool identityservice = bool multiclusteringress = string multiclusterservicediscovery = bool servicemesh = bool })">object({…})</code> | | <code>null</code> |
| [fleet_workload_identity](variables.tf#L131) | Use Fleet Workload Identity for clusters. Enables GKE Hub if set to true. | <code>bool</code> | | <code>false</code> |
| [group_iam](variables.tf#L143) | Project-level IAM bindings for groups. Use group emails as keys, list of roles as values. | <code>map(list(string))</code> | | <code>{}</code> |
| [iam](variables.tf#L150) | Project-level authoritative IAM bindings for users and service accounts in {ROLE => [MEMBERS]} format. | <code>map(list(string))</code> | | <code>{}</code> |
| [labels](variables.tf#L157) | Project-level labels. | <code>map(string)</code> | | <code>{}</code> |
| [nodepools](variables.tf#L163) | Nodepools configuration. Refer to the gke-nodepool module for type details. | <code title="map(map(object({ gke_version = optional(string) labels = optional(map(string), {}) max_pods_per_node = optional(number) name = optional(string) node_config = optional(any, { disk_type = "pd-balanced" }) node_count = optional(map(number), { initial = 1 }) node_locations = optional(list(string)) nodepool_config = optional(any) pod_range = optional(any) reservation_affinity = optional(any) service_account = optional(any) sole_tenant_nodegroup = optional(string) tags = optional(list(string)) taints = optional(list(object({ key = string value = string effect = string }))) })))">map(map(object({…})))</code> | | <code>{}</code> |
| [project_services](variables.tf#L203) | Additional project services to enable. | <code>list(string)</code> | | <code>[]</code> |
| [billing_account_id](variables.tf#L17) | Billing account ID. | <code>string</code> | ✓ | |
| [folder_id](variables.tf#L154) | Folder used for the GKE project in folders/nnnnnnnnnnn format. | <code>string</code> | ✓ | |
| [prefix](variables.tf#L205) | Prefix used for resource names. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L214) | ID of the project that will contain all the clusters. | <code>string</code> | ✓ | |
| [vpc_config](variables.tf#L226) | Shared VPC project and VPC details. | <code title="object({ host_project_id = string vpc_self_link = string })">object({…})</code> | ✓ | |
| [clusters](variables.tf#L22) | Clusters configuration. Refer to the gke-cluster module for type details. | <code title="map(object({ cluster_autoscaling = optional(any) description = optional(string) enable_addons = optional(any, { horizontal_pod_autoscaling = true, http_load_balancing = true }) enable_features = optional(any, { workload_identity = true }) issue_client_certificate = optional(bool, false) labels = optional(map(string)) location = string logging_config = optional(object({ enable_system_logs = optional(bool, true) enable_workloads_logs = optional(bool, true) enable_api_server_logs = optional(bool, false) enable_scheduler_logs = optional(bool, false) enable_controller_manager_logs = optional(bool, false) }), {}) maintenance_config = optional(any, { daily_window_start_time = "03:00" recurring_window = null maintenance_exclusion = [] }) max_pods_per_node = optional(number, 110) min_master_version = optional(string) monitoring_config = optional(object({ enable_system_metrics = optional(bool, true) enable_api_server_metrics = optional(bool, false) enable_controller_manager_metrics = optional(bool, false) enable_scheduler_metrics = optional(bool, false) enable_daemonset_metrics = optional(bool, false) enable_deployment_metrics = optional(bool, false) enable_hpa_metrics = optional(bool, false) enable_pod_metrics = optional(bool, false) enable_statefulset_metrics = optional(bool, false) enable_storage_metrics = optional(bool, false) enable_managed_prometheus = optional(bool, true) }), {}) node_locations = optional(list(string)) private_cluster_config = optional(any) release_channel = optional(string) vpc_config = object({ subnetwork = string network = optional(string) secondary_range_blocks = optional(object({ pods = string services = string })) secondary_range_names = optional(object({ pods = string services = string }), { pods = "pods", services = "services" }) master_authorized_ranges = optional(map(string)) master_ipv4_cidr_block = optional(string) }) }))">map(object({…}))</code> | | <code>{}</code> |
| [fleet_configmanagement_clusters](variables.tf#L92) | Config management features enabled on specific sets of member clusters, in config name => [cluster name] format. | <code>map(list(string))</code> | | <code>{}</code> |
| [fleet_configmanagement_templates](variables.tf#L99) | Sets of config management configurations that can be applied to member clusters, in config name => {options} format. | <code title="map(object({ binauthz = bool config_sync = object({ git = object({ gcp_service_account_email = string https_proxy = string policy_dir = string secret_type = string sync_branch = string sync_repo = string sync_rev = string sync_wait_secs = number }) prevent_drift = string source_format = string }) hierarchy_controller = object({ enable_hierarchical_resource_quota = bool enable_pod_tree_labels = bool }) policy_controller = object({ audit_interval_seconds = number exemptable_namespaces = list(string) log_denies_enabled = bool referential_rules_enabled = bool template_library_installed = bool }) version = string }))">map(object({…}))</code> | | <code>{}</code> |
| [fleet_features](variables.tf#L134) | Enable and configure fleet features. Set to null to disable GKE Hub if fleet workload identity is not used. | <code title="object({ appdevexperience = bool configmanagement = bool identityservice = bool multiclusteringress = string multiclusterservicediscovery = bool servicemesh = bool })">object({…})</code> | | <code>null</code> |
| [fleet_workload_identity](variables.tf#L147) | Use Fleet Workload Identity for clusters. Enables GKE Hub if set to true. | <code>bool</code> | | <code>false</code> |
| [group_iam](variables.tf#L159) | Project-level IAM bindings for groups. Use group emails as keys, list of roles as values. | <code>map(list(string))</code> | | <code>{}</code> |
| [iam](variables.tf#L166) | Project-level authoritative IAM bindings for users and service accounts in {ROLE => [MEMBERS]} format. | <code>map(list(string))</code> | | <code>{}</code> |
| [labels](variables.tf#L173) | Project-level labels. | <code>map(string)</code> | | <code>{}</code> |
| [nodepools](variables.tf#L179) | Nodepools configuration. Refer to the gke-nodepool module for type details. | <code title="map(map(object({ gke_version = optional(string) labels = optional(map(string), {}) max_pods_per_node = optional(number) name = optional(string) node_config = optional(any, { disk_type = "pd-balanced" }) node_count = optional(map(number), { initial = 1 }) node_locations = optional(list(string)) nodepool_config = optional(any) pod_range = optional(any) reservation_affinity = optional(any) service_account = optional(any) sole_tenant_nodegroup = optional(string) tags = optional(list(string)) taints = optional(list(object({ key = string value = string effect = string }))) })))">map(map(object({…})))</code> | | <code>{}</code> |
| [project_services](variables.tf#L219) | Additional project services to enable. | <code>list(string)</code> | | <code>[]</code> |
## Outputs
@@ -15,7 +15,7 @@
 */

 variable "billing_account_id" {
-  description = "Billing account id."
+  description = "Billing account ID."
   type        = string
 }
@@ -48,9 +48,25 @@ variable "clusters" {
     max_pods_per_node  = optional(number, 110)
     min_master_version = optional(string)
     monitoring_config = optional(object({
-      enable_components  = optional(list(string), ["SYSTEM_COMPONENTS"])
-      managed_prometheus = optional(bool)
-    }))
+      enable_system_metrics = optional(bool, true)
+
+      # (Optional) control plane metrics
+      enable_api_server_metrics         = optional(bool, false)
+      enable_controller_manager_metrics = optional(bool, false)
+      enable_scheduler_metrics          = optional(bool, false)
+
+      # (Optional) kube state metrics
+      enable_daemonset_metrics   = optional(bool, false)
+      enable_deployment_metrics  = optional(bool, false)
+      enable_hpa_metrics         = optional(bool, false)
+      enable_pod_metrics         = optional(bool, false)
+      enable_statefulset_metrics = optional(bool, false)
+      enable_storage_metrics     = optional(bool, false)
+
+      # Google Cloud Managed Service for Prometheus
+      enable_managed_prometheus = optional(bool, true)
+    }), {})
+
     node_locations         = optional(list(string))
     private_cluster_config = optional(any)
     release_channel        = optional(string)
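With the schema above, callers of the `clusters` variable opt into control plane and kube state metrics per cluster with boolean flags instead of a component list. A minimal sketch (the cluster name, location and `var.subnet_self_link` are hypothetical placeholders; the flag names and defaults come from the variable definition above):

```hcl
# Hypothetical value for the `clusters` variable under the new monitoring schema.
clusters = {
  cluster-0 = {
    location = "europe-west1"
    vpc_config = {
      subnetwork = var.subnet_self_link
    }
    monitoring_config = {
      # system metrics and managed Prometheus stay enabled by default
      enable_api_server_metrics = true
      enable_pod_metrics        = true
    }
  }
}
```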
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -102,6 +102,11 @@ module "vpc-shared" {
       ip_cidr_range = var.ip_ranges.gce
       name          = "gce"
       region        = var.region
+      iam = {
+        "roles/compute.networkUser" = concat(var.owners_gce, [
+          "serviceAccount:${module.project-svc-gce.service_accounts.cloud_services}",
+        ])
+      }
     },
     {
       ip_cidr_range = var.ip_ranges.gke
@@ -111,24 +116,17 @@ module "vpc-shared" {
         pods     = var.ip_secondary_ranges.gke-pods
         services = var.ip_secondary_ranges.gke-services
       }
+      iam = {
+        "roles/compute.networkUser" = concat(var.owners_gke, [
+          "serviceAccount:${module.project-svc-gke.service_accounts.cloud_services}",
+          "serviceAccount:${module.project-svc-gke.service_accounts.robots.container-engine}",
+        ])
+        "roles/compute.securityAdmin" = [
+          "serviceAccount:${module.project-svc-gke.service_accounts.robots.container-engine}",
+        ]
+      }
     }
   ]
-  subnet_iam = {
-    "${var.region}/gce" = {
-      "roles/compute.networkUser" = concat(var.owners_gce, [
-        "serviceAccount:${module.project-svc-gce.service_accounts.cloud_services}",
-      ])
-    }
-    "${var.region}/gke" = {
-      "roles/compute.networkUser" = concat(var.owners_gke, [
-        "serviceAccount:${module.project-svc-gke.service_accounts.cloud_services}",
-        "serviceAccount:${module.project-svc-gke.service_accounts.robots.container-engine}",
-      ])
-      "roles/compute.securityAdmin" = [
-        "serviceAccount:${module.project-svc-gke.service_accounts.robots.container-engine}",
-      ]
-    }
-  }
 }

 module "vpc-shared-firewall" {
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }
@@ -214,5 +214,5 @@ module "test" {
   }
 }

-# tftest modules=4 resources=18
+# tftest modules=4 resources=19
 ```
@@ -6,12 +6,18 @@ The blueprints in this folder show how to automate installation of specific thir

 ### OpenShift cluster bootstrap on Shared VPC

-<a href="./openshift/" title="HubOpenShift bootstrap example"><img src="./openshift/diagram.png" align="left" width="280px"></a> This [example](./openshift/) shows how to quickly bootstrap an OpenShift 4.7 cluster on GCP, using typical enterprise features like Shared VPC and CMEK for instance disks.
+<a href="./openshift/" title="HubOpenShift bootstrap example"><img src="./openshift/diagram.png" align="left" width="320px"></a> <p style="margin-left: 340px"> This [example](./openshift/) shows how to quickly bootstrap an OpenShift 4.7 cluster on GCP, using typical enterprise features like Shared VPC and CMEK for instance disks. </p>

 <br clear="left">

 ### Wordpress deployment on Cloud Run

-<a href="./wordpress/cloudrun/" title="Wordpress deployment on Cloud Run"><img src="./wordpress/cloudrun/architecture.png" align="left" width="280px"></a> This [example](./wordpress/cloudrun/) shows how to deploy a functioning new Wordpress website exposed to the public internet via CloudRun and Cloud SQL, with minimal technical overhead.
+<a href="./wordpress/cloudrun/" title="Wordpress deployment on Cloud Run"><img src="./wordpress/cloudrun/images/architecture.png" align="left" width="320px"></a> <p style="margin-left: 340px"> This [example](./wordpress/cloudrun/) shows how to deploy a functioning new Wordpress website exposed to the public internet via CloudRun and Cloud SQL, with minimal technical overhead. </p>

 <br clear="left">

+### Serverless phpIPAM on Cloud Run
+
+<a href="./phpipam/" title="phpIPAM bootstrap example"><img src="./phpipam/images/phpipam.png" align="left" width="320px"></a> <p style="margin-left: 340px">This [example](./phpipam/) shows how to quickly bootstrap a serverless phpIPAM instance on GCP using Cloud Run. This comes with typical enterprise features like Shared VPC, Cloud Armor with IAP and, possibly, private exposure via Internal Application Load Balancer. Indeed, the script supports deploying the application either publicly via Global Application Load Balancer with restricted access based on IPs (Cloud Armor) and identities (Identity Aware Proxy) or privately via Internal Application Load Balancer.</p>
+
+<br clear="left">
@@ -1,4 +1,4 @@
-# Copyright 2022 Google LLC
+# Copyright 2023 Google LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }
@@ -0,0 +1,239 @@
# Serverless phpIPAM on Cloud Run

[phpIPAM](https://phpipam.net/) is an open-source IP address management (IPAM)
system that can be used to manage IP addresses in both on-premises and cloud
environments. It is a powerful tool that can help businesses automate IP
address management, proactively identify and resolve IP address conflicts, and
plan for future IP address needs.

This repository aims to speed up deployment of the phpIPAM software on Cloud
Run, Google Cloud's serverless container product. The web application can be
exposed either publicly via a Global Application Load Balancer or internally
via an Internal Application Load Balancer. More information is available in the
architecture section below.
## Architecture

![Serverless phpIPAM on Cloud Run](images/phpipam.png "Serverless phpIPAM on Cloud Run")

The main components deployed in this architecture are the following (you can
learn about them by following the hyperlinks):

- [Cloud Run](https://cloud.google.com/run): serverless PaaS offering to host
  containers for web-oriented applications, while offering security, scalability
  and easy versioning
- [Cloud SQL](https://cloud.google.com/sql): managed solution for SQL databases
- [Serverless VPC Access connector](https://cloud.google.com/vpc/docs/serverless-vpc-access):
  solution to reach the Cloud SQL VPC from Cloud Run, using only internal IP
  addresses
- [Global Application Load Balancer](https://cloud.google.com/load-balancing/docs/https) (\*):
  an external Application Load Balancer is a proxy-based Layer 7 load balancer
  that enables you to run and scale your services behind a single external IP
  address
- [Cloud Armor](https://cloud.google.com/armor/docs/cloud-armor-overview) (\*):
  helps protect your applications and websites against denial-of-service and
  web attacks
- [Identity-Aware Proxy](https://cloud.google.com/iap/docs/concepts-overview) (\*):
  IAP lets you establish a central authorization layer for applications accessed
  over HTTPS, so you can use an application-level access control model instead
  of relying on network-level firewalls
- [Regional Internal Application Load Balancer](https://cloud.google.com/load-balancing/docs/l7-internal) (\*):
  a regional proxy-based Layer 7 load balancer that enables you to expose your
  services behind a single internal IP address

> (\*) Deployment of these products depends on input variables
## Setup

### Prerequisites

#### Setting up the project for the deployment

This example deploys all of its resources into the project defined by the
`project_id` variable. We assume this project already exists; however, if you
provide the appropriate values in the `project_create` variable, the project
will be created as part of the deployment.

If `project_create` is left null, the identity performing the deployment needs
the `owner` role on the project defined by the `project_id` variable.
Otherwise, the identity performing the deployment needs
`resourcemanager.projectCreator` on the resource hierarchy node specified by
`project_create.parent` and `billing.user` on the billing account specified by
`project_create.billing_account_id`.
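The two modes above translate to `terraform.tfvars` roughly as follows. All values are placeholders; only the `parent` and `billing_account_id` attribute names come from the text above:

```hcl
# Option A: reference an existing project (requires `owner` on it).
project_id     = "my-existing-project"
project_create = null

# Option B: create the project as part of the deployment.
# project_id = "my-new-project"
# project_create = {
#   parent             = "folders/1234567890"       # hierarchy node
#   billing_account_id = "ABCDEF-123456-ABCDEF"     # billing account
# }
```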
### Deployment

#### Step 0: Cloning the repository

If you want to deploy from your Cloud Shell, click on the image below, sign in
if required, and when the prompt appears, click on "confirm".

[![Open Cloudshell](../../../assets/images/cloud-shell-button.png)](https://shell.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fcloud-foundation-fabric&cloudshell_workspace=blueprints%2Fthird-party-solutions%2Fwordpress%2Fcloudrun)

Otherwise, in your console of choice:

```bash
git clone https://github.com/GoogleCloudPlatform/cloud-foundation-fabric
```

Before you deploy the architecture, you will need at least the following
information (for more precise configuration see the Variables section):

* The project ID.
#### Step 2: Prepare the variables

Once you have the required information, head back to your cloned repository and
make sure you're in the directory of this tutorial (where this README is
located).

Configure the Terraform variables in your `terraform.tfvars` file. See
[terraform.tfvars.sample](terraform.tfvars.sample) as a starting point: copy it
to `terraform.tfvars` and edit the latter. See the variables documentation
below.
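A minimal `terraform.tfvars` sketch. The values are placeholders; `project_id` and `prefix` are the required variables from the table below, and `phpipam_exposure` is the optional setting described in note 3:

```hcl
# terraform.tfvars — minimal illustrative configuration
project_id = "my-project-id"
prefix     = "phpipam"

# Optional: deploy behind the internal load balancer instead of the global one.
# phpipam_exposure = "INTERNAL"
```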
**Notes**:

1. If you have the
   [domain restriction org. policy](https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains)
   in your organization, you have to edit the `cloud_run_invoker` variable and
   give it a value that will be accepted in accordance with your policy.
2. By default, the application is exposed externally through a Global
   Application Load Balancer. To restrict access to specific identities, check
   the IAP configuration, or deploy the application internally via the internal
   load balancer.
3. Setting the `phpipam_exposure` variable to "INTERNAL" will deploy an
   Internal Application Load Balancer on the same VPC. This might be the
   preferred option for enterprises, since it avoids exposing the application
   publicly while still allowing internal access through the private network
   (via VPN and/or Interconnect).
#### Step 3: Deploy resources

Initialize your Terraform environment and deploy the resources:

```shell
terraform init
terraform apply
```

#### Step 4: Use the created resources

Upon completion, you will see the output with the values for the Cloud Run
service and the user and password to access the application.
You can also view them later with:

```shell
terraform output
# or for a specific output:
terraform output cloud_run_service
```

Please be aware that the password created by the script is not yet configured
in the application; you will be prompted to enter it during the phpIPAM
installation process at first login.
To access the newly deployed application, follow these instructions:

1. Get the default phpIPAM URL from the Terraform output, in the form
   {IP_ADDRESS}.nip.io
2. Open your browser at that URL and you will see a phpIPAM installation page
   like the following one:

   ![phpIPAM Installation page](images/phpipam_install.png "phpIPAM installation page")
3. Click on "New phpipam installation". On the next page click "Automatic
   database installation", and you will be presented with the following form:

![phpIPAM DB install](images/phpipam_db.png "phpIPAM DB installation")

4. Enter "admin" as the MySQL username and the password shown by the command
   below (without quotes). Untick "Create new database", otherwise you'll get
   an error during installation; leave all the other values at their defaults
   and then click on "Install phpipam database".

```shell
terraform output cloudsql_password
```

5. After some time a "Database installed successfully!" message should pop up.
   Then click "continue" and you'll be presented with the last form, for
   configuring the admin credentials:

![phpIPAM Admin setup](images/phpipam_admin.png "phpIPAM admin setup")

6. Enter the phpIPAM password available in the output of the command below and
   choose a site title. Then enter the site URL and click "Save settings". A
   "Settings updated, installation complete!" message should pop up, and
   clicking "Proceed to login" will redirect you to the login page. Be aware
   this is just a convenient way to have a backup of the admin password in
   Terraform; you could use whatever password you prefer.

```shell
terraform output phpipam_password
```

7. Enter "admin" as the username and the password configured in the previous
   step; after login you'll finally get to the phpIPAM homepage.

![phpIPAM Homepage](images/phpipam_home.png "phpIPAM Homepage")

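Before opening the application to a wider audience, public exposure can be hardened through the blueprint's `iap` and `security_policy` variables; a hypothetical tfvars fragment (the email and CIDR are placeholders) could look like:

```hcl
# Hypothetical hardening settings; adjust the email and IP ranges to your environment.
iap = {
  enabled = true
  email   = "admin@example.com"
}
security_policy = {
  enabled      = true
  ip_blacklist = ["203.0.113.0/24"]
}
```
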
### Cleaning up your environment

The easiest way to remove all the deployed resources is to run the following
command in Cloud Shell:

```shell
terraform destroy
```

The above command deletes all the associated resources, so no billable charges
will be incurred afterwards.

<!-- BEGIN TFDOC -->
## Variables

| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [prefix](variables.tf#L109) | Prefix used for resource names. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L128) | Project id, references existing project if `project_create` is null. | <code>string</code> | ✓ | |
| [admin_principals](variables.tf#L19) | Users, groups and/or service accounts that are assigned roles, in IAM format (`group:foo@example.com`). | <code>list(string)</code> | | <code>[]</code> |
| [cloud_run_invoker](variables.tf#L25) | IAM member authorized to access the end-point (for example, 'user:YOUR_IAM_USER' for only you or 'allUsers' for everyone). | <code>string</code> | | <code>"allUsers"</code> |
| [cloudsql_password](variables.tf#L31) | CloudSQL password (will be randomly generated by default). | <code>string</code> | | <code>null</code> |
| [connector](variables.tf#L37) | Existing VPC serverless connector to use if not creating a new one. | <code>string</code> | | <code>null</code> |
| [create_connector](variables.tf#L43) | Should a VPC serverless connector be created or not. | <code>bool</code> | | <code>true</code> |
| [custom_domain](variables.tf#L49) | Cloud Run service custom domain for GLB. | <code>string</code> | | <code>null</code> |
| [iap](variables.tf#L55) | Identity-Aware Proxy for Cloud Run in the LB. | <code title="object({ enabled = optional(bool, false) app_title = optional(string, "Cloud Run Explore Application") oauth2_client_name = optional(string, "Test Client") email = optional(string) })">object({…})</code> | | <code>{}</code> |
| [ip_ranges](variables.tf#L67) | CIDR blocks: VPC serverless connector, Private Service Access (PSA) for CloudSQL, CloudSQL VPC. | <code title="object({ connector = string psa = string ilb = string })">object({…})</code> | | <code title="{ connector = "10.8.0.0/28" psa = "10.60.0.0/24" ilb = "10.128.0.0/28" }">{…}</code> |
| [phpipam_config](variables.tf#L81) | PHPIpam configuration. | <code title="object({ image = optional(string, "phpipam/phpipam-www:latest") port = optional(number, 80) })">object({…})</code> | | <code title="{ image = "phpipam/phpipam-www:latest" port = 80 }">{…}</code> |
| [phpipam_exposure](variables.tf#L93) | Whether to expose the application publicly via GLB or internally via ILB, default GLB. | <code>string</code> | | <code>"EXTERNAL"</code> |
| [phpipam_password](variables.tf#L103) | Password for the phpipam user (will be randomly generated by default). | <code>string</code> | | <code>null</code> |
| [project_create](variables.tf#L119) | Provide values if project creation is needed, uses existing project if null. Parent is in 'folders/nnn' or 'organizations/nnn' format. | <code title="object({ billing_account_id = string parent = string })">object({…})</code> | | <code>null</code> |
| [region](variables.tf#L133) | Region for the created resources. | <code>string</code> | | <code>"europe-west4"</code> |
| [security_policy](variables.tf#L139) | Security policy (Cloud Armor) to enforce in the LB. | <code title="object({ enabled = optional(bool, false) ip_blacklist = optional(list(string), ["*"]) path_blocked = optional(string, "/login.html") })">object({…})</code> | | <code>{}</code> |
| [vpc_config](variables.tf#L149) | VPC Network and subnetwork self links for internal LB setup. | <code title="object({ network = string subnetwork = string })">object({…})</code> | | <code>null</code> |

## Outputs

| name | description | sensitive |
|---|---|:---:|
| [cloud_run_service](outputs.tf#L17) | CloudRun service URL. | ✓ |
| [cloudsql_password](outputs.tf#L23) | CloudSQL password. | ✓ |
| [phpipam_ip_address](outputs.tf#L29) | PHPIPAM IP Address either external or internal according to app exposure. | |
| [phpipam_password](outputs.tf#L34) | PHPIPAM user password. | ✓ |
| [phpipam_url](outputs.tf#L40) | PHPIPAM website url. | |
| [phpipam_user](outputs.tf#L45) | PHPIPAM username. | |
<!-- END TFDOC -->
## Test

```hcl
module "test" {
  source           = "./fabric/blueprints/third-party-solutions/phpipam"
  admin_principals = ["group:foo@example.com"]
  prefix           = "test"
  project_create = {
    billing_account_id = "1234-ABCD-1234"
    parent             = "folders/1234563"
  }
  project_id = "test-prj"
}
# tftest modules=7 resources=43
```

@ -0,0 +1,31 @@
/**
 * Copyright 2023 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

# Set up CloudSQL
module "cloudsql" {
  source           = "../../../modules/cloudsql-instance"
  project_id       = module.project.project_id
  name             = "${var.prefix}-mysql"
  database_version = local.cloudsql_conf.database_version
  databases        = [local.cloudsql_conf.db]
  network          = local.network
  prefix           = var.prefix
  region           = var.region
  tier             = local.cloudsql_conf.tier
  users = {
    "${local.cloudsql_conf.user}" = var.cloudsql_password
  }
}

File diff suppressed because one or more lines are too long

@ -0,0 +1,153 @@
/**
 * Copyright 2023 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

locals {
  glb_create   = var.phpipam_exposure == "EXTERNAL"
  iap_sa_email = try(module.project.service_accounts.robots["iap"].email, "")
}

# Reserved static IP for the Load Balancer
module "addresses" {
  source           = "../../../modules/net-address"
  count            = local.glb_create ? 1 : 0
  project_id       = var.project_id
  global_addresses = ["phpipam"]
}

# Global L7 HTTPS Load Balancer in front of Cloud Run
module "glb" {
  source     = "../../../modules/net-lb-app-ext"
  count      = local.glb_create ? 1 : 0
  project_id = module.project.project_id
  name       = "phpipam-glb"
  address    = module.addresses.0.global_addresses["phpipam"].address
  protocol   = "HTTPS"

  backend_service_configs = {
    default = {
      backends = [
        { backend = "phpipam" }
      ]
      health_checks   = []
      port_name       = "http"
      security_policy = try(google_compute_security_policy.policy[0].name, null)
      iap_config = try({
        oauth2_client_id     = google_iap_client.iap_client[0].client_id,
        oauth2_client_secret = google_iap_client.iap_client[0].secret
      }, null)
    }
  }
  health_check_configs = {}
  neg_configs = {
    phpipam = {
      cloudrun = {
        region = var.region
        target_service = {
          name = module.cloud_run.service_name
        }
      }
    }
  }
  ssl_certificates = {
    managed_configs = {
      default = {
        domains = [local.domain]
      }
    }
  }
}

# Cloud Armor configuration
resource "google_compute_security_policy" "policy" {
  count   = local.glb_create && var.security_policy.enabled ? 1 : 0
  project = module.project.project_id
  name    = "cloud-run-policy"

  rule {
    action   = "deny(403)"
    priority = 1000
    match {
      versioned_expr = "SRC_IPS_V1"
      config {
        src_ip_ranges = var.security_policy.ip_blacklist
      }
    }
    description = "Deny access to list of IPs"
  }
  rule {
    action   = "deny(403)"
    priority = 900
    match {
      expr {
        expression = "request.path.matches(\"${var.security_policy.path_blocked}\")"
      }
    }
    description = "Deny access to specific URL paths"
  }
  rule {
    action   = "allow"
    priority = "2147483647"
    match {
      versioned_expr = "SRC_IPS_V1"
      config {
        src_ip_ranges = ["*"]
      }
    }
    description = "Default rule"
  }
}

# Identity-Aware Proxy (IAP) or OAuth brand (see OAuth consent screen)
# Note:
# Only "Organization Internal" brands can be created programmatically
# via API. To convert it into an external brand please use the GCP
# Console.
# Brands can only be created once for a Google Cloud project and the
# underlying Google API doesn't support DELETE or PATCH methods.
# Destroying a Terraform-managed Brand will remove it from state but
# will not delete it from Google Cloud.
resource "google_iap_brand" "iap_brand" {
  count   = local.glb_create && var.iap.enabled ? 1 : 0
  project = module.project.project_id
  # Support email displayed on the OAuth consent screen. The caller must be
  # the user with the associated email address, or if a group email is
  # specified, the caller can be either a user or a service account which
  # is an owner of the specified group in Cloud Identity.
  support_email     = var.iap.email
  application_title = var.iap.app_title
}

# IAP owned OAuth2 client
# Note:
# Only internal org clients can be created via declarative tools.
# External clients must be manually created via the GCP console.
# Warning:
# All arguments including secret will be stored in the raw state as plain-text.
resource "google_iap_client" "iap_client" {
  count        = local.glb_create && var.iap.enabled ? 1 : 0
  display_name = var.iap.oauth2_client_name
  brand        = google_iap_brand.iap_brand[0].name
}

# IAM policy for IAP
# For simplicity we use the same email as support_email and authorized member
resource "google_iap_web_iam_member" "iap_iam" {
  count   = local.glb_create && var.iap.enabled ? 1 : 0
  project = module.project.project_id
  role    = "roles/iap.httpsResourceAccessor"
  member  = "user:${var.iap.email}"
}

@ -0,0 +1,89 @@
/**
 * Copyright 2023 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

locals {
  ilb_create = var.phpipam_exposure == "INTERNAL"
}

# default ssl certificate
resource "tls_private_key" "default" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "tls_self_signed_cert" "default" {
  private_key_pem       = tls_private_key.default.private_key_pem
  validity_period_hours = 720
  allowed_uses = [
    "key_encipherment",
    "digital_signature",
    "server_auth",
  ]
  subject {
    common_name  = local.domain
    organization = "ACME Examples, Inc"
  }
}

module "ilb-l7" {
  source     = "../../../modules/net-lb-app-int"
  count      = local.ilb_create ? 1 : 0
  project_id = var.project_id
  name       = "ilb-l7-cr"
  protocol   = "HTTPS"
  region     = var.region

  backend_service_configs = {
    default = {
      project_id = var.project_id
      backends = [
        {
          group = "phpipam"
        }
      ]
      health_checks = []
    }
  }
  health_check_configs = {
    default = {
      https = { port = 443 }
    }
  }
  neg_configs = {
    phpipam = {
      project_id = var.project_id
      cloudrun = {
        region = var.region
        target_service = {
          name = module.cloud_run.service_name
        }
      }
    }
  }
  ssl_certificates = {
    create_configs = {
      default = {
        # certificate and key could also be read via file() from external files
        certificate = tls_self_signed_cert.default.cert_pem
        private_key = tls_private_key.default.private_key_pem
      }
    }
  }
  vpc_config = {
    network    = local.network
    subnetwork = local.subnetwork
  }
}

Binary file not shown.
After Width: | Height: | Size: 291 KiB |
Binary file not shown.
After Width: | Height: | Size: 1.5 MiB |
Binary file not shown.
After Width: | Height: | Size: 2.1 MiB |
Binary file not shown.
After Width: | Height: | Size: 3.6 MiB |
Binary file not shown.
After Width: | Height: | Size: 1.7 MiB |
@ -0,0 +1,144 @@
/**
 * Copyright 2023 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

locals {
  cloudsql_conf = {
    database_version = "MYSQL_8_0"
    tier             = "db-g1-small"
    db               = "phpipam"
    user             = "admin"
  }
  connector = var.connector == null ? module.cloud_run.vpc_connector : var.connector
  domain = (
    var.custom_domain != null ? var.custom_domain : (
      var.phpipam_exposure == "EXTERNAL" ?
      "${module.addresses.0.global_addresses["phpipam"].address}.nip.io" : "phpipam.internal")
  )
  iam = {
    # CloudSQL
    "roles/cloudsql.admin"        = var.admin_principals
    "roles/cloudsql.client"       = var.admin_principals
    "roles/cloudsql.instanceUser" = var.admin_principals
    # common roles
    "roles/logging.admin"                  = var.admin_principals
    "roles/iam.serviceAccountUser"         = var.admin_principals
    "roles/iam.serviceAccountTokenCreator" = var.admin_principals
  }
  network          = var.vpc_config == null ? module.vpc.0.self_link : var.vpc_config.network
  phpipam_password = var.phpipam_password == null ? random_password.phpipam_password.result : var.phpipam_password
  subnetwork       = var.vpc_config == null ? module.vpc.0.subnet_self_links["${var.region}/ilb"] : var.vpc_config.subnetwork
}

# either create a project or set up the given one
module "project" {
  source          = "../../../modules/project"
  billing_account = try(var.project_create.billing_account_id, null)
  iam             = var.project_create != null ? local.iam : {}
  name            = var.project_id
  parent          = try(var.project_create.parent, null)
  prefix          = var.project_create == null ? null : var.prefix
  project_create  = var.project_create != null
  services = [
    "iap.googleapis.com",
    "logging.googleapis.com",
    "monitoring.googleapis.com",
    "run.googleapis.com",
    "servicenetworking.googleapis.com",
    "sqladmin.googleapis.com",
    "sql-component.googleapis.com",
    "vpcaccess.googleapis.com"
  ]
}

# create a VPC for CloudSQL and ILB
module "vpc" {
  source     = "../../../modules/net-vpc"
  count      = var.vpc_config == null ? 1 : 0
  project_id = module.project.project_id
  name       = "${var.prefix}-sql-vpc"

  psa_config = {
    ranges = {
      cloud-sql = var.ip_ranges.psa
    }
  }
  subnets = [
    {
      ip_cidr_range = var.ip_ranges.ilb
      name          = "ilb"
      region        = var.region
    }
  ]
}

resource "random_password" "phpipam_password" {
  length = 8
}

# create the Cloud Run service
module "cloud_run" {
  source           = "../../../modules/cloud-run"
  project_id       = module.project.project_id
  name             = "${var.prefix}-cr-phpipam"
  prefix           = var.prefix
  ingress_settings = "all"
  region           = var.region

  containers = {
    phpipam = {
      image = var.phpipam_config.image
      ports = {
        http = {
          name           = "http1"
          protocol       = null
          container_port = var.phpipam_config.port
        }
      }
      env_from = null
      # set up the database connection
      env = {
        "TZ"                 = "Europe/Rome"
        "IPAM_DATABASE_HOST" = module.cloudsql.ip
        "IPAM_DATABASE_USER" = local.cloudsql_conf.user
        "IPAM_DATABASE_PASS" = var.cloudsql_password == null ? module.cloudsql.user_passwords[local.cloudsql_conf.user] : var.cloudsql_password
        "IPAM_DATABASE_NAME" = local.cloudsql_conf.db
        "IPAM_DATABASE_PORT" = "3306"
      }
    }
  }
  iam = local.glb_create && var.iap.enabled ? {
    "roles/run.invoker" : ["serviceAccount:${local.iap_sa_email}"]
    } : {
    "roles/run.invoker" : [var.cloud_run_invoker]
  }
  revision_annotations = {
    autoscaling = {
      min_scale = 1
      max_scale = 2
    }
    # connect to CloudSQL
    cloudsql_instances = [module.cloudsql.connection_name]
    # route only private-range traffic through the connector
    vpcaccess_egress    = "private-ranges-only"
    vpcaccess_connector = local.connector
  }
  vpc_connector_create = var.create_connector ? {
    ip_cidr_range = var.ip_ranges.connector
    vpc_self_link = local.network
  } : null
}

@ -0,0 +1,48 @@
/**
 * Copyright 2023 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

output "cloud_run_service" {
  description = "CloudRun service URL."
  value       = module.cloud_run.service.status[0].url
  sensitive   = true
}

output "cloudsql_password" {
  description = "CloudSQL password."
  value       = var.cloudsql_password == null ? module.cloudsql.user_passwords[local.cloudsql_conf.user] : var.cloudsql_password
  sensitive   = true
}

output "phpipam_ip_address" {
  description = "PHPIPAM IP Address either external or internal according to app exposure."
  value       = local.glb_create ? module.addresses.0.global_addresses["phpipam"].address : module.ilb-l7.0.address
}

output "phpipam_password" {
  description = "PHPIPAM user password."
  value       = local.phpipam_password
  sensitive   = true
}

output "phpipam_url" {
  description = "PHPIPAM website url."
  value       = local.domain
}

output "phpipam_user" {
  description = "PHPIPAM username."
  value       = "admin"
}

@ -0,0 +1,2 @@
prefix     = "phpipam"
project_id = "my-phpipam-project"

@ -0,0 +1,156 @@
/**
 * Copyright 2023 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

# Documentation: https://cloud.google.com/run/docs/securing/managing-access#making_a_service_public

variable "admin_principals" {
  description = "Users, groups and/or service accounts that are assigned roles, in IAM format (`group:foo@example.com`)."
  type        = list(string)
  default     = []
}

variable "cloud_run_invoker" {
  description = "IAM member authorized to access the end-point (for example, 'user:YOUR_IAM_USER' for only you or 'allUsers' for everyone)."
  type        = string
  default     = "allUsers"
}

variable "cloudsql_password" {
  description = "CloudSQL password (will be randomly generated by default)."
  type        = string
  default     = null
}

variable "connector" {
  description = "Existing VPC serverless connector to use if not creating a new one."
  type        = string
  default     = null
}

variable "create_connector" {
  description = "Should a VPC serverless connector be created or not."
  type        = bool
  default     = true
}

variable "custom_domain" {
  description = "Cloud Run service custom domain for GLB."
  type        = string
  default     = null
}

variable "iap" {
  description = "Identity-Aware Proxy for Cloud Run in the LB."
  type = object({
    enabled            = optional(bool, false)
    app_title          = optional(string, "Cloud Run Explore Application")
    oauth2_client_name = optional(string, "Test Client")
    email              = optional(string)
  })
  default = {}
}

# PSA: documentation: https://cloud.google.com/vpc/docs/configure-private-services-access#allocating-range
variable "ip_ranges" {
  description = "CIDR blocks: VPC serverless connector, Private Service Access (PSA) for CloudSQL, CloudSQL VPC."
  type = object({
    connector = string
    psa       = string
    ilb       = string
  })
  default = {
    connector = "10.8.0.0/28"
    psa       = "10.60.0.0/24"
    ilb       = "10.128.0.0/28"
  }
}

variable "phpipam_config" {
  description = "PHPIpam configuration."
  type = object({
    image = optional(string, "phpipam/phpipam-www:latest")
    port  = optional(number, 80)
  })
  default = {
    image = "phpipam/phpipam-www:latest"
    port  = 80
  }
}

variable "phpipam_exposure" {
  description = "Whether to expose the application publicly via GLB or internally via ILB, default GLB."
  type        = string
  default     = "EXTERNAL"
  validation {
    condition     = var.phpipam_exposure == "INTERNAL" || var.phpipam_exposure == "EXTERNAL"
    error_message = "phpipam_exposure supports only 'INTERNAL' or 'EXTERNAL'"
  }
}

variable "phpipam_password" {
  description = "Password for the phpipam user (will be randomly generated by default)."
  type        = string
  default     = null
}

variable "prefix" {
  description = "Prefix used for resource names."
  type        = string
  nullable    = false
  validation {
    condition     = var.prefix != ""
    error_message = "Prefix cannot be empty."
  }
}

variable "project_create" {
  description = "Provide values if project creation is needed, uses existing project if null. Parent is in 'folders/nnn' or 'organizations/nnn' format."
  type = object({
    billing_account_id = string
    parent             = string
  })
  default = null
}

variable "project_id" {
  description = "Project id, references existing project if `project_create` is null."
  type        = string
}

variable "region" {
  description = "Region for the created resources."
  type        = string
  default     = "europe-west4"
}

variable "security_policy" {
  description = "Security policy (Cloud Armor) to enforce in the LB."
  type = object({
    enabled      = optional(bool, false)
    ip_blacklist = optional(list(string), ["*"])
    path_blocked = optional(string, "/login.html")
  })
  default = {}
}

variable "vpc_config" {
  description = "VPC Network and subnetwork self links for internal LB setup."
  type = object({
    network    = string
    subnetwork = string
  })
  default = null
}

@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.80.0" # tftest
+      version = ">= 4.82.0" # tftest
     }
   }
 }

@ -88,8 +88,9 @@ module "organization" {
   )
   # delegated role grant for resource manager service account
   iam_bindings = {
-    (module.organization.custom_role_id[var.custom_role_names.organization_iam_admin]) = {
+    organization_iam_admin_conditional = {
       members = [module.automation-tf-resman-sa.iam_email]
+      role    = module.organization.custom_role_id[var.custom_role_names.organization_iam_admin]
       condition = {
         expression = format(
           "api.getAttribute('iam.googleapis.com/modifiedGrantsByRole', []).hasOnly([%s])",

@ -223,9 +223,9 @@ locals {
   tfvars = {
     folder_ids       = local.folder_ids
     service_accounts = local.service_accounts
-    tag_keys         = { for k, v in module.organization.tag_keys : k => v.id }
+    tag_keys         = { for k, v in try(module.organization.tag_keys, {}) : k => v.id }
     tag_names        = var.tag_names
-    tag_values       = { for k, v in module.organization.tag_values : k => v.id }
+    tag_values       = { for k, v in try(module.organization.tag_values, {}) : k => v.id }
   }
 }

@ -406,10 +406,10 @@ DNS configurations are centralised in the `dns-*.tf` files. Spokes delegate DNS
 | [factories_config](variables.tf#L80) | Configuration for network resource factories. | <code title="object({ data_dir = optional(string, "data") dns_policy_rules_file = optional(string, "data/dns-policy-rules.yaml") firewall_policy_name = optional(string, "net-default") })">object({…})</code> | | <code title="{ data_dir = "data" }">{…}</code> | |
 | [outputs_location](variables.tf#L121) | Path where providers and tfvars files for the following stages are written. Leave empty to disable. | <code>string</code> | | <code>null</code> | |
 | [peering_configs](variables-peerings.tf#L19) | Peering configurations. | <code title="object({ dev = optional(object({ export = optional(bool, true) import = optional(bool, true) public_export = optional(bool) public_import = optional(bool) }), {}) prod = optional(object({ export = optional(bool, true) import = optional(bool, true) public_export = optional(bool) public_import = optional(bool) }), {}) })">object({…})</code> | | <code>{}</code> | |
-| [psa_ranges](variables.tf#L138) | IP ranges used for Private Service Access (CloudSQL, etc.). | <code title="object({ dev = object({ ranges = map(string) routes = object({ export = bool import = bool }) }) prod = object({ ranges = map(string) routes = object({ export = bool import = bool }) }) })">object({…})</code> | | <code>null</code> | |
-| [regions](variables.tf#L159) | Region definitions. | <code title="object({ primary = string secondary = string })">object({…})</code> | | <code title="{ primary = "europe-west1" secondary = "europe-west4" }">{…}</code> | |
-| [service_accounts](variables.tf#L171) | Automation service accounts in name => email format. | <code title="object({ data-platform-dev = string data-platform-prod = string gke-dev = string gke-prod = string project-factory-dev = string project-factory-prod = string })">object({…})</code> | | <code>null</code> | <code>1-resman</code> |
-| [vpn_onprem_primary_config](variables.tf#L185) | VPN gateway configuration for onprem interconnection in the primary region. | <code title="object({ peer_external_gateways = map(object({ redundancy_type = string interfaces = list(string) })) router_config = object({ create = optional(bool, true) asn = number name = optional(string) keepalive = optional(number) custom_advertise = optional(object({ all_subnets = bool ip_ranges = map(string) })) }) tunnels = map(object({ bgp_peer = object({ address = string asn = number route_priority = optional(number, 1000) custom_advertise = optional(object({ all_subnets = bool all_vpc_subnets = bool all_peer_vpc_subnets = bool ip_ranges = map(string) })) }) bgp_session_range = string ike_version = optional(number, 2) peer_external_gateway_interface = optional(number) peer_gateway = optional(string, "default") router = optional(string) shared_secret = optional(string) vpn_gateway_interface = number })) })">object({…})</code> | | <code>null</code> | |
+| [psa_ranges](variables.tf#L138) | IP ranges used for Private Service Access (CloudSQL, etc.). | <code title="object({ dev = object({ ranges = map(string) export_routes = optional(bool, false) import_routes = optional(bool, false) }) prod = object({ ranges = map(string) export_routes = optional(bool, false) import_routes = optional(bool, false) }) })">object({…})</code> | | <code>null</code> | |
+| [regions](variables.tf#L155) | Region definitions. | <code title="object({ primary = string secondary = string })">object({…})</code> | | <code title="{ primary = "europe-west1" secondary = "europe-west4" }">{…}</code> | |
+| [service_accounts](variables.tf#L167) | Automation service accounts in name => email format. | <code title="object({ data-platform-dev = string data-platform-prod = string gke-dev = string gke-prod = string project-factory-dev = string project-factory-prod = string })">object({…})</code> | | <code>null</code> | <code>1-resman</code> |
+| [vpn_onprem_primary_config](variables.tf#L181) | VPN gateway configuration for onprem interconnection in the primary region. | <code title="object({ peer_external_gateways = map(object({ redundancy_type = string interfaces = list(string) })) router_config = object({ create = optional(bool, true) asn = number name = optional(string) keepalive = optional(number) custom_advertise = optional(object({ all_subnets = bool ip_ranges = map(string) })) }) tunnels = map(object({ bgp_peer = object({ address = string asn = number route_priority = optional(number, 1000) custom_advertise = optional(object({ all_subnets = bool all_vpc_subnets = bool all_peer_vpc_subnets = bool ip_ranges = map(string) })) }) bgp_session_range = string ike_version = optional(number, 2) peer_external_gateway_interface = optional(number) peer_gateway = optional(string, "default") router = optional(string) shared_secret = optional(string) vpn_gateway_interface = number })) })">object({…})</code> | | <code>null</code> | |
|
||||
## Outputs
|
||||
|
||||
|
|
|
```diff
@@ -4,5 +4,5 @@ region: europe-west1
 description: Default subnet for dev Data Platform
 ip_cidr_range: 10.127.48.0/24
 secondary_ip_ranges:
-  pods: 100.64.0.0/24
+  pods: 100.64.0.0/16
   services: 100.64.1.0/24
```
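Assembled from the hunk above, the subnet definition carried by this factory file now reads as follows (fragment only; the file name and any keys preceding `region` are not shown in the diff, so the name below is an assumption):

```yaml
# e.g. data/subnets/dev/dev-dataplatform.yaml (file name is hypothetical;
# subnet factories typically derive the subnet name from the file name)
region: europe-west1
description: Default subnet for dev Data Platform
ip_cidr_range: 10.127.48.0/24
secondary_ip_ranges:
  pods: 100.64.0.0/16      # enlarged from /24 to /16 by this change
  services: 100.64.1.0/24
```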
```diff
@@ -4,5 +4,5 @@ region: europe-west1
 description: Default subnet for prod gke nodes
 ip_cidr_range: 10.127.49.0/24
 secondary_ip_ranges:
-  pods: 100.65.0.0/24
+  pods: 100.65.0.0/16
   services: 100.65.1.0/24
```
```diff
@@ -55,7 +55,9 @@ module "landing-vpc" {
     private    = true
     restricted = true
   }
-  data_folder = "${var.factories_config.data_dir}/subnets/landing"
+  factories_config = {
+    subnets_folder = "${var.factories_config.data_dir}/subnets/landing"
+  }
 }
 
 module "landing-firewall" {
```
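The hunk above replaces the module's deprecated `data_folder` attribute with a `factories_config` block. A minimal sketch of a `net-vpc` call using the new interface; the project id, VPC name, and folder path below are made-up values, not from the PR:

```hcl
# Sketch only: inputs mirror the diff above, values are hypothetical.
module "example-vpc" {
  source     = "../../../modules/net-vpc"
  project_id = "my-net-project"
  name       = "example-vpc-0"
  factories_config = {
    # each YAML file in this folder defines one subnet
    subnets_folder = "data/subnets/example"
  }
}
```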
```diff
@@ -46,12 +46,14 @@ module "dev-spoke-project" {
 }
 
 module "dev-spoke-vpc" {
-  source      = "../../../modules/net-vpc"
-  project_id  = module.dev-spoke-project.project_id
-  name        = "dev-spoke-0"
-  mtu         = 1500
-  data_folder = "${var.factories_config.data_dir}/subnets/dev"
-  psa_config  = try(var.psa_ranges.dev, null)
+  source     = "../../../modules/net-vpc"
+  project_id = module.dev-spoke-project.project_id
+  name       = "dev-spoke-0"
+  mtu        = 1500
+  factories_config = {
+    subnets_folder = "${var.factories_config.data_dir}/subnets/dev"
+  }
+  psa_config = try(var.psa_ranges.dev, null)
   # set explicit routes for googleapis in case the default route is deleted
   create_googleapis_routes = {
     private = true
```
```diff
@@ -45,12 +45,14 @@ module "prod-spoke-project" {
 }
 
 module "prod-spoke-vpc" {
-  source      = "../../../modules/net-vpc"
-  project_id  = module.prod-spoke-project.project_id
-  name        = "prod-spoke-0"
-  mtu         = 1500
-  data_folder = "${var.factories_config.data_dir}/subnets/prod"
-  psa_config  = try(var.psa_ranges.prod, null)
+  source     = "../../../modules/net-vpc"
+  project_id = module.prod-spoke-project.project_id
+  name       = "prod-spoke-0"
+  mtu        = 1500
+  factories_config = {
+    subnets_folder = "${var.factories_config.data_dir}/subnets/prod"
+  }
+  psa_config = try(var.psa_ranges.prod, null)
   # set explicit routes for googleapis in case the default route is deleted
   create_googleapis_routes = {
     private = true
```
```diff
@@ -139,18 +139,14 @@ variable "psa_ranges" {
   description = "IP ranges used for Private Service Access (CloudSQL, etc.)."
   type = object({
     dev = object({
-      ranges = map(string)
-      routes = object({
-        export = bool
-        import = bool
-      })
+      ranges        = map(string)
+      export_routes = optional(bool, false)
+      import_routes = optional(bool, false)
     })
     prod = object({
-      ranges = map(string)
-      routes = object({
-        export = bool
-        import = bool
-      })
+      ranges        = map(string)
+      export_routes = optional(bool, false)
+      import_routes = optional(bool, false)
     })
   })
   default = null
```
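Under the reworked schema, route export and import become flat boolean attributes defaulting to `false` instead of a required nested `routes` object. A hypothetical `terraform.tfvars` fragment for the new shape; the range names and CIDRs are illustrative, not taken from the PR:

```hcl
# Illustrative values only; per the new type, only `ranges` is required
# in each environment, the route flags are optional(bool, false).
psa_ranges = {
  dev = {
    ranges        = { cloudsql = "10.60.0.0/24" }
    export_routes = true
    # import_routes is omitted and defaults to false
  }
  prod = {
    ranges = { cloudsql = "10.62.0.0/24" }
  }
}
```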
```diff
@@ -430,11 +430,11 @@ DNS configurations are centralised in the `dns-*.tf` files. Spokes delegate DNS
 | [dns](variables.tf#L72) | Onprem DNS resolvers. | <code>map(list(string))</code> | | <code title="{ onprem = ["10.0.200.3"] }">{…}</code> | |
 | [factories_config](variables.tf#L80) | Configuration for network resource factories. | <code title="object({ data_dir = optional(string, "data") dns_policy_rules_file = optional(string, "data/dns-policy-rules.yaml") firewall_policy_name = optional(string, "net-default") })">object({…})</code> | | <code title="{ data_dir = "data" }">{…}</code> | |
 | [outputs_location](variables.tf#L121) | Path where providers and tfvars files for the following stages are written. Leave empty to disable. | <code>string</code> | | <code>null</code> | |
-| [psa_ranges](variables.tf#L138) | IP ranges used for Private Service Access (CloudSQL, etc.). | <code title="object({ dev = object({ ranges = map(string) routes = object({ export = bool import = bool }) }) prod = object({ ranges = map(string) routes = object({ export = bool import = bool }) }) })">object({…})</code> | | <code>null</code> | |
-| [regions](variables.tf#L159) | Region definitions. | <code title="object({ primary = string secondary = string })">object({…})</code> | | <code title="{ primary = "europe-west1" secondary = "europe-west4" }">{…}</code> | |
-| [service_accounts](variables.tf#L171) | Automation service accounts in name => email format. | <code title="object({ data-platform-dev = string data-platform-prod = string gke-dev = string gke-prod = string project-factory-dev = string project-factory-prod = string })">object({…})</code> | | <code>null</code> | <code>1-resman</code> |
+| [psa_ranges](variables.tf#L138) | IP ranges used for Private Service Access (CloudSQL, etc.). | <code title="object({ dev = object({ ranges = map(string) export_routes = optional(bool, false) import_routes = optional(bool, false) }) prod = object({ ranges = map(string) export_routes = optional(bool, false) import_routes = optional(bool, false) }) })">object({…})</code> | | <code>null</code> | |
+| [regions](variables.tf#L155) | Region definitions. | <code title="object({ primary = string secondary = string })">object({…})</code> | | <code title="{ primary = "europe-west1" secondary = "europe-west4" }">{…}</code> | |
+| [service_accounts](variables.tf#L167) | Automation service accounts in name => email format. | <code title="object({ data-platform-dev = string data-platform-prod = string gke-dev = string gke-prod = string project-factory-dev = string project-factory-prod = string })">object({…})</code> | | <code>null</code> | <code>1-resman</code> |
 | [vpn_configs](variables-vpn.tf#L17) | Hub to spokes VPN configurations. | <code title="object({ dev = object({ asn = number custom_advertise = optional(object({ all_subnets = bool ip_ranges = map(string) })) }) landing = object({ asn = number custom_advertise = optional(object({ all_subnets = bool ip_ranges = map(string) })) }) prod = object({ asn = number custom_advertise = optional(object({ all_subnets = bool ip_ranges = map(string) })) }) })">object({…})</code> | | <code title="{ dev = { asn = 65501 } landing = { asn = 65500 } prod = { asn = 65502 } }">{…}</code> | |
-| [vpn_onprem_primary_config](variables.tf#L185) | VPN gateway configuration for onprem interconnection in the primary region. | <code title="object({ peer_external_gateways = map(object({ redundancy_type = string interfaces = list(string) })) router_config = object({ create = optional(bool, true) asn = number name = optional(string) keepalive = optional(number) custom_advertise = optional(object({ all_subnets = bool ip_ranges = map(string) })) }) tunnels = map(object({ bgp_peer = object({ address = string asn = number route_priority = optional(number, 1000) custom_advertise = optional(object({ all_subnets = bool all_vpc_subnets = bool all_peer_vpc_subnets = bool ip_ranges = map(string) })) }) bgp_session_range = string ike_version = optional(number, 2) peer_external_gateway_interface = optional(number) peer_gateway = optional(string, "default") router = optional(string) shared_secret = optional(string) vpn_gateway_interface = number })) })">object({…})</code> | | <code>null</code> | |
+| [vpn_onprem_primary_config](variables.tf#L181) | VPN gateway configuration for onprem interconnection in the primary region. | <code title="object({ peer_external_gateways = map(object({ redundancy_type = string interfaces = list(string) })) router_config = object({ create = optional(bool, true) asn = number name = optional(string) keepalive = optional(number) custom_advertise = optional(object({ all_subnets = bool ip_ranges = map(string) })) }) tunnels = map(object({ bgp_peer = object({ address = string asn = number route_priority = optional(number, 1000) custom_advertise = optional(object({ all_subnets = bool all_vpc_subnets = bool all_peer_vpc_subnets = bool ip_ranges = map(string) })) }) bgp_session_range = string ike_version = optional(number, 2) peer_external_gateway_interface = optional(number) peer_gateway = optional(string, "default") router = optional(string) shared_secret = optional(string) vpn_gateway_interface = number })) })">object({…})</code> | | <code>null</code> | |
 
 ## Outputs
```
```diff
@@ -4,5 +4,5 @@ region: europe-west1
 description: Default subnet for dev Data Platform
 ip_cidr_range: 10.127.48.0/24
 secondary_ip_ranges:
-  pods: 100.64.0.0/24
+  pods: 100.64.0.0/16
   services: 100.64.1.0/24
```
```diff
@@ -4,5 +4,5 @@ region: europe-west1
 description: Default subnet for prod gke nodes
 ip_cidr_range: 10.127.49.0/24
 secondary_ip_ranges:
-  pods: 100.65.0.0/24
+  pods: 100.65.0.0/16
   services: 100.65.1.0/24
```
```diff
@@ -55,7 +55,9 @@ module "landing-vpc" {
     private    = true
     restricted = true
   }
-  data_folder = "${var.factories_config.data_dir}/subnets/landing"
+  factories_config = {
+    subnets_folder = "${var.factories_config.data_dir}/subnets/landing"
+  }
 }
 
 module "landing-firewall" {
```