Compare commits

...

185 Commits

Author SHA1 Message Date
Ludovico Magnocavallo 5edc931bf9
add missing secret to spoke tunnels (#1265) 2023-03-17 20:52:40 +01:00
Ludovico Magnocavallo 5fb17cb3ac
Widen scope for prod project factory SA to dev (#1263)
* restrict storage role on outputs bucket for stage SAs

* grant prod project factory SA authority over prod and dev org policies

* network stages delegated grants on dev to prod pf SA

* security grants to prod pf SA on dev

* tfdoc

* tests
2023-03-17 16:24:55 +00:00
Ludo 367f4b6670
remove debug output 2023-03-17 15:35:18 +01:00
Taneli Leppä 4b15fe4744
Add backend service names to outputs for net-glb and net-ilb-l7 (#1258)
Some things like autoneg require names.

Co-authored-by: Ludovico Magnocavallo <ludomagno@google.com>
2023-03-17 10:40:11 +00:00
Dedeco 230cbe4903
Fix variable terraform.tfvars.sample (#1261) 2023-03-17 11:13:10 +01:00
Ludovico Magnocavallo 8a8b7ea35f
Add support for `iam_additive` and simplify factory interface in net VPC module (#1259)
* initial implementation, no tests

* change interface, align tests

* add examples ToC

* fix variable type, test module-level variable
2023-03-17 10:12:34 +00:00
Ludovico Magnocavallo 50adf1da2a
change target_vpcs variable to support dynamic values (#1255) 2023-03-17 07:14:09 +00:00
Ludo e322c83f90
update changelog 2023-03-16 18:58:47 +01:00
apichick e949216bb6
Merge pull request #1257 from apichick/fixes-compute-vm-boot-disk
Fixes related to boot_disk in compute-vm module
2023-03-16 16:24:25 +01:00
Miren Esnaola 21fa6d1f13 Fixes related to boot_disk in compute-vm module 2023-03-16 15:58:39 +01:00
Ludovico Magnocavallo 79a6e9b191
pin local provider (#1256) 2023-03-16 10:59:06 +00:00
Ludovico Magnocavallo cfc4b28600
Update CONTRIBUTING.md 2023-03-16 07:24:12 +01:00
Anton KOVACH d7991709c3
Merge pull request #1240 from antonkovach/feature/fast-cicd-github-enable-populating-of-data-directory-sample-files-and-update-dependencies
feat: Enable populating of data directory and .sample files and update dependencies in 0-cicd-github
2023-03-15 14:55:07 +01:00
Anton KOVACH 5d8cbd3c57
Merge branch 'master' into feature/fast-cicd-github-enable-populating-of-data-directory-sample-files-and-update-dependencies 2023-03-15 11:57:21 +01:00
Ludovico Magnocavallo 2794cb6f24
Fix #1139 (#1249) 2023-03-15 11:43:43 +01:00
Anton KOVACH 1355ee4c44 Refactor to avoid explicit dependencies 2023-03-15 10:07:09 +01:00
Ludovico Magnocavallo 892b5b3446
Merge branch 'master' into feature/fast-cicd-github-enable-populating-of-data-directory-sample-files-and-update-dependencies 2023-03-14 19:25:24 +01:00
Julio Diez 5daa83f72a
Merge pull request #1248 from juliodiez/master
Add link to public serverless networking guide
2023-03-14 18:05:44 +01:00
Julio Diez c7ca4325c3
Merge branch 'master' into master 2023-03-14 17:46:26 +01:00
Natalia Strelkova 8f141e36d6
Merge pull request #1247 from GoogleCloudPlatform/fast-resman-gke-gcs-location
Fast: resman: location and storage class added to GKE GCS buckets
2023-03-14 15:37:15 +00:00
Julio Diez b3139004b0 Add link to public serverless networking guide 2023-03-14 16:14:19 +01:00
Natalia Strelkova fe7725e7d0 formatting 2023-03-14 14:48:04 +00:00
Natalia Strelkova 8bf3e11f34
location and storage class added to GKE GCS buckets 2023-03-14 15:43:55 +01:00
Julio Castillo 7975dac11c
Merge pull request #1246 from GoogleCloudPlatform/jccb/project-wait-services
Delay creation of SVPC host bindings until APIs and JIT SAs are done
2023-03-14 15:16:59 +01:00
Julio Castillo c82f142d2d Delay creation of SVPC host bindings until APIs and JIT SAs are done 2023-03-14 14:51:17 +01:00
Ludovico Magnocavallo 042b2e333f
Merge branch 'master' into feature/fast-cicd-github-enable-populating-of-data-directory-sample-files-and-update-dependencies 2023-03-14 08:27:27 +01:00
lcaggio 3d78d42fc2
Merge pull request #1245 from GoogleCloudPlatform/lcaggio/fix-1236
Composer-2 - Fix 1236
2023-03-13 21:48:22 +01:00
lcaggio 368472c9a0 Fix 1236 2023-03-13 21:24:27 +01:00
Anton KOVACH e344dbc4f4 Add populate_samples attribute 2023-03-13 20:29:50 +01:00
Ludovico Magnocavallo bffd5bc17b
Merge branch 'master' into feature/fast-cicd-github-enable-populating-of-data-directory-sample-files-and-update-dependencies 2023-03-13 16:01:52 +01:00
apichick 3b82ccf510
Merge pull request #1243 from apichick/autopilot-fixes
Autopilot fixes
2023-03-13 14:17:19 +01:00
Miren Esnaola 57282d5dd3 Autopilot fixes 2023-03-13 12:55:45 +01:00
Sebastian Kunze 7afdde08c1
Remove container image workflows (#1242) 2023-03-13 07:39:03 +00:00
Ludovico Magnocavallo 112d9a8d9c
Allow using existing boot disk in compute-vm module (#1241)
* allow using existing boot disk in compute-vm module

* allow setting initialize params to null

* tests

* fast

* blueprints
2023-03-12 10:53:59 +01:00
simonebruzzechesse 6aa0fde85b
Small fixes on Network Dashboard cloud function code (#1218)
* small fix on discovery compute quota file
decreased severity of log in discover cai from INFO to DEBUG

* remove else statement in condition

* add flag for debug logging

---------

Co-authored-by: Ludo <ludomagno@google.com>
2023-03-12 09:53:22 +00:00
Anton KOVACH 7a53511c9a Enable populating of data directory and .sample files and update dependencies
The README.md files reference the data directory and .sample files, but the code did not allow them to be populated. This update enables copying of the data directory and .sample files, with the data directory being populated as a data.sample directory to prevent overwriting any existing data directory.

Additionally, dependencies have been updated by adding depends_on sections to several resources, to ensure they are applied in the correct order. This update addresses some states that were not handled previously.

There is a minor known issue with Pull Request creation in the current state of the code: the Pull Request is only created after the first run has occurred. A fix for this issue is being worked on and will be addressed in a separate Pull Request. However, this issue does not affect the main functionality of the code.
2023-03-11 15:27:41 +01:00
Ludovico Magnocavallo 6ba0f8b0ba
allow overriding name in net-vpc subnet factory (#1239) 2023-03-11 09:30:42 +01:00
simonebruzzechesse 510db1b36f
Fix policy_based_routing.sh script on simple-nva module (#1226) 2023-03-10 18:36:07 +01:00
Julio Castillo 1c3645f3a3 Fix dataproc modules variables 2023-03-10 16:54:09 +01:00
Ludovico Magnocavallo 6e70b4216f
add missing attribute to FAST onprem VPN examples (#1237) 2023-03-10 14:58:33 +00:00
simonebruzzechesse b54951f596
Merge pull request #1234 from GoogleCloudPlatform/bruzz/fix-net-ilb-conn-track
Fixed connection tracking configuration on LB backend in net-ilb module
2023-03-10 15:25:30 +01:00
bruzzechesse 7595508bd4 fix variable 2023-03-10 12:03:54 +01:00
bruzzechesse 3ffda9c8c9 terraform fmt 2023-03-10 10:45:39 +01:00
bruzzechesse f688b9a47d realign logic to boolean variable 2023-03-10 10:43:37 +01:00
bruzzechesse 7781b72690 replace track_per_session with tracking_mode and fixed connection tracking conf for backends 2023-03-10 10:03:45 +01:00
Ludo b5cf00363e
update changelog 2023-03-10 09:25:57 +01:00
Ludovico Magnocavallo 45c12e233b
Network firewall policy module (#1232)
* validated, untested

* tested

* typo in README
2023-03-10 08:21:49 +00:00
Ludo b3e0f4e5f3
update changelog 2023-03-09 18:14:53 +01:00
Ludovico Magnocavallo be06554bba
Simplify VPN implementation in FAST networking stages (#1228)
* peering stage

* fix link, toc

* vpn stage

* fix link

* nva stage

* fix examples and test

* separate envs stage

* tfdoc
2023-03-09 17:57:44 +01:00
Julio Castillo 1b7864af2e
Merge pull request #1231 from GoogleCloudPlatform/jccb/workflow-refactor
Simplify testing workflow
2023-03-09 16:27:05 +01:00
Julio Castillo 744863b9a3 Simplify testing workflow 2023-03-09 16:04:01 +01:00
Julio Diez 1ff3bf9327
Merge pull request #1219 from juliodiez/ncc
Network Connectivity Center module
2023-03-09 16:01:51 +01:00
Julio Diez ff8f73762a
Merge branch 'master' into ncc 2023-03-09 15:38:40 +01:00
Julio Diez d0f346f6c6 Add resources created as outputs 2023-03-09 15:35:52 +01:00
Julio Diez f82b5284c9 Change semantics of custom_advertise 2023-03-09 15:35:52 +01:00
Julio Castillo 5a6cf3cbc4
Merge pull request #1230 from GoogleCloudPlatform/jccb/contributing-guide-update
Update contributing guide with new test framework
2023-03-09 15:16:08 +01:00
Julio Castillo 0e6f496b48 More typos 2023-03-09 14:55:41 +01:00
Julio Castillo 0989f20755 Fix typos/grammar 2023-03-09 14:41:18 +01:00
Julio Castillo 789405fcee Update TOC 2023-03-09 14:41:18 +01:00
Julio Castillo 165515f9fd Update contributing guide with new test framework 2023-03-09 14:41:18 +01:00
apichick e1b43c7c86
Merge pull request #1229 from apichick/removed-unneeded-files
Removed unnecessary files
2023-03-09 14:06:17 +01:00
Ludovico Magnocavallo 38cc743dc5
Merge branch 'master' into removed-unneeded-files 2023-03-09 13:41:15 +01:00
Miren Esnaola 3ea3d28972 Removed unnecessary files 2023-03-09 13:37:08 +01:00
Julio Diez 7eb9fbf676
Merge branch 'master' into ncc 2023-03-09 13:10:36 +01:00
Julio Diez 3e85175f67 Adapt README examples to the variables config 2023-03-09 13:06:02 +01:00
Julio Diez 0cf254f91e Update variable and output tables 2023-03-09 13:06:02 +01:00
Julio Diez 7e6635f535 Alphabetical order and better naming 2023-03-09 13:06:02 +01:00
Julio Diez eef6a48876 Make ip_interfaceX not optional
These IP values are optional: if you don't specify one, Google will try to
find a free IP address. But that is a bad idea here, because setting them to 'null'
forces a replacement even when nothing else changes.
2023-03-09 13:06:02 +01:00
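The change in the commit above can be read as a variable-typing decision; a minimal sketch of the stricter shape, with a hypothetical variable name following the `ip_interfaceX` pattern from the commit message:

```hcl
variable "router_appliance_ips" { # hypothetical name
  type = object({
    # plain string, not optional(string): passing null would let Google
    # pick a free IP and force a replacement on every subsequent plan
    ip_interface0 = string
    ip_interface1 = string
  })
}
```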
Julio Diez 84d3b83f81 Group router information under router_config 2023-03-09 13:06:02 +01:00
Julio Diez b25ee97d15 Group vpc and subnet under vpc_config 2023-03-09 13:06:02 +01:00
Julio Diez e9312e4dba var ras -> router_appliances 2023-03-09 13:06:02 +01:00
lcaggio ec016edfbb
Merge pull request #1227 from GoogleCloudPlatform/lcaggio/project-ai-encryption
Add CMEK support on BQML blueprint
2023-03-09 10:12:49 +01:00
lcaggio 3f9bbc2e5c Add cmek support on google_vertex_ai_metadata_store. 2023-03-09 09:13:21 +01:00
lcaggio 1671c5b4f3 Add cmek support on google_vertex_ai_metadata_store 2023-03-09 09:11:47 +01:00
lcaggio 82becd7451
Merge branch 'master' into lcaggio/project-ai-encryption 2023-03-09 08:20:14 +01:00
lcaggio cc6ee44759 Add aiplatform robot service account 2023-03-09 08:17:26 +01:00
Ludovico Magnocavallo 5489162b75
Merge branch 'master' into ncc 2023-03-08 20:33:53 +01:00
Julio Diez 96f35c53a5 Fix README variables to pass pytest 2023-03-08 20:00:55 +01:00
Julio Diez 93bb809a40 Rename module net-ncc -> ncc-spoke-ra 2023-03-08 20:00:55 +01:00
Julio Diez 62539508a5 Update README for the new implementation 2023-03-08 20:00:55 +01:00
Julio Diez 6196851d3f Output the name of the hub if created 2023-03-08 20:00:55 +01:00
Julio Diez 34c6a6aee1 Make creation of the hub optional 2023-03-08 20:00:55 +01:00
Julio Diez 1b4ba11dcd Make IPs for the CR interfaces optional 2023-03-08 20:00:55 +01:00
Julio Diez 0da0f33525 Make keepalive optional 2023-03-08 20:00:55 +01:00
Julio Diez 81121f4aa6 data_transfer default to false 2023-03-08 20:00:55 +01:00
Julio Diez d5d743174e Make custom_advertise optional 2023-03-08 20:00:55 +01:00
Julio Diez 2f64fcd5f4 Reimplement the module to manage only one spoke 2023-03-08 20:00:55 +01:00
Giorgio Conte b059da8327
Merge pull request #1225 from GoogleCloudPlatform/conteg/bqml-fix
Fix on bqml demo
2023-03-08 18:40:39 +01:00
Giorgio Conte ca9898395d
Merge branch 'master' into conteg/bqml-fix 2023-03-08 17:52:03 +01:00
lcaggio 4b108e8993
Merge pull request #1224 from GoogleCloudPlatform/lcaggio/project-notebook
Fix JIT notebook service account.
2023-03-08 16:33:39 +01:00
lcaggio e213f156ad Fix Jit notebook service account. 2023-03-08 16:06:27 +01:00
simonebruzzechesse fd07c444cb
Extended simple-nva module to manage BGP service running on FR routing docker container (#1195) 2023-03-08 09:43:13 +01:00
Julio Castillo b6b4e6417a
Merge pull request #1222 from GoogleCloudPlatform/jccb/fix-1220
Manage billing.creator role authoritatively in FAST bootstrap.
2023-03-07 19:04:06 +01:00
Julio Castillo e33caf0059 Fix tests 2023-03-07 17:52:00 +01:00
Julio Castillo 38808b37c0 Manage billing.creator role authoritatively in FAST bootstrap.
By default new orgs grant billing.creator and
resourcemanager.projectCreator to the whole domain[1]. This PR makes
FAST remove the former binding during the bootstrap (the latter is
already managed by FAST).

Fixes #1220

[1] https://cloud.google.com/resource-manager/docs/default-access-control
2023-03-07 17:52:00 +01:00
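For reference, an authoritative binding in Terraform declares the full member list for a role, so an empty list removes the default domain-wide grant described above. A minimal sketch of the mechanism, not FAST's actual code (the org id is a placeholder):

```hcl
resource "google_organization_iam_binding" "billing_creator" {
  org_id = "123456789012" # placeholder
  role   = "roles/billing.creator"
  # authoritative: members not listed here, including the default
  # domain-wide grant, are removed on apply
  members = []
}
```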
Natalia Strelkova cd8f0890e9
Merge pull request #1221 from GoogleCloudPlatform/fast-faq-non-empty-org
FAQ on installing Fast on a non-empty org
2023-03-07 16:23:45 +00:00
Natalia Strelkova 7184ce43eb
Merge branch 'master' into fast-faq-non-empty-org 2023-03-07 16:04:11 +00:00
apichick 0363835a2c
Merge pull request #1217 from apichick/autopilot
Added autopilot blueprint
2023-03-07 16:05:14 +01:00
Ludovico Magnocavallo 662a9b185c
Merge branch 'master' into autopilot 2023-03-07 15:51:03 +01:00
Natalia Strelkova 1f8e4cf1bf
FAQ on installing Fast on a non-empty org 2023-03-07 15:45:38 +01:00
Miren Esnaola a39fa7ca64 Added autopilot blueprint 2023-03-07 15:37:20 +01:00
Giorgio Conte a9bb717554 minor fix on bqml demo 2023-03-07 14:20:48 +00:00
Julio Diez 6eb82a2214
Merge pull request #16 from juliodiez/master
Sync branch
2023-03-07 13:13:35 +01:00
Julio Diez 5374c0e3bf
Merge pull request #15 from GoogleCloudPlatform/master
Sync fork
2023-03-07 13:12:21 +01:00
Julio Diez d9eaa59862 Generated variable table via tfdoc 2023-03-07 13:04:15 +01:00
Julio Diez ac224ad11c Add tftest to README 2023-03-07 12:29:20 +01:00
Julio Diez 94f3a08129 Add example of custom route advertisements 2023-03-07 11:54:34 +01:00
Julio Diez 9b5bc407ba Add image for load-balanced router appliances example 2023-03-07 11:10:19 +01:00
Julio Diez 58c90feca2 Add example of load-balanced router appliances 2023-03-07 11:06:23 +01:00
Julio Diez 3e0a8c4c0a Add image for site to two VPCs example 2023-03-07 10:43:51 +01:00
Julio Diez 76972d5804 Add example of site to two VPCs 2023-03-07 10:37:58 +01:00
Julio Diez 449f5cbb56 Adapt example to use only allowed chars for resource names 2023-03-07 10:28:29 +01:00
Julio Diez 87107ba3e0 Set a unique name to CRs linked to spokes 2023-03-07 10:11:02 +01:00
Julio Diez e7963eb630 Set a unique name to spokes 2023-03-07 10:01:07 +01:00
Julio Diez 71cb18f808 Replace map key derived from resource attributes 2023-03-07 09:52:34 +01:00
Julio Diez 0f4919a771 Add image for site to VPC example 2023-03-06 20:55:36 +01:00
Julio Diez 69493d8a40 Add README with first example 2023-03-06 20:47:18 +01:00
Julio Diez 65671647e7 Make optional some router config fields 2023-03-06 20:45:08 +01:00
Julio Diez 25b14465b2 Simplify some naming 2023-03-06 19:21:09 +01:00
Julio Diez e835730665 Add router BGP peers 2023-03-06 18:02:50 +01:00
Julio Diez 02707eb275 Initial commit for NCC module 2023-03-06 14:09:14 +01:00
lcaggio ca31192570
Merge pull request #1210 from GoogleCloudPlatform/lcaggio/bqml
Blueprint - BigQuery ML and Vertex AI Pipeline
2023-03-06 13:51:02 +01:00
Giorgio Conte 3123d67ddf moved sql to notebook 2023-03-06 12:28:58 +00:00
lcaggio ee55127ede Fix notebook 2023-03-06 12:51:46 +01:00
Giorgio Conte 0ac6dd65cf sql fix and more comments on demo notebook 2023-03-06 11:21:30 +00:00
Ludovico Magnocavallo cd24d90e0d
Merge branch 'master' into lcaggio/bqml 2023-03-06 12:15:14 +01:00
Giorgio Conte c82e7cca7b added demo README file 2023-03-06 11:12:36 +00:00
Ludovico Magnocavallo ef28e208d3
Use composite action for test workflow prerequisite steps (#1216)
* test composite action

* add shell in action steps

* home input

* boilerplate

* static home

* use action in all test steps

* fix step name
2023-03-06 11:44:57 +01:00
Giorgio Conte 0852ae3778 demo: added batch prediction example 2023-03-06 10:16:21 +00:00
Ludovico Magnocavallo 406921c7d7
Merge branch 'master' into lcaggio/bqml 2023-03-06 10:39:35 +01:00
Ludovico Magnocavallo 563ef270af
Try plugin cache, split examples tests (#1215)
* try plugin cache, split examples tests

* fix mkdir

* use cache
2023-03-06 10:38:39 +01:00
Ludovico Magnocavallo 016390486d
Merge branch 'master' into lcaggio/bqml 2023-03-06 09:32:55 +01:00
Anton KOVACH 77db9121f9
feat: Add Pull Request support to 0-cicd-github (#1213)
* feat: Add Pull Request support to 0-cicd-github

The cloud-foundation-fabric repository is continually evolving, and a pull request mechanism to review and approve changes helps keep up with them. This feature is 100% backward compatible: by default no pull request is created and changes are committed directly to the main branch. The optional variable pull_request_config can be used to configure the title, body, head_ref, and base_ref of the pull request created for the initial population or update of files. To create a pull request, set the create attribute of pull_request_config to true. base_ref defaults to main, and head_ref to the name of the head branch; if the head branch doesn't exist, it will be created from the base_ref branch (see the sketch below this entry).

* fix README.md

* fix pull_request_config title
2023-03-06 09:32:36 +01:00
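Going by the description above, a tfvars entry for the new variable might look roughly like this; a hedged sketch using only the attributes named in the commit message, with illustrative values:

```hcl
pull_request_config = {
  create   = true
  title    = "Update FAST stage files" # illustrative
  body     = ""
  base_ref = "main"      # default
  head_ref = "fast-sync" # created from base_ref if missing
}
```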
lcaggio 6f8cff558a
Merge branch 'master' into lcaggio/bqml 2023-03-05 22:56:07 +01:00
lcaggio f9acf61b81 Fix README 2023-03-05 22:42:27 +01:00
lcaggio 16f703f336 Fix typos 2023-03-05 22:30:33 +01:00
lcaggio dc034d74f7 Variables. 2023-03-05 22:24:16 +01:00
lcaggio 9e19f89608 Implement PR comments. 2023-03-05 22:02:41 +01:00
Justin M 4eff309685
Update subnet sample yaml files to use subnet_secondary_ranges (#1203)
* Replaces 'secondary_ip_range:' with 'secondary_ip_ranges:' in samples

* Replaces 'secondary_ip_range:' with 'secondary_ip_ranges:' in tests/

* reverts previous commit- files in tests/ don't need to be changed

---------

Co-authored-by: Ludovico Magnocavallo <ludomagno@google.com>
2023-03-05 19:37:23 +01:00
Anton KOVACH e72ddb6a2a
feat: Add option to skip committing unchanged files in 0-cicd-github (#1212)
When running 0-cicd-github multiple times, files that haven't changed are also committed. This change adds an option to skip committing unchanged files to prevent unnecessary commits.

Co-authored-by: Ludovico Magnocavallo <ludomagno@google.com>
2023-03-05 19:16:48 +01:00
Ludo d3a6c9d1f1
update changelog 2023-03-05 18:53:46 +01:00
Ludovico Magnocavallo 8fc9549c58
add support for proxy and psc subnets to module factory (#1211) 2023-03-05 17:08:43 +01:00
lcaggio 2b8ba16a9a Fix typos 2023-03-04 14:32:54 +01:00
lcaggio 652495e530 Update versions. 2023-03-04 14:12:50 +01:00
lcaggio f8a7aa865a Fix test. 2023-03-04 08:25:29 +01:00
lcaggio 8d70e1d900
Merge branch 'master' into lcaggio/bqml 2023-03-04 08:22:22 +01:00
lcaggio ccd68b2fa6 Fix linting. 2023-03-04 08:19:47 +01:00
lcaggio 0d4b599e99 Fix README 2023-03-04 08:13:53 +01:00
lcaggio 98e17bb997 Fix readme. 2023-03-04 08:09:29 +01:00
Ludo 21e451d4cb
update changelog 2023-03-03 23:00:51 +01:00
Ludovico Magnocavallo 96e829bdf3
Billing exclusion support for FAST mt resman (#1209)
* fix files resource parsing in tfdoc

* fix tfdoc generated output

* billing exclusion support in mt bootstrap
2023-03-03 16:23:36 +00:00
Giorgio Conte 6526dda8c7 sql linting 2023-03-03 14:52:35 +00:00
lcaggio 594a615e1e Update 2023-03-03 15:08:57 +01:00
lcaggio 32808f93ea Update README. 2023-03-03 14:52:33 +01:00
Ludovico Magnocavallo 2217abe5f0
Allow preventing creation of billing IAM roles in FAST, add instructions on delayed billing association (#1207)
* stage 0

* resman and networking stages

* tfdoc

* security stage
2023-03-03 09:24:41 +01:00
Aleksandr Averbukh 06dd38170d
Fix outdated go deps, dependabot alerts (#1208) 2023-03-03 07:15:08 +01:00
lcaggio 1fd68f6106
Merge pull request #1206 from GoogleCloudPlatform/lcaggio/dataproc-03
Dataproc module. Fix output.
2023-03-02 13:59:18 +01:00
lcaggio 88ecdbe671
Merge branch 'master' into lcaggio/dataproc-03 2023-03-02 12:18:52 +01:00
Taneli Leppä ba71905f54
Merge pull request #1205 from rosmo/fix-pubsub
Fix issue with GKE cluster notifications topic & static output for pubsub module
2023-03-02 11:43:40 +01:00
Taneli Leppä 99d19d5ec8 Fix issue with GKE cluster notifications topic, change pubsub module output to static. 2023-03-02 11:23:05 +01:00
lcaggio b7793f69a2 Dataproc module. Fix output. 2023-03-02 10:39:08 +01:00
Luca Prete a5fd32edcb
Blueprint: GLB hybrid NEG internal 2023-03-02 09:53:07 +01:00
erabusi 2ebb21e4cc
Fix url_redirect issue on net-glb module (#1204) 2023-03-02 07:51:39 +01:00
Aleksandr Averbukh 1f713557c2
Merge pull request #1201 from GoogleCloudPlatform/tfc-blueprint-miss-tmlp
Add missing tfvars template to the tfc blueprint
2023-03-01 21:10:45 +01:00
Aleksandr Averbukh 9b6d1b59da
Merge branch 'master' into tfc-blueprint-miss-tmlp 2023-03-01 12:29:31 +01:00
lcaggio 57398a50b4
Merge pull request #1199 from GoogleCloudPlatform/lcaggio/dataproc-02
[Dataproc module] Fix Variables
2023-03-01 12:16:11 +01:00
Aleksandr Averbukh b4a8a37805 Fix tfvars template 2023-03-01 11:34:37 +01:00
lcaggio b39b486cd4 Fix README 2023-03-01 10:48:33 +01:00
lcaggio e9a73f873f Remove wrongly submitted file. 2023-03-01 10:46:33 +01:00
lcaggio c4d8175d9a
Merge branch 'master' into lcaggio/dataproc-02 2023-03-01 10:44:31 +01:00
lcaggio 0d37fe8338 Update README 2023-03-01 10:44:01 +01:00
lcaggio e9119f2c9d Update README. 2023-03-01 10:43:33 +01:00
Aleksandr Averbukh b7418353be Missing newline 2023-03-01 10:43:32 +01:00
Aleksandr Averbukh 2d9dd5071c Add more explicit template 2023-03-01 10:42:39 +01:00
Aleksandr Averbukh d7dae1da08 Add missing tfvars template to the tfc blueprint 2023-03-01 10:33:08 +01:00
Julio Castillo 7bfbd16fbb
Merge pull request #1200 from GoogleCloudPlatform/jccb/test-1197
Add test for #1197
2023-03-01 10:15:12 +01:00
Julio Castillo 67bc391b66 Add test for #1197 2023-03-01 09:58:50 +01:00
Ludovico Magnocavallo 3a2d6e1b46
Fix secondary ranges in net-vpc readme (#1198)
Fixes #1197
2023-03-01 08:08:07 +01:00
lcaggio dad3c49012 Fix linting 2023-03-01 08:00:52 +01:00
Ludovico Magnocavallo 6629e5cd06
Merge branch 'master' into lcaggio/dataproc-02 2023-03-01 07:57:21 +01:00
lcaggio dc37783022 Fix Variables 2023-03-01 07:54:10 +01:00
Giorgio Conte 17b8a461f0 fixed notebook with dynamic model name
cleared output from cells
added creation of view instead of table
2023-02-27 15:28:49 +00:00
Giorgio Conte 3271acd2f2 Added sql and jupyter notebook to run the demo 2023-02-27 10:56:47 +00:00
Giorgio Conte a51c682005 Updated tf file to add the following features:
- default location of dataset to US
- changed name of vertex metastore to "default"
- add ai user and service account user to notebook SA
- add ai user to vertex sa
2023-02-24 13:27:44 +00:00
lcaggio 50856e6951 First commit 2023-02-23 18:36:03 +01:00
320 changed files with 8927 additions and 2271 deletions

.github/actions/fabric-tests/action.yml (new file)

@ -0,0 +1,59 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: fabric-tests
description: Set up Fabric testing environment
inputs:
  PYTHON_VERSION:
    required: true
  TERRAFORM_VERSION:
    required: true
runs:
  using: composite
  steps:
    - name: Config auth
      shell: bash
      run: |
        echo '{"type": "service_account", "project_id": "test-only"}' \
          | tee -a $GOOGLE_APPLICATION_CREDENTIALS
    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: ${{ inputs.PYTHON_VERSION }}
        cache: 'pip'
        cache-dependency-path: 'tests/requirements.txt'
    - name: Set up Terraform
      uses: hashicorp/setup-terraform@v2
      with:
        terraform_version: ${{ inputs.TERRAFORM_VERSION }}
        terraform_wrapper: false
    - name: Configure provider cache
      shell: bash
      run: |
        echo 'plugin_cache_dir = "/home/runner/.terraform.d/plugin-cache"' \
          | tee -a /home/runner/.terraformrc
        echo 'disable_checkpoint = true' \
          | tee -a /home/runner/.terraformrc
        mkdir -p ${{ env.TF_PLUGIN_CACHE_DIR }}
    # avoid conflicts with user-installed providers on local machines
    - name: Pin provider versions
      shell: bash
      run: |
        for f in $(find . -name versions.tf); do
          sed -i 's/>=\(.*# tftest\)/=\1/g' $f;
        done
    - name: Install Python Dependencies
      shell: bash
      run: |
        pip install -r tests/requirements.txt


@ -1,66 +0,0 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: "Build and push a generic container image"
on:
workflow_call:
inputs:
image_name:
required: true
type: string
docker_context:
required: true
type: string
permissions:
packages: write
env:
REGISTRY: ghcr.io
jobs:
build-push-generic-container-image:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set image version
run: echo IMAGE_VERSION=$(date +'%Y%m%d') >> $GITHUB_ENV
- name: Normalise image name
run: echo IMAGE_NAME=$(echo '${{ github.repository_owner }}/${{ inputs.image_name }}' | tr '[:upper:]' '[:lower:]') >> $GITHUB_ENV
- name: Login to GHCR
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v3
with:
context: ${{ inputs.docker_context }}
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_VERSION }}
labels: |
org.opencontainers.image.licenses=Apache-2.0
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
org.opencontainers.image.title=${{ inputs.image_name }}
org.opencontainers.image.vendor=Google LLC
org.opencontainers.image.version=${{ env.IMAGE_VERSION }}


@ -31,150 +31,72 @@ env:
TF_VERSION: 1.3.9
jobs:
examples:
examples-blueprints:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Config auth
run: |
echo '{"type": "service_account", "project_id": "test-only"}' \
| tee -a $GOOGLE_APPLICATION_CREDENTIALS
- name: Set up Python
uses: actions/setup-python@v4
- name: Call composite action fabric-tests
uses: ./.github/actions/fabric-tests
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'pip'
cache-dependency-path: 'tests/requirements.txt'
- name: Set up Terraform
uses: hashicorp/setup-terraform@v2
with:
terraform_version: ${{ env.TF_VERSION }}
terraform_wrapper: false
# avoid conflicts with user-installed providers on local machines
- name: Pin provider versions
run: |
for f in $(find . -name versions.tf); do
sed -i 's/>=\(.*# tftest\)/=\1/g' $f;
done
PYTHON_VERSION: ${{ env.PYTHON_VERSION }}
TERRAFORM_VERSION: ${{ env.TERRAFORM_VERSION }}
- name: Run tests on documentation examples
id: pytest
run: |
mkdir -p ${{ env.TF_PLUGIN_CACHE_DIR }}
pip install -r tests/requirements.txt
pytest -vv tests/examples
run: pytest -vv -k blueprints/ tests/examples
examples-modules:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Call composite action fabric-tests
uses: ./.github/actions/fabric-tests
with:
PYTHON_VERSION: ${{ env.PYTHON_VERSION }}
TERRAFORM_VERSION: ${{ env.TERRAFORM_VERSION }}
- name: Run tests on documentation examples
run: pytest -vv -k modules/ tests/examples
blueprints:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Config auth
run: |
echo '{"type": "service_account", "project_id": "test-only"}' \
| tee -a $GOOGLE_APPLICATION_CREDENTIALS
- name: Set up Python
uses: actions/setup-python@v4
- name: Call composite action fabric-tests
uses: ./.github/actions/fabric-tests
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'pip'
cache-dependency-path: 'tests/requirements.txt'
- name: Set up Terraform
uses: hashicorp/setup-terraform@v2
with:
terraform_version: ${{ env.TF_VERSION }}
terraform_wrapper: false
# avoid conflicts with user-installed providers on local machines
- name: Pin provider versions
run: |
for f in $(find . -name versions.tf); do
sed -i 's/>=\(.*# tftest\)/=\1/g' $f;
done
PYTHON_VERSION: ${{ env.PYTHON_VERSION }}
TERRAFORM_VERSION: ${{ env.TERRAFORM_VERSION }}
- name: Run tests environments
id: pytest
run: |
mkdir -p ${{ env.TF_PLUGIN_CACHE_DIR }}
pip install -r tests/requirements.txt
pytest -vv tests/blueprints
run: pytest -vv tests/blueprints
modules:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Config auth
run: |
echo '{"type": "service_account", "project_id": "test-only"}' \
| tee -a $GOOGLE_APPLICATION_CREDENTIALS
- name: Set up Python
uses: actions/setup-python@v4
- name: Call composite action fabric-tests
uses: ./.github/actions/fabric-tests
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'pip'
cache-dependency-path: 'tests/requirements.txt'
- name: Set up Terraform
uses: hashicorp/setup-terraform@v2
with:
terraform_version: ${{ env.TF_VERSION }}
terraform_wrapper: false
# avoid conflicts with user-installed providers on local machines
- name: Pin provider versions
run: |
for f in $(find . -name versions.tf); do
sed -i 's/>=\(.*# tftest\)/=\1/g' $f;
done
PYTHON_VERSION: ${{ env.PYTHON_VERSION }}
TERRAFORM_VERSION: ${{ env.TERRAFORM_VERSION }}
- name: Run tests modules
id: pytest
run: |
mkdir -p ${{ env.TF_PLUGIN_CACHE_DIR }}
pip install -r tests/requirements.txt
pytest -vv tests/modules
run: pytest -vv tests/modules
fast:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Config auth
run: |
echo '{"type": "service_account", "project_id": "test-only"}' \
| tee -a $GOOGLE_APPLICATION_CREDENTIALS
- name: Set up Python
uses: actions/setup-python@v4
- name: Call composite action fabric-tests
uses: ./.github/actions/fabric-tests
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: 'pip'
cache-dependency-path: 'tests/requirements.txt'
- name: Set up Terraform
uses: hashicorp/setup-terraform@v2
with:
terraform_version: ${{ env.TF_VERSION }}
terraform_wrapper: false
# avoid conflicts with user-installed providers on local machines
- name: Pin provider versions
run: |
for f in $(find . -name versions.tf); do
sed -i 's/>=\(.*# tftest\)/=\1/g' $f;
done
PYTHON_VERSION: ${{ env.PYTHON_VERSION }}
TERRAFORM_VERSION: ${{ env.TERRAFORM_VERSION }}
- name: Run tests on FAST stages
id: pytest
run: |
mkdir -p ${{ env.TF_PLUGIN_CACHE_DIR }}
pip install -r tests/requirements.txt
pytest -vv tests/fast
run: pytest -vv tests/fast

.gitignore

@ -49,3 +49,8 @@ blueprints/apigee/hybrid-gke/apiproxy.zip
blueprints/apigee/hybrid-gke/deploy-apiproxy.sh
blueprints/apigee/hybrid-gke/ansible/gssh.sh
blueprints/apigee/hybrid-gke/ansible/vars/vars.yaml
blueprints/gke/autopilot/ansible/gssh.sh
blueprints/gke/autopilot/ansible/vars/vars.yaml
blueprints/gke/autopilot/bundle/monitoring/kustomization.yaml
blueprints/gke/autopilot/bundle/locust/kustomization.yaml
blueprints/gke/autopilot/bundle.tar.gz

CHANGELOG.md

@ -8,6 +8,22 @@ All notable changes to this project will be documented in this file.
### BLUEPRINTS
- [[#1257](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1257)] Fixes related to boot_disk in compute-vm module ([apichick](https://github.com/apichick)) <!-- 2023-03-16 15:24:26+00:00 -->
- [[#1256](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1256)] **incompatible change:** Pin local provider ([ludoo](https://github.com/ludoo)) <!-- 2023-03-16 10:59:07+00:00 -->
- [[#1245](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1245)] Composer-2 - Fix 1236 ([lcaggio](https://github.com/lcaggio)) <!-- 2023-03-13 20:48:22+00:00 -->
- [[#1243](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1243)] Autopilot fixes ([apichick](https://github.com/apichick)) <!-- 2023-03-13 13:17:20+00:00 -->
- [[#1241](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1241)] **incompatible change:** Allow using existing boot disk in compute-vm module ([ludoo](https://github.com/ludoo)) <!-- 2023-03-12 09:54:00+00:00 -->
- [[#1218](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1218)] Small fixes on Network Dashboard cloud function code ([simonebruzzechesse](https://github.com/simonebruzzechesse)) <!-- 2023-03-12 09:53:22+00:00 -->
- [[#1229](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1229)] Removed unnecessary files ([apichick](https://github.com/apichick)) <!-- 2023-03-09 13:06:18+00:00 -->
- [[#1227](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1227)] Add CMEK support on BQML blueprint ([lcaggio](https://github.com/lcaggio)) <!-- 2023-03-09 09:12:50+00:00 -->
- [[#1225](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1225)] Fix on bqml demo ([gioconte](https://github.com/gioconte)) <!-- 2023-03-08 17:40:40+00:00 -->
- [[#1217](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1217)] Added autopilot blueprint ([apichick](https://github.com/apichick)) <!-- 2023-03-07 15:05:15+00:00 -->
- [[#1210](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1210)] Blueprint - BigQuery ML and Vertex AI Pipeline ([lcaggio](https://github.com/lcaggio)) <!-- 2023-03-06 12:51:02+00:00 -->
- [[#1208](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1208)] Fix outdated go deps, dependabot alerts ([averbuks](https://github.com/averbuks)) <!-- 2023-03-03 06:15:09+00:00 -->
- [[#1150](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1150)] Blueprint: GLB hybrid NEG internal ([LucaPrete](https://github.com/LucaPrete)) <!-- 2023-03-02 08:53:07+00:00 -->
- [[#1201](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1201)] Add missing tfvars template to the tfc blueprint ([averbuks](https://github.com/averbuks)) <!-- 2023-03-01 20:10:46+00:00 -->
- [[#1196](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1196)] Fix compute-vm:CloudKMS test for provider>=4.54.0 ([dan-farmer](https://github.com/dan-farmer)) <!-- 2023-02-28 15:53:41+00:00 -->
- [[#1189](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1189)] Update healthchecker deps (dependabot alerts) ([averbuks](https://github.com/averbuks)) <!-- 2023-02-27 21:48:49+00:00 -->
- [[#1184](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1184)] **incompatible change:** Allow multiple peer gateways in VPN HA module ([ludoo](https://github.com/ludoo)) <!-- 2023-02-27 10:19:00+00:00 -->
- [[#1143](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1143)] Test blueprints from README files ([juliocc](https://github.com/juliocc)) <!-- 2023-02-27 08:57:41+00:00 -->
- [[#1181](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1181)] Bump golang.org/x/sys from 0.0.0-20220310020820-b874c991c1a5 to 0.1.0 in /blueprints/cloud-operations/unmanaged-instances-healthcheck/function/healthchecker ([dependabot[bot]](https://github.com/dependabot[bot])) <!-- 2023-02-25 17:02:08+00:00 -->
@ -27,6 +43,17 @@ All notable changes to this project will be documented in this file.
### DOCUMENTATION
- [[#1257](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1257)] Fixes related to boot_disk in compute-vm module ([apichick](https://github.com/apichick)) <!-- 2023-03-16 15:24:26+00:00 -->
- [[#1248](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1248)] Add link to public serverless networking guide ([juliodiez](https://github.com/juliodiez)) <!-- 2023-03-14 17:05:45+00:00 -->
- [[#1232](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1232)] Network firewall policy module ([ludoo](https://github.com/ludoo)) <!-- 2023-03-10 08:21:50+00:00 -->
- [[#1230](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1230)] Update contributing guide with new test framework ([juliocc](https://github.com/juliocc)) <!-- 2023-03-09 14:16:08+00:00 -->
- [[#1221](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1221)] FAQ on installing Fast on a non-empty org ([skalolazka](https://github.com/skalolazka)) <!-- 2023-03-07 16:23:46+00:00 -->
- [[#1217](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1217)] Added autopilot blueprint ([apichick](https://github.com/apichick)) <!-- 2023-03-07 15:05:15+00:00 -->
- [[#1210](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1210)] Blueprint - BigQuery ML and Vertex AI Pipeline ([lcaggio](https://github.com/lcaggio)) <!-- 2023-03-06 12:51:02+00:00 -->
- [[#1150](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1150)] Blueprint: GLB hybrid NEG internal ([LucaPrete](https://github.com/LucaPrete)) <!-- 2023-03-02 08:53:07+00:00 -->
- [[#1193](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1193)] Add reference to Cloud Run blueprints ([juliodiez](https://github.com/juliodiez)) <!-- 2023-02-28 10:16:45+00:00 -->
- [[#1188](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1188)] Add reference to Cloud Run blueprints ([juliodiez](https://github.com/juliodiez)) <!-- 2023-02-27 21:22:31+00:00 -->
- [[#1187](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1187)] Add references to the serverless chapters ([juliodiez](https://github.com/juliodiez)) <!-- 2023-02-27 17:16:20+00:00 -->
- [[#1179](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1179)] Added a PSC GCLB example ([cgrotz](https://github.com/cgrotz)) <!-- 2023-02-24 20:09:31+00:00 -->
- [[#1165](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1165)] DataPlatform: Support project creation ([lcaggio](https://github.com/lcaggio)) <!-- 2023-02-23 11:10:44+00:00 -->
- [[#1145](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1145)] FAST stage docs cleanup ([ludoo](https://github.com/ludoo)) <!-- 2023-02-15 05:42:14+00:00 -->
@ -36,6 +63,19 @@ All notable changes to this project will be documented in this file.
### FAST
- [[#1240](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1240)] feat: Enable populating of data directory and .sample files and update dependencies in 0-cicd-github ([antonkovach](https://github.com/antonkovach)) <!-- 2023-03-15 13:55:08+00:00 -->
- [[#1249](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1249)] Document need to set `outputs_location` explicitly in every stage ([ludoo](https://github.com/ludoo)) <!-- 2023-03-15 10:43:44+00:00 -->
- [[#1247](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1247)] Fast: resman: location and storage class added to GKE GCS buckets ([skalolazka](https://github.com/skalolazka)) <!-- 2023-03-14 15:37:16+00:00 -->
- [[#1241](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1241)] **incompatible change:** Allow using existing boot disk in compute-vm module ([ludoo](https://github.com/ludoo)) <!-- 2023-03-12 09:54:00+00:00 -->
- [[#1237](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1237)] Add missing attribute to FAST onprem VPN examples ([ludoo](https://github.com/ludoo)) <!-- 2023-03-10 14:58:34+00:00 -->
- [[#1228](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1228)] **incompatible change:** Simplify VPN implementation in FAST networking stages ([ludoo](https://github.com/ludoo)) <!-- 2023-03-09 16:57:45+00:00 -->
- [[#1222](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1222)] Manage billing.creator role authoritatively in FAST bootstrap. ([juliocc](https://github.com/juliocc)) <!-- 2023-03-07 18:04:07+00:00 -->
- [[#1213](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1213)] feat: Add Pull Request support to 0-cicd-github ([antonkovach](https://github.com/antonkovach)) <!-- 2023-03-06 08:32:36+00:00 -->
- [[#1203](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1203)] Update subnet sample yaml files to use subnet_secondary_ranges ([jmound](https://github.com/jmound)) <!-- 2023-03-05 18:37:23+00:00 -->
- [[#1212](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1212)] feat: skip committing unchanged files in 0-cicd-github ([antonkovach](https://github.com/antonkovach)) <!-- 2023-03-05 18:16:48+00:00 -->
- [[#1211](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1211)] **incompatible change:** Add support for proxy and psc subnets to net-vpc module factory ([ludoo](https://github.com/ludoo)) <!-- 2023-03-05 16:08:43+00:00 -->
- [[#1209](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1209)] Billing exclusion support for FAST mt resman ([ludoo](https://github.com/ludoo)) <!-- 2023-03-03 16:23:37+00:00 -->
- [[#1207](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1207)] Allow preventing creation of billing IAM roles in FAST, add instructions on delayed billing association ([ludoo](https://github.com/ludoo)) <!-- 2023-03-03 08:24:42+00:00 -->
- [[#1184](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1184)] **incompatible change:** Allow multiple peer gateways in VPN HA module ([ludoo](https://github.com/ludoo)) <!-- 2023-02-27 10:19:00+00:00 -->
- [[#1165](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1165)] DataPlatform: Support project creation ([lcaggio](https://github.com/lcaggio)) <!-- 2023-02-23 11:10:44+00:00 -->
- [[#1170](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1170)] Add documentation about referring modules stored on CSR ([wiktorn](https://github.com/wiktorn)) <!-- 2023-02-22 09:02:54+00:00 -->
@ -52,6 +92,30 @@ All notable changes to this project will be documented in this file.
### MODULES
- [[#1256](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1256)] **incompatible change:** Pin local provider ([ludoo](https://github.com/ludoo)) <!-- 2023-03-16 10:59:07+00:00 -->
- [[#1246](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1246)] Delay creation of SVPC host bindings until APIs and JIT SAs are done ([juliocc](https://github.com/juliocc)) <!-- 2023-03-14 14:16:59+00:00 -->
- [[#1241](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1241)] **incompatible change:** Allow using existing boot disk in compute-vm module ([ludoo](https://github.com/ludoo)) <!-- 2023-03-12 09:54:00+00:00 -->
- [[#1239](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1239)] Allow overriding name in net-vpc subnet factory ([ludoo](https://github.com/ludoo)) <!-- 2023-03-11 08:30:43+00:00 -->
- [[#1226](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1226)] Fix policy_based_routing.sh script on simple-nva module ([simonebruzzechesse](https://github.com/simonebruzzechesse)) <!-- 2023-03-10 17:36:08+00:00 -->
- [[#1234](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1234)] Fixed connection tracking configuration on LB backend in net-ilb module ([simonebruzzechesse](https://github.com/simonebruzzechesse)) <!-- 2023-03-10 14:25:30+00:00 -->
- [[#1232](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1232)] Network firewall policy module ([ludoo](https://github.com/ludoo)) <!-- 2023-03-10 08:21:50+00:00 -->
- [[#1219](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1219)] Network Connectivity Center module ([juliodiez](https://github.com/juliodiez)) <!-- 2023-03-09 15:01:51+00:00 -->
- [[#1227](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1227)] Add CMEK support on BQML blueprint ([lcaggio](https://github.com/lcaggio)) <!-- 2023-03-09 09:12:50+00:00 -->
- [[#1224](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1224)] Fix JIT notebook service account. ([lcaggio](https://github.com/lcaggio)) <!-- 2023-03-08 15:33:40+00:00 -->
- [[#1195](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1195)] Extended simple-nva module to manage BGP service running on FR routing docker container ([simonebruzzechesse](https://github.com/simonebruzzechesse)) <!-- 2023-03-08 08:43:13+00:00 -->
- [[#1211](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1211)] **incompatible change:** Add support for proxy and psc subnets to net-vpc module factory ([ludoo](https://github.com/ludoo)) <!-- 2023-03-05 16:08:43+00:00 -->
- [[#1206](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1206)] Dataproc module. Fix output. ([lcaggio](https://github.com/lcaggio)) <!-- 2023-03-02 12:59:19+00:00 -->
- [[#1205](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1205)] Fix issue with GKE cluster notifications topic & static output for pubsub module ([rosmo](https://github.com/rosmo)) <!-- 2023-03-02 10:43:40+00:00 -->
- [[#1204](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1204)] Fix url_redirect issue on net-glb module ([erabusi](https://github.com/erabusi)) <!-- 2023-03-02 06:51:40+00:00 -->
- [[#1199](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1199)] [Dataproc module] Fix Variables ([lcaggio](https://github.com/lcaggio)) <!-- 2023-03-01 11:16:11+00:00 -->
- [[#1200](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1200)] Add test for #1197 ([juliocc](https://github.com/juliocc)) <!-- 2023-03-01 09:15:13+00:00 -->
- [[#1198](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1198)] Fix secondary ranges in net-vpc readme ([ludoo](https://github.com/ludoo)) <!-- 2023-03-01 07:08:08+00:00 -->
- [[#1196](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1196)] Fix compute-vm:CloudKMS test for provider>=4.54.0 ([dan-farmer](https://github.com/dan-farmer)) <!-- 2023-02-28 15:53:41+00:00 -->
- [[#1194](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1194)] Fix HTTPS health check mismapped to HTTP in compute-mig and net-ilb modules ([jogoldberg](https://github.com/jogoldberg)) <!-- 2023-02-28 14:48:13+00:00 -->
- [[#1192](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1192)] Dataproc module: Fix outputs ([lcaggio](https://github.com/lcaggio)) <!-- 2023-02-28 10:47:23+00:00 -->
- [[#1190](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1190)] Dataproc Module ([lcaggio](https://github.com/lcaggio)) <!-- 2023-02-28 06:45:41+00:00 -->
- [[#1191](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1191)] Fix external gateway in VPN HA module ([ludoo](https://github.com/ludoo)) <!-- 2023-02-27 23:46:51+00:00 -->
- [[#1186](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1186)] Fix Workload Identity for ASM in GKE hub module ([valeriobponza](https://github.com/valeriobponza)) <!-- 2023-02-27 19:17:45+00:00 -->
- [[#1184](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1184)] **incompatible change:** Allow multiple peer gateways in VPN HA module ([ludoo](https://github.com/ludoo)) <!-- 2023-02-27 10:19:00+00:00 -->
- [[#1177](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1177)] Implemented conditional dynamic blocks for `google_access_context_manager_service_perimeter` `spec` and `status` ([calexandre](https://github.com/calexandre)) <!-- 2023-02-25 16:04:19+00:00 -->
- [[#1178](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1178)] adding meshconfig.googleapis.com to JIT list. ([valeriobponza](https://github.com/valeriobponza)) <!-- 2023-02-24 18:28:05+00:00 -->
@ -77,6 +141,13 @@ All notable changes to this project will be documented in this file.
### TOOLS
- [[#1242](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1242)] Remove container image workflows ([kunzese](https://github.com/kunzese)) <!-- 2023-03-13 07:39:04+00:00 -->
- [[#1231](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1231)] Simplify testing workflow ([juliocc](https://github.com/juliocc)) <!-- 2023-03-09 15:27:05+00:00 -->
- [[#1216](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1216)] Use composite action for test workflow prerequisite steps ([ludoo](https://github.com/ludoo)) <!-- 2023-03-06 10:44:58+00:00 -->
- [[#1215](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1215)] Try plugin cache, split examples tests ([ludoo](https://github.com/ludoo)) <!-- 2023-03-06 09:38:40+00:00 -->
- [[#1211](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1211)] **incompatible change:** Add support for proxy and psc subnets to net-vpc module factory ([ludoo](https://github.com/ludoo)) <!-- 2023-03-05 16:08:43+00:00 -->
- [[#1209](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1209)] Billing exclusion support for FAST mt resman ([ludoo](https://github.com/ludoo)) <!-- 2023-03-03 16:23:37+00:00 -->
- [[#1208](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1208)] Fix outdated go deps, dependabot alerts ([averbuks](https://github.com/averbuks)) <!-- 2023-03-03 06:15:09+00:00 -->
- [[#1182](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1182)] Bump actions versions ([juliocc](https://github.com/juliocc)) <!-- 2023-02-25 16:27:20+00:00 -->
- [[#1052](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/1052)] **incompatible change:** FAST multitenant bootstrap and resource management, rename org-level FAST stages ([ludoo](https://github.com/ludoo)) <!-- 2023-02-04 14:00:46+00:00 -->

CONTRIBUTING.md

@ -4,17 +4,26 @@ Contributors are the engine that keeps Fabric alive so if you were or are planni
## Table of Contents
[I just found a bug / have a feature request!](#i-just-found-a-bug--have-a-feature-request)
[Quick developer workflow](#quick-developer-workflow)
[Developer's Handbook](#developers-handbook)
- [The Zen of Fabric](#the-zen-of-fabric)
- [Design principles in action](#design-principles-in-action)
- [FAST stage design](#fast-stage-design)
- [Style guide reference](#style-guide-reference)
- [Checks, tests and tools](#interacting-with-checks-tests-and-tools)
* [I just found a bug / have a feature request](#i-just-found-a-bug---have-a-feature-request)
* [Quick developer workflow](#quick-developer-workflow)
* [Developer's handbook](#developers-handbook)
+ [The Zen of Fabric](#the-zen-of-fabric)
+ [Design principles in action](#design-principles-in-action)
+ [FAST stage design](#fast-stage-design)
+ [Style guide reference](#style-guide-reference)
+ [Interacting with checks and tools](#interacting-with-checks-and-tools)
* [Using and writing tests](#using-and-writing-tests)
+ [Testing via README.md example blocks.](#testing-via-readmemd-example-blocks)
- [Testing examples against an inventory YAML](#testing-examples-against-an-inventory-yaml)
- [Using external files](#using-external-files)
- [Running tests for specific examples](#running-tests-for-specific-examples)
- [Generating the inventory automatically](#generating-the-inventory-automatically)
- [Building tests for blueprints](#building-tests-for-blueprints)
+ [Testing via `tfvars` and `yaml` (aka `tftest`-based tests)](#testing-via--tfvars--and--yaml---aka--tftest--based-tests-)
- [Generating the inventory for `tftest`-based tests](#generating-the-inventory-for--tftest--based-tests)
+ [Writing tests in Python (legacy approach)](#writing-tests-in-python--legacy-approach-)
+ [Running tests from a temporary directory](#running-tests-from-a-temporary-directory)
* [Fabric tools](#fabric-tools)
## I just found a bug / have a feature request
@ -301,9 +310,11 @@ module "simple-vm-example" {
zone = "europe-west1-b"
name = "test"
boot_disk = {
image = "projects/debian-cloud/global/images/family/cos-97-lts"
type = "pd-balanced"
size = 10
initialize_params = {
image = "projects/debian-cloud/global/images/family/cos-97-lts"
type = "pd-balanced"
size = 10
}
}
}
```
@ -578,7 +589,7 @@ variable "prefix" {
}
```
### Interacting with checks, tests and tools
### Interacting with checks and tools
Our modules are designed for composition and live in a monorepo together with several end-to-end blueprints, so it was inevitable that over time we found ways of ensuring that a change does not break consumers.
@ -642,7 +653,7 @@ Options:
The test workflow runs test suites in parallel. Refer to the next section for more details on running and writing tests.
#### Using and writing tests
## Using and writing tests
Our testing approach follows a simple philosophy: we mainly test to ensure code works, and that it does not break due to changes to dependencies (modules) or provider resources.
@ -650,11 +661,239 @@ This makes testing very simple, as a successful `terraform plan` run in a test c
As our testing needs are very simple, we also wanted to reduce the friction required to write new tests as much as possible: our tests are written in Python and use `pytest` which is the standard for the language, leveraging our [`tftest`](https://pypi.org/project/tftest/) library, which wraps the Terraform executable and returns familiar data structures for most commands.
Writing `pytest` unit tests to check plan results is really easy, but since wrapping modules and examples in dedicated fixtures and hand-coding checks gets annoying after a while, we developed a thin layer that allows us to use `tfvars` files to run tests, and `yaml` results to check results. In some specific situations you might still want to interact directly with `tftest` via Python, if that's the case skip to the legacy approach below.
Writing `pytest` unit tests to check plan results is really easy, but since wrapping modules and examples in dedicated fixtures and hand-coding checks gets annoying after a while, we developed additional ways that allow us to simplify the overall process.
##### Testing end-to-end examples via `tfvars` and `yaml`
In the following sections we describe the three testing approaches we currently have:
Our new approach to testing requires you to:
- [Example-based tests](#testing-via-readmemd-example-blocks): this is perhaps the easiest and most common way to test either a module or a blueprint. You simply have to provide an example call to your module and a few metadata values in the module's README.md.
- [tfvars-based tests](#testing-via-tfvars-and-yaml): allows you to test a module or blueprint by providing variables via tfvar files and an expected plan result in the form of an inventory. This type of test is useful, for example, for FAST stages that don't have any examples within their READMEs.
- [Python-based (legacy) tests](#writing-tests-in-python--legacy-approach-): in some situations you might still want to interact directly with `tftest` via Python, if that's the case, use this method to write custom Python logic to test your module in any way you see fit.
### Testing via README.md example blocks.
This is the preferred method to write tests for modules and blueprints. Example-based tests are triggered from [HCL Markdown fenced code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks#syntax-highlighting) in any file named README.md, hence there's no need to create any additional files or resort to Python to write a test. Most of our documentation examples use this method.
To enable an example for testing just use the special `tftest` comment as the last line in the example, listing the number of modules and resources expected.
A [few preset variables](./tests/examples/variables.tf) are available for use, as shown in this example from the `dns` module documentation.
```hcl
module "private-dns" {
source = "./modules/dns"
project_id = "myproject"
type = "private"
name = "test-example"
domain = "test.example."
client_networks = [var.vpc.self_link]
recordsets = {
"A localhost" = { ttl = 300, records = ["127.0.0.1"] }
}
}
# tftest modules=1 resources=2
```
This is enough to tell our test suite to run this example and assert that the resulting plan has one module (`modules=1`) and two resources (`resources=2`).
Note that all HCL code examples in READMEs are automatically tested. To prevent this behavior, include `tftest skip` somewhere in the code.
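For instance, a minimal sketch of an example excluded from testing (the module call below is an illustrative placeholder):
```hcl
module "skipped-example" {
  source = "./modules/dns"
  # incomplete placeholder call, never planned by the test suite
}
# tftest skip
```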
#### Testing examples against an inventory YAML
If you want to go further, you can define a `yaml` "inventory" with the plan and output results you want to test.
Continuing with the example above, imagine you want to ensure the plan also includes the creation of the A record specified in the `recordsets` variable. To do this we add the `inventory` parameter to the `tftest` directive, as shown below.
```hcl
module "private-dns" {
  source          = "./modules/dns"
  project_id      = "myproject"
  type            = "private"
  name            = "test-example"
  domain          = "test.example."
  client_networks = [var.vpc.self_link]
  recordsets = {
    "A localhost" = { ttl = 300, records = ["127.0.0.1"] }
  }
}
# tftest modules=1 resources=2 inventory=recordsets.yaml
```
Next define the corresponding "inventory" `yaml` file which will be used to assert values from the plan. The inventory is loaded from `tests/[module path]/examples/[inventory_name]`. In our example we have to create `tests/modules/dns/examples/recordsets.yaml`.
In the inventory file you have three sections available, and all of them are optional:
- `values` is a map of resource indexes (the same ones used by Terraform state) and their attribute names and values; you can define just the attributes you are interested in and the rest will be ignored
- `counts` is a map of resource types (e.g. `google_compute_instance`) and the number of times each type occurs in the plan; here too, only define the ones that need checking
- `outputs` is a map of outputs and their values; where a value is unknown at plan time use the special `__missing__` token
Going back to our example, we create the inventory with values for the recordset and we also include the zone for good measure.
```yaml
# file: tests/modules/dns/examples/recordsets.yaml
values:
  module.private-dns.google_dns_managed_zone.non-public[0]:
    dns_name: test.example.
    forwarding_config: []
    name: test-example
    peering_config: []
    project: myproject
    reverse_lookup: false
    service_directory_config: []
    visibility: private
  module.private-dns.google_dns_record_set.cloud-static-records["A localhost"]:
    managed_zone: test-example
    name: localhost.test.example.
    project: myproject
    routing_policy: []
    rrdatas:
      - 127.0.0.1
    ttl: 300
    type: A
counts:
  google_dns_managed_zone: 1
  google_dns_record_set: 1
```
#### Using external files
In some situations your module might require additional files to be properly tested. This is common with modules that implement [factories](blueprints/factories/README.md) driving the creation of resources from YAML files. If that's your case, you can still use example-based tests, as described below:
- create your regular `hcl` code block example and add the `tftest` directive as described above.
- create a new code block with the contents of the additional file and use the `tftest-file` directive. You have to specify a label for the file and a relative path where the file will live.
- update your hcl code block to use the `files` parameter and pass a comma-separated list of file ids that you want to make available to the module.
Continuing with the DNS example, imagine you want to load the recordsets from a YAML file:
```hcl
module "private-dns" {
  source          = "./modules/dns"
  project_id      = "myproject"
  type            = "private"
  name            = "test-example"
  domain          = "test.example."
  client_networks = [var.vpc.self_link]
  recordsets      = yamldecode(file("records/example.yaml"))
}
# tftest modules=1 resources=3 files=records
```
```yaml
# tftest-file id=records path=records/example.yaml
A localhost:
  ttl: 300
  records: ["127.0.0.1"]
A myhost:
  ttl: 600
  records: ["10.10.0.1"]
```
Note that you can use the `files` parameter together with `inventory` to allow more fine-grained assertions. Please review the [subnet factory](modules/net-vpc#subnet-factory) in the `net-vpc` module for an example of this.
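As a sketch, the directive from the file-based example above could combine both parameters, assuming the inventory lives at `tests/modules/dns/examples/recordsets.yaml`:
```hcl
# tftest modules=1 resources=3 files=records inventory=recordsets.yaml
```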
#### Running tests for specific examples
As mentioned before, we use `pytest` as our test runner, so you can use any of the standard [test selection options](https://docs.pytest.org/en/latest/how-to/usage.html) available in `pytest`.
Example-based tests are named after the section within the README.md that contains them. You can use this name to select specific tests.
Here we show a few commonly used selection commands:
- Run all examples:
- `pytest tests/examples/`
- Run all examples for modules:
- `pytest -k modules/ tests/examples`
- Run all examples for the `net-vpc` module:
- `pytest -k 'net and vpc' tests/examples`
- Run a specific example in the `dns` module:
- `pytest -k 'modules and dns and private'`
- `pytest -v 'tests/examples/test_plan.py::test_example[modules/dns:Private Zone]'`
- Run tests for all blueprints except those under the gke directory:
- `pytest -k 'blueprints and not gke'`
Tip: you can use `pytest --collect-only` to fine-tune your selection query without actually running the tests. Once you find the expression matching your desired tests, remove the `--collect-only` flag.
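For example, to preview which example tests a keyword expression selects without running them:
```bash
pytest --collect-only -k 'modules and dns' tests/examples
```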
#### Generating the inventory automatically
Building an inventory file by hand is difficult. To simplify this task, the default test runner for examples prints the inventory for the full plan if it succeeds. Therefore, you can start without an inventory and then run a test to get the full plan and extract the pieces you want to build the inventory file.
Suppose you want to generate the inventory for the last DNS example above (the one creating the recordsets from a YAML file). Assuming that example is under the "Private Zone" section in the README for the `dns` module, you can run the following command to build the inventory:
```bash
pytest -s 'tests/examples/test_plan.py::test_example[modules/dns:Private Zone]'
```
which will generate an output similar to this:
```
==================================== test session starts ====================================
platform ... -- Python 3.11.2, pytest-7.2.1, pluggy-1.0.0
rootdir: ...
plugins: xdist-3.1.0
collected 1 item

tests/examples/test_plan.py
values:
  module.private-dns.google_dns_managed_zone.non-public[0]:
    description: Terraform managed.
    dns_name: test.example.
    dnssec_config: []
    force_destroy: false
    forwarding_config: []
    labels: null
    name: test-example
    peering_config: []
    private_visibility_config:
      - gke_clusters: []
        networks:
          - network_url: projects/xxx/global/networks/aaa
    project: myproject
    reverse_lookup: false
    service_directory_config: []
    timeouts: null
    visibility: private
  module.private-dns.google_dns_record_set.cloud-static-records["A localhost"]:
    managed_zone: test-example
    name: localhost.test.example.
    project: myproject
    routing_policy: []
    rrdatas:
      - 127.0.0.1
    ttl: 300
    type: A
  module.private-dns.google_dns_record_set.cloud-static-records["A myhost"]:
    managed_zone: test-example
    name: myhost.test.example.
    project: myproject
    routing_policy: []
    rrdatas:
      - 10.10.0.1
    ttl: 600
    type: A
counts:
  google_dns_managed_zone: 1
  google_dns_record_set: 2
  modules: 1
  resources: 3
outputs: {}
.
===================================== 1 passed in 3.46s =====================================
```
You can use that output to build the inventory file.
Note that for complex modules, the output can be very large and includes a lot of details about the resources. Extract only those resources and fields that are relevant to your test. There is a fine balance between asserting the critical bits related to your test scenario and including too many details that end up making the test too specific.
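As an illustration, a trimmed inventory derived from the plan output above might keep only the bits relevant to the new recordset:
```yaml
values:
  module.private-dns.google_dns_record_set.cloud-static-records["A myhost"]:
    rrdatas:
      - 10.10.0.1
    ttl: 600
counts:
  google_dns_record_set: 2
```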
#### Building tests for blueprints
Blueprints are generally used as top-level modules, which means their READMEs usually include sample values for their variables but no examples showing how to use them as modules.
If you want to test a blueprint using an example, we suggest adding a "Test" section at the end of the README and include the example there. See any existing blueprint for a [concrete example](blueprints/cloud-operations/asset-inventory-feed-remediation#test).
### Testing via `tfvars` and `yaml` (aka `tftest`-based tests)
The second approach to testing requires you to:
- create a folder in the right `tests` hierarchy where specific test files will be hosted
- define `tfvars` files each with a specific variable configuration to test
@ -663,11 +902,10 @@ Our new approach to testing requires you to:
Let's go through each step in succession, assuming you are testing the new `net-glb` module.
First create a new folder under `tests/modules` replacing any dash in the module name with underscores. Note that if you were testing a blueprint the folder would go in `tests/blueprints`.
```bash
mkdir tests/modules/net_glb
```
Then define a `tfvars` file with one of the module configurations you want to test. If you have a lot of variables which are shared across different tests, you can group all the common variables in a single `tfvars` file and associate it with each test's specific `tfvars` file (check the [organization module test](./tests/modules/organization/tftest.yaml) for an example).
```hcl
backend_buckets_config = {
  # ...
}
```
Next define the corresponding "inventory" `yaml` file which will be used to assert values from the plan that uses the `tfvars` file above. In the inventory file you have three sections available:
- `values` is a map of resource indexes (the same ones used by Terraform state) and their attribute names and values; you can define just the attributes you are interested in and the rest will be ignored
- `counts` is a map of resource types (e.g. `google_compute_instance`) and the number of times each type occurs in the plan; here too, only define the ones that need checking
- `outputs` is a map of outputs and their values; where a value is unknown at plan time use the special `__missing__` token
```yaml
module: modules/net-glb
# common_tfvars:
# - defaults.tfvars
tests:
  # run a test named `test-plan`, load the specified tfvars files
  # use the default inventory file of `test-plan.yaml`
  test-plan:
    tfvars: # if omitted, we load test-plan.tfvars by default
      - test-plan.tfvars
      - test-plan-extra.tfvars
    inventory:
      - test-plan.yaml
  # You can omit the tfvars and inventory sections and they will
  # default to the name of the test. The following two examples are equivalent:
  #
  # test-plan2:
  #   tfvars:
  #     - test-plan2.tfvars
  #   inventory:
  #     - test-plan2.yaml
  # test-plan2:
```
A good example of tests showing different ways of leveraging our framework is in the [`tests/modules/organization`](./tests/modules/organization) folder.
#### Generating the inventory for `tftest`-based tests
Just as you can generate an initial inventory for example-based tests, you can do the same for `tftest`-based tests. Currently the process relies on an additional tool (`tools/plan_summary.py`), but we have plans to unify both cases in the future.
As an example, if you want to generate the inventory for the `organization` module using the `common.tfvars` and `audit_config.tfvars` found in `tests/modules/organization/`, simply run `plan_summary.py` as follows:
```bash
$ python tools/plan_summary.py modules/organization \
    tests/modules/organization/common.tfvars \
    tests/modules/organization/audit_config.tfvars
values:
  google_organization_iam_audit_config.config["allServices"]:
    audit_log_config:
      - exempted_members:
          - user:me@example.org
        log_type: DATA_WRITE
      - exempted_members: []
        log_type: DATA_READ
    org_id: '1234567890'
    service: allServices
counts:
  google_organization_iam_audit_config: 1
  modules: 0
  resources: 1
outputs:
  custom_role_id: {}
  custom_roles: {}
  firewall_policies: {}
  firewall_policy_id: {}
  network_tag_keys: {}
  network_tag_values: {}
  organization_id: organizations/1234567890
  sink_writer_identities: {}
  tag_keys: {}
  tag_values: {}
```
You can now use this output to create the inventory file for your test. As mentioned before, please only use those values relevant to your test scenario.
### Writing tests in Python (legacy approach)
Where possible, we recommend using the testing methods described in the previous sections. However, if you need it, you can still write tests using Python directly.
In general, you should try to use the `plan_summary` fixture, which runs a Terraform plan and returns a `PlanSummary` object. The most important arguments to `plan_summary` are:
- the path of the Terraform module you want to test, relative to the root of the repository
@ -756,32 +1051,9 @@ def test_name(plan_summary, tfvars_to_yaml, tmp_path):
For more examples on how to write Python tests, check the tests for the [`organization`](./tests/modules/organization/test_plan_org_policies.py) module.
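The full fixture-based example is elided from this view; as a rough sketch (the `tf_var_files` argument name, the tfvars file name and the `counts` attribute are assumptions, not confirmed API), a test using the fixture might look like this:
```python
def test_plan(plan_summary):
  # hypothetical sketch: the module path is the fixture's first argument as
  # described above; `tf_var_files` and `counts` are assumed names
  s = plan_summary('modules/net-glb', tf_var_files=['test-plan.tfvars'])
  assert s.counts['resources'] > 0
```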
### Running tests from a temporary directory
Most of the time you can run tests using the `pytest` command as described in the previous sections. However, the `plan_summary` fixture allows copying the root module and running the test from a temporary directory.
To enable this option, just define the environment variable `TFTEST_COPY` and any tests using the `plan_summary` fixture will automatically run from a temporary directory.
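For example, assuming a POSIX shell:
```bash
TFTEST_COPY=1 pytest tests/modules/net_glb
```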
@ -796,7 +1068,7 @@ Running tests from temporary directories is useful if:
## Fabric tools
The main tool you will interact with in development is `tfdoc`, used to generate file, output and variable tables in README documents.
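As a sketch, assuming `tfdoc` is invoked with the path of the module or blueprint to document (the exact flags may differ):
```bash
./tools/tfdoc.py modules/net-vpc
```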

View File

@ -30,7 +30,7 @@ The current list of modules supports most of the core foundational and networkin
Currently available modules:
- **foundational** - [billing budget](./modules/billing-budget), [Cloud Identity group](./modules/cloud-identity-group/), [folder](./modules/folder), [service accounts](./modules/iam-service-account), [logging bucket](./modules/logging-bucket), [organization](./modules/organization), [project](./modules/project), [projects-data-source](./modules/projects-data-source)
- **networking** - [DNS](./modules/dns), [Cloud Endpoints](./modules/endpoints), [address reservation](./modules/net-address), [NAT](./modules/net-cloudnat), [Global Load Balancer (classic)](./modules/net-glb/), [L4 ILB](./modules/net-ilb), [L7 ILB](./modules/net-ilb-l7), [VPC](./modules/net-vpc), [VPC firewall](./modules/net-vpc-firewall), [VPC firewall policy](./modules/net-vpc-firewall-policy), [VPC peering](./modules/net-vpc-peering), [VPN dynamic](./modules/net-vpn-dynamic), [HA VPN](./modules/net-vpn-ha), [VPN static](./modules/net-vpn-static), [Service Directory](./modules/service-directory)
- **compute** - [VM/VM group](./modules/compute-vm), [MIG](./modules/compute-mig), [COS container](./modules/cloud-config-container/cos-generic-metadata/) (coredns, mysql, onprem, squid), [GKE cluster](./modules/gke-cluster), [GKE hub](./modules/gke-hub), [GKE nodepool](./modules/gke-nodepool)
- **data** - [BigQuery dataset](./modules/bigquery-dataset), [Bigtable instance](./modules/bigtable-instance), [Cloud SQL instance](./modules/cloudsql-instance), [Data Catalog Policy Tag](./modules/data-catalog-policy-tag), [Datafusion](./modules/datafusion), [Dataproc](./modules/dataproc), [GCS](./modules/gcs), [Pub/Sub](./modules/pubsub)
- **development** - [API Gateway](./modules/api-gateway), [Apigee](./modules/apigee), [Artifact Registry](./modules/artifact-registry), [Container Registry](./modules/container-registry), [Cloud Source Repository](./modules/source-repository)

View File

@ -6,10 +6,10 @@ Currently available blueprints:
- **apigee** - [Apigee Hybrid on GKE](./apigee/hybrid-gke/), [Apigee X analytics in BigQuery](./apigee/bigquery-analytics), [Apigee network patterns](./apigee/network-patterns/)
- **cloud operations** - [Active Directory Federation Services](./cloud-operations/adfs), [Cloud Asset Inventory feeds for resource change tracking and remediation](./cloud-operations/asset-inventory-feed-remediation), [Fine-grained Cloud DNS IAM via Service Directory](./cloud-operations/dns-fine-grained-iam), [Cloud DNS & Shared VPC design](./cloud-operations/dns-shared-vpc), [Delegated Role Grants](./cloud-operations/iam-delegated-role-grants), [Networking Dashboard](./cloud-operations/network-dashboard), [Managing on-prem service account keys by uploading public keys](./cloud-operations/onprem-sa-key-management), [Compute Image builder with Hashicorp Packer](./cloud-operations/packer-image-builder), [Packer example](./cloud-operations/packer-image-builder/packer), [Compute Engine quota monitoring](./cloud-operations/quota-monitoring), [Scheduled Cloud Asset Inventory Export to Bigquery](./cloud-operations/scheduled-asset-inventory-export-bq), [Configuring workload identity federation with Terraform Cloud/Enterprise workflows](./cloud-operations/terraform-cloud-dynamic-credentials), [TCP healthcheck and restart for unmanaged GCE instances](./cloud-operations/unmanaged-instances-healthcheck), [Migrate for Compute Engine (v5) blueprints](./cloud-operations/vm-migration), [Configuring workload identity federation to access Google Cloud resources from apps running on Azure](./cloud-operations/workload-identity-federation)
- **data solutions** - [GCE and GCS CMEK via centralized Cloud KMS](./data-solutions/cmek-via-centralized-kms), [Cloud Composer version 2 private instance, supporting Shared VPC and external CMEK key](./data-solutions/composer-2), [Cloud SQL instance with multi-region read replicas](./data-solutions/cloudsql-multiregion), [Data Platform](./data-solutions/data-platform-foundations), [Spinning up a foundation data pipeline on Google Cloud using Cloud Storage, Dataflow and BigQuery](./data-solutions/gcs-to-bq-with-least-privileges), [SQL Server Always On Groups blueprint](./data-solutions/sqlserver-alwayson), [Data Playground](./data-solutions/data-playground), [MLOps with Vertex AI](./data-solutions/vertex-mlops), [Shielded Folder](./data-solutions/shielded-folder), [BigQuery ML and Vertex AI Pipeline](./data-solutions/bq-ml)
- **factories** - [The why and the how of Resource Factories](./factories), [Google Cloud Identity Group Factory](./factories/cloud-identity-group-factory), [Google Cloud BQ Factory](./factories/bigquery-factory), [Google Cloud VPC Firewall Factory](./factories/net-vpc-firewall-yaml), [Minimal Project Factory](./factories/project-factory)
- **GKE** - [Binary Authorization Pipeline Blueprint](./gke/binauthz), [Storage API](./gke/binauthz/image), [Multi-cluster mesh on GKE (fleet API)](./gke/multi-cluster-mesh-gke-fleet-api), [GKE Multitenant Blueprint](./gke/multitenant-fleet), [Shared VPC with GKE support](./networking/shared-vpc-gke/), [GKE Autopilot](./gke/autopilot)
- **networking** - [Calling a private Cloud Function from On-premises](./networking/private-cloud-function-from-onprem), [Decentralized firewall management](./networking/decentralized-firewall), [Decentralized firewall validator](./networking/decentralized-firewall/validator), [Network filtering with Squid](./networking/filtering-proxy), [GLB and multi-regional daisy-chaining through hybrid NEGs](./networking/glb-hybrid-neg-internal), [Hybrid connectivity to on-premise services through PSC](./networking/psc-hybrid), [HTTP Load Balancer with Cloud Armor](./networking/glb-and-armor), [Hub and Spoke via VPN](./networking/hub-and-spoke-vpn), [Hub and Spoke via VPC Peering](./networking/hub-and-spoke-peering), [Internal Load Balancer as Next Hop](./networking/ilb-next-hop), [Network filtering with Squid with isolated VPCs using Private Service Connect](./networking/filtering-proxy-psc), On-prem DNS and Google Private Access, [PSC Producer](./networking/psc-hybrid/psc-producer), [PSC Consumer](./networking/psc-hybrid/psc-consumer), [Shared VPC with optional GKE cluster](./networking/shared-vpc-gke)
- **serverless** - [Creating multi-region deployments for API Gateway](./serverless/api-gateway), [Cloud Run series](./serverless/cloud-run-explore)
- **third party solutions** - [OpenShift on GCP user-provisioned infrastructure](./third-party-solutions/openshift), [Wordpress deployment on Cloud Run](./third-party-solutions/wordpress/cloudrun)

View File

@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}

View File

@ -30,9 +30,11 @@ module "mgmt_server" {
}]
service_account_create = true
boot_disk = {
  initialize_params = {
    image = var.mgmt_server_config.image
    type  = var.mgmt_server_config.disk_type
    size  = var.mgmt_server_config.disk_size
  }
}
metadata = {
startup-script = <<EOT

View File

@ -83,9 +83,9 @@ module "instance_template" {
addresses = null
}]
boot_disk = {
  initialize_params = {
    image = "projects/cos-cloud/global/images/family/cos-stable"
  }
}
create_template = true
metadata = {

View File

@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}

View File

@ -81,9 +81,11 @@ module "server" {
}
service_account_create = true
boot_disk = {
  initialize_params = {
    image = var.image
    type  = var.disk_type
    size  = var.disk_size
  }
}
group = {
named_ports = {

View File

@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}

View File

@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}

View File

@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}

View File

@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}

View File

@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}

View File

@ -266,11 +266,13 @@ def main_cf_pubsub(event, context):
              help='Load JSON resources from file, skips init and discovery.')
@click.option('--debug-plugin',
              help='Run only core and specified timeseries plugin.')
@click.option('--debug', is_flag=True, default=False,
              help='Turn on debug logging.')
def main(discovery_root, monitoring_project, project=None, folder=None,
         custom_quota_file=None, dump_file=None, load_file=None,
         debug_plugin=None, debug=False):
  'CLI entry point.'
  logging.basicConfig(level=logging.INFO if not debug else logging.DEBUG)
  if discovery_root.partition('/')[0] not in ('folders', 'organizations'):
    raise SystemExit('Invalid discovery root.')
  descriptors = []

View File

@ -98,7 +98,7 @@ def _handle_resource(resources, asset_type, data):
# derive parent type and id and skip if parent is not within scope
parent_data = _get_parent(data['parent'], resources)
if not parent_data:
LOGGER.debug(f'{resource["self_link"]} outside perimeter')
LOGGER.debug([
resources['organization'], resources['folders'],
resources['projects:number']

View File

@ -45,6 +45,7 @@ def _handle_discovery(resources, response):
self_link = part.get('selfLink')
if not self_link:
  logging.warn('invalid quota response')
  continue
self_link = self_link.split('/')
if kind == 'compute#project':
  project_id = self_link[-1]

View File

@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}

View File

@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}

View File

@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}

View File

@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}

View File

@ -0,0 +1,23 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
billing_account = "xxx"
project_create = false
project_id = "xxx"
parent = "organizations/xxx"
tfc_organization_id = "org-xxxxxxxxxxxxx"
tfc_workspace_id = "ws-xxxxxxxxxxxxx"
workload_identity_pool_id = "tfc-pool"
workload_identity_pool_provider_id = "tfc-provider"
issuer_uri = "https://app.terraform.io/"

View File

@ -3,12 +3,10 @@ module example.com/restarter
go 1.16
require (
	cloud.google.com/go/pubsub v1.28.0
	github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
	golang.org/x/net v0.7.0
	golang.org/x/text v0.7.0
	google.golang.org/api v0.111.0
	google.golang.org/genproto v0.0.0-20230301171018-9ab4bdc49ad5 // indirect
)

View File

@ -234,9 +234,11 @@ module "test-vm" {
zone = "${var.region}-b"
name = "nginx-test"
boot_disk = {
  initialize_params = {
    image = "projects/cos-cloud/global/images/family/cos-stable"
    type  = "pd-ssd"
    size  = 10
  }
}
metadata = {
user-data = module.cos-nginx.cloud_config

View File

@ -69,3 +69,9 @@ This [blueprint](./vertex-mlops/) implements the infrastructure required to have
This [blueprint](./shielded-folder/) implements an opinionated folder configuration according to GCP best practices. Configurations implemented on the folder would be beneficial to host workloads inheriting constraints from the folder they belong to.
<br clear="left">
### BigQuery ML and Vertex AI Pipeline
<a href="./bq-ml/" title="BigQuery ML and Vertex AI Pipeline"><img src="./bq-ml/images/diagram.png" align="left" width="280px"></a>
This [blueprint](./bq-ml/) provides the necessary infrastructure to create a complete development environment for building and deploying machine learning models using BigQuery ML and Vertex AI. With this blueprint, you can deploy your models to a Vertex AI endpoint or use them within BigQuery ML.
<br clear="left">

View File

@ -0,0 +1,102 @@
# BigQuery ML and Vertex AI Pipeline
This blueprint provides the necessary infrastructure to create a complete development environment for building and deploying machine learning models using BigQuery ML and Vertex AI. With this blueprint, you can deploy your models to a Vertex AI endpoint or use them within BigQuery ML.
This is the high-level diagram:
![High-level diagram](diagram.png "High-level diagram")
It also includes the IAM wiring needed to make such scenarios work. Regional resources are used in this example, but the same logic applies to 'dual regional', 'multi regional', or 'global' resources.
The example is designed to match real-world use cases with a minimum amount of resources and to be used as a starting point for your scenario.
## Managed resources and services
This sample creates several distinct groups of resources:
- Networking
  - VPC network
  - Subnet
  - Firewall rules for SSH access via IAP and open communication within the VPC
  - Cloud NAT
- IAM
  - Vertex AI workbench service account
  - Vertex AI pipeline service account
- Storage
  - GCS bucket
  - BigQuery dataset
## Customization
### Virtual Private Cloud (VPC) design
As is often the case in real-world configurations, this blueprint accepts an existing Shared VPC as input via the `vpc_config` variable.
### Customer Managed Encryption Keys
As is often the case in real-world configurations, this blueprint accepts existing Cloud KMS keys as input, used to encrypt resources, via the `service_encryption_keys` variable.
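For instance, a sketch passing both an existing Shared VPC and CMEK keys (all identifiers below are placeholders):
```hcl
module "bq-ml" {
  source     = "./fabric/blueprints/data-solutions/bq-ml/"
  project_id = "my-project"
  prefix     = "prefix"
  vpc_config = {
    host_project      = "my-host-project"
    network_self_link = "projects/my-host-project/global/networks/my-vpc"
    subnet_self_link  = "projects/my-host-project/regions/us-central1/subnetworks/my-subnet"
  }
  service_encryption_keys = {
    bq      = "projects/my-kms-project/locations/us/keyRings/my-kr/cryptoKeys/bq"
    storage = "projects/my-kms-project/locations/us/keyRings/my-kr/cryptoKeys/storage"
  }
}
```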
## Demo
In the [`demo`](./demo/) folder, you can find an example of creating a Vertex AI pipeline from a publicly available dataset and deploying the model to be used from a Vertex AI managed endpoint or from within BigQuery.
To run the demo:
- Connect to the Vertex AI workbench instance
- Clone this repository
- Run the [`demo/bmql_pipeline.ipynb`](demo/bmql_pipeline.ipynb) Jupyter Notebook
## Files
| name | description | modules | resources |
|---|---|---|---|
| [datastorage.tf](./datastorage.tf) | Datastorage resources. | <code>bigquery-dataset</code> · <code>gcs</code> | |
| [main.tf](./main.tf) | Core resources. | <code>project</code> | |
| [outputs.tf](./outputs.tf) | Output variables. | | |
| [variables.tf](./variables.tf) | Terraform variables. | | |
| [versions.tf](./versions.tf) | Version pins. | | |
| [vertex.tf](./vertex.tf) | Vertex resources. | <code>iam-service-account</code> | <code>google_notebooks_instance</code> · <code>google_vertex_ai_metadata_store</code> |
| [vpc.tf](./vpc.tf) | VPC resources. | <code>net-cloudnat</code> · <code>net-vpc</code> · <code>net-vpc-firewall</code> | <code>google_project_iam_member</code> |
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [prefix](variables.tf#L23) | Prefix used for resource names. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L41) | Project id references existing project if `project_create` is null. | <code>string</code> | ✓ | |
| [location](variables.tf#L17) | The location where resources will be deployed. | <code>string</code> | | <code>&#34;US&#34;</code> |
| [project_create](variables.tf#L32) | Provide values if project creation is needed, use existing project if null. Parent format: folders/folder_id or organizations/org_id. | <code title="object&#40;&#123;&#10; billing_account_id &#61; string&#10; parent &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [region](variables.tf#L46) | The region where resources will be deployed. | <code>string</code> | | <code>&#34;us-central1&#34;</code> |
| [service_encryption_keys](variables.tf#L52) | Cloud KMS to use to encrypt different services. The key location should match the service region. | <code title="object&#40;&#123;&#10; aiplatform &#61; optional&#40;string, null&#41;&#10; bq &#61; optional&#40;string, null&#41;&#10; compute &#61; optional&#40;string, null&#41;&#10; storage &#61; optional&#40;string, null&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [vpc_config](variables.tf#L63) | Shared VPC network configurations to use. If null networks will be created in projects with pre-configured values. | <code title="object&#40;&#123;&#10; host_project &#61; string&#10; network_self_link &#61; string&#10; subnet_self_link &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [bucket](outputs.tf#L17) | GCS Bucket URL. | |
| [dataset](outputs.tf#L22) | BigQuery dataset id. | |
| [notebook](outputs.tf#L27) | Vertex AI notebook details. | |
| [project](outputs.tf#L35) | Project id. | |
| [service-account-vertex](outputs.tf#L40) | Service account to be used for Vertex AI pipelines. | |
| [vertex-ai-metadata-store](outputs.tf#L45) | Vertex AI Metadata Store ID. | |
| [vpc](outputs.tf#L50) | VPC Network. | |
<!-- END TFDOC -->
## Test
```hcl
module "test" {
  source = "./fabric/blueprints/data-solutions/bq-ml/"
  project_create = {
    billing_account_id = "123456-123456-123456"
    parent             = "folders/12345678"
  }
  project_id = "project-1"
  prefix     = "prefix"
}
# tftest modules=9 resources=47
```

View File

@ -0,0 +1,32 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tfdoc:file:description Datastorage resources.
module "bucket" {
source = "../../../modules/gcs"
project_id = module.project.project_id
prefix = var.prefix
location = var.location
name = "data"
encryption_key = try(local.service_encryption_keys.storage, null) # Example assignment of an encryption key
}
module "dataset" {
source = "../../../modules/bigquery-dataset"
project_id = module.project.project_id
id = "${replace(var.prefix, "-", "_")}_data"
encryption_key = try(local.service_encryption_keys.bq, null) # Example assignment of an encryption key
location = var.location
}

View File

@ -0,0 +1,40 @@
# BigQuery ML and Vertex AI Pipeline Demo
This demo shows how to combine BigQuery ML (BQML) and Vertex AI to create a ML pipeline leveraging the infrastructure created in the blueprint.
More in detail, this tutorial focuses on the following three steps:
- define a Vertex AI pipeline to create features, train and evaluate BQML models
- serve a BQ model through an API powered by Vertex AI Endpoint
- create batch prediction via BigQuery
In this tutorial we will also see how to make explainable predictions, in order to understand which features most influence the algorithm's output.
# Dataset
This tutorial uses a dataset of programmatically generated data from a fictitious e-commerce store called The Look. The dataset is publicly available on BigQuery at this location: `bigquery-public-data.thelook_ecommerce`.
# Goal
The goal of this tutorial is to train a classification ML model using BigQuery ML and predict if a new web session is going to convert.
The tutorial focuses on how to combine Vertex AI and BigQuery ML to create a model that can be used both for near-real-time and batch predictions, rather than on the design of the model itself.
# Main components
In this tutorial we will make use of the following main components:
- BigQuery:
  - standard: to create a view which contains the model features and the target variable
  - ML: to train, evaluate and make batch predictions
- Vertex AI:
  - Pipeline: to define a configurable and re-usable set of steps to train and evaluate a BQML model
  - Experiment: to keep track of all the trainings done via the Pipeline
  - Model Registry: to keep track of the trained versions of a specific model
  - Endpoint: to serve the model via API
  - Workbench: to run this demo
# How to get started
1. Access the Vertex AI Workbench
2. Clone this repository
3. Run the [`bmql_pipeline.ipynb`](bmql_pipeline.ipynb) Jupyter Notebook

View File

@ -0,0 +1,464 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"**Copyright 2023 Google LLC**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Copyright 2023 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Install python requirements and import packages"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -r requirements.txt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import kfp\n",
"import google_cloud_pipeline_components.v1.bigquery as bqop\n",
"\n",
"from google.cloud import aiplatform as aip\n",
"from google.cloud import bigquery"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Set your env variables"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set your variables\n",
"PREFIX = 'your-prefix'\n",
"PROJECT_ID = 'your-project-id'"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"DATASET = \"{}_data\".format(PREFIX.replace(\"-\",\"_\")) \n",
"EXPERIMENT_NAME = 'bqml-experiment'\n",
"ENDPOINT_DISPLAY_NAME = 'bqml-endpoint'\n",
"LOCATION = 'US'\n",
"MODEL_NAME = 'bqml-model'\n",
"PIPELINE_NAME = 'bqml-vertex-pipeline'\n",
"PIPELINE_ROOT = f\"gs://{PREFIX}-data\"\n",
"REGION = 'us-central1'\n",
"SERVICE_ACCOUNT = f\"vertex-sa@{PROJECT_ID}.iam.gserviceaccount.com\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Vertex AI Pipeline Definition\n",
"\n",
"Let's first define the queries for the features and target creation and the query to train the model\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# this query creates the features for our model and the target value we would like to predict\n",
"\n",
"features_query = \"\"\"\n",
"CREATE VIEW if NOT EXISTS `{project_id}.{dataset}.ecommerce_abt` AS\n",
"WITH abt AS (\n",
" SELECT user_id,\n",
" session_id,\n",
" city,\n",
" postal_code,\n",
" browser,\n",
" traffic_source,\n",
" min(created_at) AS session_starting_ts,\n",
" sum(CASE WHEN event_type = 'purchase' THEN 1 ELSE 0 END) has_purchased\n",
" FROM `bigquery-public-data.thelook_ecommerce.events` \n",
" GROUP BY user_id,\n",
" session_id,\n",
" city,\n",
" postal_code,\n",
" browser,\n",
" traffic_source\n",
"), previous_orders AS (\n",
" SELECT user_id,\n",
" array_agg (struct(created_at AS order_creations_ts,\n",
" o.order_id,\n",
" o.status,\n",
" oi.order_cost)) as user_orders\n",
" FROM `bigquery-public-data.thelook_ecommerce.orders` o\n",
" JOIN (SELECT order_id,\n",
" sum(sale_price) order_cost \n",
" FROM `bigquery-public-data.thelook_ecommerce.order_items`\n",
" GROUP BY 1) oi\n",
" ON o.order_id = oi.order_id\n",
" GROUP BY 1\n",
")\n",
"SELECT abt.*,\n",
" CASE WHEN extract(DAYOFWEEK FROM session_starting_ts) IN (1,7)\n",
" THEN 'WEEKEND' \n",
" ELSE 'WEEKDAY'\n",
" END AS day_of_week,\n",
" extract(HOUR FROM session_starting_ts) hour_of_day,\n",
" (SELECT count(DISTINCT uo.order_id) \n",
" FROM unnest(user_orders) uo \n",
" WHERE uo.order_creations_ts < session_starting_ts \n",
" AND status IN ('Shipped', 'Complete', 'Processing')) AS number_of_successful_orders,\n",
" IFNULL((SELECT sum(DISTINCT uo.order_cost) \n",
" FROM unnest(user_orders) uo \n",
" WHERE uo.order_creations_ts < session_starting_ts \n",
" AND status IN ('Shipped', 'Complete', 'Processing')), 0) AS sum_previous_orders,\n",
" (SELECT count(DISTINCT uo.order_id) \n",
" FROM unnest(user_orders) uo \n",
" WHERE uo.order_creations_ts < session_starting_ts \n",
" AND status IN ('Cancelled', 'Returned')) AS number_of_unsuccessful_orders\n",
"FROM abt \n",
"LEFT JOIN previous_orders pso \n",
"ON abt.user_id = pso.user_id\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# this query create the train job on BQ ML\n",
"train_query = \"\"\"\n",
"CREATE OR REPLACE MODEL `{project_id}.{dataset}.{model_name}`\n",
"OPTIONS(MODEL_TYPE='{model_type}',\n",
" INPUT_LABEL_COLS=['has_purchased'],\n",
" ENABLE_GLOBAL_EXPLAIN=TRUE,\n",
" MODEL_REGISTRY='VERTEX_AI',\n",
" DATA_SPLIT_METHOD = 'RANDOM',\n",
" DATA_SPLIT_EVAL_FRACTION = {split_fraction}\n",
" ) AS \n",
"SELECT * EXCEPT (session_id, session_starting_ts, user_id) \n",
"FROM `{project_id}.{dataset}.ecommerce_abt`\n",
"WHERE extract(ISOYEAR FROM session_starting_ts) = 2022\n",
"\"\"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In the following code block, we are defining our Vertex AI pipeline. It is made up of three main steps:\n",
"1. Create a BigQuery dataset that will contain the BigQuery ML models\n",
"2. Train the BigQuery ML model, in this case, a logistic regression\n",
"3. Evaluate the BigQuery ML model with the standard evaluation metrics\n",
"\n",
"The pipeline takes as input the following variables:\n",
"- ```dataset```: name of the dataset where the artifacts will be stored\n",
"- ```evaluate_job_conf```: bq dict configuration to define where to store evaluation metrics\n",
"- ```location```: BigQuery location\n",
"- ```model_name```: the display name of the BigQuery ML model\n",
"- ```project_id```: the project id where the GCP resources will be created\n",
"- ```split_fraction```: the percentage of data that will be used as an evaluation dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@kfp.dsl.pipeline(name='bqml-pipeline', pipeline_root=PIPELINE_ROOT)\n",
"def pipeline(\n",
" model_name: str,\n",
" split_fraction: float,\n",
" evaluate_job_conf: dict, \n",
" dataset: str = DATASET,\n",
" project_id: str = PROJECT_ID,\n",
" location: str = LOCATION,\n",
" ):\n",
"\n",
" create_dataset = bqop.BigqueryQueryJobOp(\n",
" project=project_id,\n",
" location=location,\n",
" query=f'CREATE SCHEMA IF NOT EXISTS {dataset}'\n",
" )\n",
"\n",
" create_features_view = bqop.BigqueryQueryJobOp(\n",
" project=project_id,\n",
" location=location,\n",
" query=features_query.format(dataset=dataset, project_id=project_id),\n",
"\n",
" ).after(create_dataset)\n",
"\n",
" create_bqml_model = bqop.BigqueryCreateModelJobOp(\n",
" project=project_id,\n",
" location=location,\n",
" query=train_query.format(model_type = 'LOGISTIC_REG'\n",
" , project_id = project_id\n",
" , dataset = dataset\n",
" , model_name = model_name\n",
" , split_fraction=split_fraction)\n",
" ).after(create_features_view)\n",
"\n",
" evaluate_bqml_model = bqop.BigqueryEvaluateModelJobOp(\n",
" project=project_id,\n",
" location=location,\n",
" model=create_bqml_model.outputs[\"model\"],\n",
" job_configuration_query=evaluate_job_conf\n",
" ).after(create_bqml_model)\n",
"\n",
"\n",
"# this is to compile our pipeline and generate the json description file\n",
"kfp.v2.compiler.Compiler().compile(pipeline_func=pipeline,\n",
" package_path=f'{PIPELINE_NAME}.json') "
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Create Experiment\n",
"\n",
"We will create an experiment to keep track of our training and tasks on a specific issue or problem."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"my_experiment = aip.Experiment.get_or_create(\n",
" experiment_name=EXPERIMENT_NAME,\n",
" description='This is a new experiment to keep track of bqml trainings',\n",
" project=PROJECT_ID,\n",
" location=REGION\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Running the same training Vertex AI pipeline with different parameters\n",
"\n",
"One of the main tasks during the training phase is to compare different models or to try the same model with different inputs. We can leverage the power of Vertex AI Pipelines to submit the same steps with different training parameters. Thanks to the experiments artifact, it is possible to easily keep track of all the tests that have been done. This simplifies the process of selecting the best model to deploy.\n",
"\n",
"In this demo case, we will run the same training pipeline while changing the split data percentage between training and test data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# this configuration is needed in order to persist the evaluation metrics on big query\n",
"job_configuration_query = {\"destinationTable\": {\"projectId\": PROJECT_ID, \"datasetId\": DATASET}, \"writeDisposition\": \"WRITE_TRUNCATE\"}\n",
"\n",
"for split_fraction in [0.1, 0.2]:\n",
" job_configuration_query['destinationTable']['tableId'] = MODEL_NAME+'-fraction-{}-eval_table'.format(int(split_fraction*100))\n",
" pipeline = aip.PipelineJob(\n",
" parameter_values = {'split_fraction':split_fraction, 'model_name': MODEL_NAME+'-fraction-{}'.format(int(split_fraction*100)), 'evaluate_job_conf': job_configuration_query },\n",
" display_name=PIPELINE_NAME,\n",
" template_path=f'{PIPELINE_NAME}.json',\n",
" pipeline_root=PIPELINE_ROOT,\n",
" enable_caching=True\n",
" )\n",
"\n",
" pipeline.submit(service_account=SERVICE_ACCOUNT, experiment=my_experiment)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Deploy the model on a Vertex AI endpoint\n",
"\n",
"Thanks to the integration of Vertex AI Endpoint, creating a live endpoint to serve the model we prefer is very straightforward."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get the model from the Model Registry \n",
"model = aip.Model(model_name=f'{MODEL_NAME}-fraction-10')\n",
"\n",
"# let's create a Vertex Endpoint where we will deploy the ML model\n",
"endpoint = aip.Endpoint.create(\n",
" display_name=ENDPOINT_DISPLAY_NAME,\n",
" project=PROJECT_ID,\n",
" location=REGION,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# deploy the BigQuery ML model on Vertex Endpoint\n",
"# have a coffe - this step can take up 10/15 minutes to finish\n",
"model.deploy(endpoint=endpoint, deployed_model_display_name='bqml-deployed-model')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Let's get a prediction from new data\n",
"inference_test = {\n",
" 'postal_code': '97700-000',\n",
" 'number_of_successful_orders': 0,\n",
" 'city': 'Santiago',\n",
" 'sum_previous_orders': 1,\n",
" 'number_of_unsuccessful_orders': 0,\n",
" 'day_of_week': 'WEEKDAY',\n",
" 'traffic_source': 'Facebook',\n",
" 'browser': 'Firefox',\n",
" 'hour_of_day': 20\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"my_prediction = endpoint.predict([inference_test])\n",
"\n",
"my_prediction"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# batch prediction on BigQuery\n",
"\n",
"explain_predict_query = \"\"\"\n",
"SELECT *\n",
"FROM ML.EXPLAIN_PREDICT(MODEL `{project_id}.{dataset}.{model_name}`,\n",
" (SELECT * EXCEPT (session_id, session_starting_ts, user_id, has_purchased) \n",
" FROM `{project_id}.{dataset}.ecommerce_abt`\n",
" WHERE extract(ISOYEAR FROM session_starting_ts) = 2023),\n",
" STRUCT(5 AS top_k_features, 0.5 AS threshold))\n",
"LIMIT 100\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# batch prediction on BigQuery\n",
"\n",
"client = bigquery_client = bigquery.Client(location=LOCATION, project=PROJECT_ID)\n",
"batch_predictions = bigquery_client.query(\n",
" explain_predict_query.format(\n",
" project_id=PROJECT_ID,\n",
" dataset=DATASET,\n",
" model_name=f'{MODEL_NAME}-fraction-10')\n",
" ).to_dataframe()\n",
"\n",
"batch_predictions"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Conclusions\n",
"\n",
"Thanks to this tutorial we were able to:\n",
"- Define a re-usable Vertex AI pipeline to train and evaluate BQ ML models\n",
"- Use a Vertex AI Experiment to keep track of multiple trainings for the same model with different paramenters (in this case a different split for train/test data)\n",
"- Deploy the preferred model on a Vertex AI managed Endpoint in order to serve the model for real-time use cases via API\n",
"- Make batch prediction via Big Query and see what are the top 5 features which influenced the algorithm output"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.8.9"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -0,0 +1,2 @@
kfp==1.8.19
google-cloud-pipeline-components==1.0.39

(Binary image file added, not shown; size 56 KiB.)

View File

@ -0,0 +1,66 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tfdoc:file:description Core resources.
locals {
  service_encryption_keys = var.service_encryption_keys
  shared_vpc_project      = try(var.vpc_config.host_project, null)
  subnet = (
    local.use_shared_vpc
    ? var.vpc_config.subnet_self_link
    : values(module.vpc.0.subnet_self_links)[0]
  )
  use_shared_vpc = var.vpc_config != null
  vpc = (
    local.use_shared_vpc
    ? var.vpc_config.network_self_link
    : module.vpc.0.self_link
  )
}

module "project" {
  source          = "../../../modules/project"
  name            = var.project_id
  parent          = try(var.project_create.parent, null)
  billing_account = try(var.project_create.billing_account_id, null)
  project_create  = var.project_create != null
  prefix          = var.project_create == null ? null : var.prefix
  services = [
    "aiplatform.googleapis.com",
    "bigquery.googleapis.com",
    "bigquerystorage.googleapis.com",
    "bigqueryreservation.googleapis.com",
    "compute.googleapis.com",
    "ml.googleapis.com",
    "notebooks.googleapis.com",
    "servicenetworking.googleapis.com",
    "stackdriver.googleapis.com",
    "storage.googleapis.com",
    "storage-component.googleapis.com"
  ]
  shared_vpc_service_config = local.shared_vpc_project == null ? null : {
    attach       = true
    host_project = local.shared_vpc_project
  }
  service_encryption_key_ids = {
    aiplatform = [try(local.service_encryption_keys.compute, null)]
    compute    = [try(local.service_encryption_keys.compute, null)]
    bq         = [try(local.service_encryption_keys.bq, null)]
    storage    = [try(local.service_encryption_keys.storage, null)]
  }
  service_config = {
    disable_on_destroy = false, disable_dependent_services = false
  }
}

View File

@ -0,0 +1,53 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tfdoc:file:description Output variables.
output "bucket" {
description = "GCS Bucket URL."
value = module.bucket.url
}
output "dataset" {
description = "GCS Bucket URL."
value = module.dataset.id
}
output "notebook" {
description = "Vertex AI notebook details."
value = {
name = resource.google_notebooks_instance.playground.name
id = resource.google_notebooks_instance.playground.id
}
}
output "project" {
description = "Project id."
value = module.project.project_id
}
output "service-account-vertex" {
description = "Service account to be used for Vertex AI pipelines."
value = module.service-account-vertex.email
}
output "vertex-ai-metadata-store" {
description = "Vertex AI Metadata Store ID."
value = google_vertex_ai_metadata_store.store.id
}
output "vpc" {
description = "VPC Network."
value = local.vpc
}


@ -0,0 +1,71 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tfdoc:file:description Terraform variables.
variable "location" {
description = "The location where resources will be deployed."
type = string
default = "US"
}
variable "prefix" {
description = "Prefix used for resource names."
type = string
validation {
condition = var.prefix != ""
error_message = "Prefix cannot be empty."
}
}
variable "project_create" {
description = "Provide values if project creation is needed, use existing project if null. Parent format: folders/folder_id or organizations/org_id."
type = object({
billing_account_id = string
parent = string
})
default = null
}
variable "project_id" {
description = "Project id references existing project if `project_create` is null."
type = string
}
variable "region" {
description = "The region where resources will be deployed."
type = string
default = "us-central1"
}
variable "service_encryption_keys" {
description = "Cloud KMS to use to encrypt different services. The key location should match the service region."
type = object({
aiplatform = optional(string, null)
bq = optional(string, null)
compute = optional(string, null)
storage = optional(string, null)
})
default = null
}
variable "vpc_config" {
description = "Shared VPC network configurations to use. If null networks will be created in projects with pre-configured values."
type = object({
host_project = string
network_self_link = string
subnet_self_link = string
})
default = null
}
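Taken together, `vpc_config` and the other variables above support two network modes. As a minimal sketch of the Shared VPC case (not a tested configuration: every project id and self link below is a placeholder), a `terraform.tfvars` could look like this:

```hcl
prefix     = "prefix"
project_id = "data-001"
region     = "us-central1"

# Placeholders: point these at your actual host project, network and subnet.
vpc_config = {
  host_project      = "my-host-project"
  network_self_link = "https://www.googleapis.com/compute/v1/projects/my-host-project/global/networks/my-shared-vpc"
  subnet_self_link  = "https://www.googleapis.com/compute/v1/projects/my-host-project/regions/us-central1/subnetworks/my-subnet"
}
```

Leaving `vpc_config` at its `null` default instead makes the blueprint create and manage its own VPC, firewall rules and Cloud NAT, as shown in the VPC resources file further down.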


@ -4,7 +4,7 @@
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
@ -12,19 +12,22 @@
# See the License for the specific language governing permissions and
# limitations under the License.
name: "Build and push the Toolbox container image"
terraform {
required_version = ">= 1.3.1"
required_providers {
google = {
source = "hashicorp/google"
version = ">= 4.55.0" # tftest
}
google-beta = {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}
on:
workflow_dispatch:
push:
branches:
- master
paths:
- 'modules/cloud-config-container/onprem/docker-images/toolbox/**'
jobs:
build-push-toolbox-container-image:
uses: ./.github/workflows/container-image.yml
with:
image_name: fabric-toolbox
docker_context: modules/cloud-config-container/onprem/docker-images/toolbox


@ -0,0 +1,111 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tfdoc:file:description Vertex resources.
resource "google_vertex_ai_metadata_store" "store" {
provider = google-beta
project = module.project.project_id
name = "default"
description = "Vertex Ai Metadata Store"
region = var.region
dynamic "encryption_spec" {
for_each = try(var.service_encryption_keys.aiplatform, null) == null ? [] : [""]
content {
kms_key_name = try(var.service_encryption_keys.aiplatform, null)
}
}
# `state` value will be decided automatically based on the result of the configuration
lifecycle {
ignore_changes = [state]
}
}
module "service-account-notebook" {
source = "../../../modules/iam-service-account"
project_id = module.project.project_id
name = "notebook-sa"
iam_project_roles = {
(module.project.project_id) = [
"roles/bigquery.admin",
"roles/bigquery.jobUser",
"roles/bigquery.dataEditor",
"roles/bigquery.user",
"roles/dialogflow.client",
"roles/storage.admin",
"roles/aiplatform.user",
"roles/iam.serviceAccountUser"
]
}
}
module "service-account-vertex" {
source = "../../../modules/iam-service-account"
project_id = module.project.project_id
name = "vertex-sa"
iam_project_roles = {
(module.project.project_id) = [
"roles/bigquery.admin",
"roles/bigquery.jobUser",
"roles/bigquery.dataEditor",
"roles/bigquery.user",
"roles/dialogflow.client",
"roles/storage.admin",
"roles/aiplatform.user"
]
}
}
resource "google_notebooks_instance" "playground" {
name = "${var.prefix}-notebook"
location = format("%s-%s", var.region, "b")
machine_type = "e2-medium"
project = module.project.project_id
container_image {
repository = "gcr.io/deeplearning-platform-release/base-cpu"
tag = "latest"
}
install_gpu_driver = true
boot_disk_type = "PD_SSD"
boot_disk_size_gb = 110
disk_encryption = try(local.service_encryption_keys.compute != null, false) ? "CMEK" : null
kms_key = try(local.service_encryption_keys.compute, null)
no_public_ip = true
no_proxy_access = false
network = local.vpc
subnet = local.subnet
service_account = module.service-account-notebook.email
# Enable Secure Boot
shielded_instance_config {
enable_secure_boot = true
}
# Remove once terraform-provider-google/issues/9164 is fixed
lifecycle {
ignore_changes = [disk_encryption, kms_key]
}
#TODO Uncomment once terraform-provider-google/issues/9273 is fixed
# tags = ["ssh"]
depends_on = [
google_project_iam_member.shared_vpc,
]
}


@ -0,0 +1,64 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tfdoc:file:description VPC resources.
module "vpc" {
source = "../../../modules/net-vpc"
count = local.use_shared_vpc ? 0 : 1
project_id = module.project.project_id
name = "${var.prefix}-vpc"
subnets = [
{
ip_cidr_range = "10.0.0.0/20"
name = "${var.prefix}-subnet"
region = var.region
}
]
}
module "vpc-firewall" {
source = "../../../modules/net-vpc-firewall"
count = local.use_shared_vpc ? 0 : 1
project_id = module.project.project_id
network = module.vpc.0.name
default_rules_config = {
admin_ranges = ["10.0.0.0/20"]
}
ingress_rules = {
#TODO Remove and rely on 'ssh' tag once terraform-provider-google/issues/9273 is fixed
("${var.prefix}-iap") = {
description = "Enable SSH from IAP on Notebooks."
source_ranges = ["35.235.240.0/20"]
targets = ["notebook-instance"]
rules = [{ protocol = "tcp", ports = [22] }]
}
}
}
module "cloudnat" {
source = "../../../modules/net-cloudnat"
count = local.use_shared_vpc ? 0 : 1
project_id = module.project.project_id
name = "${var.prefix}-default"
region = var.region
router_network = module.vpc.0.name
}
resource "google_project_iam_member" "shared_vpc" {
count = local.use_shared_vpc ? 1 : 0
project = var.vpc_config.host_project
role = "roles/compute.networkUser"
member = "serviceAccount:${module.project.service_accounts.robots.notebooks}"
}


@ -46,9 +46,11 @@ module "test-vm" {
service_account = module.service-account-sql.email
service_account_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
boot_disk = {
image = "projects/debian-cloud/global/images/family/debian-10"
type = "pd-ssd"
size = 10
initialize_params = {
image = "projects/debian-cloud/global/images/family/debian-10"
type = "pd-ssd"
size = 10
}
}
encryption = var.service_encryption_keys != null ? {
encrypt_boot = true


@ -134,10 +134,11 @@ module "vm_example" {
}
]
boot_disk = {
image = "projects/debian-cloud/global/images/family/debian-10"
type = "pd-ssd"
size = 10
encrypt_disk = true
initialize_params = {
image = "projects/debian-cloud/global/images/family/debian-10"
type = "pd-ssd"
size = 10
}
}
tags = ["ssh"]
encryption = {


@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}


@ -96,14 +96,14 @@ service_encryption_keys = {
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [prefix](variables.tf#L78) | Prefix used for resource names. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L96) | Project id, references existing project if `project_create` is null. | <code>string</code> | ✓ | |
| [prefix](variables.tf#L82) | Prefix used for resource names. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L100) | Project id, references existing project if `project_create` is null. | <code>string</code> | ✓ | |
| [composer_config](variables.tf#L17) | Composer environment configuration. It accepts only following attributes: `environment_size`, `software_config` and `workloads_config`. See [attribute reference](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/composer_environment#argument-reference---cloud-composer-2) for details on settings variables. | <code title="object&#40;&#123;&#10; environment_size &#61; string&#10; software_config &#61; any&#10; workloads_config &#61; object&#40;&#123;&#10; scheduler &#61; object&#40;&#10; &#123;&#10; cpu &#61; number&#10; memory_gb &#61; number&#10; storage_gb &#61; number&#10; count &#61; number&#10; &#125;&#10; &#41;&#10; web_server &#61; object&#40;&#10; &#123;&#10; cpu &#61; number&#10; memory_gb &#61; number&#10; storage_gb &#61; number&#10; &#125;&#10; &#41;&#10; worker &#61; object&#40;&#10; &#123;&#10; cpu &#61; number&#10; memory_gb &#61; number&#10; storage_gb &#61; number&#10; min_count &#61; number&#10; max_count &#61; number&#10; &#125;&#10; &#41;&#10; &#125;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code title="&#123;&#10; environment_size &#61; &#34;ENVIRONMENT_SIZE_SMALL&#34;&#10; software_config &#61; &#123;&#10; image_version &#61; &#34;composer-2-airflow-2&#34;&#10; &#125;&#10; workloads_config &#61; null&#10;&#125;">&#123;&#8230;&#125;</code> |
| [iam_groups_map](variables.tf#L58) | Map of Role => groups to be added on the project. Example: { \"roles/composer.admin\" = [\"group:gcp-data-engineers@example.com\"]}. | <code>map&#40;list&#40;string&#41;&#41;</code> | | <code>null</code> |
| [network_config](variables.tf#L64) | Shared VPC network configurations to use. If null networks will be created in projects with preconfigured values. | <code title="object&#40;&#123;&#10; host_project &#61; string&#10; network_self_link &#61; string&#10; subnet_self_link &#61; string&#10; composer_secondary_ranges &#61; object&#40;&#123;&#10; pods &#61; string&#10; services &#61; string&#10; &#125;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [project_create](variables.tf#L87) | Provide values if project creation is needed, uses existing project if null. Parent is in 'folders/nnn' or 'organizations/nnn' format. | <code title="object&#40;&#123;&#10; billing_account_id &#61; string&#10; parent &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [region](variables.tf#L101) | Region where instances will be deployed. | <code>string</code> | | <code>&#34;europe-west1&#34;</code> |
| [service_encryption_keys](variables.tf#L107) | Cloud KMS keys to use to encrypt resources. Provide a key for each region in use. | <code>map&#40;string&#41;</code> | | <code>null</code> |
| [network_config](variables.tf#L64) | Shared VPC network configurations to use. If null networks will be created in projects with preconfigured values. | <code title="object&#40;&#123;&#10; host_project &#61; string&#10; network_self_link &#61; string&#10; subnet_self_link &#61; string&#10; composer_ip_ranges &#61; object&#40;&#123;&#10; cloudsql &#61; string&#10; gke_master &#61; string&#10; &#125;&#41;&#10; composer_secondary_ranges &#61; object&#40;&#123;&#10; pods &#61; string&#10; services &#61; string&#10; &#125;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [project_create](variables.tf#L91) | Provide values if project creation is needed, uses existing project if null. Parent is in 'folders/nnn' or 'organizations/nnn' format. | <code title="object&#40;&#123;&#10; billing_account_id &#61; string&#10; parent &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [region](variables.tf#L105) | Region where instances will be deployed. | <code>string</code> | | <code>&#34;europe-west1&#34;</code> |
| [service_encryption_keys](variables.tf#L111) | Cloud KMS keys to use to encrypt resources. Provide a key for each region in use. | <code>map&#40;string&#41;</code> | | <code>null</code> |
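For reference, a `composer_config` override matching the object type documented above might look as follows; this is a sketch only, and the workload sizes are illustrative values rather than defaults from this repository:

```hcl
composer_config = {
  environment_size = "ENVIRONMENT_SIZE_SMALL"
  software_config = {
    image_version = "composer-2-airflow-2"
  }
  workloads_config = {
    scheduler = {
      cpu        = 1
      memory_gb  = 2
      storage_gb = 1
      count      = 1
    }
    web_server = {
      cpu        = 1
      memory_gb  = 2
      storage_gb = 1
    }
    worker = {
      cpu        = 1
      memory_gb  = 2
      storage_gb = 1
      min_count  = 1
      max_count  = 3
    }
  }
}
```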
## Outputs
@ -113,7 +113,6 @@ service_encryption_keys = {
| [composer_dag_gcs](outputs.tf#L22) | The Cloud Storage prefix of the DAGs for the Cloud Composer environment. | |
<!-- END TFDOC -->
## Test
```hcl


@ -67,6 +67,10 @@ variable "network_config" {
host_project = string
network_self_link = string
subnet_self_link = string
composer_ip_ranges = object({
cloudsql = string
gke_master = string
})
composer_secondary_ranges = object({
pods = string
services = string


@ -191,10 +191,13 @@ The Data Platform is meant to be executed by a Service Account (or a regular use
There are three sets of variables you will need to fill in:
```tfvars
billing_account_id = "111111-222222-333333"
folder_id = "folders/123456789012"
organization_domain = "domain.com"
prefix = "myco"
prefix = "dat-plat"
project_config = {
parent = "folders/1111111111"
billing_account_id = "1111111-2222222-33333333"
}
organization_domain = "domain.com"
```
For more fine details check variables on [`variables.tf`](./variables.tf) and update according to the desired configuration. Remember to create team groups described [below](#groups).


@ -1,4 +1,6 @@
prefix = "prefix"
folder_id = "folders/123456789012"
billing_account_id = "111111-222222-333333"
organization_domain = "example.com"
prefix = "dat-plat"
project_config = {
parent = "folders/1111111111"
billing_account_id = "1111111-2222222-33333333"
}
organization_domain = "domain.com"


@ -17,30 +17,35 @@ This sample creates several distinct groups of resources:
- One BigQuery dataset
## Virtual Private Cloud (VPC) design
As is often the case in real-world configurations, this blueprint accepts as input an existing Shared VPC via the network_config variable. Make sure that 'container.googleapis.com', 'notebooks.googleapis.com' and 'servicenetworking.googleapis.com' are enabled in the VPC host project.
If the network_config variable is not provided, one VPC will be created in each project that supports network resources (load, transformation and orchestration).
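As a sketch of the Shared VPC case, assuming the attribute names follow the same pattern as the other blueprints in this document (check `variables.tf` for the authoritative definition), the configuration could be passed as:

```hcl
# Placeholders: substitute your actual host project, network and subnet.
network_config = {
  host_project      = "my-host-project"
  network_self_link = "projects/my-host-project/global/networks/my-shared-vpc"
  subnet_self_link  = "projects/my-host-project/regions/us-central1/subnetworks/my-subnet"
}
```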
## Deploy your environment
We assume the identity running the following steps has the following roles:
- resourcemanager.projectCreator in case a new project will be created.
- owner on the project in case you use an existing project.
Run Terraform init:
```
$ terraform init
terraform init
```
Configure the Terraform variables in your terraform.tfvars file. You need to specify at least the following variables:
```
prefix = "prefix"
project_id = "data-001"
```
You can run now:
```
$ terraform apply
terraform apply
```
You can now connect to the Vertex AI notebook to perform your data analysis.
@ -81,5 +86,5 @@ module "test" {
parent = "folders/467898377"
}
}
# tftest modules=8 resources=39
# tftest modules=8 resources=40
```


@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}


@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}


@ -73,9 +73,11 @@ module "nodes" {
}]
boot_disk = {
image = var.node_image
type = "pd-ssd"
size = var.boot_disk_size
initialize_params = {
image = var.node_image
type = "pd-ssd"
size = var.boot_disk_size
}
}
attached_disks = [{


@ -1,20 +1,23 @@
# MLOps with Vertex AI
## Introduction
This example implements the infrastructure required to deploy an end-to-end [MLOps process](https://services.google.com/fh/files/misc/practitioners_guide_to_mlops_whitepaper.pdf) using the [Vertex AI](https://cloud.google.com/vertex-ai) platform.
## GCP resources
The blueprint will deploy all the required resources to have a fully functional MLOps environment containing:
- Vertex Workbench (for the experimentation environment)
- GCP Project (optional) to host all the resources
- Isolated VPC network and a subnet to be used by Vertex and Dataflow. Alternatively, an external Shared VPC can be configured using the `network_config` variable.
- Firewall rule to allow the internal subnet communication required by Dataflow
- Cloud NAT required to reach the internet from the different computing resources (Vertex and Dataflow)
- GCS buckets to host Vertex AI and Cloud Build artifacts. By default the buckets will be regional and should match the Vertex AI region for the different resources (i.e. Vertex Managed Dataset) and processes (i.e. Vertex training)
- BigQuery Dataset where the training data will be stored. This is optional, since the training data could be already hosted in an existing BigQuery dataset.
- Artifact Registry Docker repository to host the custom images.
- Service account (`mlops-[env]@`) with the minimum permissions required by Vertex AI and Dataflow (if this service is used inside of the Vertex AI Pipeline).
- Service account (`github@`) to be used by Workload Identity Federation, to federate GitHub identity (optional).
- Secret to store the GitHub SSH key used to access the CI/CD code repo.
![MLOps project description](./images/mlops_projects.png "MLOps project description")
@ -28,13 +31,14 @@ Assign roles relying on User groups is a way to decouple the final set of permis
We use the following groups to control access to resources:
- *Data Scientists* (gcp-ml-ds@<company.org>). They manage notebooks and create ML pipelines.
- *ML Engineers* (gcp-ml-eng@<company.org>). They manage the different Vertex resources.
- *ML Viewer* (gcp-ml-eng@<company.org>). Group with viewer permission for the different resources.
Please note that these groups are not suitable for production-grade environments. Roles can be customized in the `main.tf` file.
## Instructions
### Deploy the experimentation environment
- Create a `terraform.tfvars` file and specify the variables to match your desired configuration. You can use the provided `terraform.tfvars.sample` as reference.
- Run `terraform init` and `terraform apply`
@ -76,6 +80,7 @@ This blueprint can be used as a building block for setting up an end-to-end MLOps
<!-- END TFDOC -->
## TODO
- Add support for User Managed Notebooks, SA permission option and non default SA for Single User mode.
- Improve default naming for local VPC and Cloud NAT
@ -105,5 +110,5 @@ module "test" {
parent = "folders/111111111111"
}
}
# tftest modules=12 resources=56
# tftest modules=12 resources=57
```


@ -23,6 +23,10 @@ terraform {
source = "hashicorp/google-beta"
version = ">= 4.55.0" # tftest
}
local = {
source = "hashicorp/local"
version = "2.2.3"
}
}
}


@ -21,6 +21,7 @@ They are meant to be used as minimal but complete starting points to create actu
### Multitenant GKE fleet
<a href="./multitenant-fleet/" title="GKE multitenant fleet"><img src="./multitenant-fleet/diagram.png" align="left" width="280px"></a> This [blueprint](./multitenant-fleet/) allows simple centralized management of similar sets of GKE clusters and their nodepools in a single project, and optional fleet management via GKE Hub templated configurations.
<br clear="left">
### Shared VPC with GKE and per-subnet support
@ -30,3 +31,9 @@ They are meant to be used as minimal but complete starting points to create actu
It is meant to be used as a starting point for most Shared VPC configurations, and to be integrated with the above blueprints where Shared VPC is needed in more complex network topologies.
<br clear="left">
### Autopilot
<a href="./autopilot" title="GKE autopilot"><img src="../networking/shared-vpc-gke/diagram.png" align="left" width="280px"></a> This [blueprint](./autopilot) creates an Autopilot cluster with Google-managed Prometheus enabled and installs an application that scales as the traffic that is hitting the load balancer exposing it grows.
<br clear="left">


@ -0,0 +1,95 @@
# Load testing an application running on an autopilot cluster
This blueprint creates an Autopilot cluster with Google-managed Prometheus enabled and installs an application that scales as the traffic hitting the load balancer that exposes it grows. It also installs the tooling required to run distributed load tests with [locust](https://locust.io) against that application, plus the monitoring tooling required to observe how things evolve in the cluster during the load test. Ansible is used to install the application and all the tooling on a management VM.
The diagram below depicts the architecture.
![Diagram](./diagram.png)
## Running the blueprint
1. Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fcloud-foundation-fabric&cloudshell_print=cloud-shell-readme.txt&cloudshell_working_dir=blueprints%2Fgke%2Fautopilot), then go through the following steps to create resources:
2. Initialize the terraform configuration
```
terraform init
```
3. Apply the terraform configuration
```
terraform apply -var project_id=my-project-id
```
4. Copy the IP addresses for Grafana and the Locust master.
5. Change to the ansible directory and run the following command
```
ansible-playbook -v playbook.yaml
```
6. Open the Locust master web interface URL in your browser and start the load test
7. SSH to the management VM
```
gcloud compute ssh mgmt --project my-project
```
8. Run the following command to check that the application pods are running on different nodes than the load testing and monitoring tooling.
```
kubectl get pods -A -o wide
```
9. Run the following command to see how the application pods scale
```
kubectl get hpa -n sample -w
```
10. Run the following command to see how the cluster nodes scale
```
kubectl get nodes -w
```
Alternatively, you can check all of the above using the dashboards available in Grafana.
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [project_id](variables.tf#L68) | Project ID. | <code>string</code> | ✓ | |
| [cluster_network_config](variables.tf#L17) | Cluster network configuration. | <code title="object&#40;&#123;&#10; nodes_cidr_block &#61; string&#10; pods_cidr_block &#61; string&#10; services_cidr_block &#61; string&#10; master_authorized_cidr_blocks &#61; map&#40;string&#41;&#10; master_cidr_block &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code title="&#123;&#10; nodes_cidr_block &#61; &#34;10.0.1.0&#47;24&#34;&#10; pods_cidr_block &#61; &#34;172.16.0.0&#47;20&#34;&#10; services_cidr_block &#61; &#34;192.168.0.0&#47;24&#34;&#10; master_authorized_cidr_blocks &#61; &#123;&#10; internal &#61; &#34;10.0.0.0&#47;8&#34;&#10; &#125;&#10; master_cidr_block &#61; &#34;10.0.0.0&#47;28&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [mgmt_server_config](variables.tf#L37) | Management server configuration. | <code title="object&#40;&#123;&#10; disk_size &#61; number&#10; disk_type &#61; string&#10; image &#61; string&#10; instance_type &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code title="&#123;&#10; disk_size &#61; 50&#10; disk_type &#61; &#34;pd-ssd&#34;&#10; image &#61; &#34;projects&#47;ubuntu-os-cloud&#47;global&#47;images&#47;family&#47;ubuntu-2204-lts&#34;&#10; instance_type &#61; &#34;n1-standard-2&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [mgmt_subnet_cidr_block](variables.tf#L53) | Management subnet IP CIDR range. | <code>string</code> | | <code>&#34;10.0.2.0&#47;24&#34;</code> |
| [project_create](variables.tf#L59) | Parameters for the creation of the new project. | <code title="object&#40;&#123;&#10; billing_account_id &#61; string&#10; parent &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [region](variables.tf#L73) | Region. | <code>string</code> | | <code>&#34;europe-west1&#34;</code> |
| [vpc_create](variables.tf#L79) | Flag indicating whether the VPC should be created or not. | <code>bool</code> | | <code>true</code> |
| [vpc_name](variables.tf#L85) | VPC name. | <code>string</code> | | <code>&#34;vpc&#34;</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [urls](outputs.tf#L17) | Grafana, Locust and application URLs. | |
<!-- END TFDOC -->
## Test
```hcl
module "test" {
source = "./fabric/blueprints/gke/autopilot"
project_create = {
billing_account_id = "12345-12345-12345"
parent = "folders/123456789"
}
project_id = "my-project"
}
# tftest modules=11 resources=34
```


@ -0,0 +1,37 @@
/**
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
# tfdoc:file:description Ansible generated files.
resource "local_file" "vars_file" {
content = yamlencode({
cluster = module.cluster.name
region = var.region
project_id = module.project.project_id
app_url = local.urls["app"]
})
filename = "${path.module}/ansible/vars/vars.yaml"
file_permission = "0666"
}
resource "local_file" "gssh_file" {
content = templatefile("${path.module}/templates/gssh.sh.tpl", {
project_id = module.project.project_id
zone = local.zone
})
filename = "${path.module}/ansible/gssh.sh"
file_permission = "0777"
}


@ -0,0 +1,8 @@
[defaults]
inventory = inventory/hosts.ini
timeout = 900
[ssh_connection]
pipelining = True
ssh_executable = ./gssh.sh
transfer_method = piped


@ -0,0 +1 @@
mgmt


@ -0,0 +1,128 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: mgmt
gather_facts: "no"
vars_files:
- vars/vars.yaml
environment:
USE_GKE_GCLOUD_AUTH_PLUGIN: True
tasks:
- name: Download the Google Cloud SDK package repository signing key
get_url:
url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
dest: /usr/share/keyrings/cloud.google.gpg
force: yes
become: true
become_user: root
- name: Add Google Cloud SDK package repository source
apt_repository:
filename: google-cloud-sdk
repo: "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main"
state: present
update_cache: yes
become: true
become_user: root
- name: Install dependencies
apt:
pkg:
- google-cloud-sdk-gke-gcloud-auth-plugin
- kubectl
state: present
become: true
become_user: root
- name: Enable bash completion for kubectl
shell:
cmd: kubectl completion bash > /etc/bash_completion.d/kubectl
creates: /etc/bash_completion.d/kubectl
become: true
become_user: root
- name: Get cluster credentials
shell: >
gcloud container clusters get-credentials {{ cluster }}
--region {{ region }}
--project {{ project_id }}
--internal-ip
- name: Render templates
template:
src: ../bundle/{{ item }}/kustomization.yaml.j2
dest: ../bundle/{{ item }}/kustomization.yaml
delegate_to: localhost
with_items:
- monitoring
- locust
- name: Remove bundle locally
local_action:
module: file
path: ../bundle.tar.gz
state: absent
- name: Archive bundle locally
archive:
path: ../bundle
dest: ../bundle.tar.gz
delegate_to: localhost
- name: Unarchive bundle remotely
unarchive:
src: ../bundle.tar.gz
dest: ~/
- name: Build locust image
shell: >
gcloud builds submit --tag {{ region }}-docker.pkg.dev/{{ project_id }}/registry/load-test:latest
--project {{ project_id }} .
args:
chdir: ~/bundle/locust/image
- name: Enable scraping of kubelet and cAdvisor metrics
shell: >
kubectl patch operatorconfig config
-n gmp-public
--type=merge
-p '{"collection":{"kubeletScraping":{"interval": "30s"}}}'
- name: Deploy monitoring tooling
shell: >
kubectl apply -k .
args:
chdir: ~/bundle/monitoring
- name: Deploy app
shell: >
kubectl apply -k .
args:
chdir: ~/bundle/app
- name: Get forwarding rule name
shell: |
while true; do
forwarding_rule_name=$(kubectl get ingress -n sample -o=jsonpath='{.items[0].metadata.annotations.ingress\.kubernetes\.io\/forwarding-rule}')
if [ -n "$forwarding_rule_name" ]; then
echo $forwarding_rule_name
break
fi
sleep 10
done
register: forwarding_rule_name_output
- name: Set fact forwarding_url_name
set_fact:
forwarding_rule_name: "{{ forwarding_rule_name_output.stdout }}"
- name: Render template (HPA)
template:
src: ../bundle/app/hpa.yaml.j2
dest: ~/bundle/app/hpa.yaml
- name: Apply HPA manifest
shell: >
kubectl apply -f hpa.yaml
args:
chdir: ~/bundle/app
- name: Deploy locust
shell: >
kubectl apply -k .
args:
chdir: ~/bundle/locust


@ -0,0 +1,37 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: nginx
namespace: sample
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: nginx
minReplicas: 1
maxReplicas: 50
metrics:
- type: External
external:
metric:
name: loadbalancing.googleapis.com|https|request_count
selector:
matchLabels:
resource.labels.forwarding_rule_name: {{ forwarding_rule_name }}
target:
type: AverageValue
averageValue: 5


@ -0,0 +1,42 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: backendconfig
namespace: sample
spec:
healthCheck:
requestPath: /
port: 80
type: HTTP
logging:
enable: true
sampleRate: 0.5
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.global-static-ip-name: "app"
kubernetes.io/ingress.allow-http: "true"
name: ingress
namespace: sample
spec:
defaultBackend:
service:
name: nginx
port:
name: web


@ -0,0 +1,18 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
resources:
- namespace.yaml
- nginx.yaml
- ingress.yaml


@ -0,0 +1,18 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: sample


@ -0,0 +1,129 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
namespace: sample
data:
nginx.conf: |
events {}
http {
server {
listen 80;
root /var/www/html;
location / {
return 200 'Hello World!';
}
}
server {
listen 8080;
location /stub_status {
stub_status on;
}
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: sample
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
name: web
- containerPort: 8080
name: status
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
readinessProbe:
httpGet:
path: /stub_status
port: 8080
initialDelaySeconds: 2
periodSeconds: 2
failureThreshold: 1
resources:
requests:
cpu: 10m
memory: 10Mi
limits:
memory: 10Mi
- name: nginx-prometheus-exporter
image: nginx/nginx-prometheus-exporter:0.10.0
ports:
- containerPort: 9113
name: metrics
env:
- name: SCRAPE_URI
value: http://localhost:8080/stub_status
resources:
requests:
cpu: 5m
memory: 5Mi
limits:
memory: 5Mi
volumes:
- name: nginx-config
configMap:
name: nginx-config
---
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: sample
annotations:
cloud.google.com/neg: '{"ingress": true}'
cloud.google.com/app-protocols: '{"web":"HTTP"}'
cloud.google.com/backend-config: '{"default": "backendconfig"}'
labels:
app: nginx
spec:
ports:
- name: web
port: 80
protocol: TCP
selector:
app: nginx
---
apiVersion: monitoring.googleapis.com/v1
kind: ClusterPodMonitoring
metadata:
name: nginx
namespace: sample
spec:
selector:
matchLabels:
app: nginx
endpoints:
- port: metrics
interval: 30s


@ -0,0 +1,21 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM locustio/locust:latest
ADD locust-files /home/locust/locust-files
ADD run.sh /home/locust/run.sh
ENTRYPOINT ["/home/locust/run.sh"]


@ -0,0 +1,65 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
from locust import HttpUser, LoadTestShape, task, between
class TestUser(HttpUser):
host = os.getenv("URL", "http://nginx.sample.svc.cluster.local")
wait_time = between(int(os.getenv('MIN_WAIT_TIME', 1)),
int(os.getenv('MAX_WAIT_TIME', 2)))
@task
def home(self):
with self.client.get("/", catch_response=True) as response:
if response.status_code == 200:
response.success()
else:
logging.info('Response code is ' + str(response.status_code))
class CustomLoadShape(LoadTestShape):
stages = []
num_stages = int(os.getenv('NUM_STAGES', 20))
stage_duration = int(os.getenv('STAGE_DURATION', 60))
spawn_rate = int(os.getenv('SPAWN_RATE', 1))
new_users_per_stage = int(os.getenv('NEW_USERS_PER_STAGE', 10))
for i in range(1, num_stages + 1):
stages.append({
'duration': stage_duration * i,
'users': new_users_per_stage * i,
'spawn_rate': spawn_rate
})
for i in range(1, num_stages):
stages.append({
'duration': stage_duration * (num_stages + i),
'users': new_users_per_stage * (num_stages - i),
'spawn_rate': spawn_rate
})
def tick(self):
run_time = self.get_run_time()
for stage in self.stages:
if run_time < stage['duration']:
tick_data = (stage['users'], stage['spawn_rate'])
return tick_data
return None


@ -0,0 +1,26 @@
#!/bin/bash
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
LOCUS_OPTS="-f /home/locust/locust-files"
LOCUST_MODE=${LOCUST_MODE:-standalone}
if [[ "$LOCUST_MODE" = "master" ]]; then
LOCUS_OPTS="$LOCUS_OPTS --master"
elif [[ "$LOCUST_MODE" = "worker" ]]; then
LOCUS_OPTS="$LOCUS_OPTS --worker --master-host=$LOCUST_MASTER"
fi
locust $LOCUS_OPTS


@ -0,0 +1,42 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: backendconfig
namespace: locust
spec:
healthCheck:
requestPath: /
port: 8089
type: HTTP
logging:
enable: true
sampleRate: 0.5
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
namespace: locust
annotations:
kubernetes.io/ingress.global-static-ip-name: "locust"
kubernetes.io/ingress.allow-http: "true"
spec:
defaultBackend:
service:
name: locust-master-web
port:
name: loc-master-web


@ -0,0 +1,66 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
resources:
- namespace.yaml
- master.yaml
- workers.yaml
- ingress.yaml
patches:
- target:
group: apps
version: v1
kind: Deployment
name: locust-master
namespace: locust
patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: locust-master
namespace: locust
spec:
template:
spec:
containers:
- name: locust-master
image: load-test-image
env:
- name: URL
value: {{ app_url }}
- target:
group: apps
version: v1
kind: Deployment
name: locust-worker
namespace: locust
patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: locust-worker
namespace: locust
spec:
template:
spec:
containers:
- name: locust-worker
image: load-test-image
env:
- name: URL
value: {{ app_url }}
images:
- name: load-test-image
newName: {{ region }}-docker.pkg.dev/{{ project_id }}/registry/load-test
newTag: latest


@ -0,0 +1,128 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: locust-master
namespace: locust
labels:
name: locust-master
spec:
replicas: 1
selector:
matchLabels:
app: locust-master
template:
metadata:
labels:
app: locust-master
spec:
tolerations:
- key: group
operator: Equal
value: "locust"
effect: NoSchedule
nodeSelector:
group: "locust"
containers:
- name: locust-master
image: load-test-image
env:
- name: LOCUST_MODE
value: master
ports:
- name: loc-master-web
containerPort: 8089
protocol: TCP
- name: loc-master-p1
containerPort: 5557
protocol: TCP
- name: loc-master-p2
containerPort: 5558
protocol: TCP
resources:
requests:
cpu: 50m
memory: 50Mi
limits:
memory: 50Mi
- name: locust-prometheus-exporter
image: containersol/locust_exporter
ports:
- name: metrics
containerPort: 9646
resources:
requests:
cpu: 5m
memory: 5Mi
limits:
memory: 5Mi
---
kind: Service
apiVersion: v1
metadata:
name: locust-master
namespace: locust
labels:
app: locust-master
spec:
ports:
- port: 5557
targetPort: loc-master-p1
protocol: TCP
name: loc-master-p1
- port: 5558
targetPort: loc-master-p2
protocol: TCP
name: loc-master-p2
- port: 9646
targetPort: metrics
protocol: TCP
name: metrics
selector:
app: locust-master
---
kind: Service
apiVersion: v1
metadata:
name: locust-master-web
namespace: locust
annotations:
cloud.google.com/neg: '{"ingress": true}'
cloud.google.com/app-protocols: '{"loc-master-web":"HTTP"}'
cloud.google.com/backend-config: '{"default": "backendconfig"}'
labels:
app: locust-master
spec:
ports:
- port: 8089
targetPort: loc-master-web
protocol: TCP
name: loc-master-web
selector:
app: locust-master
---
apiVersion: monitoring.googleapis.com/v1
kind: ClusterPodMonitoring
metadata:
name: locust-master
namespace: locust
spec:
selector:
matchLabels:
app: locust-master
endpoints:
- port: metrics
interval: 30s


@ -0,0 +1,18 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: locust


@ -0,0 +1,52 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: locust-worker
namespace: locust
labels:
name: locust-worker
spec:
replicas: 5
selector:
matchLabels:
app: locust-worker
template:
metadata:
labels:
app: locust-worker
spec:
tolerations:
- key: group
operator: Equal
value: "locust"
effect: NoSchedule
nodeSelector:
group: "locust"
containers:
- name: locust-worker
image: load-test-image
env:
- name: LOCUST_MODE
value: worker
- name: LOCUST_MASTER
value: locust-master
resources:
requests:
cpu: 20m
memory: 50Mi
limits:
memory: 50Mi


@ -0,0 +1,184 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: ServiceAccount
metadata:
name: custom-metrics-stackdriver-adapter
namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: custom-metrics:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: custom-metrics-stackdriver-adapter
namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: custom-metrics-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: custom-metrics-stackdriver-adapter
namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: custom-metrics-resource-reader
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
- kind: ServiceAccount
name: custom-metrics-stackdriver-adapter
namespace: monitoring
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: custom-metrics-stackdriver-adapter
namespace: monitoring
labels:
run: custom-metrics-stackdriver-adapter
k8s-app: custom-metrics-stackdriver-adapter
spec:
replicas: 1
selector:
matchLabels:
run: custom-metrics-stackdriver-adapter
k8s-app: custom-metrics-stackdriver-adapter
template:
metadata:
labels:
run: custom-metrics-stackdriver-adapter
k8s-app: custom-metrics-stackdriver-adapter
kubernetes.io/cluster-service: "true"
spec:
serviceAccountName: custom-metrics-stackdriver-adapter
containers:
- image: gcr.io/gke-release/custom-metrics-stackdriver-adapter:v0.13.1-gke.0
imagePullPolicy: Always
name: pod-custom-metrics-stackdriver-adapter
command:
- /adapter
- --use-new-resource-model=false
resources:
limits:
cpu: 100m
memory: 150Mi
requests:
memory: 150Mi
---
apiVersion: v1
kind: Service
metadata:
labels:
run: custom-metrics-stackdriver-adapter
k8s-app: custom-metrics-stackdriver-adapter
kubernetes.io/cluster-service: 'true'
kubernetes.io/name: Adapter
name: custom-metrics-stackdriver-adapter
namespace: monitoring
spec:
ports:
- port: 443
protocol: TCP
targetPort: 443
selector:
run: custom-metrics-stackdriver-adapter
k8s-app: custom-metrics-stackdriver-adapter
type: ClusterIP
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
name: v1beta1.custom.metrics.k8s.io
spec:
insecureSkipTLSVerify: true
group: custom.metrics.k8s.io
groupPriorityMinimum: 100
versionPriority: 100
service:
name: custom-metrics-stackdriver-adapter
namespace: monitoring
version: v1beta1
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
name: v1beta2.custom.metrics.k8s.io
spec:
insecureSkipTLSVerify: true
group: custom.metrics.k8s.io
groupPriorityMinimum: 100
versionPriority: 200
service:
name: custom-metrics-stackdriver-adapter
namespace: monitoring
version: v1beta2
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
name: v1beta1.external.metrics.k8s.io
spec:
insecureSkipTLSVerify: true
group: external.metrics.k8s.io
groupPriorityMinimum: 100
versionPriority: 100
service:
name: custom-metrics-stackdriver-adapter
namespace: monitoring
version: v1beta1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: external-metrics-reader
rules:
- apiGroups:
- "external.metrics.k8s.io"
resources:
- "*"
verbs:
- list
- get
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: external-metrics-reader
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-metrics-reader
subjects:
- kind: ServiceAccount
name: horizontal-pod-autoscaler
namespace: kube-system

File diff suppressed because one or more lines are too long (six files).


@ -0,0 +1,79 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: ServiceAccount
metadata:
name: frontend
namespace: monitoring
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
namespace: monitoring
spec:
replicas: 1
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
serviceAccountName: frontend
tolerations:
- key: group
operator: Equal
value: monitoring
effect: NoSchedule
nodeSelector:
group: monitoring
automountServiceAccountToken: true
containers:
- name: frontend
image: "gke.gcr.io/prometheus-engine/frontend:v0.5.0-gke.0"
args:
- "--web.listen-address=:9090"
ports:
- name: web
containerPort: 9090
resources:
requests:
cpu: 10m
memory: 15Mi
limits:
memory: 15Mi
readinessProbe:
httpGet:
path: /-/ready
port: web
livenessProbe:
httpGet:
path: /-/healthy
port: web
---
apiVersion: v1
kind: Service
metadata:
name: frontend
namespace: monitoring
spec:
clusterIP: None
selector:
app: frontend
ports:
- name: web
port: 9090


@ -0,0 +1,184 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana
namespace: monitoring
data:
allow-snippet-annotations: "false"
grafana.ini: |
[analytics]
check_for_updates = true
[grafana_net]
url = https://grafana.net
[log]
mode = console
[paths]
data = /var/lib/grafana/
logs = /var/log/grafana
plugins = /var/lib/grafana/plugins
provisioning = /etc/grafana/provisioning
datasources.yaml: |
apiVersion: 1
datasources:
- access: proxy
editable: true
isDefault: true
jsonData:
timeInterval: 5s
name: Prometheus
orgId: 1
type: prometheus
url: http://frontend.monitoring.svc.cluster.local:9090
dashboardproviders.yaml: |
apiVersion: 1
providers:
- disableDeletion: false
folder: k8s
name: k8s
options:
path: /var/lib/grafana/dashboards/k8s
orgId: 1
type: file
- disableDeletion: false
folder: locust
name: locust
options:
path: /var/lib/grafana/dashboards/locust
orgId: 1
type: file
- disableDeletion: false
folder: nginx
name: nginx
options:
path: /var/lib/grafana/dashboards/nginx
orgId: 1
type: file
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: grafana
namespace: monitoring
spec:
replicas: 1
selector:
matchLabels:
app: grafana
template:
metadata:
labels:
app: grafana
spec:
tolerations:
- key: group
operator: Equal
value: monitoring
effect: NoSchedule
nodeSelector:
group: monitoring
containers:
- name: grafana
image: grafana/grafana:8.3.4
ports:
- name: web
containerPort: 3000
env:
- name: GF_PATHS_DATA
value: /var/lib/grafana/
- name: GF_PATHS_LOGS
value: /var/log/grafana
- name: GF_PATHS_PLUGINS
value: /var/lib/grafana/plugins
- name: GF_PATHS_PROVISIONING
value: /etc/grafana/provisioning
- name: "GF_AUTH_ANONYMOUS_ENABLED"
value: "true"
- name: "GF_AUTH_ANONYMOUS_ORG_ROLE"
value: "Admin"
- name: "GF_AUTH_BASIC_ENABLED"
value: "false"
- name: "GF_SECURITY_ADMIN_PASSWORD"
value: "-"
- name: "GF_SECURITY_ADMIN_USER"
value: "-"
volumeMounts:
- name: config
mountPath: "/etc/grafana/grafana.ini"
subPath: grafana.ini
- name: storage
mountPath: "/var/lib/grafana"
- name: k8s-grafana-dashboards
mountPath: "/var/lib/grafana/dashboards/k8s"
- name: locust-grafana-dashboards
mountPath: "/var/lib/grafana/dashboards/locust"
- name: nginx-grafana-dashboards
mountPath: "/var/lib/grafana/dashboards/nginx"
- name: config
mountPath: "/etc/grafana/provisioning/datasources/datasources.yaml"
subPath: "datasources.yaml"
- name: config
mountPath: "/etc/grafana/provisioning/dashboards/dashboardproviders.yaml"
subPath: "dashboardproviders.yaml"
resources:
requests:
cpu: 30m
memory: 100Mi
limits:
memory: 100Mi
livenessProbe:
failureThreshold: 10
httpGet:
path: /api/health
port: 3000
initialDelaySeconds: 60
timeoutSeconds: 30
readinessProbe:
httpGet:
path: /api/health
port: 3000
volumes:
- name: config
configMap:
name: grafana
- name: k8s-grafana-dashboards
configMap:
name: k8s-grafana-dashboards
- name: locust-grafana-dashboards
configMap:
name: locust-grafana-dashboards
- name: nginx-grafana-dashboards
configMap:
name: nginx-grafana-dashboards
- name: storage
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: grafana
namespace: monitoring
annotations:
cloud.google.com/neg: '{"ingress": true}'
cloud.google.com/app-protocols: '{"web":"HTTP"}'
cloud.google.com/backend-config: '{"default": "backendconfig"}'
spec:
clusterIP: None
selector:
app: grafana
ports:
- name: web
port: 3000
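
Grafana runs with anonymous admin access (GF_AUTH_ANONYMOUS_ORG_ROLE set to Admin), so it should only be reachable from trusted networks. The NEG and backend-config annotations on the Service enable container-native load balancing for the Ingress defined in the next file. Once provisioned, the external IP can be read from the reserved global address (a sketch, assuming the grafana address created by the Terraform stage below):

gcloud compute addresses describe grafana --global --format='value(address)'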


@ -0,0 +1,43 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: backendconfig
namespace: monitoring
spec:
healthCheck:
requestPath: /api/health
port: 3000
type: HTTP
logging:
enable: true
sampleRate: 0.5
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
namespace: monitoring
annotations:
kubernetes.io/ingress.global-static-ip-name: "grafana"
kubernetes.io/ingress.allow-http: "true"
spec:
defaultBackend:
service:
name: grafana
port:
name: web
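
The BackendConfig points the load balancer health check at Grafana's /api/health endpoint; without it, the default check against / would hit Grafana's login redirect and fail. A quick way to confirm the backends turn healthy (resource names as defined in the manifests above):

kubectl -n monitoring describe ingress ingress | grep -i backends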


@ -0,0 +1,342 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: 2.3.0
namespace: gmp-public
name: kube-state-metrics
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: kube-state-metrics
serviceName: kube-state-metrics
template:
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: 2.3.0
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- arm64
- amd64
- key: kubernetes.io/os
operator: In
values:
- linux
containers:
- name: kube-state-metric
image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.3.0
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- --pod=$(POD_NAME)
- --pod-namespace=$(POD_NAMESPACE)
- --port=8080
- --telemetry-port=8081
ports:
- name: metrics
containerPort: 8080
- name: metrics-self
containerPort: 8081
resources:
requests:
cpu: 10m
memory: 50Mi
limits:
memory: 50Mi
securityContext:
allowPrivilegeEscalation: false
privileged: false
capabilities:
drop:
- all
runAsUser: 1000
runAsGroup: 1000
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /
port: 8081
initialDelaySeconds: 5
timeoutSeconds: 5
serviceAccountName: kube-state-metrics
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: 2.3.0
namespace: gmp-public
name: kube-state-metrics
spec:
clusterIP: None
ports:
- name: metrics
port: 8080
targetPort: metrics
- name: metrics-self
port: 8081
targetPort: metrics-self
selector:
app.kubernetes.io/name: kube-state-metrics
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: gmp-public
name: kube-state-metrics
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: 2.3.0
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gmp-public:kube-state-metrics
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: 2.3.0
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: gmp-public:kube-state-metrics
subjects:
- kind: ServiceAccount
namespace: gmp-public
name: kube-state-metrics
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: gmp-public:kube-state-metrics
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: 2.3.0
rules:
- apiGroups:
- ""
resources:
- configmaps
- secrets
- nodes
- pods
- services
- resourcequotas
- replicationcontrollers
- limitranges
- persistentvolumeclaims
- persistentvolumes
- namespaces
- endpoints
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- extensions
resources:
- daemonsets
- deployments
- replicasets
- ingresses
verbs:
- list
- watch
- apiGroups:
- apps
resources:
- statefulsets
- daemonsets
- deployments
- replicasets
verbs:
- list
- watch
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- get
- apiGroups:
- batch
resources:
- cronjobs
- jobs
verbs:
- list
- watch
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- list
- watch
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- list
- watch
- apiGroups:
- certificates.k8s.io
resources:
- certificatesigningrequests
verbs:
- list
- watch
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
- volumeattachments
verbs:
- list
- watch
- apiGroups:
- admissionregistration.k8s.io
resources:
- mutatingwebhookconfigurations
- validatingwebhookconfigurations
verbs:
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- networkpolicies
- ingresses
verbs:
- list
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- list
- watch
---
# autoscaling/v2 requires Kubernetes 1.23+, which is the default in the GKE
# stable release channel.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: kube-state-metrics
namespace: gmp-public
spec:
maxReplicas: 10
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: StatefulSet
name: kube-state-metrics
metrics:
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 60
behavior:
scaleDown:
policies:
- type: Pods
value: 1
# Under-utilization needs to persist for `periodSeconds` before any action can be taken.
# Current supported max from https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v2/.
periodSeconds: 1800
# Current supported max from https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v2/.
stabilizationWindowSeconds: 3600
---
apiVersion: monitoring.googleapis.com/v1
kind: ClusterPodMonitoring
metadata:
name: kube-state-metrics
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/part-of: google-cloud-managed-prometheus
spec:
selector:
matchLabels:
app.kubernetes.io/name: kube-state-metrics
endpoints:
- port: metrics
interval: 30s
metricRelabeling:
- action: keep
regex: kube_(daemonset|deployment|pod|namespace|node|statefulset)_.+
sourceLabels: [__name__]
targetLabels:
metadata: [] # explicitly empty so the metric labels are respected
---
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
namespace: gmp-public
name: kube-state-metrics
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/part-of: google-cloud-managed-prometheus
spec:
selector:
matchLabels:
app.kubernetes.io/name: kube-state-metrics
endpoints:
- port: metrics-self
interval: 30s
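
The two monitoring resources split responsibilities: the cluster-scoped ClusterPodMonitoring scrapes the main metrics port and keeps only the kube_* series matched by the relabeling rule, while the namespaced PodMonitoring collects the exporter's self-metrics. Once samples flow, a query through the frontend (reusing the port-forward sketched earlier) should return data, e.g. for a standard kube-state-metrics series:

curl -s 'http://localhost:9090/api/v1/query?query=kube_deployment_status_replicas_available'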


@ -0,0 +1,72 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
resources:
- namespace.yaml
- frontend.yaml
- grafana.yaml
- ingress.yaml
- custom-stackdriver-metrics-adapter.yaml
- kube-state-metrics.yaml
configMapGenerator:
- name: k8s-grafana-dashboards
namespace: monitoring
options:
disableNameSuffixHash: true
files:
- dashboards/k8s-global.json
- dashboards/k8s-namespaces.json
- dashboards/k8s-nodes.json
- dashboards/k8s-pods.json
- name: locust-grafana-dashboards
namespace: monitoring
options:
disableNameSuffixHash: true
files:
- dashboards/locust.json
- name: nginx-grafana-dashboards
namespace: monitoring
options:
disableNameSuffixHash: true
files:
- dashboards/nginx.json
patches:
- target:
version: v1
kind: ServiceAccount
name: frontend
namespace: monitoring
patch: |-
- op: add
path: /metadata/annotations/iam.gke.io~1gcp-service-account
value: sa-monitoring@{{ project_id }}.iam.gserviceaccount.com
- target:
version: v1
kind: ServiceAccount
name: custom-metrics-stackdriver-adapter
namespace: monitoring
patch: |-
- op: add
path: /metadata/annotations/iam.gke.io~1gcp-service-account
value: sa-monitoring@{{ project_id }}.iam.gserviceaccount.com
- target:
group: apps
version: v1
kind: Deployment
name: frontend
namespace: monitoring
patch: |-
- op: add
path: /spec/template/spec/containers/0/args/-
value: "--query.project-id={{ project_id }}"


@ -0,0 +1,18 @@
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: monitoring


@ -0,0 +1,54 @@
/**
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
module "cluster" {
source = "../../../modules/gke-cluster"
project_id = module.project.project_id
name = "cluster"
location = var.region
vpc_config = {
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["${var.region}/subnet-cluster"]
secondary_range_names = {
pods = "pods"
services = "services"
}
master_authorized_ranges = var.cluster_network_config.master_authorized_cidr_blocks
master_ipv4_cidr_block = var.cluster_network_config.master_cidr_block
}
enable_features = {
autopilot = true
}
monitoring_config = {
enable_components = ["SYSTEM_COMPONENTS"]
managed_prometheus = true
}
cluster_autoscaling = {
auto_provisioning_defaults = {
service_account = module.node_sa.email
}
}
release_channel = "RAPID"
depends_on = [
module.project
]
}
module "node_sa" {
source = "../../../modules/iam-service-account"
project_id = module.project.project_id
name = "sa-node"
}
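
With autopilot enabled, GKE manages the node pools, and the auto_provisioning_defaults service account is what provisioned nodes run as; enable_components plus managed_prometheus turn on system metrics and Managed Service for Prometheus collection. After apply, credentials for the regional cluster can be fetched as usual (assuming REGION and PROJECT_ID mirror the stage's variables):

gcloud container clusters get-credentials cluster --region "$REGION" --project "$PROJECT_ID"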

Binary file not shown (added image, 33 KiB).


@ -0,0 +1,25 @@
/**
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
locals {
urls = { for k, v in module.addresses.global_addresses : k => "http://${v.address}" }
}
module "addresses" {
source = "../../../modules/net-address"
project_id = module.project.project_id
global_addresses = ["grafana", "locust", "app"]
}


@ -0,0 +1,66 @@
/**
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
module "project" {
source = "../../../modules/project"
billing_account = (var.project_create != null
? var.project_create.billing_account_id
: null
)
parent = (var.project_create != null
? var.project_create.parent
: null
)
project_create = var.project_create != null
name = var.project_id
services = [
"artifactregistry.googleapis.com",
"cloudbuild.googleapis.com",
"container.googleapis.com",
"compute.googleapis.com"
]
iam = {
"roles/monitoring.viewer" = [module.monitoring_sa.iam_email]
"roles/container.nodeServiceAccount" = [module.node_sa.iam_email]
"roles/container.admin" = [module.mgmt_server.service_account_iam_email]
"roles/storage.admin" = [module.mgmt_server.service_account_iam_email]
"roles/cloudbuild.builds.editor" = [module.mgmt_server.service_account_iam_email]
"roles/viewer" = [module.mgmt_server.service_account_iam_email]
}
}
module "monitoring_sa" {
source = "../../../modules/iam-service-account"
project_id = module.project.project_id
name = "sa-monitoring"
iam = {
"roles/iam.workloadIdentityUser" = [
"serviceAccount:${module.cluster.workload_identity_pool}[monitoring/frontend]",
"serviceAccount:${module.cluster.workload_identity_pool}[monitoring/custom-metrics-stackdriver-adapter]"
]
}
}
module "docker_artifact_registry" {
source = "../../../modules/artifact-registry"
project_id = module.project.project_id
location = var.region
format = "DOCKER"
id = "registry"
iam = {
"roles/artifactregistry.reader" = [module.node_sa.iam_email]
}
}
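
The monitoring_sa block is the IAM half of Workload Identity: granting roles/iam.workloadIdentityUser to the monitoring/frontend and monitoring/custom-metrics-stackdriver-adapter Kubernetes service accounts lets them impersonate sa-monitoring, while the iam.gke.io/gcp-service-account annotations patched in by the kustomization are the Kubernetes half. To inspect the resulting binding (a sketch, with the project ID in $PROJECT_ID):

gcloud iam service-accounts get-iam-policy "sa-monitoring@${PROJECT_ID}.iam.gserviceaccount.com"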


@ -0,0 +1,42 @@
/**
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
locals {
zone = "${var.region}-b"
}
module "mgmt_server" {
source = "../../../modules/compute-vm"
project_id = module.project.project_id
zone = local.zone
name = "mgmt"
instance_type = var.mgmt_server_config.instance_type
network_interfaces = [{
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["${var.region}/subnet-mgmt"]
nat = false
addresses = null
}]
service_account_create = true
boot_disk = {
initialize_params = {
image = var.mgmt_server_config.image
type = var.mgmt_server_config.disk_type
size = var.mgmt_server_config.disk_size
}
}
tags = ["ssh"]
}


@ -0,0 +1,20 @@
/**
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
output "urls" {
description = "Grafanam, locust and application URLs."
value = local.urls
}
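
After apply, the map of reserved addresses can be read back directly:

terraform output urls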


@ -0,0 +1,30 @@
#!/bin/bash
#
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
host="$${@: -2: 1}"
cmd="$${@: -1: 1}"
gcloud_args="
--tunnel-through-iap
--zone=${zone}
--project=${project_id}
--quiet
--no-user-output-enabled
--
-C
"
exec gcloud compute ssh "$host" $gcloud_args "$cmd"
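
This is a templatefile script: ${zone} and ${project_id} are interpolated at render time, while the $${...} escapes survive as literal shell expansions that take the target host and the command from the script's last two arguments. A hypothetical invocation of the rendered wrapper:

./gssh.sh mgmt 'kubectl get nodes'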


@ -0,0 +1,90 @@
/**
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "cluster_network_config" {
description = "Cluster network configuration."
type = object({
nodes_cidr_block = string
pods_cidr_block = string
services_cidr_block = string
master_authorized_cidr_blocks = map(string)
master_cidr_block = string
})
default = {
nodes_cidr_block = "10.0.1.0/24"
pods_cidr_block = "172.16.0.0/20"
services_cidr_block = "192.168.0.0/24"
master_authorized_cidr_blocks = {
internal = "10.0.0.0/8"
}
master_cidr_block = "10.0.0.0/28"
}
}
variable "mgmt_server_config" {
description = "Management server configuration."
type = object({
disk_size = number
disk_type = string
image = string
instance_type = string
})
default = {
disk_size = 50
disk_type = "pd-ssd"
image = "projects/ubuntu-os-cloud/global/images/family/ubuntu-2204-lts"
instance_type = "n1-standard-2"
}
}
variable "mgmt_subnet_cidr_block" {
description = "Management subnet IP CIDR range."
type = string
default = "10.0.2.0/24"
}
variable "project_create" {
description = "Parameters for the creation of the new project."
type = object({
billing_account_id = string
parent = string
})
default = null
}
variable "project_id" {
description = "Project ID."
type = string
}
variable "region" {
description = "Region."
type = string
default = "europe-west1"
}
variable "vpc_create" {
description = "Flag indicating whether the VPC should be created or not."
type = bool
default = true
}
variable "vpc_name" {
description = "VPC name."
type = string
nullable = false
default = "vpc"
}
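
Only project_id is required; everything else defaults to sensible values. A minimal terraform.tfvars for creating a brand-new project (placeholder billing account and folder IDs, to be replaced):

cat > terraform.tfvars <<'EOF'
project_id     = "my-project-id"
project_create = {
  billing_account_id = "AAAAAA-BBBBBB-CCCCCC"
  parent             = "folders/1234567890"
}
EOF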


@ -0,0 +1,52 @@
/**
* Copyright 2023 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
module "vpc" {
source = "../../../modules/net-vpc"
project_id = module.project.project_id
name = var.vpc_name
vpc_create = var.vpc_create
subnets = [
{
ip_cidr_range = var.mgmt_subnet_cidr_block
name = "subnet-mgmt"
region = var.region
},
{
ip_cidr_range = var.cluster_network_config.nodes_cidr_block
name = "subnet-cluster"
region = var.region
secondary_ip_ranges = {
pods = var.cluster_network_config.pods_cidr_block
services = var.cluster_network_config.services_cidr_block
}
}
]
}
module "firewall" {
source = "../../../modules/net-vpc-firewall"
project_id = module.project.project_id
network = module.vpc.name
}
module "nat" {
source = "../../../modules/net-cloudnat"
project_id = module.project.project_id
region = var.region
name = "nat"
router_network = module.vpc.name
}


@ -30,9 +30,11 @@ module "mgmt_server" {
}]
service_account_create = true
boot_disk = {
initialize_params = {
image = var.mgmt_server_config.image
type = var.mgmt_server_config.disk_type
size = var.mgmt_server_config.disk_size
}
}
}


@ -6,15 +6,27 @@ They are meant to be used as minimal but complete starting points to create actu
## Blueprints
### Calling a private Cloud Function from on-premises
<a href="./private-cloud-function-from-onprem/" title="Private Cloud Function from On-premises"><img src="./private-cloud-function-from-onprem/diagram.png" align="left" width="280px"></a> This [blueprint](./private-cloud-function-from-onprem/) shows how to invoke a [private Google Cloud Function](https://cloud.google.com/functions/docs/networking/network-settings) from the on-prem environment via a [Private Service Connect endpoint](https://cloud.google.com/vpc/docs/private-service-connect#benefits-apis).
<br clear="left">
### Calling on-premise services through PSC and hybrid NEGs
<a href="./psc-hybrid/" title="Hybrid connectivity to on-premise services thrugh PSC"><img src="./psc-hybrid/diagram.png" align="left" width="280px"></a> This [blueprint](./psc-hybrid/) shows how to privately connect to on-premise services (IP + port) from GCP, leveraging [Private Service Connect (PSC)](https://cloud.google.com/vpc/docs/private-service-connect) and [Hybrid Network Endpoint Groups](https://cloud.google.com/load-balancing/docs/negs/hybrid-neg-concepts).
<br clear="left">
### Decentralized firewall management
<a href="./decentralized-firewall/" title="Decentralized firewall management"><img src="./decentralized-firewall/diagram.png" align="left" width="280px"></a> This [blueprint](./decentralized-firewall/) shows how a decentralized firewall management can be organized using the [firewall factory](../factories/net-vpc-firewall-yaml/).
<br clear="left">
### GLB and multi-regional daisy-chaining through hybrid NEGs
<a href="./glb-hybrid-neg-internal/" title="GLB and multi-regional daisy-chaining through hybrid NEGs"><img src="./glb-hybrid-neg-internal/diagram.png" align="left" width="280px"></a> This [blueprint](./glb-hybrid-neg-internal/) shows the experimental use of hybrid NEGs behind external Global Load Balancers (GLBs) to connect to GCP instances living in spoke VPCs and behind Network Virtual Appliances (NVAs).
<br clear="left">
@ -24,14 +36,6 @@ They are meant to be used as minimal but complete starting points to create actu
<br clear="left">
### Hub and Spoke via Dynamic VPN
<a href="./hub-and-spoke-vpn/" title="Hub and spoke via dynamic VPN"><img src="./hub-and-spoke-vpn/diagram.png" align="left" width="280px"></a> This [blueprint](./hub-and-spoke-vpn/) implements a hub and spoke topology via dynamic VPN tunnels, a common design where peering cannot be used due to limitations on the number of spokes or connectivity to managed services.
@ -40,6 +44,14 @@ The blueprint shows how to implement spoke transitivity via BGP advertisements,
<br clear="left">
### Hub and Spoke via Peering
<a href="./hub-and-spoke-peering/" title="Hub and spoke via peering blueprint"><img src="./hub-and-spoke-peering/diagram.png" align="left" width="280px"></a> This [blueprint](./hub-and-spoke-peering/) implements a hub and spoke topology via VPC peering, a common design where a landing zone VPC (hub) is connected to on-premises, and then peered with satellite VPCs (spokes) to further partition the infrastructure.
The sample highlights the lack of transitivity in peering: the absence of connectivity between spokes, and the need to create workarounds for private service access to managed services. One such workaround is shown for private GKE, allowing access from the hub and all spokes to GKE masters via a dedicated VPN.
<br clear="left">
### ILB as next hop
<a href="./ilb-next-hop/" title="ILB as next hop"><img src="./ilb-next-hop/diagram.png" align="left" width="280px"></a> This [blueprint](./ilb-next-hop/) allows testing [ILB as next hop](https://cloud.google.com/load-balancing/docs/internal/ilb-next-hop-overview) using simple Linux gateway VMS between two VPCs, to emulate virtual appliances. An optional additional ILB can be enabled to test multiple load balancer configurations and hashing.
@ -63,15 +75,9 @@ The emulated on-premises environment can be used to test access to different ser
-->
### Network filtering with Squid
<a href="./filtering-proxy/" title="Network filtering with Squid"><img src="./filtering-proxy/squid.png" align="left" width="280px"></a> This [blueprint](./filtering-proxy/) shows how to deploy a filtering HTTP proxy to restrict Internet access, in a simplified setup using a VPC with two subnets and a Cloud DNS zone, and an optional MIG for scaling.
<br clear="left">

Some files were not shown because too many files have changed in this diff.