Merge branch 'master' into maunope/network-dashboards-updates

Commit 23807615f4

@@ -8,6 +8,7 @@ All notable changes to this project will be documented in this file.

### BLUEPRINTS

- [[#879](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/879)] New PSC hybrid blueprint ([LucaPrete](https://github.com/LucaPrete)) <!-- 2022-10-16 08:18:41+00:00 -->
- [[#880](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/880)] **incompatible change:** Refactor net-vpc module for Terraform 1.3 ([ludoo](https://github.com/ludoo)) <!-- 2022-10-14 09:02:34+00:00 -->
- [[#872](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/872)] added support 2nd generation cloud function ([som-nitjsr](https://github.com/som-nitjsr)) <!-- 2022-10-13 06:09:00+00:00 -->
- [[#875](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/875)] **incompatible change:** Refactor GKE nodepool for Terraform 1.3, refactor GKE blueprints and FAST stage ([ludoo](https://github.com/ludoo)) <!-- 2022-10-12 10:59:37+00:00 -->

@@ -52,6 +53,8 @@ All notable changes to this project will be documented in this file.

### MODULES

- [[#890](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/890)] Add auto_delete and instance_redistribution_type to compute-vm and compute-mig modules. ([giovannibaratta](https://github.com/giovannibaratta)) <!-- 2022-10-16 19:19:46+00:00 -->
- [[#883](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/883)] Fix csi-driver, logging and monitoring default values when autopilot … ([danielmarzini](https://github.com/danielmarzini)) <!-- 2022-10-14 15:30:54+00:00 -->
- [[#880](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/880)] **incompatible change:** Refactor net-vpc module for Terraform 1.3 ([ludoo](https://github.com/ludoo)) <!-- 2022-10-14 09:02:34+00:00 -->
- [[#872](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/872)] added support 2nd generation cloud function ([som-nitjsr](https://github.com/som-nitjsr)) <!-- 2022-10-13 06:09:00+00:00 -->
- [[#877](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/877)] fix autoscaling block ([ludoo](https://github.com/ludoo)) <!-- 2022-10-12 14:44:48+00:00 -->

@@ -80,6 +83,10 @@ All notable changes to this project will be documented in this file.

### TOOLS

- [[#887](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/887)] Disable parallel execution of tests and plugin cache ([ludoo](https://github.com/ludoo)) <!-- 2022-10-14 17:52:38+00:00 -->
- [[#886](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/886)] Revert "Improve handling of tf plugin cache in tests" ([ludoo](https://github.com/ludoo)) <!-- 2022-10-14 17:35:31+00:00 -->
- [[#885](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/885)] Improve handling of tf plugin cache in tests ([ludoo](https://github.com/ludoo)) <!-- 2022-10-14 17:14:47+00:00 -->
- [[#881](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/881)] Run tests in parallel using `pytest-xdist` ([ludoo](https://github.com/ludoo)) <!-- 2022-10-14 12:56:16+00:00 -->
- [[#876](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/876)] Make changelog tool slower to work around inconsistencies in API results ([ludoo](https://github.com/ludoo)) <!-- 2022-10-12 12:49:32+00:00 -->
- [[#865](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/865)] Enable FAST 00-cicd provider test ([ludoo](https://github.com/ludoo)) <!-- 2022-10-07 11:20:57+00:00 -->
- [[#864](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/pull/864)] **incompatible change:** Bump terraform required version ([ludoo](https://github.com/ludoo)) <!-- 2022-10-07 10:51:56+00:00 -->
@@ -8,7 +8,7 @@ Currently available blueprints:

- **data solutions** - [GCE/GCS CMEK via centralized Cloud KMS](./data-solutions/cmek-via-centralized-kms/), [Cloud Storage to BigQuery with Cloud Dataflow with least privileges](./data-solutions/gcs-to-bq-with-least-privileges/), [Data Platform Foundations](./data-solutions/data-platform-foundations/), [SQL Server AlwaysOn availability groups blueprint](./data-solutions/sqlserver-alwayson), [Cloud SQL instance with multi-region read replicas](./data-solutions/cloudsql-multiregion/), [Cloud Composer version 2 private instance, supporting Shared VPC and external CMEK key](./data-solutions/composer-2/)
- **factories** - [The why and the how of resource factories](./factories/README.md)
- **GKE** - [GKE multitenant fleet](./gke/multitenant-fleet/), [Shared VPC with GKE support](./networking/shared-vpc-gke/), [Binary Authorization Pipeline](./gke/binauthz/), [Multi-cluster mesh on GKE (fleet API)](./gke/multi-cluster-mesh-gke-fleet-api/)
-- **networking** - [hub and spoke via peering](./networking/hub-and-spoke-peering/), [hub and spoke via VPN](./networking/hub-and-spoke-vpn/), [DNS and Google Private Access for on-premises](./networking/onprem-google-access-dns/), [Shared VPC with GKE support](./networking/shared-vpc-gke/), [ILB as next hop](./networking/ilb-next-hop), [PSC for on-premises Cloud Function invocation](./networking/private-cloud-function-from-onprem/), [decentralized firewall](./networking/decentralized-firewall)
+- **networking** - [hub and spoke via peering](./networking/hub-and-spoke-peering/), [hub and spoke via VPN](./networking/hub-and-spoke-vpn/), [DNS and Google Private Access for on-premises](./networking/onprem-google-access-dns/), [Shared VPC with GKE support](./networking/shared-vpc-gke/), [ILB as next hop](./networking/ilb-next-hop), [Connecting to on-premises services leveraging PSC and hybrid NEGs](./networking/psc-hybrid/), [decentralized firewall](./networking/decentralized-firewall)
- **serverless** - [Multi-region deployments for API Gateway](./serverless/api-gateway/)
- **third party solutions** - [OpenShift cluster on Shared VPC](./third-party-solutions/openshift)
@@ -0,0 +1,88 @@

# Google Cloud BQ Factory

This module allows creation and management of BigQuery datasets, tables, and views by defining them in well-formatted `yaml` files.

The YAML abstraction for BigQuery simplifies user onboarding and makes table creation easier than writing HCL directly.

Separate subfolders for views and tables make navigation easier for users.

This factory is based on the [BQ dataset module](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/tree/master/modules/bigquery-dataset), which currently only supports tables and views. As soon as external table and materialized view support is added, the factory will be enhanced accordingly.

You can create as many files as you like; the code loops through them and creates the required variables in order to provision everything accordingly.

## Example

### Terraform code

```hcl
module "bq" {
  source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric/modules/bigquery-dataset"

  for_each   = local.output
  project_id = var.project_id
  id         = each.key
  views      = try(each.value.views, null)
  tables     = try(each.value.tables, null)
}
# tftest skip
```

### Configuration Structure

```bash
base_folder
│
├── tables
│   ├── table_a.yaml
│   ├── table_b.yaml
├── views
│   ├── view_a.yaml
│   ├── view_b.yaml
```

## YAML structure and definition formatting

### Tables

Table definitions are placed in a set of yaml files in the corresponding subfolder. The structure should look as follows:

```yaml
dataset: # required, name of the dataset the table is to be placed in
table: # required, descriptive name of the table
schema: # required, schema in JSON format. Example: [{name: "test", type: "STRING"},{name: "test2", type: "INT64"}]
labels: # not required, defaults to {}. Example: {"a":"thisislabela","b":"thisislabelb"}
use_legacy_sql: boolean # not required, defaults to false
deletion_protection: boolean # not required, defaults to false
```
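As an illustration, a hypothetical `tables/countries.yaml` conforming to the structure above could look like this (dataset, table, and field names are made up):

```yaml
dataset: my_dataset
table: countries
schema: [{name: "country", type: "STRING"}, {name: "population", type: "INT64"}]
labels: {"env": "test"}
use_legacy_sql: false
deletion_protection: false
```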

### Views

View definitions are placed in a set of yaml files in the corresponding subfolder. The structure should look as follows:

```yaml
dataset: # required, name of the dataset the view is to be placed in
view: # required, descriptive name of the view
query: # required, SQL query for the view, in quotes
labels: # not required, defaults to {}. Example: {"a":"thisislabela","b":"thisislabelb"}
use_legacy_sql: bool # not required, defaults to false
deletion_protection: bool # not required, defaults to false
```
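Similarly, a hypothetical `views/population.yaml` (names and query are made up):

```yaml
dataset: my_dataset
view: population
query: "SELECT country, population FROM my_dataset.countries"
labels: {"env": "test"}
use_legacy_sql: false
deletion_protection: false
```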

<!-- BEGIN TFDOC -->

## Variables

| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [project_id](variables.tf#L27) | Project ID. | <code>string</code> | ✓ | |
| [tables_dir](variables.tf#L22) | Relative path for the folder storing table data. | <code>string</code> | ✓ | |
| [views_dir](variables.tf#L17) | Relative path for the folder storing view data. | <code>string</code> | ✓ | |

<!-- END TFDOC -->

## TODO

- [ ] add external table support
- [ ] add materialized view support

@@ -0,0 +1,67 @@

/**
 * Copyright 2022 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

locals {
  # View definitions, keyed by file name without the .yaml suffix.
  views = {
    for f in fileset(var.views_dir, "**/*.yaml") :
    trimsuffix(f, ".yaml") => yamldecode(file("${var.views_dir}/${f}"))
  }

  # Table definitions, keyed by file name without the .yaml suffix.
  tables = {
    for f in fileset(var.tables_dir, "**/*.yaml") :
    trimsuffix(f, ".yaml") => yamldecode(file("${var.tables_dir}/${f}"))
  }

  # Definitions regrouped by dataset, in the shape the module expects.
  output = {
    for dataset in distinct([for v in values(merge(local.views, local.tables)) : v.dataset]) :
    dataset => {
      "views" = {
        for k, v in local.views :
        v.view => {
          friendly_name       = v.view
          labels              = try(v.labels, null)
          query               = v.query
          use_legacy_sql      = try(v.use_legacy_sql, false)
          deletion_protection = try(v.deletion_protection, false)
        }
        if v.dataset == dataset
      },
      "tables" = {
        for k, v in local.tables :
        v.table => {
          friendly_name       = v.table
          labels              = try(v.labels, null)
          options             = try(v.options, null)
          partitioning        = try(v.partitioning, null)
          schema              = jsonencode(v.schema)
          use_legacy_sql      = try(v.use_legacy_sql, false)
          deletion_protection = try(v.deletion_protection, false)
        }
        if v.dataset == dataset
      }
    }
  }
}

module "bq" {
  source = "../../../modules/bigquery-dataset"

  for_each   = local.output
  project_id = var.project_id
  id         = each.key
  views      = try(each.value.views, null)
  tables     = try(each.value.tables, null)
}

@@ -0,0 +1,31 @@

/**
 * Copyright 2022 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

variable "views_dir" {
  description = "Relative path for the folder storing view data."
  type        = string
}

variable "tables_dir" {
  description = "Relative path for the folder storing table data."
  type        = string
}

variable "project_id" {
  description = "Project ID."
  type        = string
}

@@ -39,11 +39,16 @@ It is meant to be used as a starting point for most Shared VPC configurations, a

<a href="./ilb-next-hop/" title="ILB as next hop"><img src="./ilb-next-hop/diagram.png" align="left" width="280px"></a> This [blueprint](./ilb-next-hop/) allows testing [ILB as next hop](https://cloud.google.com/load-balancing/docs/internal/ilb-next-hop-overview) using simple Linux gateway VMs between two VPCs, to emulate virtual appliances. An optional additional ILB can be enabled to test multiple load balancer configurations and hashing.
<br clear="left">

-### Calling a private Cloud Function from On-premises
+### Calling a private Cloud Function from on-premises

<a href="./private-cloud-function-from-onprem/" title="Private Cloud Function from on-premises"><img src="./private-cloud-function-from-onprem/diagram.png" align="left" width="280px"></a> This [blueprint](./private-cloud-function-from-onprem/) shows how to invoke a [private Google Cloud Function](https://cloud.google.com/functions/docs/networking/network-settings) from the on-prem environment via a [Private Service Connect endpoint](https://cloud.google.com/vpc/docs/private-service-connect#benefits-apis).
<br clear="left">

### Calling on-premises services through PSC and hybrid NEGs

<a href="./psc-hybrid/" title="Hybrid connectivity to on-premises services through PSC"><img src="./psc-hybrid/diagram.png" align="left" width="280px"></a> This [blueprint](./psc-hybrid/) shows how to privately connect to on-premises services (IP + port) from GCP, leveraging [Private Service Connect (PSC)](https://cloud.google.com/vpc/docs/private-service-connect) and [Hybrid Network Endpoint Groups](https://cloud.google.com/load-balancing/docs/negs/hybrid-neg-concepts).
<br clear="left">

### Decentralized firewall management

<a href="./decentralized-firewall/" title="Decentralized firewall management"><img src="./decentralized-firewall/diagram.png" align="left" width="280px"></a> This [blueprint](./decentralized-firewall/) shows how decentralized firewall management can be organized using the [firewall factory](../factories/net-vpc-firewall-yaml/).

@@ -262,13 +262,14 @@ module "mig-proxy" {
     metric = var.autoscaling_metric
   }
   update_policy = {
-    type                 = "PROACTIVE"
-    minimal_action       = "REPLACE"
-    min_ready_sec        = 60
-    max_surge_type       = "fixed"
-    max_surge            = 3
-    max_unavailable_type = null
-    max_unavailable      = null
+    instance_redistribution_type = "PROACTIVE"
+    max_surge_type               = "fixed"
+    max_surge                    = 3
+    max_unavailable_type         = null
+    max_unavailable              = null
+    minimal_action               = "REPLACE"
+    min_ready_sec                = 60
+    type                         = "PROACTIVE"
   }
   default_version = {
     instance_template = module.proxy-vm.template.self_link

@@ -0,0 +1,55 @@

# Hybrid connectivity to on-premises services through PSC

This sample shows how to connect to an on-premises service leveraging Private Service Connect (PSC).

It creates:

* A [producer](./psc-producer/README.md): a VPC exposing a PSC Service Attachment (SA), connecting to an internal regional TCP proxy load balancer, using a hybrid NEG backend that connects to an on-premises service (IP address + port).
* A [consumer](./psc-consumer/README.md): a VPC with a PSC endpoint pointing to the PSC SA exposed by the producer. The endpoint is accessible by clients through a local IP address on the consumer VPC.

![High-level diagram](diagram.png "High-level diagram")

## Sample modules

The blueprint makes use of the [psc-producer](psc-producer) and [psc-consumer](psc-consumer) modules contained in this folder, so you can build on top of these building blocks to support more complex scenarios.

## Prerequisites

Before applying this Terraform, make sure that:

- On-premises
  - ingress is allowed from the *35.191.0.0/16* and *130.211.0.0/22* CIDRs (Google health check ranges)
  - ingress is allowed from the proxy-only subnet CIDR
- GCP
  - the *35.191.0.0/16* and *130.211.0.0/22* CIDRs are advertised from GCP to on-premises
  - the proxy-only subnet CIDR is advertised from GCP to on-premises
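On the GCP side, one way to advertise the health check ranges is a Cloud Router custom advertisement. The sketch below is illustrative only; the router name, ASN, and the underlying VPN/Interconnect BGP setup are assumptions, not part of this blueprint:

```hcl
# Illustrative only: advertise Google health check ranges to on-premises
# through an (assumed) existing Cloud Router BGP session.
resource "google_compute_router" "onprem" {
  name    = "onprem-router" # placeholder
  project = var.project_id
  region  = var.region
  network = "producer" # placeholder VPC name

  bgp {
    asn               = 64514 # placeholder private ASN
    advertise_mode    = "CUSTOM"
    advertised_groups = ["ALL_SUBNETS"]

    advertised_ip_ranges {
      range = "35.191.0.0/16"
    }
    advertised_ip_ranges {
      range = "130.211.0.0/22"
    }
  }
}
```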
## Relevant Links

* [Private Service Connect](https://cloud.google.com/vpc/docs/private-service-connect)
* [Hybrid connectivity Network Endpoint Groups](https://cloud.google.com/load-balancing/docs/negs/hybrid-neg-concepts)
* [Regional TCP Proxy with Hybrid NEGs](https://cloud.google.com/load-balancing/docs/tcp/set-up-int-tcp-proxy-hybrid)
* [PSC approval](https://cloud.google.com/vpc/docs/configure-private-service-connect-producer#publish-service-explicit)

<!-- BEGIN TFDOC -->

## Variables

| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [dest_ip_address](variables.tf#L37) | On-prem service destination IP address. | <code>string</code> | ✓ | |
| [prefix](variables.tf#L17) | Prefix to use for resource names. | <code>string</code> | ✓ | |
| [producer](variables.tf#L88) | Producer configuration. | <code title="object({ subnet_main = string # CIDR subnet_proxy = string # CIDR subnet_psc = string # CIDR accepted_limits = map(number) # Accepted project ids => PSC endpoint limit })">object({…})</code> | ✓ | |
| [project_id](variables.tf#L22) | When referencing existing projects, the id of the project where resources will be created. | <code>string</code> | ✓ | |
| [region](variables.tf#L27) | Region where resources will be created. | <code>string</code> | ✓ | |
| [subnet_consumer](variables.tf#L98) | Consumer subnet CIDR. | <code>string # CIDR</code> | ✓ | |
| [zone](variables.tf#L32) | Zone where resources will be created. | <code>string</code> | ✓ | |
| [dest_port](variables.tf#L42) | On-prem service destination port. | <code>string</code> | | <code>"80"</code> |
| [project_create](variables.tf#L48) | Whether to automatically create a project. | <code>bool</code> | | <code>false</code> |
| [vpc_config](variables.tf#L60) | VPC and subnet ids, in case existing VPCs are used. | <code title="object({ producer = object({ id = string subnet_main_id = string subnet_proxy_id = string subnet_psc_id = string }) consumer = object({ id = string subnet_main_id = string }) })">object({…})</code> | | <code title="{ producer = { id = "xxx" subnet_main_id = "xxx" subnet_proxy_id = "xxx" subnet_psc_id = "xxx" } consumer = { id = "xxx" subnet_main_id = "xxx" } }">{…}</code> |
| [vpc_create](variables.tf#L54) | Whether to automatically create VPCs. | <code>bool</code> | | <code>true</code> |

<!-- END TFDOC -->

@@ -0,0 +1,136 @@

/**
 * Copyright 2022 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

locals {
  prefix = coalesce(var.prefix, "") == "" ? "" : "${var.prefix}-"
  project_id = (
    var.project_create
    ? module.project.project_id
    : var.project_id
  )
  vpc_producer_id = (
    var.vpc_create
    ? module.vpc_producer.network.id
    : var.vpc_config["producer"]["id"]
  )
  vpc_producer_main = (
    var.vpc_create
    ? module.vpc_producer.subnets["${var.region}/${var.prefix}-main"].id
    : var.vpc_config["producer"]["subnet_main_id"]
  )
  vpc_producer_proxy = (
    var.vpc_create
    ? module.vpc_producer.subnets_proxy_only["${var.region}/${var.prefix}-proxy"].id
    : var.vpc_config["producer"]["subnet_proxy_id"]
  )
  vpc_producer_psc = (
    var.vpc_create
    ? module.vpc_producer.subnets_psc["${var.region}/${var.prefix}-psc"].id
    : var.vpc_config["producer"]["subnet_psc_id"]
  )
  vpc_consumer_id = (
    var.vpc_create
    ? module.vpc_consumer.network.id
    : var.vpc_config["consumer"]["id"]
  )
  vpc_consumer_main = (
    var.vpc_create
    ? module.vpc_consumer.subnets["${var.region}/${var.prefix}-consumer"].id
    : var.vpc_config["consumer"]["subnet_main_id"]
  )
}

module "project" {
  source         = "../../../modules/project"
  name           = var.project_id
  project_create = var.project_create
  services = [
    "compute.googleapis.com"
  ]
}

# Producer

module "vpc_producer" {
  source     = "../../../modules/net-vpc"
  project_id = local.project_id
  name       = "${local.prefix}producer"
  subnets = [
    {
      ip_cidr_range      = var.producer["subnet_main"]
      name               = "${var.prefix}-main"
      region             = var.region
      secondary_ip_range = {}
    }
  ]
  subnets_proxy_only = [
    {
      ip_cidr_range = var.producer["subnet_proxy"]
      name          = "${local.prefix}proxy"
      region        = var.region
      active        = true
    }
  ]
  subnets_psc = [
    {
      ip_cidr_range = var.producer["subnet_psc"]
      name          = "${local.prefix}psc"
      region        = var.region
    }
  ]
}

module "psc_producer" {
  source          = "./psc-producer"
  project_id      = local.project_id
  name            = var.prefix
  dest_ip_address = var.dest_ip_address
  dest_port       = var.dest_port
  network         = local.vpc_producer_id
  region          = var.region
  zone            = var.zone
  subnet          = local.vpc_producer_main
  subnet_proxy    = local.vpc_producer_proxy
  subnets_psc = [
    local.vpc_producer_psc
  ]
  accepted_limits = var.producer["accepted_limits"]
}

# Consumer

module "vpc_consumer" {
  source     = "../../../modules/net-vpc"
  project_id = local.project_id
  name       = "${local.prefix}consumer"
  subnets = [
    {
      ip_cidr_range      = var.subnet_consumer
      name               = "${local.prefix}consumer"
      region             = var.region
      secondary_ip_range = {}
    }
  ]
}

module "psc_consumer" {
  source     = "./psc-consumer"
  project_id = local.project_id
  name       = "${local.prefix}consumer"
  region     = var.region
  network    = local.vpc_consumer_id
  subnet     = local.vpc_consumer_main
  sa_id      = module.psc_producer.service_attachment.id
}

@@ -0,0 +1,18 @@

# PSC Consumer

The module creates, in a consumer VPC, a Private Service Connect (PSC) endpoint pointing to the specified PSC Service Attachment (SA).
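A minimal usage sketch, adapted from the module call in the parent blueprint's `main.tf` (literal values here are placeholders, not defaults):

```hcl
module "psc_consumer" {
  source     = "./psc-consumer"
  project_id = "my-project"   # placeholder
  name       = "consumer"
  region     = "europe-west1" # placeholder
  network    = module.vpc_consumer.network.id
  subnet     = module.vpc_consumer.subnets["europe-west1/consumer"].id
  sa_id      = module.psc_producer.service_attachment.id
}
# tftest skip
```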
<!-- BEGIN TFDOC -->

## Variables

| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [name](variables.tf#L22) | Name of the resources created. | <code>string</code> | ✓ | |
| [network](variables.tf#L32) | Consumer network id. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L17) | The ID of the project where this VPC will be created. | <code>string</code> | ✓ | |
| [region](variables.tf#L27) | Region where resources will be created. | <code>string</code> | ✓ | |
| [sa_id](variables.tf#L42) | PSC producer service attachment id. | <code>string</code> | ✓ | |
| [subnet](variables.tf#L37) | Subnetwork id where resources will be associated. | <code>string</code> | ✓ | |

<!-- END TFDOC -->

@@ -0,0 +1,33 @@

/**
 * Copyright 2022 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

resource "google_compute_address" "psc_endpoint_address" {
  name         = var.name
  project      = var.project_id
  address_type = "INTERNAL"
  subnetwork   = var.subnet
  region       = var.region
}

resource "google_compute_forwarding_rule" "psc_ilb_consumer" {
  name    = var.name
  project = var.project_id
  region  = var.region
  target  = var.sa_id
  # PSC endpoints require an empty load balancing scheme.
  load_balancing_scheme = ""
  network               = var.network
  ip_address            = google_compute_address.psc_endpoint_address.id
}

@@ -0,0 +1,45 @@

/**
 * Copyright 2022 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

variable "project_id" {
  description = "The ID of the project where this VPC will be created."
  type        = string
}

variable "name" {
  description = "Name of the resources created."
  type        = string
}

variable "region" {
  description = "Region where resources will be created."
  type        = string
}

variable "network" {
  description = "Consumer network id."
  type        = string
}

variable "subnet" {
  description = "Subnetwork id where resources will be associated."
  type        = string
}

variable "sa_id" {
  description = "PSC producer service attachment id."
  type        = string
}

@@ -0,0 +1,33 @@

# PSC Producer

In a producer VPC, the module creates:

- an internal regional TCP proxy load balancer with a hybrid Network Endpoint Group (NEG) backend, pointing to an on-premises service (IP + port)
- a Private Service Connect Service Attachment (PSC SA) exposing the service to [PSC consumers](../psc-consumer/README.md)
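A minimal usage sketch, adapted from the module call in the parent blueprint's `main.tf` (literal values here are placeholders, not defaults):

```hcl
module "psc_producer" {
  source          = "./psc-producer"
  project_id      = "my-project"     # placeholder
  name            = "onprem-service" # placeholder
  region          = "europe-west1"   # placeholder
  zone            = "b"
  network         = module.vpc_producer.network.id
  subnet          = module.vpc_producer.subnets["europe-west1/main"].id
  subnet_proxy    = module.vpc_producer.subnets_proxy_only["europe-west1/proxy"].id
  subnets_psc     = [module.vpc_producer.subnets_psc["europe-west1/psc"].id]
  dest_ip_address = "10.0.0.10" # placeholder on-prem service IP
  dest_port       = "80"
  accepted_limits = { "my-consumer-project" = 10 }
}
# tftest skip
```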
<!-- BEGIN TFDOC -->

## Variables

| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [accepted_limits](variables.tf#L68) | Incoming accepted projects with endpoints limit. | <code>map(number)</code> | ✓ | |
| [dest_ip_address](variables.tf#L57) | On-prem service destination IP address. | <code>string</code> | ✓ | |
| [name](variables.tf#L22) | Name of the resources created. | <code>string</code> | ✓ | |
| [network](variables.tf#L37) | Producer network id. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L17) | The ID of the project where this VPC will be created. | <code>string</code> | ✓ | |
| [region](variables.tf#L27) | Region where resources will be created. | <code>string</code> | ✓ | |
| [subnet](variables.tf#L42) | Subnetwork id where resources will be associated. | <code>string</code> | ✓ | |
| [subnet_proxy](variables.tf#L47) | L7 Regional load balancing subnet id. | <code>string</code> | ✓ | |
| [subnets_psc](variables.tf#L52) | PSC NAT subnets. | <code>list(string)</code> | ✓ | |
| [zone](variables.tf#L32) | Zone where resources will be created. | <code>string</code> | ✓ | |
| [dest_port](variables.tf#L62) | On-prem service destination port. | <code>string</code> | | <code>"80"</code> |

## Outputs

| name | description | sensitive |
|---|---|:---:|
| [service_attachment](outputs.tf#L17) | The service attachment resource. | |

<!-- END TFDOC -->

@@ -0,0 +1,107 @@

/**
 * Copyright 2022 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

# Hybrid NEG

resource "google_compute_network_endpoint_group" "neg" {
  name                  = var.name
  project               = var.project_id
  network               = var.network
  default_port          = var.dest_port
  zone                  = "${var.region}-${var.zone}"
  network_endpoint_type = "NON_GCP_PRIVATE_IP_PORT"
}

resource "google_compute_network_endpoint" "endpoint" {
  project                = var.project_id
  network_endpoint_group = google_compute_network_endpoint_group.neg.name
  port                   = var.dest_port
  ip_address             = var.dest_ip_address
  zone                   = "${var.region}-${var.zone}"
}

# TCP Proxy ILB

resource "google_compute_region_health_check" "health_check" {
  name               = var.name
  project            = var.project_id
  region             = var.region
  timeout_sec        = 1
  check_interval_sec = 1

  tcp_health_check {
    port = var.dest_port
  }
}

resource "google_compute_region_backend_service" "backend_service" {
  name                  = var.name
  project               = var.project_id
  region                = var.region
  health_checks         = [google_compute_region_health_check.health_check.id]
  load_balancing_scheme = "INTERNAL_MANAGED"
  protocol              = "TCP"

  backend {
    group           = google_compute_network_endpoint_group.neg.self_link
    balancing_mode  = "CONNECTION"
    failover        = false
    capacity_scaler = 1.0
    max_connections = 100
  }
}

resource "google_compute_region_target_tcp_proxy" "target_proxy" {
  provider        = google-beta
  name            = var.name
  region          = var.region
  project         = var.project_id
  backend_service = google_compute_region_backend_service.backend_service.id
}

resource "google_compute_forwarding_rule" "forwarding_rule" {
  provider              = google-beta
  name                  = var.name
  project               = var.project_id
  region                = var.region
  ip_protocol           = "TCP"
  load_balancing_scheme = "INTERNAL_MANAGED"
  port_range            = var.dest_port
  target                = google_compute_region_target_tcp_proxy.target_proxy.id
  network               = var.network
  subnetwork            = var.subnet
  network_tier          = "PREMIUM"
}

# PSC Service Attachment

resource "google_compute_service_attachment" "service_attachment" {
  name                  = var.name
  project               = var.project_id
  region                = var.region
  enable_proxy_protocol = false
  connection_preference = "ACCEPT_MANUAL"
  nat_subnets           = var.subnets_psc
  target_service        = google_compute_forwarding_rule.forwarding_rule.id

  dynamic "consumer_accept_lists" {
    for_each = var.accepted_limits
    content {
      project_id_or_num = consumer_accept_lists.key
      connection_limit  = consumer_accept_lists.value
    }
  }
}
|
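On the consumer side, this service attachment is reached through a PSC endpoint: an internal address plus a forwarding rule targeting the attachment URI. A minimal sketch, assuming a hypothetical consumer project, VPC, and subnet (none of these names are part of the blueprint), with the attachment URI taken from the `service_attachment` output:

```hcl
# Hypothetical consumer-side PSC endpoint for the attachment above.
resource "google_compute_address" "psc_endpoint_address" {
  name         = "psc-endpoint"        # hypothetical
  project      = "consumer-project-id" # hypothetical
  region       = "europe-west1"        # must match the producer region
  subnetwork   = "consumer-subnet"     # hypothetical consumer subnet
  address_type = "INTERNAL"
}

resource "google_compute_forwarding_rule" "psc_endpoint" {
  name       = "psc-endpoint"
  project    = "consumer-project-id" # hypothetical
  region     = "europe-west1"
  network    = "consumer-vpc" # hypothetical
  ip_address = google_compute_address.psc_endpoint_address.id
  # PSC consumer endpoints require an empty load balancing scheme.
  load_balancing_scheme = ""
  # Attachment URI, e.g. taken from the module's service_attachment output.
  target = "projects/producer-project-id/regions/europe-west1/serviceAttachments/psc-ilb"
}
```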
@@ -0,0 +1,20 @@
/**
 * Copyright 2022 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

output "service_attachment" {
  description = "The service attachment resource."
  value       = google_compute_service_attachment.service_attachment
}
@@ -0,0 +1,71 @@
/**
 * Copyright 2022 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

variable "project_id" {
  description = "The ID of the project where resources will be created."
  type        = string
}

variable "name" {
  description = "Name of the resources created."
  type        = string
}

variable "region" {
  description = "Region where resources will be created."
  type        = string
}

variable "zone" {
  description = "Zone where resources will be created."
  type        = string
}

variable "network" {
  description = "Producer network id."
  type        = string
}

variable "subnet" {
  description = "Subnetwork id where resources will be associated."
  type        = string
}

variable "subnet_proxy" {
  description = "L7 regional load balancing subnet id."
  type        = string
}

variable "subnets_psc" {
  description = "PSC NAT subnets."
  type        = list(string)
}

variable "dest_ip_address" {
  description = "On-prem service destination IP address."
  type        = string
}

variable "dest_port" {
  description = "On-prem service destination port."
  type        = string
  default     = "80"
}

variable "accepted_limits" {
  description = "Accepted consumer projects mapped to their PSC endpoint limit."
  type        = map(number)
}
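The variables above can be wired up with a module call along these lines; the source path and all values here are hypothetical, shown only to illustrate how the inputs fit together:

```hcl
module "psc_producer" {
  source          = "./modules/psc-producer" # hypothetical path
  project_id      = "my-producer-project"
  name            = "psc-ilb"
  region          = "europe-west1"
  zone            = "b" # joined with region into "europe-west1-b"
  network         = "projects/my-producer-project/global/networks/prod"
  subnet          = "projects/my-producer-project/regions/europe-west1/subnetworks/main"
  subnet_proxy    = "projects/my-producer-project/regions/europe-west1/subnetworks/proxy"
  subnets_psc     = ["projects/my-producer-project/regions/europe-west1/subnetworks/psc"]
  dest_ip_address = "10.0.0.10"
  dest_port       = "80"
  # Consumer project ids mapped to the number of PSC endpoints each may create.
  accepted_limits = {
    "consumer-project-1" = 10
  }
}
```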
@@ -0,0 +1,101 @@
/**
 * Copyright 2022 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

variable "prefix" {
  description = "Prefix to use for resource names."
  type        = string
}

variable "project_id" {
  description = "When referencing existing projects, the id of the project where resources will be created."
  type        = string
}

variable "region" {
  description = "Region where resources will be created."
  type        = string
}

variable "zone" {
  description = "Zone where resources will be created."
  type        = string
}

variable "dest_ip_address" {
  description = "On-prem service destination IP address."
  type        = string
}

variable "dest_port" {
  description = "On-prem service destination port."
  type        = string
  default     = "80"
}

variable "project_create" {
  description = "Whether to automatically create a project."
  type        = bool
  default     = false
}

variable "vpc_create" {
  description = "Whether to automatically create VPCs."
  type        = bool
  default     = true
}

variable "vpc_config" {
  description = "VPC and subnet ids, in case existing VPCs are used."
  type = object({
    producer = object({
      id              = string
      subnet_main_id  = string
      subnet_proxy_id = string
      subnet_psc_id   = string
    })
    consumer = object({
      id             = string
      subnet_main_id = string
    })
  })
  default = {
    producer = {
      id              = "xxx"
      subnet_main_id  = "xxx"
      subnet_proxy_id = "xxx"
      subnet_psc_id   = "xxx"
    }
    consumer = {
      id             = "xxx"
      subnet_main_id = "xxx"
    }
  }
}

variable "producer" {
  description = "Producer configuration."
  type = object({
    subnet_main     = string      # CIDR
    subnet_proxy    = string      # CIDR
    subnet_psc      = string      # CIDR
    accepted_limits = map(number) # Accepted project ids => PSC endpoint limit
  })
}

variable "subnet_consumer" {
  description = "Consumer subnet CIDR."
  type        = string # CIDR
}
@@ -61,6 +61,6 @@ module "cloudsql" {
   tier      = local.cloudsql_conf.tier
   databases = [local.cloudsql_conf.db]
   users = {
-    "${local.cloudsql_conf.user}" = "${local.cloudsql_conf.pass}"
+    "${local.cloudsql_conf.user}" = var.cloudsql_password
   }
 }
@@ -22,7 +22,6 @@ locals {
     tier = "db-g1-small"
     db   = "wp-mysql"
     user = "admin"
-    pass = var.cloudsql_password == null ? random_password.cloudsql_password.result : var.cloudsql_password
   }
   iam = {
     # CloudSQL
@@ -92,7 +91,7 @@ module "cloud_run" {
     "WORDPRESS_DATABASE_HOST" : module.cloudsql.ip
     "WORDPRESS_DATABASE_NAME" : local.cloudsql_conf.db
     "WORDPRESS_DATABASE_USER" : local.cloudsql_conf.user
-    "WORDPRESS_DATABASE_PASSWORD" : local.cloudsql_conf.pass
+    "WORDPRESS_DATABASE_PASSWORD" : var.cloudsql_password == null ? module.cloudsql.user_passwords[local.cloudsql_conf.user] : var.cloudsql_password
     "WORDPRESS_USERNAME" : local.wp_user
     "WORDPRESS_PASSWORD" : local.wp_pass
   }
@@ -22,7 +22,7 @@ output "cloud_run_service" {

 output "cloudsql_password" {
   description = "CloudSQL password"
-  value       = local.cloudsql_conf.pass
+  value       = var.cloudsql_password == null ? module.cloudsql.user_passwords[local.cloudsql_conf.user] : var.cloudsql_password
   sensitive   = true
 }
@@ -17,11 +17,11 @@ terraform {
   required_providers {
     google = {
       source  = "hashicorp/google"
-      version = ">= 4.36.0" # tftest
+      version = ">= 4.36.0"
     }
     google-beta = {
       source  = "hashicorp/google-beta"
-      version = ">= 4.36.0" # tftest
+      version = ">= 4.36.0"
     }
   }
 }
@@ -35,8 +35,8 @@ To destroy a previous FAST deployment follow the instructions detailed in [clean
 - [Security](02-security/README.md)
   Manages centralized security configurations in a separate stage, and is typically owned by the security team. This stage implements VPC Security Controls via separate perimeters for environments and central services, and creates projects to host centralized KMS keys used by the whole organization. It's meant to be easily extended to include other security-related resources which are required, like Secret Manager.\
   Exports: KMS key ids
-- Networking ([VPN](02-networking-vpn/README.md)/[NVA](02-networking-nva/README.md)/[Peering](02-networking-peering/README.md))
-  Manages centralized network resources in a separate stage, and is typically owned by the networking team. This stage implements a hub-and-spoke design, and includes connectivity via VPN to on-premises, and YAML-based factories for firewall rules (hierarchical and VPC-level) and subnets. It's currently available in three flavors: [spokes connected via VPN](02-networking-vpn/README.md), [and spokes connected via appliances](02-networking-nva/README.md), and [and spokes connected via VPC peering](02-networking-peering/README.md).\
+- Networking ([VPN](02-networking-vpn/README.md)/[NVA](02-networking-nva/README.md)/[Peering](02-networking-peering/README.md)/[Separate environments](02-networking-separate-envs/README.md))
+  Manages centralized network resources in a separate stage, and is typically owned by the networking team. This stage implements a hub-and-spoke design, and includes connectivity via VPN to on-premises, and YAML-based factories for firewall rules (hierarchical and VPC-level) and subnets. It's currently available in four flavors: [spokes connected via VPN](02-networking-vpn/README.md), [spokes connected via appliances](02-networking-nva/README.md), [spokes connected via VPC peering](02-networking-peering/README.md), and [separate network environments](02-networking-separate-envs/README.md).\
   Exports: host project ids and numbers, vpc self links

 ## Environment-level resources (03)
@@ -340,7 +340,7 @@ module "nginx-mig" {
     per_instance_config = {},
     mig_config = {
       stateful_disks = {
-        persistent-disk-1 = {
+        repd-1 = {
           delete_rule = "NEVER"
         }
       }
@@ -461,9 +461,9 @@ module "nginx-mig" {
 | [stateful_config](variables.tf#L90) | Stateful configuration can be done by individual instances or for all instances in the MIG. They key in per_instance_config is the name of the specific instance. The key of the stateful_disks is the 'device_name' field of the resource. Please note that device_name is defined at the OS mount level, unlike the disk name. | <code title="object({ per_instance_config = map(object({ stateful_disks = map(object({ source = string mode = string # READ_WRITE | READ_ONLY delete_rule = string # NEVER | ON_PERMANENT_INSTANCE_DELETION })) metadata = map(string) update_config = object({ minimal_action = string # NONE | REPLACE | RESTART | REFRESH most_disruptive_allowed_action = string # REPLACE | RESTART | REFRESH | NONE remove_instance_state_on_destroy = bool }) })) mig_config = object({ stateful_disks = map(object({ delete_rule = string # NEVER | ON_PERMANENT_INSTANCE_DELETION })) }) })">object({…})</code> |  | <code>null</code> |
 | [target_pools](variables.tf#L121) | Optional list of URLs for target pools to which new instances in the group are added. | <code>list(string)</code> |  | <code>[]</code> |
 | [target_size](variables.tf#L127) | Group target size, leave null when using an autoscaler. | <code>number</code> |  | <code>null</code> |
-| [update_policy](variables.tf#L133) | Update policy. Type can be 'OPPORTUNISTIC' or 'PROACTIVE', action 'REPLACE' or 'restart', surge type 'fixed' or 'percent'. | <code title="object({ type = string # OPPORTUNISTIC | PROACTIVE minimal_action = string # REPLACE | RESTART min_ready_sec = number max_surge_type = string # fixed | percent max_surge = number max_unavailable_type = string max_unavailable = number })">object({…})</code> |  | <code>null</code> |
-| [versions](variables.tf#L147) | Additional application versions, target_type is either 'fixed' or 'percent'. | <code title="map(object({ instance_template = string target_type = string # fixed | percent target_size = number }))">map(object({…}))</code> |  | <code>null</code> |
-| [wait_for_instances](variables.tf#L157) | Wait for all instances to be created/updated before returning. | <code>bool</code> |  | <code>null</code> |
+| [update_policy](variables.tf#L133) | Update policy. Type can be 'OPPORTUNISTIC' or 'PROACTIVE', action 'REPLACE' or 'restart', surge type 'fixed' or 'percent'. | <code title="object({ instance_redistribution_type = optional(string, "PROACTIVE") # NONE | PROACTIVE. The attribute is ignored if regional is set to false. max_surge_type = string # fixed | percent max_surge = number max_unavailable_type = string max_unavailable = number minimal_action = string # REPLACE | RESTART min_ready_sec = number type = string # OPPORTUNISTIC | PROACTIVE })">object({…})</code> |  | <code>null</code> |
+| [versions](variables.tf#L148) | Additional application versions, target_type is either 'fixed' or 'percent'. | <code title="map(object({ instance_template = string target_type = string # fixed | percent target_size = number }))">map(object({…}))</code> |  | <code>null</code> |
+| [wait_for_instances](variables.tf#L158) | Wait for all instances to be created/updated before returning. | <code>bool</code> |  | <code>null</code> |

 ## Outputs
@@ -474,6 +474,3 @@ module "nginx-mig" {
 | [health_check](outputs.tf#L35) | Auto-created health-check resource. |  |

 <!-- END TFDOC -->
-## TODO
-
-- [✓] add support for instance groups
@@ -264,9 +264,10 @@ resource "google_compute_region_instance_group_manager" "default" {
     for_each = var.update_policy == null ? [] : [var.update_policy]
     iterator = config
     content {
-      type           = config.value.type
-      minimal_action = config.value.minimal_action
-      min_ready_sec  = config.value.min_ready_sec
+      instance_redistribution_type = config.value.instance_redistribution_type
+      type                         = config.value.type
+      minimal_action               = config.value.minimal_action
+      min_ready_sec                = config.value.min_ready_sec
       max_surge_fixed = (
         config.value.max_surge_type == "fixed" ? config.value.max_surge : null
       )
@@ -133,13 +133,14 @@ variable "target_size" {
 variable "update_policy" {
   description = "Update policy. Type can be 'OPPORTUNISTIC' or 'PROACTIVE', action 'REPLACE' or 'restart', surge type 'fixed' or 'percent'."
   type = object({
-    type                 = string # OPPORTUNISTIC | PROACTIVE
-    minimal_action       = string # REPLACE | RESTART
-    min_ready_sec        = number
-    max_surge_type       = string # fixed | percent
-    max_surge            = number
-    max_unavailable_type = string
-    max_unavailable      = number
+    instance_redistribution_type = optional(string, "PROACTIVE") # NONE | PROACTIVE. The attribute is ignored if regional is set to false.
+    max_surge_type               = string # fixed | percent
+    max_surge                    = number
+    max_unavailable_type         = string
+    max_unavailable              = number
+    minimal_action               = string # REPLACE | RESTART
+    min_ready_sec                = number
+    type                         = string # OPPORTUNISTIC | PROACTIVE
   })
   default = null
 }
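With the refactored variable, callers now pass `instance_redistribution_type` alongside the existing fields. A sketch of a call site, assuming the module name and all values here are illustrative rather than taken from the PR:

```hcl
module "nginx-mig" {
  # ... other compute-mig arguments elided ...
  update_policy = {
    instance_redistribution_type = "NONE" # keep instances in their current zones
    max_surge_type               = "fixed"
    max_surge                    = 3
    max_unavailable_type         = "fixed"
    max_unavailable              = 0
    minimal_action               = "REPLACE"
    min_ready_sec                = 30
    type                         = "OPPORTUNISTIC"
  }
}
```

Omitting `instance_redistribution_type` falls back to the `"PROACTIVE"` default declared in the variable type.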
@@ -278,34 +278,34 @@ module "instance-group" {

 | name | description | type | required | default |
 |---|---|:---:|:---:|:---:|
-| [name](variables.tf#L163) | Instance name. | <code>string</code> | ✓ |  |
-| [network_interfaces](variables.tf#L168) | Network interfaces configuration. Use self links for Shared VPC, set addresses to null if not needed. | <code title="list(object({ nat = optional(bool, false) network = string subnetwork = string addresses = optional(object({ internal = string external = string }), null) alias_ips = optional(map(string), {}) nic_type = optional(string) }))">list(object({…}))</code> | ✓ |  |
-| [project_id](variables.tf#L205) | Project id. | <code>string</code> | ✓ |  |
-| [zone](variables.tf#L264) | Compute zone. | <code>string</code> | ✓ |  |
-| [attached_disk_defaults](variables.tf#L17) | Defaults for attached disks options. | <code title="object({ mode = string replica_zone = string type = string })">object({…})</code> |  | <code title="{ mode = "READ_WRITE" replica_zone = null type = "pd-balanced" }">{…}</code> |
-| [attached_disks](variables.tf#L31) | Additional disks, if options is null defaults will be used in its place. Source type is one of 'image' (zonal disks in vms and template), 'snapshot' (vm), 'existing', and null. | <code title="list(object({ name = string size = string source = optional(string) source_type = optional(string) options = optional( object({ mode = optional(string, "READ_WRITE") replica_zone = optional(string) type = optional(string, "pd-balanced") }), { mode = "READ_WRITE" replica_zone = null type = "pd-balanced" } ) }))">list(object({…}))</code> |  | <code>[]</code> |
-| [boot_disk](variables.tf#L64) | Boot disk properties. | <code title="object({ auto_delete = optional(bool, true) image = optional(string, "projects/debian-cloud/global/images/family/debian-11") size = optional(number, 10) type = optional(string, "pd-balanced") })">object({…})</code> |  | <code title="{ auto_delete = true image = "projects/debian-cloud/global/images/family/debian-11" type = "pd-balanced" size = 10 }">{…}</code> |
-| [can_ip_forward](variables.tf#L80) | Enable IP forwarding. | <code>bool</code> |  | <code>false</code> |
-| [confidential_compute](variables.tf#L86) | Enable Confidential Compute for these instances. | <code>bool</code> |  | <code>false</code> |
-| [create_template](variables.tf#L92) | Create instance template instead of instances. | <code>bool</code> |  | <code>false</code> |
-| [description](variables.tf#L97) | Description of a Compute Instance. | <code>string</code> |  | <code>"Managed by the compute-vm Terraform module."</code> |
-| [enable_display](variables.tf#L103) | Enable virtual display on the instances. | <code>bool</code> |  | <code>false</code> |
-| [encryption](variables.tf#L109) | Encryption options. Only one of kms_key_self_link and disk_encryption_key_raw may be set. If needed, you can specify to encrypt or not the boot disk. | <code title="object({ encrypt_boot = optional(bool, false) disk_encryption_key_raw = optional(string) kms_key_self_link = optional(string) })">object({…})</code> |  | <code>null</code> |
-| [group](variables.tf#L119) | Define this variable to create an instance group for instances. Disabled for template use. | <code title="object({ named_ports = map(number) })">object({…})</code> |  | <code>null</code> |
-| [hostname](variables.tf#L127) | Instance FQDN name. | <code>string</code> |  | <code>null</code> |
-| [iam](variables.tf#L133) | IAM bindings in {ROLE => [MEMBERS]} format. | <code>map(list(string))</code> |  | <code>{}</code> |
-| [instance_type](variables.tf#L139) | Instance type. | <code>string</code> |  | <code>"f1-micro"</code> |
-| [labels](variables.tf#L145) | Instance labels. | <code>map(string)</code> |  | <code>{}</code> |
-| [metadata](variables.tf#L151) | Instance metadata. | <code>map(string)</code> |  | <code>{}</code> |
-| [min_cpu_platform](variables.tf#L157) | Minimum CPU platform. | <code>string</code> |  | <code>null</code> |
-| [options](variables.tf#L183) | Instance options. | <code title="object({ allow_stopping_for_update = optional(bool, true) deletion_protection = optional(bool, false) spot = optional(bool, false) termination_action = optional(string) })">object({…})</code> |  | <code title="{ allow_stopping_for_update = true deletion_protection = false spot = false termination_action = null }">{…}</code> |
-| [scratch_disks](variables.tf#L210) | Scratch disks configuration. | <code title="object({ count = number interface = string })">object({…})</code> |  | <code title="{ count = 0 interface = "NVME" }">{…}</code> |
-| [service_account](variables.tf#L222) | Service account email. Unused if service account is auto-created. | <code>string</code> |  | <code>null</code> |
-| [service_account_create](variables.tf#L228) | Auto-create service account. | <code>bool</code> |  | <code>false</code> |
-| [service_account_scopes](variables.tf#L236) | Scopes applied to service account. | <code>list(string)</code> |  | <code>[]</code> |
-| [shielded_config](variables.tf#L242) | Shielded VM configuration of the instances. | <code title="object({ enable_secure_boot = bool enable_vtpm = bool enable_integrity_monitoring = bool })">object({…})</code> |  | <code>null</code> |
-| [tag_bindings](variables.tf#L252) | Tag bindings for this instance, in key => tag value id format. | <code>map(string)</code> |  | <code>null</code> |
-| [tags](variables.tf#L258) | Instance network tags for firewall rule targets. | <code>list(string)</code> |  | <code>[]</code> |
+| [name](variables.tf#L180) | Instance name. | <code>string</code> | ✓ |  |
+| [network_interfaces](variables.tf#L185) | Network interfaces configuration. Use self links for Shared VPC, set addresses to null if not needed. | <code title="list(object({ nat = optional(bool, false) network = string subnetwork = string addresses = optional(object({ internal = string external = string }), null) alias_ips = optional(map(string), {}) nic_type = optional(string) }))">list(object({…}))</code> | ✓ |  |
+| [project_id](variables.tf#L222) | Project id. | <code>string</code> | ✓ |  |
+| [zone](variables.tf#L281) | Compute zone. | <code>string</code> | ✓ |  |
+| [attached_disk_defaults](variables.tf#L17) | Defaults for attached disks options. | <code title="object({ auto_delete = optional(bool, false) mode = string replica_zone = string type = string })">object({…})</code> |  | <code title="{ auto_delete = true mode = "READ_WRITE" replica_zone = null type = "pd-balanced" }">{…}</code> |
+| [attached_disks](variables.tf#L38) | Additional disks, if options is null defaults will be used in its place. Source type is one of 'image' (zonal disks in vms and template), 'snapshot' (vm), 'existing', and null. | <code title="list(object({ name = string size = string source = optional(string) source_type = optional(string) options = optional( object({ auto_delete = optional(bool, false) mode = optional(string, "READ_WRITE") replica_zone = optional(string) type = optional(string, "pd-balanced") }), { auto_delete = true mode = "READ_WRITE" replica_zone = null type = "pd-balanced" } ) }))">list(object({…}))</code> |  | <code>[]</code> |
+| [boot_disk](variables.tf#L81) | Boot disk properties. | <code title="object({ auto_delete = optional(bool, true) image = optional(string, "projects/debian-cloud/global/images/family/debian-11") size = optional(number, 10) type = optional(string, "pd-balanced") })">object({…})</code> |  | <code title="{ auto_delete = true image = "projects/debian-cloud/global/images/family/debian-11" type = "pd-balanced" size = 10 }">{…}</code> |
+| [can_ip_forward](variables.tf#L97) | Enable IP forwarding. | <code>bool</code> |  | <code>false</code> |
+| [confidential_compute](variables.tf#L103) | Enable Confidential Compute for these instances. | <code>bool</code> |  | <code>false</code> |
+| [create_template](variables.tf#L109) | Create instance template instead of instances. | <code>bool</code> |  | <code>false</code> |
+| [description](variables.tf#L114) | Description of a Compute Instance. | <code>string</code> |  | <code>"Managed by the compute-vm Terraform module."</code> |
+| [enable_display](variables.tf#L120) | Enable virtual display on the instances. | <code>bool</code> |  | <code>false</code> |
+| [encryption](variables.tf#L126) | Encryption options. Only one of kms_key_self_link and disk_encryption_key_raw may be set. If needed, you can specify to encrypt or not the boot disk. | <code title="object({ encrypt_boot = optional(bool, false) disk_encryption_key_raw = optional(string) kms_key_self_link = optional(string) })">object({…})</code> |  | <code>null</code> |
+| [group](variables.tf#L136) | Define this variable to create an instance group for instances. Disabled for template use. | <code title="object({ named_ports = map(number) })">object({…})</code> |  | <code>null</code> |
+| [hostname](variables.tf#L144) | Instance FQDN name. | <code>string</code> |  | <code>null</code> |
+| [iam](variables.tf#L150) | IAM bindings in {ROLE => [MEMBERS]} format. | <code>map(list(string))</code> |  | <code>{}</code> |
+| [instance_type](variables.tf#L156) | Instance type. | <code>string</code> |  | <code>"f1-micro"</code> |
+| [labels](variables.tf#L162) | Instance labels. | <code>map(string)</code> |  | <code>{}</code> |
+| [metadata](variables.tf#L168) | Instance metadata. | <code>map(string)</code> |  | <code>{}</code> |
+| [min_cpu_platform](variables.tf#L174) | Minimum CPU platform. | <code>string</code> |  | <code>null</code> |
+| [options](variables.tf#L200) | Instance options. | <code title="object({ allow_stopping_for_update = optional(bool, true) deletion_protection = optional(bool, false) spot = optional(bool, false) termination_action = optional(string) })">object({…})</code> |  | <code title="{ allow_stopping_for_update = true deletion_protection = false spot = false termination_action = null }">{…}</code> |
+| [scratch_disks](variables.tf#L227) | Scratch disks configuration. | <code title="object({ count = number interface = string })">object({…})</code> |  | <code title="{ count = 0 interface = "NVME" }">{…}</code> |
+| [service_account](variables.tf#L239) | Service account email. Unused if service account is auto-created. | <code>string</code> |  | <code>null</code> |
+| [service_account_create](variables.tf#L245) | Auto-create service account. | <code>bool</code> |  | <code>false</code> |
+| [service_account_scopes](variables.tf#L253) | Scopes applied to service account. | <code>list(string)</code> |  | <code>[]</code> |
+| [shielded_config](variables.tf#L259) | Shielded VM configuration of the instances. | <code title="object({ enable_secure_boot = bool enable_vtpm = bool enable_integrity_monitoring = bool })">object({…})</code> |  | <code>null</code> |
+| [tag_bindings](variables.tf#L269) | Tag bindings for this instance, in key => tag value id format. | <code>map(string)</code> |  | <code>null</code> |
+| [tags](variables.tf#L275) | Instance network tags for firewall rule targets. | <code>list(string)</code> |  | <code>[]</code> |

 ## Outputs
@@ -284,7 +284,7 @@ resource "google_compute_instance_template" "default" {
     for_each = local.attached_disks
     iterator = config
     content {
-      # auto_delete = config.value.options.auto_delete
+      auto_delete = config.value.options.auto_delete
       device_name = config.value.name
       # Cannot use `source` with any of the fields in
       # [disk_size_gb disk_name disk_type source_image labels]
@@ -17,15 +17,22 @@
 variable "attached_disk_defaults" {
   description = "Defaults for attached disks options."
   type = object({
+    auto_delete  = optional(bool, false)
     mode         = string
     replica_zone = string
     type         = string
   })
   default = {
+    auto_delete  = true
     mode         = "READ_WRITE"
     replica_zone = null
     type         = "pd-balanced"
   }
+
+  validation {
+    condition     = var.attached_disk_defaults.mode == "READ_WRITE" || !var.attached_disk_defaults.auto_delete
+    error_message = "auto_delete can only be specified on READ_WRITE disks."
+  }
 }

 variable "attached_disks" {
@@ -37,11 +44,13 @@ variable "attached_disks" {
     source_type = optional(string)
     options = optional(
       object({
+        auto_delete  = optional(bool, false)
         mode         = optional(string, "READ_WRITE")
         replica_zone = optional(string)
         type         = optional(string, "pd-balanced")
       }),
       {
+        auto_delete  = true
         mode         = "READ_WRITE"
         replica_zone = null
         type         = "pd-balanced"
@@ -59,6 +68,14 @@ variable "attached_disks" {
     ]) == length(var.attached_disks)
     error_message = "Source type must be one of 'image', 'snapshot', 'attach', null."
   }
+
+  validation {
+    condition = length([
+      for d in var.attached_disks : d if d.options == null ||
+      d.options.mode == "READ_WRITE" || !d.options.auto_delete
+    ]) == length(var.attached_disks)
+    error_message = "auto_delete can only be specified on READ_WRITE disks."
+  }
 }

 variable "boot_disk" {
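Under the new validation, `auto_delete` is only accepted together with `READ_WRITE` mode. A sketch of a conforming call site (module label and disk names are illustrative, not from the PR):

```hcl
module "vm" {
  # ... other compute-vm arguments elided ...
  attached_disks = [{
    name = "data" # illustrative disk name
    size = "10"
    options = {
      auto_delete = true # allowed because mode is READ_WRITE
      mode        = "READ_WRITE"
    }
  }]
}
```

Setting `mode = "READ_ONLY"` together with `auto_delete = true` would fail the validation at plan time.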
@@ -79,7 +79,11 @@ resource "google_container_cluster" "cluster" {
     )
   }
   gce_persistent_disk_csi_driver_config {
-    enabled = var.enable_addons.gce_persistent_disk_csi_driver
+    enabled = (
+      var.enable_features.autopilot
+      ? true
+      : var.enable_addons.gce_persistent_disk_csi_driver
+    )
   }
   dynamic "gcp_filestore_csi_driver_config" {
     for_each = !var.enable_features.autopilot ? [""] : []
@@ -169,7 +173,7 @@ resource "google_container_cluster" "cluster" {
   }

   dynamic "logging_config" {
-    for_each = var.logging_config != null ? [""] : []
+    for_each = var.logging_config != null && !var.enable_features.autopilot ? [""] : []
     content {
       enable_components = var.logging_config
     }
@@ -234,7 +238,7 @@ resource "google_container_cluster" "cluster" {
   }

   dynamic "monitoring_config" {
-    for_each = var.monitoring_config != null ? [""] : []
+    for_each = var.monitoring_config != null && !var.enable_features.autopilot ? [""] : []
     content {
       enable_components = var.monitoring_config
     }
@ -0,0 +1,21 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest


def pytest_collection_modifyitems(config, items):
  for item in items:
    item.add_marker(
        pytest.mark.xdist_group(name='/'.join(item.path.parent.parts[-2:])))
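This conftest groups tests by the last two components of the test file's parent directory, so tests from the same blueprint directory land on the same pytest-xdist worker. A small sketch of the group-name derivation, using a hypothetical path (real paths come from pytest's `item.path`):

```python
from pathlib import Path

# Hypothetical test file path for illustration.
test_file = Path('tests/blueprints/factories/bigquery_factory/test_plan.py')

# Same expression as the conftest: join the last two parent directories.
group = '/'.join(test_file.parent.parts[-2:])
assert group == 'factories/bigquery_factory'
```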
@ -0,0 +1,13 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
@ -0,0 +1,23 @@
/**
 * Copyright 2022 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

module "bq" {
  source = "../../../../../blueprints/factories/bigquery-factory/"

  project_id = "test-project"
  views_dir  = "./views"
  tables_dir = "./tables"
}
@ -0,0 +1,17 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

dataset: dataset_a
table: table_a
schema: [{name: "test", type: "STRING"}, {name: "test2", type: "INT64"}]
@ -0,0 +1,34 @@
/**
 * Copyright 2022 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

variable "views_dir" {
  description = "Relative path for the folder storing view data."
  type        = string
  default     = "/views"
}

variable "tables_dir" {
  description = "Relative path for the folder storing table data."
  type        = string
  default     = "tables"
}

variable "project_id" {
  description = "Project ID."
  type        = string
  default     = "test-project"
}
@ -0,0 +1,17 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

dataset: dataset_b
view: view_a
query: "SELECT CURRENT_DATE() LIMIT 1"
@ -0,0 +1,19 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

def test_resources(e2e_plan_runner):
  "Test that plan works and the number of resources is as expected."
  modules, resources = e2e_plan_runner()
  assert len(modules) > 0
  assert len(resources) > 0
@ -15,6 +15,7 @@
from pathlib import Path

import marko
+import pytest

FABRIC_ROOT = Path(__file__).parents[2]
MODULES_PATH = FABRIC_ROOT / 'modules/'
@ -36,13 +37,14 @@ def pytest_generate_tests(metafunc):
    doc = marko.parse(readme.read_text())
    index = 0
    last_header = None
+    mark = pytest.mark.xdist_group(name=module.name)
    for child in doc.children:
      if isinstance(child, marko.block.FencedCode) and child.lang == 'hcl':
        index += 1
        code = child.children[0].children
        if 'tftest skip' in code:
          continue
-        examples.append(code)
+        examples.append(pytest.param(code, marks=mark))
        path = module.relative_to(FABRIC_ROOT)
        name = f'{path}:{last_header}'
        if index > 1:
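The collection loop above pulls `hcl` fenced blocks out of each module README and skips any marked `tftest skip`. The same filtering can be sketched with a plain regex instead of marko's parse tree (an illustration under that simplification, not the repo's actual collector):

```python
import re

FENCE = '`' * 3  # build fences programmatically to keep this example clean

# Hypothetical README content with one runnable example and one skipped one.
README = (
    f"# Module\n"
    f"{FENCE}hcl\nmodule \"vm\" {{}}\n{FENCE}\n"
    f"{FENCE}hcl\n# tftest skip\nlocals {{}}\n{FENCE}\n"
)

# Collect every hcl fenced block, then honor the 'tftest skip' escape hatch.
blocks = re.findall(FENCE + r"hcl\n(.*?)" + FENCE, README, re.S)
examples = [b for b in blocks if 'tftest skip' not in b]

assert len(blocks) == 2
assert len(examples) == 1
```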
@ -0,0 +1,20 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest


def pytest_collection_modifyitems(config, items):
  for item in items:
    item.add_marker(pytest.mark.xdist_group(name=item.path.parent.name))
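Here the group name is just the immediate parent directory, so each module's tests share a single xdist worker. A minimal sketch with a hypothetical path:

```python
from pathlib import Path

# Hypothetical module test path; pytest supplies the real one via item.path.
test_file = Path('tests/modules/compute_vm/test_plan_disks.py')
assert test_file.parent.name == 'compute_vm'
```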
@ -12,15 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-import pytest
-
-
-@pytest.fixture
-def resources(plan_runner):
-  _, resources = plan_runner()
-  return resources
-
-
-def test_resource_count(resources):
+def test_resource_count(plan_runner):
   "Test number of resources created."
+  _, resources = plan_runner()
   assert len(resources) == 5
@ -24,6 +24,7 @@ variable "attached_disk_defaults" {
  description = "Defaults for attached disks options."
  type        = any
  default = {
+    auto_delete  = true
    mode         = "READ_WRITE"
    replica_zone = null
    type         = "pd-balanced"
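Per-disk `options` overlay these defaults, with user-provided keys winning. The merge semantics can be sketched as a dict union — an illustration of the intent, not the module's actual HCL:

```python
# Defaults from the attached_disk_defaults variable, including the new key.
ATTACHED_DISK_DEFAULTS = {
    'auto_delete': True,
    'mode': 'READ_WRITE',
    'replica_zone': None,
    'type': 'pd-balanced',
}


def effective_options(user_options):
  # Later mappings win in a dict union, so user keys override the defaults.
  return {**ATTACHED_DISK_DEFAULTS, **(user_options or {})}


merged = effective_options({'auto_delete': False})
assert merged['auto_delete'] is False
assert merged['type'] == 'pd-balanced'
assert effective_options(None)['mode'] == 'READ_WRITE'
```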
@ -12,8 +12,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.


def test_types(plan_runner):
-    _disks = '''[{
+  _disks = '''[{
    name = "data1"
    size = "10"
    source_type = "image"
@ -33,26 +34,35 @@ def test_types(plan_runner):
    options = null
  }]
  '''
-    _, resources = plan_runner(attached_disks=_disks)
-    assert len(resources) == 3
-    disks = {r['values']['name']: r['values']
-             for r in resources if r['type'] == 'google_compute_disk'}
-    assert disks['test-data1']['size'] == 10
-    assert disks['test-data2']['size'] == 20
-    assert disks['test-data1']['image'] == 'image-1'
-    assert disks['test-data1']['snapshot'] is None
-    assert disks['test-data2']['snapshot'] == 'snapshot-2'
-    assert disks['test-data2']['image'] is None
-    instance = [r['values']
-                for r in resources if r['type'] == 'google_compute_instance'][0]
-    instance_disks = {d['source']: d['device_name']
-                      for d in instance['attached_disk']}
-    assert instance_disks == {'test-data1': 'data1',
-                              'test-data2': 'data2', 'disk-3': 'data3'}
+  _, resources = plan_runner(attached_disks=_disks)
+  assert len(resources) == 3
+  disks = {
+      r['values']['name']: r['values']
+      for r in resources if r['type'] == 'google_compute_disk'
+  }
+  assert disks['test-data1']['size'] == 10
+  assert disks['test-data2']['size'] == 20
+  assert disks['test-data1']['image'] == 'image-1'
+  assert disks['test-data1']['snapshot'] is None
+  assert disks['test-data2']['snapshot'] == 'snapshot-2'
+  assert disks['test-data2']['image'] is None
+  instance = [
+      r['values'] for r in resources
+      if r['type'] == 'google_compute_instance'
+  ][0]
+  instance_disks = {
+      d['source']: d['device_name']
+      for d in instance['attached_disk']
+  }
+  assert instance_disks == {
+      'test-data1': 'data1',
+      'test-data2': 'data2',
+      'disk-3': 'data3'
+  }


def test_options(plan_runner):
-    _disks = '''[{
+  _disks = '''[{
    name = "data1"
    size = "10"
    source_type = "image"
@ -70,21 +80,26 @@ def test_options(plan_runner):
    }
  }]
  '''
-    _, resources = plan_runner(attached_disks=_disks)
-    assert len(resources) == 3
-    disks_z = [r['values']
-               for r in resources if r['type'] == 'google_compute_disk']
-    disks_r = [r['values']
-               for r in resources if r['type'] == 'google_compute_region_disk']
-    assert len(disks_z) == len(disks_r) == 1
-    instance = [r['values']
-                for r in resources if r['type'] == 'google_compute_instance'][0]
-    instance_disks = [d['device_name'] for d in instance['attached_disk']]
-    assert instance_disks == ['data1', 'data2']
+  _, resources = plan_runner(attached_disks=_disks)
+  assert len(resources) == 3
+  disks_z = [
+      r['values'] for r in resources if r['type'] == 'google_compute_disk'
+  ]
+  disks_r = [
+      r['values'] for r in resources
+      if r['type'] == 'google_compute_region_disk'
+  ]
+  assert len(disks_z) == len(disks_r) == 1
+  instance = [
+      r['values'] for r in resources
+      if r['type'] == 'google_compute_instance'
+  ][0]
+  instance_disks = [d['device_name'] for d in instance['attached_disk']]
+  assert instance_disks == ['data1', 'data2']


def test_template(plan_runner):
-    _disks = '''[{
+  _disks = '''[{
    name = "data1"
    size = "10"
    source_type = "image"
@ -102,9 +117,49 @@ def test_template(plan_runner):
    }
  }]
  '''
-    _, resources = plan_runner(attached_disks=_disks,
-                               create_template="true")
-    assert len(resources) == 1
-    template = [r['values'] for r in resources if r['type']
-                == 'google_compute_instance_template'][0]
-    assert len(template['disk']) == 3
+  _, resources = plan_runner(attached_disks=_disks, create_template="true")
+  assert len(resources) == 1
+  template = [
+      r['values'] for r in resources
+      if r['type'] == 'google_compute_instance_template'
+  ][0]
+  assert len(template['disk']) == 3
+
+
+def test_auto_delete(plan_runner):
+  _disks = '''[{
+    name = "data1"
+    size = "10"
+    options = {
+      auto_delete = true, mode = "READ_WRITE"
+    }
+  }, {
+    name = "data2"
+    size = "20"
+    options = {
+      auto_delete = false, mode = "READ_WRITE"
+    },
+  }, {
+    name = "data3"
+    size = "20"
+    options = {
+      mode = "READ_ONLY"
+    }
+  }]
+  '''
+  _, resources = plan_runner(attached_disks=_disks, create_template="true")
+  assert len(resources) == 1
+  template = [
+      r['values'] for r in resources
+      if r['type'] == 'google_compute_instance_template'
+  ][0]
+  additional_disks = [
+      d for d in template['disk'] if 'boot' not in d or d['boot'] != True
+  ]
+  assert len(additional_disks) == 3
+  disk_data1 = [d for d in additional_disks if d['disk_name'] == 'data1']
+  disk_data2 = [d for d in additional_disks if d['disk_name'] == 'data2']
+  disk_data3 = [d for d in additional_disks if d['disk_name'] == 'data3']
+  assert len(disk_data1) == 1 and disk_data1[0]['auto_delete'] == True
+  assert len(disk_data2) == 1 and disk_data2[0]['auto_delete'] == False
+  assert len(disk_data3) == 1 and disk_data3[0]['auto_delete'] == False
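What `test_auto_delete` asserts about the rendered template condenses into one predicate: `auto_delete` is honored only on READ_WRITE disks and otherwise comes out false. A hedged restatement of the expected plan output, not module code:

```python
def effective_auto_delete(options):
  # auto_delete is only honored on READ_WRITE disks; READ_ONLY disks and
  # disks without options come out with auto_delete = false in the template.
  if options is None or options.get('mode') != 'READ_WRITE':
    return False
  return bool(options.get('auto_delete', False))


# Mirrors the data1/data2/data3 cases from the test above.
assert effective_auto_delete({'auto_delete': True, 'mode': 'READ_WRITE'}) is True
assert effective_auto_delete({'auto_delete': False, 'mode': 'READ_WRITE'}) is False
assert effective_auto_delete({'mode': 'READ_ONLY'}) is False
```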
@ -0,0 +1,20 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest


def pytest_collection_modifyitems(config, items):
  for item in items:
    item.add_marker(pytest.mark.xdist_group(name=item.path.parent.name))
@ -1,4 +1,5 @@
pytest>=6.2.5
+pytest-xdist
PyYAML>=6.0
tftest>=1.6.3
marko>=1.2.0