This commit is contained in:
Lorenzo Caggioni 2020-07-06 14:37:13 +02:00
commit ccc4e0076a
175 changed files with 5897 additions and 516 deletions

.gitignore vendored

@ -11,3 +11,4 @@ backend.tf
backend-config.hcl
credentials.json
key.json
terraform-ls.tf


@ -4,6 +4,70 @@ All notable changes to this project will be documented in this file.
## [Unreleased]
- fix external IP assignment in `compute-vm`
## [2.3.0] - 2020-07-02
- new 'Cloud Storage to BigQuery with Cloud Dataflow' end-to-end data solution
- **incompatible change** additive IAM bindings are now keyed by identity instead of role, and use a single `iam_additive_bindings` variable, refer to [#103] for details
- set `delete_contents_on_destroy` in the foundations examples audit dataset to allow destroying
- trap errors raised by the `project` module on destroy
## [2.2.0] - 2020-06-29
- make project creation optional in `project` module to allow managing a pre-existing project
- new `cloud-endpoints` module
- new `cloud-function` module
## [2.1.0] - 2020-06-22
- **incompatible change** routes in the `net-vpc` module now interpolate the VPC name to ensure uniqueness, upgrading from a previous version will drop and recreate routes
- the top-level `docker-images` folder has been moved inside `modules/cloud-config-container/onprem`
- `dns_keys` output added to the `dns` module
- add `group-config` variable, `groups` and `group_self_links` outputs to `net-ilb` module to allow creating ILBs for externally managed instances
- make the IAM bindings depend on the compute instance in the `compute-vm` module
## [2.0.0] - 2020-06-11
- new `data-solutions` section and `cmek-via-centralized-kms` example
- **incompatible change** static VPN routes now interpolate the VPN gateway name to enforce uniqueness, upgrading from a previous version will drop and recreate routes
## [1.9.0] - 2020-06-10
- new `bigtable-instance` module
- add support for IAM bindings to `compute-vm` module
## [1.8.1] - 2020-06-07
- use `all` instead of specifying protocols in the admin firewall rule of the `net-vpc-firewall` module
- add support for encryption keys in `gcs` module
- set `next_hop_instance_zone` in `net-vpc` for next hop instance routes to avoid triggering recreation
## [1.8.0] - 2020-06-03
- **incompatible change** the `kms` module has been refactored and will be incompatible with previous state
- **incompatible change** robot and default service accounts outputs in the `project` module have been refactored and are now exposed via a single `service_account` output (cf [#82])
- add support for PD CSI driver in GKE module
- refactor `iam-service-accounts` module outputs to be more resilient
- add option to use private GCR to `cos-generic-metadata` module
## [1.7.0] - 2020-05-30
- add support for disk encryption to the `compute-vm` module
- new `datafusion` module
- new `container-registry` module
- new `artifact-registry` module
## [1.6.0] - 2020-05-20
- add output to `gke-cluster` exposing the cluster's CA certificate
- fix `gke-cluster` autoscaling options
- add support for Service Directory bound zones to the `dns` module
- new `service-directory` module
- new `source-repository` module
## [1.5.0] - 2020-05-11
- **incompatible change** the `bigquery` module has been removed and replaced by the new `bigquery-dataset` module
- **incompatible change** subnets in the `net-vpc` modules are now passed as a list instead of map, and all related variables for IAM and flow logs use `region/name` instead of `name` keys; it's now possible to have the same subnet name in different regions
- replace all references to the removed `resourceviews.googleapis.com` API with `container.googleapis.com`
@ -11,7 +75,7 @@ All notable changes to this project will be documented in this file.
- fix health checks in `compute-mig` and `net-ilb` modules
- new `cos-generic-metadata` module in the `cloud-config-container` suite
- new `envoy-traffic-director` module in the `cloud-config-container` suite
- new `pubsub` module (untested)
- new `pubsub` module
## [1.4.1] - 2020-05-02
@ -55,10 +119,22 @@ All notable changes to this project will be documented in this file.
- merge development branch with suite of new modules and end-to-end examples
[Unreleased]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v1.4.1...HEAD
[Unreleased]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v2.3.0...HEAD
[2.3.0]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v2.2.0...v2.3.0
[2.2.0]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v2.1.0...v2.2.0
[2.1.0]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v2.0.0...v2.1.0
[2.0.0]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v1.9.0...v2.0.0
[1.9.0]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v1.8.1...v1.9.0
[1.8.1]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v1.8.0...v1.8.1
[1.8.0]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v1.7.0...v1.8.0
[1.7.0]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v1.6.0...v1.7.0
[1.6.0]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v1.5.0...v1.6.0
[1.5.0]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v1.4.1...v1.5.0
[1.4.1]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v1.4.0...v1.4.1
[1.4.0]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v1.3.0...v1.4.0
[1.3.0]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v1.2...v1.3.0
[1.2.0]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v1.1...v1.2
[1.1.0]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v1.0...v1.1
[1.0.0]: https://github.com/terraform-google-modules/cloud-foundation-fabric/compare/v0.1...v1.0
[#82]: https://github.com/terraform-google-modules/cloud-foundation-fabric/pull/82
[#103]: https://github.com/terraform-google-modules/cloud-foundation-fabric/pull/103


@ -19,8 +19,9 @@ Currently available examples:
- **foundations** - [single level hierarchy](./foundations/environments/) (environments), [multiple level hierarchy](./foundations/business-units/) (business units + environments)
- **infrastructure** - [hub and spoke via peering](./infrastructure/hub-and-spoke-peering/), [hub and spoke via VPN](./infrastructure/hub-and-spoke-vpn/), [DNS and Google Private Access for on-premises](./infrastructure/onprem-google-access-dns/), [Shared VPC with GKE support](./infrastructure/shared-vpc-gke/)
- **data solutions** - [GCE/GCS CMEK via centralized Cloud KMS](./data-solutions/cmek-via-centralized-kms/), [Cloud Storage to BigQuery with Cloud Dataflow](./data-solutions/gcs-to-bq-with-dataflow/)
For more information see the README files in the [foundations](./foundations/) and [infrastructure](./infrastructure/) folders.
For more information see the README files in the [foundations](./foundations/), [infrastructure](./infrastructure/) and [data solutions](./data-solutions/) folders.
## Modules
@ -33,9 +34,11 @@ The current list of modules supports most of the core foundational and networkin
Currently available modules:
- **foundational** - [folders](./modules/folders), [log sinks](./modules/logging-sinks), [organization](./modules/organization), [project](./modules/project), [service accounts](./modules/iam-service-accounts)
- **networking** - [VPC](./modules/net-vpc), [VPC firewall](./modules/net-vpc-firewall), [VPC peering](./modules/net-vpc-peering), [VPN static](./modules/net-vpn-static), [VPN dynamic](./modules/net-vpn-dynamic), [VPN HA](./modules/net-vpn-ha), [NAT](./modules/net-cloudnat), [address reservation](./modules/net-address), [DNS](./modules/dns), [L4 ILB](./modules/net-ilb)
- **networking** - [VPC](./modules/net-vpc), [VPC firewall](./modules/net-vpc-firewall), [VPC peering](./modules/net-vpc-peering), [VPN static](./modules/net-vpn-static), [VPN dynamic](./modules/net-vpn-dynamic), [VPN HA](./modules/net-vpn-ha), [NAT](./modules/net-cloudnat), [address reservation](./modules/net-address), [DNS](./modules/dns), [L4 ILB](./modules/net-ilb), [Service Directory](./modules/service-directory), [Cloud Endpoints](./modules/cloud-endpoints)
- **compute** - [VM/VM group](./modules/compute-vm), [MIG](./modules/compute-mig), [GKE cluster](./modules/gke-cluster), [GKE nodepool](./modules/gke-nodepool), [COS container](./modules/cos-container) (coredns, mysql, onprem, squid)
- **data** - [GCS](./modules/gcs), [BigQuery dataset](./modules/bigquery-dataset)
- **data** - [GCS](./modules/gcs), [BigQuery dataset](./modules/bigquery-dataset), [Pub/Sub](./modules/pubsub), [Datafusion](./modules/datafusion), [Bigtable instance](./modules/bigtable-instance)
- **development** - [Cloud Source Repository](./modules/source-repository), [Container Registry](./modules/container-registry), [Artifact Registry](./modules/artifact-registry)
- **security** - [KMS](./modules/kms), [SecretManager](./modules/secret-manager)
- **serverless** - [Cloud Functions](./modules/cloud-function)
For more information and usage examples see each module's README file.

data-solutions/README.md Normal file

@ -0,0 +1,16 @@
# GCP Data Services examples
The examples in this folder implement **typical data service topologies** and **end-to-end scenarios** that allow testing specific features, such as using Cloud KMS to encrypt your data, or VPC-SC to mitigate data exfiltration.
They are meant to be used as minimal but complete starting points to create actual infrastructure, and as playgrounds to experiment with specific Google Cloud features.
## Examples
### GCE and GCS CMEK via centralized Cloud KMS
<a href="./cmek-via-centralized-kms/" title="CMEK on Cloud Storage and Compute Engine via centralized Cloud KMS"><img src="./cmek-via-centralized-kms/diagram.png" align="left" width="280px"></a> This [example](./cmek-via-centralized-kms/) implements [CMEK](https://cloud.google.com/kms/docs/cmek) for GCS and GCE, via keys hosted in KMS running in a centralized project. The example shows the basic resources and permissions for the typical use case of application projects implementing encryption at rest via a centrally managed KMS service.
<br clear="left">
### Cloud Storage to BigQuery with Cloud Dataflow
<a href="./gcs-to-bq-with-dataflow/" title="Cloud Storage to BigQuery with Cloud Dataflow"><img src="./gcs-to-bq-with-dataflow/diagram.png" align="left" width="280px"></a> This [example](./gcs-to-bq-with-dataflow/) implements [Cloud Storage](https://cloud.google.com/storage) to BigQuery data import using Cloud Dataflow.
All resources use CMEK hosted in Cloud KMS running in a centralized project. The example shows the basic resources and permissions for the typical use case of reading, transforming and importing data from Cloud Storage to BigQuery.


@ -0,0 +1,58 @@
# GCE and GCS CMEK via centralized Cloud KMS
This example creates a sample centralized [Cloud KMS](https://cloud.google.com/kms) configuration, and uses it to implement CMEK for [Cloud Storage](https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys) and [Compute Engine](https://cloud.google.com/compute/docs/disks/customer-managed-encryption) in a separate project.
The example is designed to match real-world use cases with a minimum amount of resources, and be used as a starting point for scenarios where application projects implement CMEK using keys managed by a central team. It also includes the IAM wiring needed to make such scenarios work.
This is the high level diagram:
![High-level diagram](diagram.png "High-level diagram")
## Managed resources and services
This sample creates several distinct groups of resources:
- projects
- Cloud KMS project
- Service Project configured for GCE instances and GCS buckets
- networking
- VPC network
- One subnet
- Firewall rules for [SSH access via IAP](https://cloud.google.com/iap/docs/using-tcp-forwarding) and open communication within the VPC
- IAM
- One service account for the GCE instance
- KMS
- One key ring
- One crypto key (protection level: software) for Compute Engine
- One crypto key (protection level: software) for Cloud Storage
- GCE
- One instance encrypted with a CMEK Cryptokey hosted in Cloud KMS
- GCS
- One bucket encrypted with a CMEK Cryptokey hosted in Cloud KMS
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---: |:---:|:---:|
| billing_account | Billing account id used as default for new projects. | <code title="">string</code> | ✓ | |
| root_node | The resource name of the parent Folder or Organization. Must be of the form folders/folder_id or organizations/org_id. | <code title="">string</code> | ✓ | |
| *location* | The location where resources will be deployed. | <code title="">string</code> | | <code title="">europe</code> |
| *project_kms_name* | Name for the new KMS Project. | <code title="">string</code> | | <code title="">my-project-kms-001</code> |
| *project_service_name* | Name for the new Service Project. | <code title="">string</code> | | <code title="">my-project-service-001</code> |
| *region* | The region where resources will be deployed. | <code title="">string</code> | | <code title="">europe-west1</code> |
| *vpc_ip_cidr_range* | IP range used in the subnet deployed in the Service Project. | <code title="">string</code> | | <code title="">10.0.0.0/20</code> |
| *vpc_name* | Name of the VPC created in the Service Project. | <code title="">string</code> | | <code title="">local</code> |
| *vpc_subnet_name* | Name of the subnet created in the Service Project. | <code title="">string</code> | | <code title="">subnet</code> |
| *zone* | The zone where resources will be deployed. | <code title="">string</code> | | <code title="">europe-west1-b</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| bucket | GCS bucket names and URLs. | |
| bucket_keys | GCS Bucket Cloud KMS crypto keys. | |
| projects | Project ids. | |
| vm | GCE VMs. | |
| vm_keys | GCE VM Cloud KMS crypto keys. | |
<!-- END TFDOC -->


@ -0,0 +1,20 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
terraform {
backend "gcs" {
bucket = ""
}
}

diagram.png added (binary image, 145 KiB)


@ -0,0 +1,155 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################
# Projects #
###############################################################################
module "project-service" {
source = "../../modules/project"
name = var.project_service_name
parent = var.root_node
billing_account = var.billing_account
services = [
"compute.googleapis.com",
"servicenetworking.googleapis.com",
"storage-component.googleapis.com"
]
oslogin = true
}
module "project-kms" {
source = "../../modules/project"
name = var.project_kms_name
parent = var.root_node
billing_account = var.billing_account
services = [
"cloudkms.googleapis.com",
"servicenetworking.googleapis.com"
]
oslogin = true
}
###############################################################################
# Networking #
###############################################################################
module "vpc" {
source = "../../modules/net-vpc"
project_id = module.project-service.project_id
name = var.vpc_name
subnets = [
{
ip_cidr_range = var.vpc_ip_cidr_range
name = var.vpc_subnet_name
region = var.region
secondary_ip_range = {}
}
]
}
module "vpc-firewall" {
source = "../../modules/net-vpc-firewall"
project_id = module.project-service.project_id
network = module.vpc.name
admin_ranges_enabled = true
admin_ranges = [var.vpc_ip_cidr_range]
}
###############################################################################
# KMS #
###############################################################################
module "kms" {
source = "../../modules/kms"
project_id = module.project-kms.project_id
keyring = {
name = "my-keyring",
location = var.location
}
keys = { key-gce = null, key-gcs = null }
key_iam_roles = {
key-gce = ["roles/cloudkms.cryptoKeyEncrypterDecrypter"]
}
key_iam_members = {
key-gce = {
"roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
"serviceAccount:${module.project-service.service_accounts.robots.compute}",
]
},
key-gcs = {
"roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
"serviceAccount:${module.project-service.service_accounts.robots.storage}",
]
}
}
}
###############################################################################
# GCE #
###############################################################################
module "kms_vm_example" {
source = "../../modules/compute-vm"
project_id = module.project-service.project_id
region = var.region
zone = var.zone
name = "kms-vm"
network_interfaces = [{
network = module.vpc.self_link,
subnetwork = module.vpc.subnet_self_links["${var.region}/${var.vpc_subnet_name}"], # use the subnet name variable instead of hardcoding "subnet"
nat = false,
addresses = null
}]
attached_disks = [
{
name = "attacheddisk"
size = 10
image = null
options = {
auto_delete = true
mode = null
source = null
type = null
}
}
]
instance_count = 1
boot_disk = {
image = "projects/debian-cloud/global/images/family/debian-10"
type = "pd-ssd"
size = 10
encrypt_disk = true
}
tags = ["ssh"]
encryption = {
encrypt_boot = true
disk_encryption_key_raw = null
kms_key_self_link = module.kms.key_self_links.key-gce
}
}
###############################################################################
# GCS #
###############################################################################
module "kms-gcs" {
source = "../../modules/gcs"
project_id = module.project-service.project_id
prefix = "my-bucket-001"
names = ["kms-gcs"]
encryption_keys = {
kms-gcs = module.kms.keys.key-gcs.self_link, # the bucket uses the GCS key, not the GCE key
}
}


@ -0,0 +1,53 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
output "bucket" {
description = "GCS bucket names and URLs."
value = {
for bucket in module.kms-gcs.buckets :
bucket.name => bucket.url
}
}
output "bucket_keys" {
description = "GCS Bucket Cloud KMS crypto keys."
value = {
for bucket in module.kms-gcs.buckets :
bucket.name => bucket.encryption
}
}
output "projects" {
description = "Project ids."
value = {
service-project = module.project-service.project_id
kms-project = module.project-kms.project_id
}
}
output "vm" {
description = "GCE VMs."
value = {
for instance in module.kms_vm_example.instances :
instance.name => instance.network_interface.0.network_ip
}
}
output "vm_keys" {
description = "GCE VM Cloud KMS crypto keys."
value = {
for instance in module.kms_vm_example.instances :
instance.name => instance.boot_disk.0.kms_key_self_link
}
}


@ -0,0 +1,72 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
variable "billing_account" {
description = "Billing account id used as default for new projects."
type = string
}
variable "location" {
description = "The location where resources will be deployed."
type = string
default = "europe"
}
variable "project_service_name" {
description = "Name for the new Service Project."
type = string
default = "my-project-service-001"
}
variable "project_kms_name" {
description = "Name for the new KMS Project."
type = string
default = "my-project-kms-001"
}
variable "region" {
description = "The region where resources will be deployed."
type = string
default = "europe-west1"
}
variable "root_node" {
description = "The resource name of the parent Folder or Organization. Must be of the form folders/folder_id or organizations/org_id."
type = string
}
variable "vpc_name" {
description = "Name of the VPC created in the Service Project."
type = string
default = "local"
}
variable "vpc_subnet_name" {
description = "Name of the subnet created in the Service Project."
type = string
default = "subnet"
}
variable "vpc_ip_cidr_range" {
description = "IP range used in the subnet deployed in the Service Project."
type = string
default = "10.0.0.0/20"
}
variable "zone" {
description = "The zone where resources will be deployed."
type = string
default = "europe-west1-b"
}


@ -0,0 +1,17 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
terraform {
required_version = ">= 0.12.6"
}


@ -0,0 +1,138 @@
# Cloud Storage to BigQuery with Cloud Dataflow
This example creates the infrastructure needed to run a [Cloud Dataflow](https://cloud.google.com/dataflow) pipeline to import data from [GCS](https://cloud.google.com/storage) to [BigQuery](https://cloud.google.com/bigquery).
The solution will use:
- internal IPs for GCE and Dataflow instances
- CMEK encryption for the GCS bucket, GCE instances, Dataflow instances and BigQuery tables
- Cloud NAT to let resources communicate with the Internet, run system updates, and install packages
The example is designed to match real-world use cases with a minimum amount of resources. It can be used as a starting point for more complex scenarios.
This is the high level diagram:
![GCS to BigQuery High-level diagram](diagram.png "GCS to BigQuery High-level diagram")
## Managed resources and services
This sample creates several distinct groups of resources:
- projects
- Cloud KMS project
- Service Project configured for GCE instances, GCS buckets, Dataflow instances and BigQuery tables
- networking
- VPC network
- One subnet
- Firewall rules for [SSH access via IAP](https://cloud.google.com/iap/docs/using-tcp-forwarding) and open communication within the VPC
- IAM
- One service account for GCE instances
- One service account for Dataflow instances
- One service account for BigQuery tables
- KMS
- One continent-level key ring (example: 'europe')
- One crypto key (protection level: software) for Compute Engine
- One crypto key (protection level: software) for Cloud Storage
- One regional key ring (example: 'europe-west1')
- One crypto key (protection level: software) for Cloud Dataflow
- GCE
- One instance encrypted with a CMEK Cryptokey hosted in Cloud KMS
- GCS
- One bucket encrypted with a CMEK Cryptokey hosted in Cloud KMS
- BQ
- One dataset encrypted with a CMEK Cryptokey hosted in Cloud KMS
- Two tables encrypted with a CMEK Cryptokey hosted in Cloud KMS
## Test your environment with Cloud Dataflow
You can now connect to the GCE instance with the following command:
```bash
gcloud compute ssh vm-example-1
```
You can now run the simple pipeline you can find [here](./script/data_ingestion/). Once you have installed the required packages and copied a file into the GCS bucket, you can trigger the pipeline using internal IPs with a command similar to:
```bash
python data_ingestion.py \
--runner=DataflowRunner \
--max_num_workers=10 \
--autoscaling_algorithm=THROUGHPUT_BASED \
--region=### REGION ### \
--staging_location=gs://### TEMP BUCKET NAME ###/ \
--temp_location=gs://### TEMP BUCKET NAME ###/ \
--project=### PROJECT ID ### \
--input=gs://### DATA BUCKET NAME###/### FILE NAME ###.csv \
--output=### DATASET NAME ###.### TABLE NAME ### \
--service_account_email=### SERVICE ACCOUNT EMAIL ### \
--network=### NETWORK NAME ### \
--subnetwork=### SUBNET NAME ### \
--dataflow_kms_key=### CRYPTOKEY ID ### \
--no_use_public_ips
```
for example:
```bash
python data_ingestion.py \
--runner=DataflowRunner \
--max_num_workers=10 \
--autoscaling_algorithm=THROUGHPUT_BASED \
--region=europe-west1 \
--staging_location=gs://lc-001-eu-df-tmplocation/ \
--temp_location=gs://lc-001-eu-df-tmplocation/ \
--project=lcaggio-demo \
--input=gs://lc-eu-data/person.csv \
--output=bq_dataset.df_import \
--service_account_email=df-test@lcaggio-demo.iam.gserviceaccount.com \
--network=local \
--subnetwork=regions/europe-west1/subnetworks/subnet \
--dataflow_kms_key=projects/lcaggio-demo-kms/locations/europe-west1/keyRings/my-keyring-regional/cryptoKeys/key-df \
--no_use_public_ips
```
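The `data_ingestion.py` script itself lives in `script/data_ingestion/` and is not shown here. At its core, such a pipeline maps each CSV line to a dictionary keyed by column name before writing to BigQuery. A minimal sketch of that per-line transform, assuming a hypothetical three-column `person.csv` (the real column names may differ):

```python
import csv
import io

# Hypothetical columns for person.csv; the actual schema is defined
# in script/data_ingestion/ and may differ.
FIELDS = ["name", "surname", "age"]

def csv_line_to_bq_row(line: str) -> dict:
    """Parse one CSV line into a dict keyed by column name, as a
    Beam Map/DoFn step feeding WriteToBigQuery typically would."""
    values = next(csv.reader(io.StringIO(line)))
    row = dict(zip(FIELDS, values))
    row["age"] = int(row["age"])  # cast numeric columns explicitly
    return row

print(csv_line_to_bq_row("John,Doe,35"))
# → {'name': 'John', 'surname': 'Doe', 'age': 35}
```

In the actual Dataflow job this function would run inside a `beam.Map` step between `ReadFromText` and `WriteToBigQuery`; the sketch only illustrates the row shape the pipeline produces.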
You can check data imported into Google BigQuery from the Google Cloud Console UI.
## Test your environment with 'bq' CLI
You can now connect to the GCE instance with the following command:
```bash
gcloud compute ssh vm-example-1
```
You can now run a simple 'bq load' command to import data into BigQuery. Below is an example command:
```bash
bq load \
--source_format=CSV \
bq_dataset.bq_import \
gs://my-bucket/person.csv \
schema_bq_import.json
```
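The contents of `schema_bq_import.json` are not shown above. BigQuery expects it to be a JSON array of field definitions; a hypothetical schema matching a three-column person CSV (the field names here are assumptions, not the example's actual schema):

```json
[
  {"name": "name",    "type": "STRING",  "mode": "REQUIRED"},
  {"name": "surname", "type": "STRING",  "mode": "REQUIRED"},
  {"name": "age",     "type": "INTEGER", "mode": "NULLABLE"}
]
```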
You can check data imported into Google BigQuery from the Google Cloud Console UI.
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---: |:---:|:---:|
| billing_account | Billing account id used as default for new projects. | <code title="">string</code> | ✓ | |
| project_kms_name | Name for the new KMS Project. | <code title="">string</code> | ✓ | |
| project_service_name | Name for the new Service Project. | <code title="">string</code> | ✓ | |
| root_node | The resource name of the parent Folder or Organization. Must be of the form folders/folder_id or organizations/org_id. | <code title="">string</code> | ✓ | |
| *location* | The location where resources will be deployed. | <code title="">string</code> | | <code title="">europe</code> |
| *region* | The region where resources will be deployed. | <code title="">string</code> | | <code title="">europe-west1</code> |
| *ssh_source_ranges* | IP CIDR ranges that will be allowed to connect via SSH to the onprem instance. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">["0.0.0.0/0"]</code> |
| *vpc_ip_cidr_range* | IP range used in the subnet deployed in the Service Project. | <code title="">string</code> | | <code title="">10.0.0.0/20</code> |
| *vpc_name* | Name of the VPC created in the Service Project. | <code title="">string</code> | | <code title="">local</code> |
| *vpc_subnet_name* | Name of the subnet created in the Service Project. | <code title="">string</code> | | <code title="">subnet</code> |
| *zone* | The zone where resources will be deployed. | <code title="">string</code> | | <code title="">europe-west1-b</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| bq_tables | BigQuery tables. | |
| buckets | GCS Bucket Cloud KMS crypto keys. | |
| projects | Project ids. | |
| vm | GCE VMs. | |
<!-- END TFDOC -->


@ -0,0 +1,20 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
terraform {
backend "gcs" {
bucket = ""
}
}

diagram.png added (binary image, 197 KiB)


@ -0,0 +1,342 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
locals {
vm-startup-script = join("\n", [
"#! /bin/bash",
"apt-get update && apt-get install -y bash-completion git python3-venv gcc build-essential python-dev python3-dev",
"pip3 install --upgrade setuptools pip"
])
}
###############################################################################
# Projects - Centralized #
###############################################################################
module "project-service" {
source = "../../modules/project"
name = var.project_service_name
parent = var.root_node
billing_account = var.billing_account
services = [
"compute.googleapis.com",
"servicenetworking.googleapis.com",
"storage-component.googleapis.com",
"bigquery.googleapis.com",
"bigquerystorage.googleapis.com",
"bigqueryreservation.googleapis.com",
"dataflow.googleapis.com",
"cloudkms.googleapis.com",
]
oslogin = true
}
module "project-kms" {
source = "../../modules/project"
name = var.project_kms_name
parent = var.root_node
billing_account = var.billing_account
services = [
"cloudkms.googleapis.com",
]
}
###############################################################################
# Project Service Accounts #
###############################################################################
module "service-account-bq" {
source = "../../modules/iam-service-accounts"
project_id = module.project-service.project_id
names = ["bq-test"]
iam_project_roles = {
(module.project-service.project_id) = [
"roles/logging.logWriter",
"roles/monitoring.metricWriter",
"roles/bigquery.admin"
]
}
}
module "service-account-gce" {
source = "../../modules/iam-service-accounts"
project_id = module.project-service.project_id
names = ["gce-test"]
iam_project_roles = {
(module.project-service.project_id) = [
"roles/logging.logWriter",
"roles/monitoring.metricWriter",
"roles/dataflow.admin",
"roles/iam.serviceAccountUser",
"roles/bigquery.dataOwner",
"roles/bigquery.jobUser" # Needed to import data using 'bq' command
]
}
}
module "service-account-df" {
source = "../../modules/iam-service-accounts"
project_id = module.project-service.project_id
names = ["df-test"]
iam_project_roles = {
(module.project-service.project_id) = [
"roles/dataflow.worker",
"roles/bigquery.dataOwner",
"roles/bigquery.metadataViewer",
"roles/storage.objectViewer",
"roles/bigquery.jobUser"
]
}
}
data "google_bigquery_default_service_account" "bq_sa" {
project = module.project-service.project_id
}
data "google_storage_project_service_account" "gcs_account" {
project = module.project-service.project_id
}
###############################################################################
# KMS #
###############################################################################
module "kms" {
source = "../../modules/kms"
project_id = module.project-kms.project_id
keyring = {
name = "my-keyring",
location = var.location
}
keys = { key-gce = null, key-gcs = null, key-bq = null }
key_iam_roles = {
key-gce = ["roles/cloudkms.cryptoKeyEncrypterDecrypter"]
key-gcs = ["roles/cloudkms.cryptoKeyEncrypterDecrypter"]
key-bq = ["roles/cloudkms.cryptoKeyEncrypterDecrypter"]
}
key_iam_members = {
key-gce = {
"roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
"serviceAccount:${module.project-service.service_accounts.robots.compute}",
]
},
key-gcs = {
"roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
#"serviceAccount:${module.project-service.service_accounts.robots.storage}",
"serviceAccount:${data.google_storage_project_service_account.gcs_account.email_address}"
]
},
key-bq = {
"roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
# TODO: Find a better place to store BQ service account
#"serviceAccount:${module.project-service.service_accounts.default.bq}",
"serviceAccount:${data.google_bigquery_default_service_account.bq_sa.email}",
]
},
}
}
module "kms-regional" {
source = "../../modules/kms"
project_id = module.project-kms.project_id
keyring = {
name = "my-keyring-regional",
location = var.region
}
keys = { key-df = null }
key_iam_roles = {
key-df = ["roles/cloudkms.cryptoKeyEncrypterDecrypter"]
}
key_iam_members = {
key-df = {
"roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
"serviceAccount:${module.project-service.service_accounts.robots.dataflow}",
"serviceAccount:${module.project-service.service_accounts.robots.compute}",
]
}
}
}
###############################################################################
# Networking #
###############################################################################
module "vpc" {
source = "../../modules/net-vpc"
project_id = module.project-service.project_id
name = var.vpc_name
subnets = [
{
ip_cidr_range = var.vpc_ip_cidr_range
name = var.vpc_subnet_name
region = var.region
secondary_ip_range = {}
}
]
}
module "vpc-firewall" {
source = "../../modules/net-vpc-firewall"
project_id = module.project-service.project_id
network = module.vpc.name
admin_ranges_enabled = true
admin_ranges = [var.vpc_ip_cidr_range]
}
module "nat" {
source = "../../modules/net-cloudnat"
project_id = module.project-service.project_id
region = var.region
name = "default"
router_network = module.vpc.name
}
###############################################################################
# GCE #
###############################################################################
module "vm_example" {
source = "../../modules/compute-vm"
project_id = module.project-service.project_id
region = var.region
zone = var.zone
name = "vm-example"
network_interfaces = [{
network = module.vpc.self_link,
subnetwork = module.vpc.subnet_self_links["${var.region}/${var.vpc_subnet_name}"],
nat = false,
addresses = null
}]
attached_disks = [
{
name = "attacheddisk"
size = 10
image = null
options = {
auto_delete = true
mode = null
source = null
type = null
}
}
]
instance_count = 2
boot_disk = {
image = "projects/debian-cloud/global/images/family/debian-10"
type = "pd-ssd"
size = 10
encrypt_disk = true
}
encryption = {
encrypt_boot = true
disk_encryption_key_raw = null
kms_key_self_link = module.kms.key_self_links.key-gce
}
metadata = { startup-script = local.vm-startup-script }
service_account = module.service-account-gce.email
service_account_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
tags = ["ssh"]
}
###############################################################################
# GCS #
###############################################################################
module "kms-gcs" {
source = "../../modules/gcs"
project_id = module.project-service.project_id
prefix = module.project-service.project_id
names = ["data", "df-tmplocation"]
iam_roles = {
data = ["roles/storage.admin","roles/storage.objectViewer"],
df-tmplocation = ["roles/storage.admin"]
}
iam_members = {
data = {
"roles/storage.admin" = [
"serviceAccount:${module.service-account-gce.email}",
],
"roles/storage.viewer" = [
"serviceAccount:${module.service-account-df.email}",
],
},
df-tmplocation = {
"roles/storage.admin" = [
"serviceAccount:${module.service-account-gce.email}",
"serviceAccount:${module.service-account-df.email}",
]
}
}
encryption_keys = {
data = module.kms.keys.key-gcs.self_link,
df-tmplocation = module.kms.keys.key-gcs.self_link,
}
force_destroy = {
data = true,
df-tmplocation = true,
}
}
###############################################################################
# BQ #
###############################################################################
module "bigquery-dataset" {
source = "../../modules/bigquery-dataset"
project_id = module.project-service.project_id
id = "bq_dataset"
access_roles = {
reader-group = { role = "READER", type = "domain" }
owner = { role = "OWNER", type = "user_by_email" }
}
access_identities = {
reader-group = "caggioland.com"
owner = module.service-account-bq.email
}
encryption_key = module.kms.keys.key-bq.self_link
tables = {
bq_import = {
friendly_name = "BQ import"
labels = {}
options = null
partitioning = {
field = null
range = null # use start/end/interval for range
time = null
}
schema = file("schema_bq_import.json")
options = {
clustering = null
expiration_time = null
encryption_key = module.kms.keys.key-bq.self_link
}
},
df_import = {
friendly_name = "Dataflow import"
labels = {}
options = null
partitioning = {
field = null
range = null # use start/end/interval for range
time = null
}
schema = file("schema_df_import.json")
options = {
clustering = null
expiration_time = null
encryption_key = module.kms.keys.key-bq.self_link
}
}
}
}

# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
output "bq_tables" {
description = "Bigquery Tables."
value = module.bigquery-dataset.table_ids
}
output "buckets" {
description = "GCS Bucket Cloud KMS crypto keys."
value = {
for bucket in module.kms-gcs.buckets :
bucket.name => bucket.url
}
}
output "projects" {
description = "Project ids."
value = {
service-project = module.project-service.project_id
kms-project = module.project-kms.project_id
}
}
output "vm" {
description = "GCE VMs."
value = {
for instance in module.vm_example.instances :
instance.name => instance.network_interface.0.network_ip
}
}

[
{
"name": "name",
"type": "STRING"
},
{
"name": "surname",
"type": "STRING"
},
{
"name": "age",
"type": "NUMERIC"
}
]

[
{
"mode": "NULLABLE",
"name": "name",
"type": "STRING"
},
{
"mode": "NULLABLE",
"name": "surname",
"type": "STRING"
},
{
"mode": "NULLABLE",
"name": "age",
"type": "NUMERIC"
},
{
"mode": "NULLABLE",
"name": "_TIMESTAMP",
"type": "TIMESTAMP"
}
]

# Scripts
In this section you can find two simple scripts to test your environment:
- [Data ingestion](./data_ingestion/): a simple Apache Beam Python pipeline to import data from Google Cloud Storage into BigQuery.
- [Person details generator](./person_details_generator/): a simple script to generate random data to test your environment.

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

# Ingest CSV files from GCS into Bigquery
In this example we create a Python [Apache Beam](https://beam.apache.org/) pipeline running on [Google Cloud Dataflow](https://cloud.google.com/dataflow/) that imports CSV files into BigQuery, adding a timestamp to each row. The architecture is shown below:
![Apache Beam pipeline to import CSV from GCS into BQ](diagram.png)
The architecture uses:
* [Google Cloud Storage](https://cloud.google.com/storage/) to store CSV source files
* [Google Cloud Dataflow](https://cloud.google.com/dataflow/) to read files from Google Cloud Storage, transform the data based on the structure of the file, and import it into Google BigQuery
* [Google BigQuery](https://cloud.google.com/bigquery/) to store data in a data lake.
You can use this script as a starting point to import your files into Google BigQuery. You'll probably need to adapt the script logic to your requirements.
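Per row, the pipeline does little more than split the CSV line and stamp it with the current time. A minimal standalone sketch of that transformation (plain Python, no Beam; the column names come from the schema used in this example):

```python
import re
import time

def parse_csv_row(line):
    """Turn one CSV line into a dict keyed by the BigQuery column names."""
    # Strip quotes and carriage returns, then split on commas,
    # mirroring what the Beam pipeline does for each element.
    values = re.split(',', re.sub('\r\n', '', re.sub('"', '', line)))
    row = dict(zip(('name', 'surname', 'age'), values))
    # The pipeline also stamps each row with the current epoch time
    # before writing it to BigQuery.
    row['_TIMESTAMP'] = int(time.mktime(time.gmtime()))
    return row

print(parse_csv_row('Mario,Rossi,30'))
```

Note that values stay as strings here; BigQuery coerces `age` to NUMERIC on load.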
## 1. Prerequisites
- An up-and-running GCP project with billing enabled
- gcloud installed and initialized for your project
- Google Cloud Dataflow API enabled
- A Google Cloud Storage bucket containing the files to import (CSV format) with name, surname and age columns. Example row: `Mario,Rossi,30`.
- A Google Cloud Storage bucket for Google Dataflow temp and staging files
- A Google BigQuery dataset
- [Python](https://www.python.org/) >= 3.7 and the python-dev module
- gcc
- Google Cloud [Application Default Credentials](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login)
## 2. Create virtual environment
Create a new virtual environment (recommended) and install requirements:
```
virtualenv env
source ./env/bin/activate
pip3 install --upgrade setuptools pip
pip3 install -r requirements.txt
```
## 3. Upload files into Google Cloud Storage
Upload the files to be imported into Google BigQuery to a Google Cloud Storage bucket. You can use `gsutil` with a command like:
```
gsutil cp [LOCAL_OBJECT_LOCATION] gs://[DESTINATION_BUCKET_NAME]/
```
Files need to be in CSV format. For example:
```
Enrico,Bianchi,20
Mario,Rossi,30
```
You can use the [person_details_generator](../person_details_generator/) script if you want to create random person details.
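The actual generator lives in the linked folder; as a rough idea of the expected output format, a hypothetical sketch (the sample names below are illustrative, not the script's real data) could look like:

```python
import csv
import random

# Illustrative sample values; the real script may draw from different data.
NAMES = ['Mario', 'Enrico', 'Lucia']
SURNAMES = ['Rossi', 'Bianchi', 'Verdi']

def write_person_rows(path, count):
    """Write `count` random rows in the name,surname,age CSV format."""
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        for _ in range(count):
            writer.writerow([random.choice(NAMES),
                             random.choice(SURNAMES),
                             random.randint(18, 90)])

write_person_rows('person.csv', 10)
```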
## 4. Run pipeline
You can check the parameters accepted by the `data_ingestion.py` script with the following command:
```
python data_ingestion.py --help
```
You can run the pipeline locally with the following command:
```
python data_ingestion.py \
--runner=DirectRunner \
--project=###PUT HERE PROJECT ID### \
--input=###PUT HERE THE FILE TO IMPORT. EXAMPLE: gs://bucket_name/person.csv ### \
--output=###PUT HERE BQ DATASET.TABLE###
```
or you can run the pipeline on Google Dataflow using the following command:
```
python data_ingestion.py \
--runner=DataflowRunner \
--max_num_workers=100 \
--autoscaling_algorithm=THROUGHPUT_BASED \
--region=###PUT HERE REGION### \
--staging_location=###PUT HERE GCS STAGING LOCATION### \
    --temp_location=###PUT HERE GCS TMP LOCATION### \
--project=###PUT HERE PROJECT ID### \
--input=###PUT HERE GCS BUCKET NAME. EXAMPLE: gs://bucket_name/person.csv### \
    --output=###PUT HERE BQ DATASET NAME. EXAMPLE: bq_dataset.df_import###
```
Below is an example of running the pipeline specifying network and subnetwork, using private IPs, and using a KMS key to encrypt data at rest:
```
python data_ingestion.py \
--runner=DataflowRunner \
--max_num_workers=100 \
--autoscaling_algorithm=THROUGHPUT_BASED \
--region=###PUT HERE REGION### \
--staging_location=###PUT HERE GCS STAGING LOCATION### \
    --temp_location=###PUT HERE GCS TMP LOCATION### \
--project=###PUT HERE PROJECT ID### \
--network=###PUT HERE YOUR NETWORK### \
--subnetwork=###PUT HERE YOUR SUBNETWORK. EXAMPLE: regions/europe-west1/subnetworks/subnet### \
    --dataflowKmsKey=###PUT HERE KMS KEY. Example: projects/lcaggio-d-4-kms/locations/europe-west1/keyRings/my-keyring-regional/cryptoKeys/key-df### \
--input=###PUT HERE GCS BUCKET NAME. EXAMPLE: gs://bucket_name/person.csv### \
--output=###PUT HERE BQ DATASET NAME. EXAMPLE: bq_dataset.df_import### \
--no_use_public_ips
```
## 5. Check results
You can check the data imported into Google BigQuery from the Google Cloud Console UI.

apache-beam[gcp]
setuptools
wheel

# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Dataflow pipeline. Reads a CSV file and writes to a BQ table adding a timestamp.
"""
import argparse
import logging
import re
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
class DataIngestion:
"""A helper class which contains the logic to translate the file into
a format BigQuery will accept."""
def parse_method(self, string_input):
"""Translate CSV row to dictionary.
Args:
            string_input: A comma separated list of values in the form of
                name,surname,age
                Example string_input: lorenzo,caggioni,30
Returns:
A dict mapping BigQuery column names as keys
example output:
{
'name': 'mario',
'surname': 'rossi',
'age': 30
}
"""
# Strip out carriage return, newline and quote characters.
values = re.split(",", re.sub('\r\n', '', re.sub('"', '',
string_input)))
row = dict(
zip(('name', 'surname', 'age'),
values))
return row
class InjectTimestamp(beam.DoFn):
"""A class which add a timestamp for each row.
Args:
element: A dictionary mapping BigQuery column names
Example:
{
'name': 'mario',
'surname': 'rossi',
'age': 30
}
Returns:
The input dictionary with a timestamp value added
Example:
{
'name': 'mario',
'surname': 'rossi',
                'age': 30,
'_TIMESTAMP': 1545730073
}
"""
def process(self, element):
import time
element['_TIMESTAMP'] = int(time.mktime(time.gmtime()))
return [element]
def run(argv=None):
"""The main function which creates the pipeline and runs it."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--input',
dest='input',
required=False,
help='Input file to read. This can be a local file or '
'a file in a Google Storage Bucket.')
parser.add_argument(
'--output',
dest='output',
required=False,
help='Output BQ table to write results to.')
# Parse arguments from the command line.
known_args, pipeline_args = parser.parse_known_args(argv)
# DataIngestion is a class we built in this script to hold the logic for
# transforming the file into a BigQuery table.
data_ingestion = DataIngestion()
# Initiate the pipeline using the pipeline arguments
p = beam.Pipeline(options=PipelineOptions(pipeline_args))
(p
# Read the file. This is the source of the pipeline.
| 'Read from a File' >> beam.io.ReadFromText(known_args.input)
# Translates CSV row to a dictionary object consumable by BigQuery.
| 'String To BigQuery Row' >>
beam.Map(lambda s: data_ingestion.parse_method(s))
# Add the timestamp on each row
     | 'Inject Timestamp' >> beam.ParDo(InjectTimestamp())
# Write data to Bigquery
| 'Write to BigQuery' >> beam.io.Write(
beam.io.BigQuerySink(
# BigQuery table name.
known_args.output,
# Bigquery table schema
schema='name:STRING,surname:STRING,age:NUMERIC,_TIMESTAMP:TIMESTAMP',
                # Do not create the table; it must already exist in BigQuery.
create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
# Deletes all data in the BigQuery table before writing.
write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE)))
p.run().wait_until_finish()
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()

of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,17 @@
# Create random Person PII data
This example provides a Python script that generates random person PII data in CSV format.
To see the script's usage options, run:
```bash
python3 person_details_generator.py --help
```
## Example
To create a file `person.csv` with 10000 random person records, run:
```bash
python3 person_details_generator.py \
--count 10000 \
--output person.csv
```
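
For reference, the generation logic the script implements can be sketched in a few lines of plain Python. This is an illustrative standalone sketch: the `generate_people` helper below is not part of the repository, and the script itself also supports custom name lists via CLI flags.

```python
import random

def generate_people(count, first_names, last_names):
    # One "first_name,last_name,age" row per person, newline-separated,
    # mirroring the CSV layout produced by person_details_generator.py.
    return "\n".join(
        f"{random.choice(first_names)},{random.choice(last_names)},{random.randint(1, 100)}"
        for _ in range(count)
    )

print(generate_people(3, ["Lorenzo", "Chiara"], ["Rossi", "Bianchi"]))
```

Each output row is random, so values differ between runs; only the row count and the three-column shape are fixed.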

View File

@ -0,0 +1,47 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generate random person PIIs based on arrays of names and surnames."""
import click
import logging
import random
@click.command()
@click.option("--count", default=100, help="Number of generated names.")
@click.option("--output", default=False, help=(
"Name of the output file. Content will be overwritten. "
"If not defined, standard output will be used."))
@click.option("--first_names", default="Lorenzo,Giacomo,Chiara,Miriam", help=(
"String of Names, comma separated. Default 'Lorenzo,Giacomo,Chiara,Miriam'"))
@click.option("--last_names", default="Rossi, Bianchi,Brambilla,Caggioni", help=(
"String of Names, comma separated. Default 'Rossi,Bianchi,Brambilla,Caggioni'"))
def main(count=100, output=False, first_names=None, last_names=None):
  generated_names = "".join(
      random.choice(first_names.split(',')) + "," +
      random.choice(last_names.split(',')) + "," +
      str(random.randint(1, 100)) + "\n" for _ in range(count))[:-1]
  if output:
    with open(output, "w") as f:
      f.write(generated_names)
  else:
    print(generated_names)
if __name__ == '__main__':
  logging.getLogger().setLevel(logging.INFO)
  main()

View File

@ -0,0 +1,76 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
variable "billing_account" {
description = "Billing account id used as default for new projects."
type = string
}
variable "location" {
description = "The location where resources will be deployed."
type = string
default = "europe"
}
variable "project_service_name" {
description = "Name for the new Service Project."
type = string
}
variable "project_kms_name" {
description = "Name for the new KMS Project."
type = string
}
variable "region" {
description = "The region where resources will be deployed."
type = string
default = "europe-west1"
}
variable "root_node" {
description = "The resource name of the parent Folder or Organization. Must be of the form folders/folder_id or organizations/org_id."
type = string
}
variable "vpc_name" {
description = "Name of the VPC created in the Service Project."
type = string
default = "local"
}
variable "vpc_subnet_name" {
description = "Name of the subnet created in the Service Project."
type = string
default = "subnet"
}
variable "vpc_ip_cidr_range" {
description = "Ip range used in the subnet deployef in the Service Project."
type = string
default = "10.0.0.0/20"
}
variable "zone" {
description = "The zone where resources will be deployed."
type = string
default = "europe-west1-b"
}
variable "ssh_source_ranges" {
description = "IP CIDR ranges that will be allowed to connect via SSH to the onprem instance."
type = list(string)
default = ["0.0.0.0/0"]
}

View File

@ -0,0 +1,17 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
terraform {
required_version = ">= 0.12.6"
}

View File

@ -29,14 +29,15 @@ module "shared-folder" {
# Terraform project
module "tf-project" {
source = "../../modules/project"
name = "terraform"
parent = module.shared-folder.id
prefix = var.prefix
billing_account = var.billing_account_id
iam_additive_members = { "roles/owner" = var.iam_terraform_owners }
iam_additive_roles = ["roles/owner"]
services = var.project_services
source = "../../modules/project"
name = "terraform"
parent = module.shared-folder.id
prefix = var.prefix
billing_account = var.billing_account_id
iam_additive_bindings = {
for name in var.iam_terraform_owners : (name) => ["roles/owner"]
}
services = var.project_services
}
# Bootstrap Terraform state GCS bucket
@ -115,6 +116,12 @@ module "audit-dataset" {
project_id = module.audit-project.project_id
id = "audit_export"
friendly_name = "Audit logs export."
# disable delete on destroy for actual use
options = {
default_table_expiration_ms = null
default_partition_expiration_ms = null
delete_contents_on_destroy = true
}
}
module "audit-log-sinks" {
@ -140,12 +147,9 @@ module "shared-project" {
parent = module.shared-folder.id
prefix = var.prefix
billing_account = var.billing_account_id
iam_additive_members = {
"roles/owner" = var.iam_shared_owners
iam_additive_bindings = {
for name in var.iam_shared_owners : (name) => ["roles/owner"]
}
iam_additive_roles = [
"roles/owner"
]
services = var.project_services
}

View File

@ -39,12 +39,10 @@ If no shared services are needed, the shared service project module can of cours
| root_node | Root node for the new hierarchy, either 'organizations/org_id' or 'folders/folder_id'. | <code title="">string</code> | ✓ | |
| *audit_filter* | Audit log filter used for the log sink. | <code title="">string</code> | | <code title="&#60;&#60;END&#10;logName: &#34;&#47;logs&#47;cloudaudit.googleapis.com&#37;2Factivity&#34;&#10;OR&#10;logName: &#34;&#47;logs&#47;cloudaudit.googleapis.com&#37;2Fsystem_event&#34;&#10;END">...</code> |
| *gcs_location* | GCS bucket location. | <code title="">string</code> | | <code title="">EU</code> |
| *iam_assets_editors* | Shared assets project editors, in IAM format. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *iam_assets_owners* | Shared assets project owners, in IAM format. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *iam_audit_viewers* | Audit project viewers, in IAM format. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *iam_billing_config* | Control granting billing user role to service accounts. Target the billing account by default. | <code title="object&#40;&#123;&#10;grant &#61; bool&#10;target_org &#61; bool&#10;&#125;&#41;">object({...})</code> | | <code title="&#123;&#10;grant &#61; true&#10;target_org &#61; false&#10;&#125;">...</code> |
| *iam_folder_roles* | List of roles granted to each service account on its respective folder (excluding XPN roles). | <code title="list&#40;string&#41;">list(string)</code> | | <code title="&#91;&#10;&#34;roles&#47;compute.networkAdmin&#34;,&#10;&#34;roles&#47;owner&#34;,&#10;&#34;roles&#47;resourcemanager.folderViewer&#34;,&#10;&#34;roles&#47;resourcemanager.projectCreator&#34;,&#10;&#93;">...</code> |
| *iam_sharedsvc_owners* | Shared services project owners, in IAM format. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *iam_shared_owners* | Shared services project owners, in IAM format. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *iam_terraform_owners* | Terraform project owners, in IAM format. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *iam_xpn_config* | Control granting Shared VPC creation roles to service accounts. Target the root node by default. | <code title="object&#40;&#123;&#10;grant &#61; bool&#10;target_org &#61; bool&#10;&#125;&#41;">object({...})</code> | | <code title="&#123;&#10;grant &#61; true&#10;target_org &#61; true&#10;&#125;">...</code> |
| *project_services* | Service APIs enabled by default in new projects. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="&#91;&#10;&#34;container.googleapis.com&#34;,&#10;&#34;stackdriver.googleapis.com&#34;,&#10;&#93;">...</code> |

View File

@ -19,14 +19,15 @@
# Terraform project
module "tf-project" {
source = "../../modules/project"
name = "terraform"
parent = var.root_node
prefix = var.prefix
billing_account = var.billing_account_id
iam_additive_members = { "roles/owner" = var.iam_terraform_owners }
iam_additive_roles = ["roles/owner"]
services = var.project_services
source = "../../modules/project"
name = "terraform"
parent = var.root_node
prefix = var.prefix
billing_account = var.billing_account_id
iam_additive_bindings = {
for name in var.iam_terraform_owners : (name) => ["roles/owner"]
}
services = var.project_services
}
# per-environment service accounts
@ -130,6 +131,12 @@ module "audit-dataset" {
project_id = module.audit-project.project_id
id = "audit_export"
friendly_name = "Audit logs export."
# disable delete on destroy for actual use
options = {
default_table_expiration_ms = null
default_partition_expiration_ms = null
delete_contents_on_destroy = true
}
}
module "audit-log-sinks" {
@ -156,12 +163,9 @@ module "sharedsvc-project" {
parent = var.root_node
prefix = var.prefix
billing_account = var.billing_account_id
iam_additive_members = {
"roles/owner" = var.iam_sharedsvc_owners
iam_additive_bindings = {
for name in var.iam_shared_owners : (name) => ["roles/owner"]
}
iam_additive_roles = [
"roles/owner"
]
services = var.project_services
}

View File

@ -38,18 +38,6 @@ variable "gcs_location" {
default = "EU"
}
variable "iam_assets_editors" {
description = "Shared assets project editors, in IAM format."
type = list(string)
default = []
}
variable "iam_assets_owners" {
description = "Shared assets project owners, in IAM format."
type = list(string)
default = []
}
variable "iam_audit_viewers" {
description = "Audit project viewers, in IAM format."
type = list(string)
@ -79,7 +67,7 @@ variable "iam_folder_roles" {
]
}
variable "iam_sharedsvc_owners" {
variable "iam_shared_owners" {
description = "Shared services project owners, in IAM format."
type = list(string)
default = []

View File

@ -42,7 +42,7 @@ If a single router and VPN gateway are used in the hub to manage all tunnels, pa
| project_id | Project id for all resources. | <code title="">string</code> | ✓ | |
| *bgp_asn* | BGP ASNs. | <code title="map&#40;number&#41;">map(number)</code> | | <code title="&#123;&#10;hub &#61; 64513&#10;spoke-1 &#61; 64514&#10;spoke-2 &#61; 64515&#10;&#125;">...</code> |
| *bgp_custom_advertisements* | BGP custom advertisement IP CIDR ranges. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="&#123;&#10;hub-to-spoke-1 &#61; &#34;10.0.32.0&#47;20&#34;&#10;hub-to-spoke-2 &#61; &#34;10.0.16.0&#47;20&#34;&#10;&#125;">...</code> |
| *bgp_interface_ranges* | None | <code title=""></code> | | <code title="&#123;&#10;spoke-1 &#61; &#34;169.254.1.0&#47;30&#34;&#10;spoke-2 &#61; &#34;169.254.1.4&#47;30&#34;&#10;&#125;">...</code> |
| *bgp_interface_ranges* | BGP interface IP CIDR ranges. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="&#123;&#10;spoke-1 &#61; &#34;169.254.1.0&#47;30&#34;&#10;spoke-2 &#61; &#34;169.254.1.4&#47;30&#34;&#10;&#125;">...</code> |
| *ip_ranges* | IP CIDR ranges. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="&#123;&#10;hub-a &#61; &#34;10.0.0.0&#47;24&#34;&#10;hub-b &#61; &#34;10.0.8.0&#47;24&#34;&#10;spoke-1-a &#61; &#34;10.0.16.0&#47;24&#34;&#10;spoke-1-b &#61; &#34;10.0.24.0&#47;24&#34;&#10;spoke-2-a &#61; &#34;10.0.32.0&#47;24&#34;&#10;spoke-2-b &#61; &#34;10.0.40.0&#47;24&#34;&#10;&#125;">...</code> |
| *regions* | VPC regions. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="&#123;&#10;a &#61; &#34;europe-west1&#34;&#10;b &#61; &#34;europe-west2&#34;&#10;&#125;">...</code> |

View File

@ -153,16 +153,16 @@ The VPN used to connect to the on-premises environment does not account for HA,
| project_id | Project id for all resources. | <code title="">string</code> | ✓ | |
| *bgp_asn* | BGP ASNs. | <code title="map&#40;number&#41;">map(number)</code> | | <code title="&#123;&#10;gcp &#61; 64513&#10;onprem &#61; 64514&#10;&#125;">...</code> |
| *bgp_interface_ranges* | BGP interface IP CIDR ranges. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="&#123;&#10;gcp &#61; &#34;169.254.1.0&#47;30&#34;&#10;&#125;">...</code> |
| *dns_forwarder_address* | Address of the DNS server used to forward queries from on-premises. | <code title="">string</code> | | <code title="">10.0.0.2</code> |
| *forwarder_address* | GCP DNS inbound policy forwarder address. | <code title="">string</code> | | <code title="">10.0.0.2</code> |
| *ip_ranges* | IP CIDR ranges. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="&#123;&#10;gcp &#61; &#34;10.0.0.0&#47;24&#34;&#10;onprem &#61; &#34;10.0.16.0&#47;24&#34;&#10;&#125;">...</code> |
| *region* | VPC region. | <code title="">string</code> | | <code title="">europe-west1</code> |
| *resolver_address* | GCP DNS resolver address for the inbound policy. | <code title="">string</code> | | <code title="">10.0.0.2</code> |
| *ssh_source_ranges* | IP CIDR ranges that will be allowed to connect via SSH to the onprem instance. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">["0.0.0.0/0"]</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| foo | None | |
| onprem-instance | Onprem instance details. | |
| test-instance | Test instance details. | |
<!-- END TFDOC -->

View File

@ -13,5 +13,5 @@
# limitations under the License.
terraform {
required_version = ">= 0.12"
required_version = ">= 0.12.6"
}

View File

@ -17,6 +17,7 @@
###############################################################################
# the container.hostServiceAgentUser role is needed for GKE on shared VPC
# see: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-shared-vpc#grant_host_service_agent_role
module "project-host" {
source = "../../modules/project"
@ -30,7 +31,7 @@ module "project-host" {
]
iam_members = {
"roles/container.hostServiceAgentUser" = [
"serviceAccount:${module.project-svc-gke.gke_service_account}"
"serviceAccount:${module.project-svc-gke.service_accounts.robots.container-engine}"
]
"roles/owner" = var.owners_host
}
@ -81,12 +82,6 @@ module "project-svc-gke" {
# Networking #
################################################################################
# the service project GKE robot needs the `hostServiceAgent` role throughout
# the entire life of its clusters; the `iam_project_id` project output is used
# here to set the project id so that the VPC depends on that binding, and any
# cluster using it then also depends on it indirectly; you can of course use
# the `project_id` output instead if you don't care about destroying
# subnet IAM bindings control which identities can use the individual subnets
module "vpc-shared" {
@ -122,16 +117,16 @@ module "vpc-shared" {
iam_members = {
"${var.region}/gce" = {
"roles/compute.networkUser" = concat(var.owners_gce, [
"serviceAccount:${module.project-svc-gce.cloudsvc_service_account}",
"serviceAccount:${module.project-svc-gce.service_accounts.cloud_services}",
])
}
"${var.region}/gke" = {
"roles/compute.networkUser" = concat(var.owners_gke, [
"serviceAccount:${module.project-svc-gke.cloudsvc_service_account}",
"serviceAccount:${module.project-svc-gke.gke_service_account}",
"serviceAccount:${module.project-svc-gke.service_accounts.cloud_services}",
"serviceAccount:${module.project-svc-gke.service_accounts.robots.container-engine}",
])
"roles/compute.securityAdmin" = [
"serviceAccount:${module.project-svc-gke.gke_service_account}",
"serviceAccount:${module.project-svc-gke.service_accounts.robots.container-engine}",
]
}
}

View File

@ -21,13 +21,15 @@ Specific modules also offer support for non-authoritative bindings (e.g. `google
- [address reservation](./net-address)
- [Cloud DNS](./dns)
- [Cloud NAT](./net-cloudnat)
- [Cloud Endpoints](./endpoints)
- [L4 Internal Load Balancer](./net-ilb)
- [Service Directory](./service-directory)
- [VPC](./net-vpc)
- [VPC firewall](./net-vpc-firewall)
- [VPC peering](./net-vpc-peering)
- [VPN static](./net-vpn-static)
- [VPN dynamic](./net-vpn-dynamic)
- [VPN HA](./net-vpn-ha))
- [VPN HA](./net-vpn-ha)
- [ ] TODO: xLB modules
## Compute/Container
@ -41,9 +43,22 @@ Specific modules also offer support for non-authoritative bindings (e.g. `google
## Data
- [BigQuery dataset](./bigquery-dataset)
- [Datafusion](./datafusion)
- [GCS](./gcs)
- [Pub/Sub](./pubsub)
- [Bigtable instance](./bigtable-instance)
## Development
- [Artifact Registry](./artifact-registry)
- [Container Registry](./container-registry)
- [Source Repository](./source-repository)
## Security
- [Cloud KMS](./kms)
- [Secret Manager](./secret-manager)
## Serverless
- [Cloud Functions](./cloud-function)

View File

@ -0,0 +1,46 @@
# Network Endpoint Group Module
This module allows creating zonal network endpoint groups.
Note: this module will be integrated into a general-purpose load balancing module in the future.
## Example
```hcl
module "neg" {
source = "./modules/net-neg"
project_id = "myproject"
name = "myneg"
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["europe-west1/default"]
zone = "europe-west1-b"
endpoints = [
for instance in module.vm.instances :
{
instance = instance.name
port = 80
ip_address = instance.network_interface[0].network_ip
}
]
}
```
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---: |:---:|:---:|
| endpoints | List of (instance, port, address) of the NEG. | <code title="list&#40;object&#40;&#123;&#10;instance &#61; string&#10;port &#61; number&#10;ip_address &#61; string&#10;&#125;&#41;&#41;">list(object({...}))</code> | ✓ | |
| name | NEG name. | <code title="">string</code> | ✓ | |
| network | Name or self link of the VPC used for the NEG. Use the self link for Shared VPC. | <code title="">string</code> | ✓ | |
| project_id | NEG project id. | <code title="">string</code> | ✓ | |
| subnetwork | VPC subnetwork name or self link. | <code title="">string</code> | ✓ | |
| zone | NEG zone. | <code title="">string</code> | ✓ | |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| id | Network endpoint group ID. | |
| self_lnk | Network endpoint group self link. | |
| size | Size of the network endpoint group. | |
<!-- END TFDOC -->

View File

@ -0,0 +1,33 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
resource "google_compute_network_endpoint_group" "group" {
project = var.project_id
name = var.name
network = var.network
subnetwork = var.subnetwork
zone = var.zone
}
resource "google_compute_network_endpoint" "endpoint" {
for_each = { for endpoint in var.endpoints : endpoint.instance => endpoint }
project = var.project_id
network_endpoint_group = google_compute_network_endpoint_group.group.name
instance = each.value.instance
port = each.value.port
ip_address = each.value.ip_address
zone = var.zone
}

View File

@ -0,0 +1,30 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
output "id" {
description = "Network endpoint group ID"
value = google_compute_network_endpoint_group.group.name
}
output "size" {
description = "Size of the network endpoint group"
value = google_compute_network_endpoint_group.group.size
}
output "self_lnk" {
description = "Network endpoint group self link"
value = google_compute_network_endpoint_group.group.self_link
}

View File

@ -0,0 +1,49 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "project_id" {
description = "NEG project id."
type = string
}
variable "name" {
description = "NEG name"
type = string
}
variable "network" {
description = "Name or self link of the VPC used for the NEG. Use the self link for Shared VPC."
type = string
}
variable "subnetwork" {
description = "VPC subnetwork name or self link."
type = string
}
variable "zone" {
description = "NEG zone"
type = string
}
variable "endpoints" {
description = "List of (instance, port, address) of the NEG"
type = list(object({
instance = string
port = number
ip_address = string
}))
}

View File

@ -0,0 +1,19 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
terraform {
required_version = ">= 0.12.6"
}

View File

@ -0,0 +1,43 @@
# Google Cloud Artifact Registry Module
This module simplifies the creation of repositories using Google Cloud Artifact Registry.
Note: Artifact Registry is still in beta, hence this module currently uses the beta provider.
## Example
```hcl
module "docker_artifact_registry" {
source = "./modules/artifact-registry"
project_id = "myproject"
location = "europe-west1"
format = "DOCKER"
id = "myregistry"
iam_roles = ["roles/artifactregistry.admin"]
iam_members = {
"roles/artifactregistry.admin" = ["group:cicd@example.com"]
}
}
```
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---: |:---:|:---:|
| id | Repository id. | <code title="">string</code> | ✓ | |
| project_id | Registry project id. | <code title="">string</code> | ✓ | |
| *description* | An optional description for the repository. | <code title="">string</code> | | <code title="">Terraform-managed registry</code> |
| *format* | Repository format. One of DOCKER or UNSPECIFIED. | <code title="">string</code> | | <code title="">DOCKER</code> |
| *iam_members* | Map of member lists used to set authoritative bindings, keyed by role. | <code title="map&#40;list&#40;string&#41;&#41;">map(list(string))</code> | | <code title="">{}</code> |
| *iam_roles* | List of roles used to set authoritative bindings. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *labels* | Labels to be attached to the registry. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="">{}</code> |
| *location* | Registry location. Use `gcloud beta artifacts locations list` to get valid values. | <code title="">string</code> | | <code title=""></code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| id | Repository id. | |
| name | Repository name. | |
<!-- END TFDOC -->

View File

@ -0,0 +1,35 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
resource "google_artifact_registry_repository" "registry" {
provider = google-beta
project = var.project_id
location = var.location
description = var.description
format = var.format
labels = var.labels
repository_id = var.id
}
resource "google_artifact_registry_repository_iam_binding" "bindings" {
provider = google-beta
for_each = toset(var.iam_roles)
project = var.project_id
location = google_artifact_registry_repository.registry.location
repository = google_artifact_registry_repository.registry.name
role = each.value
members = lookup(var.iam_members, each.value, [])
}

View File

@ -0,0 +1,25 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
output "id" {
description = "Repository id"
value = google_artifact_registry_repository.registry.id
}
output "name" {
description = "Repository name"
value = google_artifact_registry_repository.registry.name
}

View File

@ -0,0 +1,61 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "iam_members" {
description = "Map of member lists used to set authoritative bindings, keyed by role."
type = map(list(string))
default = {}
}
variable "iam_roles" {
description = "List of roles used to set authoritative bindings."
type = list(string)
default = []
}
variable "location" {
description = "Registry location. Use `gcloud beta artifacts locations list` to get valid values."
type = string
default = ""
}
variable "project_id" {
description = "Registry project id."
type = string
}
variable "labels" {
description = "Labels to be attached to the registry."
type = map(string)
default = {}
}
variable "format" {
description = "Repository format. One of DOCKER or UNSPECIFIED."
type = string
default = "DOCKER"
}
variable "description" {
description = "An optional description for the repository"
type = string
default = "Terraform-managed registry"
}
variable "id" {
description = "Repository id"
type = string
}

View File

@ -0,0 +1,19 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
terraform {
required_version = ">= 0.12.6"
}

View File

@ -20,7 +20,7 @@ The access variables are split into `access_roles` and `access_identities` varia
```hcl
module "bigquery-dataset" {
source = "./modules/bigquery-dataset"
project_id = "my-project
project_id = "my-project"
id = "my-dataset"
access_roles = {
reader-group = { role = "READER", type = "group_by_email" }
@ -40,7 +40,7 @@ Dataset options are set via the `options` variable. all options must be specifie
```hcl
module "bigquery-dataset" {
source = "./modules/bigquery-dataset"
project_id = "my-project
project_id = "my-project"
id = "my-dataset"
options = {
default_table_expiration_ms = 3600000
@ -57,7 +57,7 @@ Tables are created via the `tables` variable, or the `view` variable for views.
```hcl
module "bigquery-dataset" {
source = "./modules/bigquery-dataset"
project_id = "my-project
project_id = "my-project"
id = "my-dataset"
tables = {
table_a = {
@ -76,7 +76,7 @@ If partitioning is needed, populate the `partitioning` variable using either the
```hcl
module "bigquery-dataset" {
source = "./modules/bigquery-dataset"
project_id = "my-project
project_id = "my-project"
id = "my-dataset"
tables = {
table_a = {
@ -99,7 +99,7 @@ To create views use the `view` variable. If you're querying a table created by t
```hcl
module "bigquery-dataset" {
source = "./modules/bigquery-dataset"
project_id = "my-project
project_id = "my-project"
id = "my-dataset"
tables = {
table_a = {
@ -158,6 +158,3 @@ module "bigquery-dataset" {
| views | View resources. | |
<!-- END TFDOC -->
## TODO
- [ ] add support for tables

View File

@ -0,0 +1,65 @@
# Google Cloud BigTable Module
This module allows managing a single BigTable instance, including access configuration and tables.
## TODO
- [ ] support bigtable_gc_policy
- [ ] support bigtable_app_profile
## Examples
### Simple instance with access configuration
```hcl
module "big-table-instance" {
source = "./modules/bigtable-instance"
project_id = "my-project"
name = "instance"
cluster_id = "instance"
instance_type = "PRODUCTION"
tables = {
test1 = { table_options = null },
test2 = { table_options = {
split_keys = ["a", "b", "c"]
column_family = null
}
}
}
iam_roles = ["viewer"]
iam_members = {
viewer = ["user:viewer@testdomain.com"]
}
}
```
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---: |:---:|:---:|
| name | The name of the Cloud Bigtable instance. | <code title="">string</code> | ✓ | |
| project_id | Id of the project where datasets will be created. | <code title="">string</code> | ✓ | |
| zone | The zone to create the Cloud Bigtable cluster in. | <code title="">string</code> | ✓ | |
| *cluster_id* | The ID of the Cloud Bigtable cluster. | <code title="">string</code> | | <code title="">europe-west1</code> |
| *deletion_protection* | Whether or not to allow Terraform to destroy the instance. Unless this field is set to false in Terraform state, a terraform destroy or terraform apply that would delete the instance will fail. | <code title=""></code> | | <code title="">true</code> |
| *display_name* | The human-readable display name of the Bigtable instance. | <code title=""></code> | | <code title="">null</code> |
| *iam_members* | Authoritative for a given role. Updates the IAM policy to grant a role to a list of members. Other roles within the IAM policy for the instance are preserved. | <code title="map&#40;list&#40;string&#41;&#41;">map(list(string))</code> | | <code title="">{}</code> |
| *iam_roles* | Authoritative for a given role. Updates the IAM policy to grant a role to a list of members. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *instance_type* | The instance type to create. One of `DEVELOPMENT` or `PRODUCTION`. | <code title="">string</code> | | <code title="">DEVELOPMENT</code> |
| *num_nodes* | The number of nodes in your Cloud Bigtable cluster. | <code title="">number</code> | | <code title="">1</code> |
| *storage_type* | The storage type to use. | <code title="">string</code> | | <code title="">SSD</code> |
| *table_options_defaults* | Default option of tables created in the BigTable instance. | <code title="object&#40;&#123;&#10;split_keys &#61; list&#40;string&#41;&#10;column_family &#61; string&#10;&#125;&#41;">object({...})</code> | | <code title="&#123;&#10;split_keys &#61; &#91;&#93;&#10;column_family &#61; null&#10;&#125;">...</code> |
| *tables* | Tables to be created in the BigTable instance. | <code title="map&#40;object&#40;&#123;&#10;table_options &#61; object&#40;&#123;&#10;split_keys &#61; list&#40;string&#41;&#10;column_family &#61; string&#10;&#125;&#41;&#10;&#125;&#41;&#41;">map(object({...}))</code> | | <code title="">{}</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| id | An identifier for the resource with format projects/{{project}}/instances/{{name}}. | |
| instance | BigTable instance. | |
| table_ids | Map of fully qualified table ids keyed by table name. | |
| tables | Table resources. | |
<!-- END TFDOC -->

View File

@ -0,0 +1,68 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
locals {
# use per-table options when set, falling back to the module-level defaults
tables = {
for k, v in var.tables : k => v.table_options != null ? v.table_options : var.table_options_defaults
}
# map each configured role to its member list, defaulting to an empty list
iam_roles_bindings = {
for k in var.iam_roles : k => lookup(var.iam_members, k, [])
}
}
resource "google_bigtable_instance" "default" {
project = var.project_id
name = var.name
cluster {
cluster_id = var.cluster_id
zone = var.zone
storage_type = var.storage_type
}
instance_type = var.instance_type
display_name = var.display_name != null ? var.display_name : var.name
deletion_protection = var.deletion_protection
}
resource "google_bigtable_instance_iam_binding" "default" {
for_each = local.iam_roles_bindings
project = var.project_id
instance = google_bigtable_instance.default.name
role = "roles/bigtable.${each.key}"
members = each.value
}
resource "google_bigtable_table" "default" {
for_each = local.tables
project = var.project_id
instance_name = google_bigtable_instance.default.name
name = each.key
split_keys = each.value.split_keys
dynamic column_family {
for_each = each.value.column_family != null ? [""] : []
content {
family = each.value.column_family
}
}
# lifecycle {
# prevent_destroy = true
# }
}

View File

@ -0,0 +1,46 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
output "id" {
description = "An identifier for the resource with format projects/{{project}}/instances/{{name}}."
value = google_bigtable_instance.default.id
depends_on = [
google_bigtable_instance_iam_binding.default,
google_bigtable_table.default
]
}
output "instance" {
description = "BigTable instance."
value = google_bigtable_instance.default
depends_on = [
google_bigtable_instance_iam_binding.default,
google_bigtable_table.default
]
}
output "tables" {
description = "Table resources."
value = google_bigtable_table.default
}
output "table_ids" {
description = "Map of fully qualified table ids keyed by table name."
value = { for k, v in google_bigtable_table.default : v.name => v.id }
}

View File

@ -0,0 +1,99 @@
/**
* Copyright 2019 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "iam_roles" {
description = "Authoritative for a given role. Updates the IAM policy to grant a role to a list of members."
type = list(string)
default = []
}
variable "iam_members" {
description = "Authoritative for a given role. Updates the IAM policy to grant a role to a list of members. Other roles within the IAM policy for the instance are preserved."
type = map(list(string))
default = {}
}
variable "cluster_id" {
description = "The ID of the Cloud Bigtable cluster."
type = string
default = "europe-west1"
}
variable "deletion_protection" {
description = "Whether or not to allow Terraform to destroy the instance. Unless this field is set to false in Terraform state, a terraform destroy or terraform apply that would delete the instance will fail."
default = true
}
variable "display_name" {
description = "The human-readable display name of the Bigtable instance."
default = null
}
variable "instance_type" {
description = "The instance type to create. One of \"DEVELOPMENT\" or \"PRODUCTION\". Defaults to \"DEVELOPMENT\"."
type = string
default = "DEVELOPMENT"
}
variable "name" {
description = "The name of the Cloud Bigtable instance."
type = string
}
variable "num_nodes" {
description = "The number of nodes in your Cloud Bigtable cluster."
type = number
default = 1
}
variable "project_id" {
description = "Id of the project where datasets will be created."
type = string
}
variable "storage_type" {
description = "The storage type to use."
type = string
default = "SSD"
}
variable "tables" {
description = "Tables to be created in the BigTable instance."
type = map(object({
table_options = object({
split_keys = list(string)
column_family = string
})
}))
default = {}
}
variable "table_options_defaults" {
description = "Default option of tables created in the BigTable instance."
type = object({
split_keys = list(string)
column_family = string
})
default = {
split_keys = []
column_family = null
}
}
variable "zone" {
description = "The zone to create the Cloud Bigtable cluster in."
type = string
}

View File

@ -0,0 +1,19 @@
/**
* Copyright 2019 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
terraform {
required_version = ">= 0.12.6"
}

View File

@ -24,7 +24,7 @@ This example will create a `cloud-config` that uses the module's defaults, creat
```hcl
module "cos-coredns" {
source = "./modules/cos-container/coredns"
source = "./modules/cloud-config-container/coredns"
}
# use it as metadata in a compute instance or template
@ -40,8 +40,8 @@ This example will create a `cloud-config` using a custom CoreDNS configuration,
```hcl
module "cos-coredns" {
source = "./modules/cos-container/coredns"
coredns_config = "./modules/cos-container/coredns/Corefile-hosts"
source = "./modules/cloud-config-container/coredns"
coredns_config = "./modules/cloud-config-container/coredns/Corefile-hosts"
files = {
"/etc/coredns/example.hosts" = {
content = "127.0.0.2 foo.example.org foo"
@ -57,7 +57,7 @@ This example shows how to create the single instance optionally managed by the m
```hcl
module "cos-coredns" {
source = "./modules/cos-container/coredns"
source = "./modules/cloud-config-container/coredns"
test_instance = {
project_id = "my-project"
zone = "europe-west1-b"

View File

@ -21,9 +21,7 @@ module "cos-envoy" {
container_args = "-c /etc/envoy/envoy.yaml --log-level info --allow-unknown-static-fields"
container_volumes = [
{ host = "/etc/envoy/envoy.yaml",
container = "/etc/envoy/envoy.yaml"
}
{ host = "/etc/envoy/envoy.yaml", container = "/etc/envoy/envoy.yaml" }
]
docker_args = "--network host --pid host"
@ -63,25 +61,26 @@ module "cos-envoy" {
<!-- BEGIN TFDOC -->
## Variables
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| container\_image | Container image. | `string` | n/a | yes |
| boot\_commands | List of cloud-init `bootcmd`s | `list(string)` | `[]` | no |
| cloud\_config | Cloud config template path. If provided, takes precedence over all other arguments. | `string` | `null` | no |
| config\_variables | Additional variables used to render the template passed via `cloud_config` | `map(any)` | `{}` | no |
| container\_args | Arguments for container | `string` | `""` | no |
| container\_name | Name of the container to be run | `string` | `"container"` | no |
| container\_volumes | List of volumes | <pre>list(object({<br> host = string,<br> container = string<br> }))</pre> | `[]` | no |
| docker\_args | Extra arguments to be passed for docker | `string` | `null` | no |
| file\_defaults | Default owner and permissions for files. | <pre>object({<br> owner = string<br> permissions = string<br> })</pre> | <pre>{<br> "owner": "root",<br> "permissions": "0644"<br>}</pre> | no |
| files | Map of extra files to create on the instance, path as key. Owner and permissions will use defaults if null. | <pre>map(object({<br> content = string<br> owner = string<br> permissions = string<br> }))</pre> | `{}` | no |
| gcp\_logging | Should container logs be sent to Google Cloud Logging | `bool` | `true` | no |
| run\_commands | List of cloud-init `runcmd`s | `list(string)` | `[]` | no |
| users | List of usernames to be created. If provided, first user will be used to run the container. | <pre>list(object({<br> username = string,<br> uid = number,<br> }))</pre> | `[]` | no |
| name | description | type | required | default |
|---|---|:---: |:---:|:---:|
| container_image | Container image. | <code title="">string</code> | ✓ | |
| *authenticate_gcr* | Setup docker to pull images from private GCR. Requires at least one user since the token is stored in the home of the first user defined. | <code title="">bool</code> | | <code title="">false</code> |
| *boot_commands* | List of cloud-init `bootcmd`s | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *cloud_config* | Cloud config template path. If provided, takes precedence over all other arguments. | <code title="">string</code> | | <code title="">null</code> |
| *config_variables* | Additional variables used to render the template passed via `cloud_config` | <code title="map&#40;any&#41;">map(any)</code> | | <code title="">{}</code> |
| *container_args* | Arguments for container | <code title="">string</code> | | <code title=""></code> |
| *container_name* | Name of the container to be run | <code title="">string</code> | | <code title="">container</code> |
| *container_volumes* | List of volumes | <code title="list&#40;object&#40;&#123;&#10;host &#61; string,&#10;container &#61; string&#10;&#125;&#41;&#41;">list(object({...}))</code> | | <code title="">[]</code> |
| *docker_args* | Extra arguments to be passed for docker | <code title="">string</code> | | <code title="">null</code> |
| *file_defaults* | Default owner and permissions for files. | <code title="object&#40;&#123;&#10;owner &#61; string&#10;permissions &#61; string&#10;&#125;&#41;">object({...})</code> | | <code title="&#123;&#10;owner &#61; &#34;root&#34;&#10;permissions &#61; &#34;0644&#34;&#10;&#125;">...</code> |
| *files* | Map of extra files to create on the instance, path as key. Owner and permissions will use defaults if null. | <code title="map&#40;object&#40;&#123;&#10;content &#61; string&#10;owner &#61; string&#10;permissions &#61; string&#10;&#125;&#41;&#41;">map(object({...}))</code> | | <code title="">{}</code> |
| *gcp_logging* | Should container logs be sent to Google Cloud Logging | <code title="">bool</code> | | <code title="">true</code> |
| *run_commands* | List of cloud-init `runcmd`s | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *users* | List of usernames to be created. If provided, first user will be used to run the container. | <code title="list&#40;object&#40;&#123;&#10;username &#61; string,&#10;uid &#61; number,&#10;&#125;&#41;&#41;">list(object({...}))</code> | | <code title="&#91;&#10;&#93;">...</code> |
## Outputs
| Name | Description |
|------|-------------|
| cloud\_config | Rendered cloud-config file to be passed as user-data instance metadata. |
| name | description | sensitive |
|---|---|:---:|
| cloud_config | Rendered cloud-config file to be passed as user-data instance metadata. | |
<!-- END TFDOC -->

View File

@ -44,6 +44,10 @@ write_files:
After=gcr-online.target docker.socket
Wants=gcr-online.target docker.socket docker-events-collector.service
[Service]
%{ if authenticate_gcr && length(users) > 0 ~}
Environment="HOME=/home/${users[0].username}"
ExecStartPre=/usr/bin/docker-credential-gcr configure-docker
%{ endif ~}
ExecStart=/usr/bin/docker run --rm --name=${container_name} \
%{ if length(users) > 0 ~}
--user=${users[0].uid} \

View File

@ -26,6 +26,7 @@ locals {
gcp_logging = var.gcp_logging
run_commands = var.run_commands
users = var.users
authenticate_gcr = var.authenticate_gcr
}))
files = {
for path, attrs in var.files : path => {

View File

@ -108,3 +108,9 @@ variable "users" {
default = [
]
}
variable "authenticate_gcr" {
description = "Setup docker to pull images from private GCR. Requires at least one user since the token is stored in the home of the first user defined."
type = bool
default = false
}
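Since the GCR token is stored in the first user's home directory, enabling `authenticate_gcr` requires defining at least one user. A minimal sketch, assuming the `cos-generic-metadata` module path and an illustrative private image:

```hcl
module "cos-private-image" {
  source           = "./modules/cloud-config-container/cos-generic-metadata"
  container_image  = "eu.gcr.io/my-project/my-image"
  # configure docker-credential-gcr before the container starts
  authenticate_gcr = true
  # first user runs the container and stores the GCR token in its home
  users = [
    { username = "app", uid = 2000 }
  ]
}
```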

View File

@ -2,6 +2,10 @@
This module manages a `cloud-config` configuration that starts a containerized Envoy Proxy on Container Optimized OS connected to Traffic Director. The default configuration creates a reverse proxy exposed on the node's port 80. Traffic routing policies and management should be managed by other means via Traffic Director.
The generated cloud config is rendered in the `cloud_config` output, and is meant to be used in instances or instance templates via the `user-data` metadata.
This module depends on the [`cos-generic-metadata` module](../cos-generic-metadata) being in the parent folder. If you change its location, be sure to adjust the `source` attribute in `main.tf`.
## Examples
### Default configuration
@ -46,14 +50,14 @@ module "vm-cos" {
<!-- BEGIN TFDOC -->
## Variables
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| envoy\_image | Envoy Proxy container image to use. | `string` | `"envoyproxy/envoy:v1.14.1"` | no |
| gcp\_logging | Should container logs be sent to Google Cloud Logging | `bool` | `true` | no |
| name | description | type | required | default |
|---|---|:---: |:---:|:---:|
| *envoy_image* | Envoy Proxy container image to use. | <code title="">string</code> | | <code title="">envoyproxy/envoy:v1.14.1</code> |
| *gcp_logging* | Should container logs be sent to Google Cloud Logging | <code title="">bool</code> | | <code title="">true</code> |
## Outputs
| Name | Description |
|------|-------------|
| cloud\_config | Rendered cloud-config file to be passed as user-data instance metadata. |
| name | description | sensitive |
|---|---|:---:|
| cloud_config | Rendered cloud-config file to be passed as user-data instance metadata. | |
<!-- END TFDOC -->

View File

@ -15,7 +15,7 @@
*/
module "cos-envoy-td" {
source = "./modules/cos-generic-metadata"
source = "../cos-generic-metadata"
boot_commands = [
"systemctl start node-problem-detector",
@ -26,9 +26,7 @@ module "cos-envoy-td" {
container_args = "-c /etc/envoy/envoy.yaml --log-level info --allow-unknown-static-fields"
container_volumes = [
{ host = "/etc/envoy/envoy.yaml",
container = "/etc/envoy/envoy.yaml"
}
{ host = "/etc/envoy/envoy.yaml", container = "/etc/envoy/envoy.yaml" }
]
docker_args = "--network host --pid host"

View File

@ -1 +0,0 @@
../../cos-generic-metadata

View File

@ -74,5 +74,4 @@ bootcmd:
runcmd:
- iptables -I INPUT 1 -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
- systemctl daemon-reload
- systemctl restart systemd-resolved.service
- systemctl start nginx
- systemctl start nginx

View File

@ -4,11 +4,11 @@ This module manages a `cloud-config` configuration that starts an emulated on-pr
The emulated on-premises infrastructure is composed of:
- a Strongswan container managing the VPN tunnel to GCP
- a [Strongswan container](./docker-images/strongswan) managing the VPN tunnel to GCP
- an optional Bird container managing the BGP session
- a CoreDNS container serving local DNS and forwarding to GCP
- an Nginx container serving a simple static web page
- a generic Linux container used as a jump host inside the on-premises network
- a [generic Linux container](./docker-images/toolbox) used as a jump host inside the on-premises network
A [complete scenario using this module](../../../infrastructure/onprem-google-access-dns) is available in the infrastructure examples.

View File

@ -0,0 +1,3 @@
# Supporting container images
The images in this folder are used by the [`onprem` module](../).

View File

@ -1,9 +1,14 @@
# StrongSwan docker container
[strongSwan](https://www.strongswan.org/) is an open-source IPsec-based VPN solution.
## Build
```bash
gcloud builds submit . --config=cloudbuild.yaml
```
## Docker compose example
### Docker compose example
```yaml
version: "3"
services:
@ -37,8 +42,3 @@ services:
- "/var/lib/docker-compose/onprem/bird/bird.conf:/etc/bird/bird.conf:ro"
```
### Build
```bash
gcloud builds submit . --config=cloudbuild.yaml
```

View File

@ -3,7 +3,7 @@
Lightweight container with some basic console tools used for testing and probing.
## Building
## Build
```bash
gcloud builds submit . --config=cloudbuild.yaml

View File

@ -0,0 +1,162 @@
# Cloud Function Module
Cloud Function management, with support for IAM roles and optional bucket creation.
The GCS object used for deployment embeds a hash of the bundle zip contents in its name, which ensures change tracking and avoids unnecessary function redeployments when the GCS object is deleted and recreated.
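A minimal sketch of this naming scheme, assuming the bundle is zipped with the `archive_file` data source (resource names are illustrative):

```hcl
data "archive_file" "bundle" {
  type        = "zip"
  source_dir  = var.bundle_config.source_dir
  output_path = var.bundle_config.output_path
}

# embedding the content hash in the object name means a changed bundle
# yields a new object name, which triggers redeployment; deleting and
# recreating an unchanged bundle yields the same name, so no redeploy
resource "google_storage_bucket_object" "bundle" {
  name   = "bundle-${data.archive_file.bundle.output_md5}.zip"
  bucket = local.bucket
  source = data.archive_file.bundle.output_path
}
```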
## TODO
- [ ] add support for `ingress_settings`
- [ ] add support for `vpc_connector` and `vpc_connector_egress_settings`
- [ ] add support for `source_repository`
## Examples
### HTTP trigger
This deploys a Cloud Function with an HTTP endpoint, using a pre-existing GCS bucket for deployment, setting the service account to the Cloud Function default one, and delegating access control to the containing project.
```hcl
module "cf-http" {
source = "../modules/cloud-function"
project_id = "my-project"
name = "test-cf-http"
bucket_name = "test-cf-bundles"
bundle_config = {
source_dir = "my-cf-source-folder"
output_path = "bundle.zip"
}
}
```
### Non-HTTP triggers
Trigger types other than HTTP are configured via the `trigger_config` variable. This example shows a PubSub trigger.
```hcl
module "cf-http" {
source = "../modules/cloud-function"
project_id = "my-project"
name = "test-cf-http"
bucket_name = "test-cf-bundles"
bundle_config = {
source_dir = "my-cf-source-folder"
output_path = "bundle.zip"
}
trigger_config = {
event = "google.pubsub.topic.publish"
resource = local.my-topic
retry = null
}
}
```
### Controlling HTTP access
To allow anonymous access to the function, grant the `roles/cloudfunctions.invoker` role to the special `allUsers` identifier. Use specific identities (service accounts, groups, etc.) instead of `allUsers` to only allow selective access.
```hcl
module "cf-http" {
source = "../modules/cloud-function"
project_id = "my-project"
name = "test-cf-http"
bucket_name = "test-cf-bundles"
bundle_config = {
source_dir = "my-cf-source-folder"
output_path = "bundle.zip"
}
iam_roles = ["roles/cloudfunctions.invoker"]
iam_members = {
"roles/cloudfunctions.invoker" = ["allUsers"]
}
}
```
### GCS bucket creation
You can have the module auto-create the GCS bucket used for deployment via the `bucket_config` variable. Setting `bucket_config.location` to `null` will use the function region for the bucket as well.
```hcl
module "cf-http" {
source = "../modules/cloud-function"
project_id = "my-project"
name = "test-cf-http"
bucket_name = "test-cf-bundles"
bucket_config = {
location = null
lifecycle_delete_age = 1
}
bundle_config = {
source_dir = "my-cf-source-folder"
output_path = "bundle.zip"
}
}
```
### Service account management
To use a custom service account managed by the module, set `service_account_create` to `true` and leave `service_account` at its default `null` value.
```hcl
module "cf-http" {
source = "../modules/cloud-function"
project_id = "my-project"
name = "test-cf-http"
bucket_name = "test-cf-bundles"
bundle_config = {
source_dir = "my-cf-source-folder"
output_path = "bundle.zip"
}
service_account_create = true
}
```
To use an externally managed service account, pass its email in `service_account` and leave `service_account_create` to `false` (the default).
```hcl
module "cf-http" {
source = "../modules/cloud-function"
project_id = "my-project"
name = "test-cf-http"
bucket_name = "test-cf-bundles"
bundle_config = {
source_dir = "my-cf-source-folder"
output_path = "bundle.zip"
}
service_account = local.service_account_email
}
```
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---: |:---:|:---:|
| bucket_name | Name of the bucket that will be used for the function code. It will be created with prefix prepended if bucket_config is not null. | <code title="">string</code> | ✓ | |
| bundle_config | Cloud function source folder and generated zip bundle paths. Output path defaults to '/tmp/bundle.zip' if null. | <code title="object&#40;&#123;&#10;source_dir &#61; string&#10;output_path &#61; string&#10;&#125;&#41;">object({...})</code> | ✓ | |
| name | Name used for cloud function and associated resources. | <code title="">string</code> | ✓ | |
| project_id | Project id used for all resources. | <code title="">string</code> | ✓ | |
| *bucket_config* | Enable and configure auto-created bucket. Set fields to null to use defaults. | <code title="object&#40;&#123;&#10;location &#61; string&#10;lifecycle_delete_age &#61; number&#10;&#125;&#41;">object({...})</code> | | <code title="">null</code> |
| *environment_variables* | Cloud function environment variables. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="">{}</code> |
| *function_config* | Cloud function configuration. | <code title="object&#40;&#123;&#10;entry_point &#61; string&#10;instances &#61; number&#10;memory &#61; number&#10;runtime &#61; string&#10;timeout &#61; number&#10;&#125;&#41;">object({...})</code> | | <code title="&#123;&#10;entry_point &#61; &#34;main&#34;&#10;instances &#61; 1&#10;memory &#61; 256&#10;runtime &#61; &#34;python37&#34;&#10;timeout &#61; 180&#10;&#125;">...</code> |
| *iam_members* | Map of member lists used to set authoritative bindings, keyed by role. Ignored for template use. | <code title="map&#40;list&#40;string&#41;&#41;">map(list(string))</code> | | <code title="">{}</code> |
| *iam_roles* | List of roles used to set authoritative bindings. Ignored for template use. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *labels* | Resource labels | <code title="map&#40;string&#41;">map(string)</code> | | <code title="">{}</code> |
| *prefix* | Optional prefix used for resource names. | <code title="">string</code> | | <code title="">null</code> |
| *region* | Region used for all resources. | <code title="">string</code> | | <code title="">europe-west1</code> |
| *service_account* | Service account email. Unused if service account is auto-created. | <code title="">string</code> | | <code title="">null</code> |
| *service_account_create* | Auto-create service account. | <code title="">bool</code> | | <code title="">false</code> |
| *trigger_config* | Function trigger configuration. Leave null for HTTP trigger. | <code title="object&#40;&#123;&#10;event &#61; string&#10;resource &#61; string&#10;retry &#61; bool&#10;&#125;&#41;">object({...})</code> | | <code title="">null</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| bucket | Bucket resource (only if auto-created). | |
| bucket_name | Bucket name. | |
| function | Cloud function resources. | |
| function_name | Cloud function name. | |
| service_account | Service account resource. | |
| service_account_email | Service account email. | |
| service_account_iam_email | Service account email. | |
<!-- END TFDOC -->

View File

@ -0,0 +1,122 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
locals {
bucket = (
var.bucket_name != null
? var.bucket_name
: (
length(google_storage_bucket.bucket) > 0
? google_storage_bucket.bucket[0].name
: null
)
)
prefix = var.prefix == null ? "" : "${var.prefix}-"
service_account_email = (
var.service_account_create
? (
length(google_service_account.service_account) > 0
? google_service_account.service_account[0].email
: null
)
: var.service_account
)
}
resource "google_cloudfunctions_function" "function" {
project = var.project_id
region = var.region
name = "${local.prefix}${var.name}"
description = "Terraform managed."
runtime = var.function_config.runtime
available_memory_mb = var.function_config.memory
max_instances = var.function_config.instances
timeout = var.function_config.timeout
entry_point = var.function_config.entry_point
environment_variables = var.environment_variables
service_account_email = local.service_account_email
source_archive_bucket = local.bucket
source_archive_object = google_storage_bucket_object.bundle.name
labels = var.labels
trigger_http = var.trigger_config == null ? true : null
dynamic event_trigger {
for_each = var.trigger_config == null ? [] : [""]
content {
event_type = var.trigger_config.event
resource = var.trigger_config.resource
dynamic failure_policy {
for_each = var.trigger_config.retry == null ? [] : [""]
content {
retry = var.trigger_config.retry
}
}
}
}
}
resource "google_cloudfunctions_function_iam_binding" "default" {
for_each = toset(var.iam_roles)
project = var.project_id
region = var.region
cloud_function = google_cloudfunctions_function.function.name
role = each.value
members = lookup(var.iam_members, each.value, [])
}
resource "google_storage_bucket" "bucket" {
count = var.bucket_config == null ? 0 : 1
project = var.project_id
name = "${local.prefix}${var.bucket_name}"
location = (
var.bucket_config.location == null
? var.region
: var.bucket_config.location
)
labels = var.labels
dynamic lifecycle_rule {
for_each = var.bucket_config.lifecycle_delete_age == null ? [] : [""]
content {
action { type = "Delete" }
condition { age = var.bucket_config.lifecycle_delete_age }
}
}
}
resource "google_storage_bucket_object" "bundle" {
name = "bundle-${data.archive_file.bundle.output_md5}.zip"
bucket = local.bucket
source = data.archive_file.bundle.output_path
}
data "archive_file" "bundle" {
type = "zip"
source_dir = var.bundle_config.source_dir
output_path = (
var.bundle_config.output_path == null
? "/tmp/bundle.zip"
: var.bundle_config.output_path
)
}
resource "google_service_account" "service_account" {
count = var.service_account_create ? 1 : 0
project = var.project_id
account_id = "tf-cf-${var.name}"
display_name = "Terraform Cloud Function ${var.name}."
}

View File

@ -0,0 +1,55 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
output "bucket" {
description = "Bucket resource (only if auto-created)."
value = var.bucket_config == null ? null : google_storage_bucket.bucket[0]
}
output "bucket_name" {
description = "Bucket name."
value = local.bucket
}
output "function" {
description = "Cloud function resources."
value = google_cloudfunctions_function.function
}
output "function_name" {
description = "Cloud function name."
value = google_cloudfunctions_function.function.name
}
output "service_account" {
description = "Service account resource."
value = (
var.service_account_create ? google_service_account.service_account[0] : null
)
}
output "service_account_email" {
description = "Service account email."
value = local.service_account_email
}
output "service_account_iam_email" {
description = "Service account email prefixed with `serviceAccount:`, for use in IAM bindings."
value = join("", [
"serviceAccount:",
local.service_account_email == null ? "" : local.service_account_email
])
}

View File

@ -0,0 +1,123 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "bucket_config" {
description = "Enable and configure auto-created bucket. Set fields to null to use defaults."
type = object({
location = string
lifecycle_delete_age = number
})
default = null
}
variable "bucket_name" {
description = "Name of the bucket used for the function code. If bucket_config is not null, the bucket is auto-created with the optional prefix prepended."
type = string
}
variable "bundle_config" {
description = "Cloud function source folder and generated zip bundle paths. Output path defaults to '/tmp/bundle.zip' if null."
type = object({
source_dir = string
output_path = string
})
}
variable "environment_variables" {
description = "Cloud function environment variables."
type = map(string)
default = {}
}
variable "iam_members" {
description = "Map of member lists used to set authoritative bindings, keyed by role."
type = map(list(string))
default = {}
}
variable "iam_roles" {
description = "List of roles used to set authoritative bindings."
type = list(string)
default = []
}
variable "function_config" {
description = "Cloud function configuration."
type = object({
entry_point = string
instances = number
memory = number
runtime = string
timeout = number
})
default = {
entry_point = "main"
instances = 1
memory = 256
runtime = "python37"
timeout = 180
}
}
variable "labels" {
description = "Resource labels."
type = map(string)
default = {}
}
variable "name" {
description = "Name used for cloud function and associated resources."
type = string
}
variable "prefix" {
description = "Optional prefix used for resource names."
type = string
default = null
}
variable "project_id" {
description = "Project id used for all resources."
type = string
}
variable "region" {
description = "Region used for all resources."
type = string
default = "europe-west1"
}
variable "service_account" {
description = "Service account email. Unused if service account is auto-created."
type = string
default = null
}
variable "service_account_create" {
description = "Auto-create service account."
type = bool
default = false
}
variable "trigger_config" {
description = "Function trigger configuration. Leave null for HTTP trigger."
type = object({
event = string
resource = string
retry = bool
})
default = null
}

View File

@ -0,0 +1,19 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
terraform {
required_version = ">= 0.12.6"
}

View File

@ -165,7 +165,7 @@ module "nginx-mig" {
| project_id | Project id. | <code title="">string</code> | ✓ | |
| *auto_healing_policies* | Auto-healing policies for this group. | <code title="object&#40;&#123;&#10;health_check &#61; string&#10;initial_delay_sec &#61; number&#10;&#125;&#41;">object({...})</code> | | <code title="">null</code> |
| *autoscaler_config* | Optional autoscaler configuration. Only one of 'cpu_utilization_target' 'load_balancing_utilization_target' or 'metric' can be not null. | <code title="object&#40;&#123;&#10;max_replicas &#61; number&#10;min_replicas &#61; number&#10;cooldown_period &#61; number&#10;cpu_utilization_target &#61; number&#10;load_balancing_utilization_target &#61; number&#10;metric &#61; object&#40;&#123;&#10;name &#61; string&#10;single_instance_assignment &#61; number&#10;target &#61; number&#10;type &#61; string &#35; GAUGE, DELTA_PER_SECOND, DELTA_PER_MINUTE&#10;filter &#61; string&#10;&#125;&#41;&#10;&#125;&#41;">object({...})</code> | | <code title="">null</code> |
| *health_check_config* | Optional auto-created helth check configuration, use the output self-link to set it in the auto healing policy. Refer to examples for usage. | <code title="object&#40;&#123;&#10;type &#61; string &#35; http https tcp ssl http2&#10;check &#61; map&#40;any&#41; &#35; actual health check block attributes&#10;config &#61; map&#40;number&#41; &#35; interval, thresholds, timeout&#10;logging &#61; bool&#10;&#125;&#41;">object({...})</code> | | <code title="">null</code> |
| *health_check_config* | Optional auto-created health check configuration, use the output self-link to set it in the auto healing policy. Refer to examples for usage. | <code title="object&#40;&#123;&#10;type &#61; string &#35; http https tcp ssl http2&#10;check &#61; map&#40;any&#41; &#35; actual health check block attributes&#10;config &#61; map&#40;number&#41; &#35; interval, thresholds, timeout&#10;logging &#61; bool&#10;&#125;&#41;">object({...})</code> | | <code title="">null</code> |
| *named_ports* | Named ports. | <code title="map&#40;number&#41;">map(number)</code> | | <code title="">null</code> |
| *regional* | Use regional instance group. When set, `location` should be set to the region. | <code title="">bool</code> | | <code title="">false</code> |
| *target_pools* | Optional list of URLs for target pools to which new instances in the group are added. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |

View File

@ -31,12 +31,57 @@ module "simple-vm-example" {
}
```
### Disk encryption with Cloud KMS
This example shows how to control disk encryption via the `encryption` variable, in this case passing the self link of a KMS CryptoKey used to encrypt the boot and attached disks. Managing the key with the `../kms` module is of course possible, but is not shown here.
```hcl
module "kms-vm-example" {
source = "../modules/compute-vm"
project_id = local.project_id
region = local.region
zone = local.zone
name = "kms-test"
network_interfaces = [{
network = local.network_self_link,
subnetwork = local.subnet_self_link,
nat = false,
addresses = null
}]
attached_disks = [
{
name = "attached-disk"
size = 10
image = null
options = {
auto_delete = true
mode = null
source = null
type = null
}
}
]
service_account_create = true
instance_count = 1
boot_disk = {
image = "projects/debian-cloud/global/images/family/debian-10"
type = "pd-ssd"
size = 10
}
encryption = {
encrypt_boot = true
disk_encryption_key_raw = null
kms_key_self_link = local.kms_key.self_link
}
}
```
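Alternatively, a customer-supplied key can be passed as a raw value instead of a KMS key self link — a hedged sketch, where the base64 value below is a placeholder and only one of the two key attributes may be set:

```hcl
  encryption = {
    encrypt_boot            = true
    disk_encryption_key_raw = "SGVsbG8gZnJvbSBHb29nbGUgQ2xvdWQgUGxhdGZvcm0="
    kms_key_self_link       = null
  }
```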
### Instance template
This example shows how to use the module to manage an instance template that defines an additional attached disk for each instance, and overrides defaults for the boot disk image and service account.
```hcl
module "debian-test" {
module "cos-test" {
source = "../modules/compute-vm"
project_id = "my-project"
region = "europe-west1"
@ -86,11 +131,10 @@ module "instance-group" {
}
service_account = local.service_account_email
service_account_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
use_instance_template = true
metadata = {
user-data = local.cloud_config
}
group = {}
group = { named_ports = {} }
}
```
@ -108,8 +152,11 @@ module "instance-group" {
| *attached_disk_defaults* | Defaults for attached disks options. | <code title="object&#40;&#123;&#10;auto_delete &#61; bool&#10;mode &#61; string&#10;type &#61; string&#10;source &#61; string&#10;&#125;&#41;">object({...})</code> | | <code title="&#123;&#10;auto_delete &#61; true&#10;source &#61; null&#10;mode &#61; &#34;READ_WRITE&#34;&#10;type &#61; &#34;pd-ssd&#34;&#10;&#125;">...</code> |
| *attached_disks* | Additional disks; if options is null, defaults are used in its place. | <code title="list&#40;object&#40;&#123;&#10;name &#61; string&#10;image &#61; string&#10;size &#61; string&#10;options &#61; object&#40;&#123;&#10;auto_delete &#61; bool&#10;mode &#61; string&#10;source &#61; string&#10;type &#61; string&#10;&#125;&#41;&#10;&#125;&#41;&#41;">list(object({...}))</code> | | <code title="">[]</code> |
| *boot_disk* | Boot disk properties. | <code title="object&#40;&#123;&#10;image &#61; string&#10;size &#61; number&#10;type &#61; string&#10;&#125;&#41;">object({...})</code> | | <code title="&#123;&#10;image &#61; &#34;projects&#47;debian-cloud&#47;global&#47;images&#47;family&#47;debian-10&#34;&#10;type &#61; &#34;pd-ssd&#34;&#10;size &#61; 10&#10;&#125;">...</code> |
| *encryption* | Encryption options. Only one of kms_key_self_link and disk_encryption_key_raw may be set; encrypt_boot controls whether the boot disk is also encrypted. | <code title="object&#40;&#123;&#10;encrypt_boot &#61; bool&#10;disk_encryption_key_raw &#61; string&#10;kms_key_self_link &#61; string&#10;&#125;&#41;">object({...})</code> | | <code title="">null</code> |
| *group* | Define this variable to create an instance group for instances. Disabled for template use. | <code title="object&#40;&#123;&#10;named_ports &#61; map&#40;number&#41;&#10;&#125;&#41;">object({...})</code> | | <code title="">null</code> |
| *hostname* | Instance FQDN name. | <code title="">string</code> | | <code title="">null</code> |
| *iam_members* | Map of member lists used to set authoritative bindings, keyed by role. Ignored for template use. | <code title="map&#40;list&#40;string&#41;&#41;">map(list(string))</code> | | <code title="">{}</code> |
| *iam_roles* | List of roles used to set authoritative bindings. Ignored for template use. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *instance_count* | Number of instances to create (only for non-template usage). | <code title="">number</code> | | <code title="">1</code> |
| *instance_type* | Instance type. | <code title="">string</code> | | <code title="">f1-micro</code> |
| *labels* | Instance labels. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="">{}</code> |

View File

@ -25,6 +25,10 @@ locals {
for pair in setproduct(keys(local.names), keys(local.attached_disks)) :
"${pair[0]}-${pair[1]}" => { name = pair[0], disk_name = pair[1] }
}
iam_roles = var.use_instance_template ? {} : {
for pair in setproduct(var.iam_roles, keys(local.names)) :
"${pair.0}/${pair.1}" => { role = pair.0, name = pair.1 }
}
names = (
var.use_instance_template
? { "${var.name}" = 0 }
@ -66,6 +70,14 @@ resource "google_compute_disk" "disks" {
disk_type = local.attached_disks[each.value.disk_name].options.type
image = local.attached_disks[each.value.disk_name].image
})
dynamic disk_encryption_key {
for_each = var.encryption != null ? [""] : []
content {
raw_key = var.encryption.disk_encryption_key_raw
kms_key_self_link = var.encryption.kms_key_self_link
}
}
}
resource "google_compute_instance" "default" {
@ -103,6 +115,8 @@ resource "google_compute_instance" "default" {
image = var.boot_disk.image
size = var.boot_disk.size
}
disk_encryption_key_raw = var.encryption != null && var.encryption.encrypt_boot ? var.encryption.disk_encryption_key_raw : null
kms_key_self_link = var.encryption != null && var.encryption.encrypt_boot ? var.encryption.kms_key_self_link : null
}
dynamic network_interface {
@ -121,7 +135,7 @@ resource "google_compute_instance" "default" {
iterator = nat_addresses
content {
nat_ip = nat_addresses.value == null ? null : (
length(nat_addresses.value) == 0 ? null : nat_addresses.value[each.value]
length(nat_addresses.value) == 0 ? null : nat_addresses.value.external[each.value]
)
}
}
@ -154,6 +168,16 @@ resource "google_compute_instance" "default" {
}
resource "google_compute_instance_iam_binding" "default" {
for_each = local.iam_roles
project = var.project_id
zone = var.zone
instance_name = each.value.name
role = each.value.role
members = lookup(var.iam_members, each.value.role, [])
depends_on = [google_compute_instance.default]
}
resource "google_compute_instance_template" "default" {
count = var.use_instance_template ? 1 : 0
project = var.project_id

View File

@ -60,6 +60,16 @@ variable "boot_disk" {
}
}
variable "encryption" {
description = "Encryption options. Only one of kms_key_self_link and disk_encryption_key_raw may be set; encrypt_boot controls whether the boot disk is also encrypted."
type = object({
encrypt_boot = bool
disk_encryption_key_raw = string
kms_key_self_link = string
})
default = null
}
variable "group" {
description = "Define this variable to create an instance group for instances. Disabled for template use."
type = object({
@ -74,6 +84,18 @@ variable "hostname" {
default = null
}
variable "iam_members" {
description = "Map of member lists used to set authoritative bindings, keyed by role. Ignored for template use."
type = map(list(string))
default = {}
}
variable "iam_roles" {
description = "List of roles used to set authoritative bindings. Ignored for template use."
type = list(string)
default = []
}
variable "instance_count" {
description = "Number of instances to create (only for non-template usage)."
type = number

View File

@ -0,0 +1,34 @@
# Google Cloud Container Registry Module
This module simplifies the creation of GCS buckets used by Google Container Registry.
## Example
```hcl
module "container_registry" {
source = "../../modules/container-registry"
project_id = "myproject"
location = "EU"
iam_roles = ["roles/storage.admin"]
iam_members = {
"roles/storage.admin" = ["group:cicd@example.com"]
}
}
```
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---: |:---:|:---:|
| project_id | Registry project id. | <code title="">string</code> | ✓ | |
| *iam_members* | Map of member lists used to set authoritative bindings, keyed by role. | <code title="map&#40;list&#40;string&#41;&#41;">map(list(string))</code> | | <code title="">{}</code> |
| *iam_roles* | List of roles used to set authoritative bindings. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *location* | Registry location. Can be US, EU, ASIA or empty. | <code title="">string</code> | | <code title=""></code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| bucket_id | ID of the GCS bucket created. | |
<!-- END TFDOC -->

View File

@ -0,0 +1,27 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
resource "google_container_registry" "registry" {
project = var.project_id
location = var.location
}
resource "google_storage_bucket_iam_binding" "bindings" {
for_each = toset(var.iam_roles)
bucket = google_container_registry.registry.id
role = each.value
members = lookup(var.iam_members, each.value, [])
}

View File

@ -0,0 +1,20 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
output "bucket_id" {
description = "ID of the GCS bucket created."
value = google_container_registry.registry.id
}

View File

@ -0,0 +1,38 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "iam_members" {
description = "Map of member lists used to set authoritative bindings, keyed by role."
type = map(list(string))
default = {}
}
variable "iam_roles" {
description = "List of roles used to set authoritative bindings."
type = list(string)
default = []
}
variable "location" {
description = "Registry location. Can be US, EU, ASIA or empty."
type = string
default = ""
}
variable "project_id" {
description = "Registry project id."
type = string
}

View File

@ -0,0 +1,19 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
terraform {
required_version = ">= 0.12.6"
}

View File

@ -0,0 +1,63 @@
# Google Cloud Data Fusion Module
This module allows simple management of [Google Data Fusion](https://cloud.google.com/data-fusion) instances. It supports creating Basic or Enterprise, public or private instances.
## Examples
### Auto-managed IP allocation
```hcl
module "datafusion" {
source = "./modules/datafusion"
name = "my-datafusion"
region = "europe-west1"
project_id = "my-project"
network = "my-network-name"
}
```
### Externally managed IP allocation
```hcl
module "datafusion" {
source = "./modules/datafusion"
name = "my-datafusion"
region = "europe-west1"
project_id = "my-project"
network = "my-network-name"
ip_allocation_create = false
ip_allocation = "10.0.0.0/22"
}
```
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---: |:---:|:---:|
| name | Name of the DataFusion instance. | <code title="">string</code> | ✓ | |
| network | Name of the network in the project with which the tenant project will be peered for executing pipelines. | <code title="">string</code> | ✓ | |
| project_id | Project ID. | <code title="">string</code> | ✓ | |
| region | DataFusion region. | <code title="">string</code> | ✓ | |
| *description* | DataFusion instance description. | <code title="">string</code> | | <code title="">Terraform managed.</code> |
| *enable_stackdriver_logging* | Option to enable Stackdriver Logging. | <code title="">bool</code> | | <code title="">false</code> |
| *enable_stackdriver_monitoring* | Option to enable Stackdriver Monitoring. | <code title="">bool</code> | | <code title="">false</code> |
| *firewall_create* | Create Network firewall rules to enable SSH. | <code title="">bool</code> | | <code title="">true</code> |
| *ip_allocation* | IP range allocated to the Data Fusion instance, used when the range is created outside of this module. | <code title="">string</code> | | <code title="">null</code> |
| *ip_allocation_create* | Create IP range for the Data Fusion instance. | <code title="">bool</code> | | <code title="">true</code> |
| *labels* | The resource labels for instance to use to annotate any related underlying resources, such as Compute Engine VMs. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="">{}</code> |
| *network_peering* | Create Network peering between project and DataFusion tenant project. | <code title="">bool</code> | | <code title="">true</code> |
| *private_instance* | Create private instance. | <code title="">bool</code> | | <code title="">true</code> |
| *type* | Data Fusion instance type. It can be BASIC or ENTERPRISE (default value). | <code title="">string</code> | | <code title="">ENTERPRISE</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| id | DataFusion instance ID. | |
| ip_allocation | IP range reserved for Data Fusion instance in case of a private instance. | |
| resource | DataFusion resource. | |
| service_account | DataFusion Service Account. | |
| service_endpoint | DataFusion Service Endpoint. | |
| version | DataFusion version. | |
<!-- END TFDOC -->
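
As a further illustration, a hedged sketch of a Basic public instance — instance, project and network names are placeholders, and all other variables keep their defaults:

```hcl
module "datafusion" {
  source           = "./modules/datafusion"
  name             = "dev-datafusion"
  region           = "europe-west1"
  project_id       = "my-project"
  network          = "my-network-name"
  type             = "BASIC"
  private_instance = false
  network_peering  = false
}
```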

View File

@ -0,0 +1,79 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
locals {
prefix_length = 22
ip_allocation = (
var.ip_allocation_create
? "${google_compute_global_address.default[0].address}/${local.prefix_length}"
: var.ip_allocation
)
tenant_project = regex(
"cloud-datafusion-management-sa@([\\w-]+).iam.gserviceaccount.com",
google_data_fusion_instance.default.service_account
)[0]
}
resource "google_compute_global_address" "default" {
count = var.ip_allocation_create ? 1 : 0
project = var.project_id
name = "cdf-${var.name}"
address_type = "INTERNAL"
purpose = "VPC_PEERING"
prefix_length = local.prefix_length
network = var.network
}
resource "google_compute_network_peering" "default" {
count = var.network_peering ? 1 : 0
name = "cdf-${var.name}"
network = "projects/${var.project_id}/global/networks/${var.network}"
peer_network = "projects/${local.tenant_project}/global/networks/${var.region}-${google_data_fusion_instance.default.name}"
export_custom_routes = true
import_custom_routes = true
}
resource "google_compute_firewall" "default" {
count = var.firewall_create ? 1 : 0
name = "${var.name}-allow-ssh"
project = var.project_id
network = var.network
source_ranges = [local.ip_allocation]
target_tags = ["${var.name}-allow-ssh"]
allow {
protocol = "tcp"
ports = ["22"]
}
}
resource "google_data_fusion_instance" "default" {
provider = google-beta
project = var.project_id
name = var.name
type = var.type
description = var.description
labels = var.labels
region = var.region
private_instance = var.private_instance
enable_stackdriver_logging = var.enable_stackdriver_logging
enable_stackdriver_monitoring = var.enable_stackdriver_monitoring
network_config {
network = var.network
ip_allocation = local.ip_allocation
}
}

View File

@ -0,0 +1,45 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
output "id" {
description = "DataFusion instance ID."
value = google_data_fusion_instance.default.id
}
output "ip_allocation" {
description = "IP range reserved for Data Fusion instance in case of a private instance."
value = local.ip_allocation
}
output "resource" {
description = "DataFusion resource."
value = google_data_fusion_instance.default
}
output "service_account" {
description = "DataFusion Service Account."
value = google_data_fusion_instance.default.service_account
}
output "service_endpoint" {
description = "DataFusion Service Endpoint."
value = google_data_fusion_instance.default.service_endpoint
}
output "version" {
description = "DataFusion version."
value = google_data_fusion_instance.default.version
}

View File

@ -0,0 +1,99 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
###############################################################################
# DataFusion variables #
###############################################################################
variable "description" {
description = "DataFusion instance description."
type = string
default = "Terraform managed."
}
variable "enable_stackdriver_logging" {
description = "Option to enable Stackdriver Logging."
type = bool
default = false
}
variable "enable_stackdriver_monitoring" {
description = "Option to enable Stackdriver Monitoring."
type = bool
default = false
}
variable "labels" {
description = "The resource labels for instance to use to annotate any related underlying resources, such as Compute Engine VMs."
type = map(string)
default = {}
}
variable "name" {
description = "Name of the DataFusion instance."
type = string
}
variable "network" {
description = "Name of the network in the project with which the tenant project will be peered for executing pipelines."
type = string
}
variable "firewall_create" {
description = "Create Network firewall rules to enable SSH."
type = bool
default = true
}
variable "network_peering" {
description = "Create Network peering between project and DataFusion tenant project."
type = bool
default = true
}
variable "private_instance" {
description = "Create private instance."
type = bool
default = true
}
variable "project_id" {
description = "Project ID."
type = string
}
variable "region" {
description = "DataFusion region."
type = string
}
variable "ip_allocation_create" {
description = "Create IP range for the Data Fusion instance."
type = bool
default = true
}
variable "ip_allocation" {
description = "IP range allocated to the Data Fusion instance, used when the range is created outside of this module."
type = string
default = null
}
variable "type" {
description = "Data Fusion instance type. It can be BASIC or ENTERPRISE (default value)."
type = string
default = "ENTERPRISE"
}

View File

@ -0,0 +1,19 @@
/**
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
terraform {
required_version = ">= 0.12.6"
}


@ -1,6 +1,8 @@
# Google Cloud DNS Module
This module allows simple management of Google Cloud DNS zones and records. It supports creating public, private, forwarding, and peering zones. For DNSSEC configuration, refer to the [`dns_managed_zone` documentation](https://www.terraform.io/docs/providers/google/r/dns_managed_zone.html#dnssec_config).
This module allows simple management of Google Cloud DNS zones and records. It supports creating public, private, forwarding, peering and service directory based zones.
For DNSSEC configuration, refer to the [`dns_managed_zone` documentation](https://www.terraform.io/docs/providers/google/r/dns_managed_zone.html#dnssec_config).
## Example
@ -32,14 +34,16 @@ module "private-dns" {
| *description* | Domain description. | <code title="">string</code> | | <code title="">Terraform managed.</code> |
| *dnssec_config* | DNSSEC configuration: kind, non_existence, state. | <code title="">any</code> | | <code title="">{}</code> |
| *forwarders* | List of target name servers, only valid for 'forwarding' zone types. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *peer_network* | Peering network self link, only valid for 'peering' zone types. | <code title="">string</code> | | <code title=""></code> |
| *peer_network* | Peering network self link, only valid for 'peering' zone types. | <code title="">string</code> | | <code title="">null</code> |
| *recordsets* | List of DNS record objects to manage. | <code title="list&#40;object&#40;&#123;&#10;name &#61; string&#10;type &#61; string&#10;ttl &#61; number&#10;records &#61; list&#40;string&#41;&#10;&#125;&#41;&#41;">list(object({...}))</code> | | <code title="">[]</code> |
| *type* | Type of zone to create, valid values are 'public', 'private', 'forwarding', 'peering'. | <code title="">string</code> | | <code title="">private</code> |
| *service_directory_namespace* | Service directory namespace id (URL), only valid for 'service-directory' zone types. | <code title="">string</code> | | <code title="">null</code> |
| *type* | Type of zone to create, valid values are 'public', 'private', 'forwarding', 'peering', 'service-directory'. | <code title="">string</code> | | <code title="">private</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| dns_keys | DNSKEY and DS records of DNSSEC-signed managed zones. | |
| domain | The DNS zone domain. | |
| name | The DNS zone name. | |
| name_servers | The DNS zone name servers. | |
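
To sketch the new zone types documented above, a peering zone might be configured as follows; variable names match the table above, the `source` path and `domain` value are assumptions:

```hcl
# Illustrative peering zone; project and network values are placeholders.
module "peering-dns" {
  source       = "../../modules/dns"
  project_id   = "my-project"
  type         = "peering"
  name         = "peering-zone"
  domain       = "example.com."
  peer_network = "projects/peer-project/global/networks/peer-vpc"
}
```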


@ -15,7 +15,6 @@
*/
locals {
is_static_zone = var.type == "public" || var.type == "private"
recordsets = var.recordsets == null ? {} : {
for record in var.recordsets :
join("/", [record.name, record.type]) => record
@ -25,6 +24,9 @@ locals {
google_dns_managed_zone.public.0, null
)
)
dns_keys = try(
data.google_dns_keys.dns_keys.0, null
)
}
resource "google_dns_managed_zone" "non-public" {
@ -38,14 +40,11 @@ resource "google_dns_managed_zone" "non-public" {
dynamic forwarding_config {
for_each = (
var.type == "forwarding" && var.forwarders != null
? { config = var.forwarders }
: {}
var.type == "forwarding" && var.forwarders != null ? [""] : []
)
iterator = config
content {
dynamic "target_name_servers" {
for_each = config.value
for_each = var.forwarders
iterator = address
content {
ipv4_address = address.value
@ -56,14 +55,11 @@ resource "google_dns_managed_zone" "non-public" {
dynamic peering_config {
for_each = (
var.type == "peering" && var.peer_network != null
? { config = var.peer_network }
: {}
var.type == "peering" && var.peer_network != null ? [""] : []
)
iterator = config
content {
target_network {
network_url = config.value
network_url = var.peer_network
}
}
}
@ -78,6 +74,19 @@ resource "google_dns_managed_zone" "non-public" {
}
}
dynamic service_directory_config {
for_each = (
var.type == "service-directory" && var.service_directory_namespace != null
? [""]
: []
)
content {
namespace {
namespace_url = var.service_directory_namespace
}
}
}
}
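
The `? [""] : []` expressions above implement a common Terraform idiom for optional nested blocks: `for_each` over a single-element list renders the block once, while an empty list suppresses it entirely. In isolation, with an example resource chosen purely for illustration:

```hcl
variable "enable_logging" {
  type    = bool
  default = false
}

resource "google_compute_subnetwork" "example" {
  name          = "example"
  network       = "default"
  ip_cidr_range = "10.0.0.0/24"
  region        = "europe-west1"

  # emit the log_config block only when logging is enabled
  dynamic "log_config" {
    for_each = var.enable_logging ? [""] : []
    content {
      aggregation_interval = "INTERVAL_10_MIN"
    }
  }
}
```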
resource "google_dns_managed_zone" "public" {
@ -113,6 +122,11 @@ resource "google_dns_managed_zone" "public" {
}
data "google_dns_keys" "dns_keys" {
count = var.dnssec_config == {} || var.type != "public" ? 0 : 1
managed_zone = google_dns_managed_zone.public.0.id
}
resource "google_dns_record_set" "cloud-static-records" {
for_each = (
var.type == "public" || var.type == "private"


@ -38,3 +38,8 @@ output "name_servers" {
description = "The DNS zone name servers."
value = try(local.zone.name_servers, null)
}
output "dns_keys" {
description = "DNSKEY and DS records of DNSSEC-signed managed zones."
value = local.dns_keys
}


@ -30,9 +30,6 @@ variable "description" {
default = "Terraform managed."
}
# TODO(ludoo): add link to DNSSEC documentation in README
# https://www.terraform.io/docs/providers/google/r/dns_managed_zone.html#dnssec_config
variable "default_key_specs_key" {
description = "DNSSEC default key signing specifications: algorithm, key_length, key_type, kind."
type = any
@ -71,7 +68,7 @@ variable "name" {
variable "peer_network" {
description = "Peering network self link, only valid for 'peering' zone types."
type = string
default = ""
default = null
}
variable "project_id" {
@ -90,8 +87,14 @@ variable "recordsets" {
default = []
}
variable "service_directory_namespace" {
description = "Service directory namespace id (URL), only valid for 'service-directory' zone types."
type = string
default = null
}
variable "type" {
description = "Type of zone to create, valid values are 'public', 'private', 'forwarding', 'peering'."
description = "Type of zone to create, valid values are 'public', 'private', 'forwarding', 'peering', 'service-directory'."
type = string
default = "private"
}


@ -15,5 +15,9 @@
*/
terraform {
required_version = ">= 0.12.6"
required_version = ">= 0.12.20"
required_providers {
google = "~> 3.10"
google-beta = "~> 3.20"
}
}


@ -0,0 +1,44 @@
# Google Cloud Endpoints
This module allows simple management of ['Google Cloud Endpoints'](https://cloud.google.com/endpoints/) services. It supports creating ['OpenAPI'](https://cloud.google.com/endpoints/docs/openapi) or ['gRPC'](https://cloud.google.com/endpoints/docs/grpc/about-grpc) endpoints.
## Examples
### OpenAPI
```hcl
module "endpoint" {
source = "../../modules/endpoint"
project_id = "my-project"
service_name = "YOUR-API.endpoints.YOUR-PROJECT-ID.cloud.goog"
openapi_config = { "yaml_path" = "openapi.yaml" }
grpc_config = null
iam_roles = ["servicemanagement.serviceController"]
iam_members = {
"servicemanagement.serviceController" = ["serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com"]
}
}
```
[Here](https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/endpoints/getting-started/openapi.yaml) you can find an example of an openapi.yaml file. Once the endpoint is created, remember to activate the service at the project level.
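Activating the service can itself be managed in Terraform via the `google_project_service` resource; a minimal sketch, assuming the module instance and output names shown in this README:

```hcl
# Enable the created endpoint service on the project.
resource "google_project_service" "endpoint" {
  project = "my-project"
  service = module.endpoint.service_name
}
```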
<!-- BEGIN TFDOC -->
## Variables
| name | description | type | required | default |
|---|---|:---: |:---:|:---:|
| grpc_config | The configuration for a gRPC endpoint. Either this or openapi_config must be specified. | <code title="object&#40;&#123;&#10;yaml_path &#61; string&#10;protoc_output_path &#61; string&#10;&#125;&#41;">object({...})</code> | ✓ | |
| openapi_config | The configuration for an OpenAPI endpoint. Either this or grpc_config must be specified. | <code title="object&#40;&#123;&#10;yaml_path &#61; string&#10;&#125;&#41;">object({...})</code> | ✓ | |
| service_name | The name of the service. Usually of the form '$apiname.endpoints.$projectid.cloud.goog'. | <code title="">string</code> | ✓ | |
| *iam_members* | Authoritative for a given role. Updates the IAM policy to grant a role to a list of members. Other roles within the IAM policy for the instance are preserved. | <code title="map&#40;list&#40;string&#41;&#41;">map(list(string))</code> | | <code title="">{}</code> |
| *iam_roles* | Authoritative for a given role. Updates the IAM policy to grant a role to a list of members. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *project_id* | The project ID that the service belongs to. | <code title="">string</code> | | <code title="">null</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| endpoints | A list of Endpoint objects. | |
| endpoints_service | The Endpoint service resource. | |
| service_name | The name of the service. | |
<!-- END TFDOC -->
