More updates 2

Julio Castillo 2022-09-09 16:40:37 +02:00
parent 834c5d53c9
commit 0ee32be2fa
30 changed files with 126 additions and 124 deletions

View File

@ -4,7 +4,7 @@ The blueprints in this folder show how to wire together different Google Cloud s
## Resource tracking and remediation via Cloud Asset feeds
<a href="./asset-inventory-feed-remediation" title="Resource tracking and remediation via Cloud Asset feeds"><img src="./asset-inventory-feed-remediation/diagram.png" align="left" width="280px"></a> This [example](./asset-inventory-feed-remediation) shows how to leverage [Cloud Asset Inventory feeds](https://cloud.google.com/asset-inventory/docs/monitoring-asset-changes) to stream resource changes in real time, and how to programmatically use the feed change notifications for alerting or remediation, via a Cloud Function wired to the feed PubSub queue.
<a href="./asset-inventory-feed-remediation" title="Resource tracking and remediation via Cloud Asset feeds"><img src="./asset-inventory-feed-remediation/diagram.png" align="left" width="280px"></a> This [blueprint](./asset-inventory-feed-remediation) shows how to leverage [Cloud Asset Inventory feeds](https://cloud.google.com/asset-inventory/docs/monitoring-asset-changes) to stream resource changes in real time, and how to programmatically use the feed change notifications for alerting or remediation, via a Cloud Function wired to the feed PubSub queue.
The blueprint's feed tracks changes to Google Compute instances, and the Cloud Function enforces policy compliance on each change so that tags match a set of simple rules. The obvious use case is when instance tags are used to scope firewall rules, but the blueprint can easily be adapted to suit different use cases.

View File

@ -1,6 +1,6 @@
# AD FS
This example does the following:
This blueprint does the following:
Terraform:
@ -18,11 +18,11 @@ Ansible:
In addition to this, we also include a Powershell script that facilitates the configuration required for Anthos when authenticating users with AD FS as IdP.
The diagram below depicts the architecture of the example:
The diagram below depicts the architecture of the blueprint:
![Architecture](architecture.png)
## Running the example
## Running the blueprint
Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fcloud-foundation-fabric&cloudshell_print=cloud-shell-readme.txt&cloudshell_working_dir=blueprints%2Fcloud-operations%2Fadfs), then go through the following steps to create resources:
@ -36,7 +36,7 @@ Once the resources have been created, do the following:
ansible-playbook playbook.yaml
# Testing the example
# Testing the blueprint
1. In your browser open the following URL:

View File

@ -1,6 +1,6 @@
# Cloud Asset Inventory feeds for resource change tracking and remediation
This example shows how to leverage [Cloud Asset Inventory feeds](https://cloud.google.com/asset-inventory/docs/monitoring-asset-changes) to stream resource changes in real time, and how to programmatically react to changes by wiring a Cloud Function to the feed outputs.
This blueprint shows how to leverage [Cloud Asset Inventory feeds](https://cloud.google.com/asset-inventory/docs/monitoring-asset-changes) to stream resource changes in real time, and how to programmatically react to changes by wiring a Cloud Function to the feed outputs.
The Cloud Function can then be used for different purposes:
@ -9,25 +9,25 @@ The Cloud Function can then be used for different purposes:
- adapting the configuration of separate related resources
- implementing remediation steps that enforce policy compliance by tweaking or reverting the changes.
A [companion Medium article](https://medium.com/google-cloud/using-cloud-asset-inventory-feeds-for-dynamic-configuration-and-policy-enforcement-c37b6a590c49) has been published for this example, refer to it for more details on the context and the specifics of running the example.
A [companion Medium article](https://medium.com/google-cloud/using-cloud-asset-inventory-feeds-for-dynamic-configuration-and-policy-enforcement-c37b6a590c49) has been published for this blueprint; refer to it for more details on the context and the specifics of running the blueprint.
This example shows a simple remediation use case: how to enforce policies on instance tags and revert non-compliant changes in near-real time, thus adding an additional measure of control when using tags for firewall rule scoping. Changing the [monitored asset](https://cloud.google.com/asset-inventory/docs/supported-asset-types) and the function logic allows simple adaptation to other common use cases:
This blueprint shows a simple remediation use case: how to enforce policies on instance tags and revert non-compliant changes in near-real time, thus adding an additional measure of control when using tags for firewall rule scoping. Changing the [monitored asset](https://cloud.google.com/asset-inventory/docs/supported-asset-types) and the function logic allows simple adaptation to other common use cases:
- enforcing a centrally defined Cloud Armor policy in backend services
- creating custom DNS records for instances or forwarding rules
The example uses a single project for ease of testing, in actual use a few changes are needed to operate at the resource hierarchy level:
The blueprint uses a single project for ease of testing; in actual use, a few changes are needed to operate at the resource hierarchy level (a minimal sketch of the folder-level feed follows the list):
- the feed should be set at the folder or organization level
- the custom role used to assign tag changing permissions should be defined at the organization level
- the role binding that grants the custom role to the Cloud Function service account should be set at the same level as the feed (folder or organization)
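As a rough illustration of the first point, a folder-level feed can be declared directly with the `google_cloud_asset_folder_feed` resource. The snippet below is only a minimal sketch with hypothetical project, folder and topic names, not the blueprint's actual code:

```hcl
# Minimal sketch of a folder-level feed (hypothetical project, folder and
# topic names; the blueprint manages the project-level equivalent itself).
resource "google_pubsub_topic" "feed_output" {
  project = "my-ops-project"
  name    = "asset-feed-instances"
}

resource "google_cloud_asset_folder_feed" "instance_changes" {
  billing_project = "my-ops-project"
  folder          = "folders/1234567890"
  feed_id         = "compute-instance-changes"
  content_type    = "RESOURCE"
  asset_types     = ["compute.googleapis.com/Instance"]

  feed_output_config {
    pubsub_destination {
      topic = google_pubsub_topic.feed_output.id
    }
  }
}
```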
The resources created in this example are shown in the high level diagram below:
The resources created in this blueprint are shown in the high level diagram below:
<img src="diagram.png" width="640px">
## Running the example
## Running the blueprint
Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fcloud-foundation-fabric&cloudshell_print=cloud-shell-readme.txt&cloudshell_working_dir=blueprints%2Fcloud-operations%2Fasset-inventory-feed-remediation), then go through the following steps to create resources:
@ -36,9 +36,9 @@ Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/c
Once done testing, you can clean up resources by running `terraform destroy`. To persist state, check out the `backend.tf.sample` file.
## Testing the example
## Testing the blueprint
The terraform outputs generate preset `gcloud` commands that you can copy and run in the console, to complete configuration and test the example:
The terraform outputs generate preset `gcloud` commands that you can copy and run in the console, to complete configuration and test the blueprint:
- `subscription_pull` shows messages in the PubSub queue, to check feed message format if the Cloud Function is disabled
- `cf_logs` shows Cloud Function logs to check that remediation works
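These outputs are just strings interpolated from the managed resources. A minimal sketch of how such presets could be defined is shown below; the variable names are hypothetical stand-ins for the blueprint's own wiring:

```hcl
# Hypothetical inputs standing in for the blueprint's own resources.
variable "feed_subscription_id" {
  description = "Full ID of the feed PubSub subscription."
  type        = string
}

variable "function_name" {
  description = "Name of the remediation Cloud Function."
  type        = string
}

variable "region" {
  type = string
}

output "subscription_pull" {
  description = "Pull messages straight from the feed subscription."
  value       = "gcloud pubsub subscriptions pull ${var.feed_subscription_id} --auto-ack --limit 10"
}

output "cf_logs" {
  description = "Read the remediation Cloud Function logs."
  value       = "gcloud functions logs read ${var.function_name} --region ${var.region}"
}
```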

View File

@ -1,8 +1,8 @@
# Binary Authorization
The following example shows to how to create a CI and a CD pipeline in Cloud Build for the deployment of an application to a private GKE cluster with unrestricted access to a public endpoint. The example enables a Binary Authorization policy in the project so only images that have been attested can be deployed to the cluster. The attestations are created using a cryptographic key pair that has been provisioned in KMS.
The following blueprint shows how to create a CI and a CD pipeline in Cloud Build for the deployment of an application to a private GKE cluster with unrestricted access to a public endpoint. The blueprint enables a Binary Authorization policy in the project so only images that have been attested can be deployed to the cluster. The attestations are created using a cryptographic key pair that has been provisioned in KMS.
The diagram below depicts the architecture used in the example.
The diagram below depicts the architecture used in the blueprint.
![Architecture](diagram.png)
@ -15,16 +15,16 @@ The CI pipeline does the following:
The CD pipeline deploys the application to the cluster.
## Running the example
## Running the blueprint
Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fcloud-foundation-fabric&cloudshell_print=cloud-shell-readme.txt&cloudshell_working_dir=blueprints%2Fcloud-operations%2Fbinauthz), then go through the following steps to create resources:
* `terraform init`
* `terraform apply -var project_id=my-project-id`
WARNING: The example requires the activation of the Binary Authorization API. That API does not support authentication with user credentials. A service account will need to be used to run the example
WARNING: The blueprint requires the activation of the Binary Authorization API. That API does not support authentication with user credentials, so a service account is needed to run the blueprint.
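One way to run Terraform as a service account without handling keys is provider-level impersonation, sketched below with a placeholder service account email (your principal needs `roles/iam.serviceAccountTokenCreator` on that account):

```hcl
# Run Terraform as a dedicated service account via impersonation
# (placeholder email, not part of the blueprint's configuration).
provider "google" {
  impersonate_service_account = "terraform@my-project-id.iam.gserviceaccount.com"
}
```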
## Testing the example
## Testing the blueprint
Once the resources have been created, do the following to verify that everything works as expected.

View File

@ -2,18 +2,18 @@
## Usage
This example shows how to create reusable and modular Cloud DNS architectures when using Shared VPC.
This blueprint shows how to create reusable and modular Cloud DNS architectures when using Shared VPC.
The goal is to provision dedicated Cloud DNS instances for application teams that want to manage their own DNS records, and configure DNS peering to ensure name resolution works in a common Shared VPC.
The example will:
The blueprint will:
- Create a GCP project per application team based on the `teams` input variable
- Create a VPC and Cloud DNS instance per application team
- Create a Cloud DNS private zone per application team in the form of `[teamname].[dns_domain]`, with `teamname` and `dns_domain` based on input variables
- Configure DNS peering for each private zone from the Shared VPC to the DNS VPC of each application team
The resources created in this example are shown in the high level diagram below:
The resources created in this blueprint are shown in the high level diagram below:
<img src="diagram.png" width="640px">
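For reference, a single team's peering zone can be sketched roughly as follows; names and network self links are placeholders, and the blueprint builds the equivalent through the Fabric DNS module:

```hcl
# One peering zone per team: the zone is visible to the Shared VPC and
# forwards resolution for the team subdomain to the team's DNS VPC
# (hypothetical names and self links).
resource "google_dns_managed_zone" "team_a_peering" {
  project    = "shared-vpc-host-project"
  name       = "team-a-peering"
  dns_name   = "team-a.example.com."
  visibility = "private"

  private_visibility_config {
    networks {
      network_url = "https://www.googleapis.com/compute/v1/projects/shared-vpc-host-project/global/networks/shared-vpc"
    }
  }

  peering_config {
    target_network {
      network_url = "https://www.googleapis.com/compute/v1/projects/team-a-project/global/networks/team-a-dns-vpc"
    }
  }
}
```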

View File

@ -1,13 +1,13 @@
# Delegated Role Grants
This example shows two applications of [delegated role grants](https://cloud.google.com/iam/docs/setting-limits-on-granting-roles):
This blueprint shows two applications of [delegated role grants](https://cloud.google.com/iam/docs/setting-limits-on-granting-roles):
- how to use them to restrict service usage in a GCP project
- how to use them to allow administrative access to a service via a predefined role, while restricting administrators from minting other admins.
## Restricting service usage
In its default configuration, the example provisions two sets of permissions:
In its default configuration, the blueprint provisions two sets of permissions:
- the roles listed in `direct_role_grants` will be granted unconditionally to the users listed in `project_administrators`.
- additionally, `project_administrators` will be granted the role `roles/resourcemanager.projectIamAdmin` in a restricted fashion, allowing them to only grant the roles listed in `delegated_role_grants` to other users.
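Under the hood, the restricted grant relies on an IAM condition on the `modifiedGrantsByRole` API attribute. A minimal hand-written equivalent, with a hypothetical principal and role list, would look like this:

```hcl
# Grant project IAM admin, but only for a fixed set of roles
# (hypothetical principal and role list).
resource "google_project_iam_member" "delegated_admin" {
  project = "my-project-id"
  role    = "roles/resourcemanager.projectIamAdmin"
  member  = "user:admin@example.org"

  condition {
    title       = "delegated_role_grants"
    description = "Restrict grants to an allowed list of roles."
    expression  = "api.getAttribute('iam.googleapis.com/modifiedGrantsByRole', []).hasOnly(['roles/storage.admin', 'roles/bigquery.dataViewer'])"
  }
}
```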
@ -19,13 +19,13 @@ This diagram shows the resources and expected behaviour:
<img src="diagram.png" width="572px">
A [Medium article](https://medium.com/@jccb/managing-gcp-service-usage-through-delegated-role-grants-a843610f2226) has been published for this example, refer to it for more details on the context and the specifics of running the example.
A [Medium article](https://medium.com/@jccb/managing-gcp-service-usage-through-delegated-role-grants-a843610f2226) has been published for this blueprint; refer to it for more details on the context and the specifics of running the blueprint.
## Restricting a predefined role
By changing the `restricted_role_grant`, the example can be used to grant administrators a predefined role like `roles/compute.networkAdmin`, which allows setting IAM policies on service resources like subnetworks, but restrict the roles that those administrators are able to confer to other users.
By changing the `restricted_role_grant`, the blueprint can be used to grant administrators a predefined role like `roles/compute.networkAdmin`, which allows setting IAM policies on service resources like subnetworks, but restrict the roles that those administrators are able to confer to other users.
You can easily configure the example for this use case:
You can easily configure the blueprint for this use case:
```hcl
# terraform.tfvars
@ -40,7 +40,7 @@ This diagram shows the resources and expected behaviour:
<img src="diagram-2.png" width="572px">
## Running the example
## Running the blueprint
Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fcloud-foundation-fabric&cloudshell_print=cloud-shell-readme.txt&cloudshell_working_dir=blueprints%2Fcloud-operations%2Fiam-delegated-role-grants), then go through the following steps to create resources:
@ -51,7 +51,7 @@ Once done testing, you can clean up resources by running `terraform destroy`.
## Auditing Roles
This example includes a python script that audits a list of roles to ensure you're not granting the `setIamPolicy` permission at the project, folder or organization level. To audit all the predefined compute roles, run it like this:
This blueprint includes a python script that audits a list of roles to ensure you're not granting the `setIamPolicy` permission at the project, folder or organization level. To audit all the predefined compute roles, run it like this:
```bash
pip3 install -r requirements.txt

View File

@ -1,8 +1,8 @@
# Multi-cluster mesh on GKE (fleet API)
The following example shows how to create a multi-cluster mesh for two private clusters on GKE. Anthos Service Mesh with automatic control plane management is set up for clusters using the Fleet API. This can only be done if the clusters are in a single project and in the same VPC. In this particular case both clusters having being deployed to different subnets in a shared VPC.
The following blueprint shows how to create a multi-cluster mesh for two private clusters on GKE. Anthos Service Mesh with automatic control plane management is set up for clusters using the Fleet API. This can only be done if the clusters are in a single project and in the same VPC. In this particular case both clusters have been deployed to different subnets of a shared VPC.
The diagram below depicts the architecture of the example.
The diagram below depicts the architecture of the blueprint.
![Architecture](architecture.png)
@ -22,7 +22,7 @@ Ansible is used to execute commands in the management VM. From this VM there is
10. Deploy a sleep service in both clusters.
11. Send requests from a sleep pod to the hello-world service from both clusters, to verify that we get responses from alternative versions.
## Running the example
## Running the blueprint
Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fcloud-foundation-fabric&cloudshell_print=cloud-shell-readme.txt&cloudshell_working_dir=blueprints%2Fcloud-operations%2Fmulti-cluster-mesh-gke-fleet-api), then go through the following steps to create resources:
@ -40,7 +40,7 @@ Once terraform completes do the following:
ansible-playbook -v playbook.yaml
## Testing the example
## Testing the blueprint
The last two commands executed with Ansible send requests from a sleep pod to the hello-world service in both clusters. If the output of those two commands shows responses from alternative versions, everything works as expected.

View File

@ -3,7 +3,7 @@
This repository provides an end-to-end solution to gather some GCP Networking quotas and limits (that cannot be seen in the GCP console today) and display them in a dashboard.
The goal is to allow for better visibility of these limits, facilitating capacity planning and avoiding hitting these limits.
Here is an example of dashboard you can get with this solution:
Here is an example of the dashboard you can get with this solution:
<img src="metric.png" width="640px">
@ -48,7 +48,7 @@ It writes this values to custom metrics in Cloud Monitoring and creates a dashbo
Note that metrics are created in the cloud-function/metrics.yaml file.
You can also edit default limits for a specific network in that file. See the example for `vpc_peering_per_network`.
You can also edit default limits for a specific network in that file. See the example entry for `vpc_peering_per_network`.
## Next steps and ideas
In a future release, we could support:
@ -57,4 +57,4 @@ In a future release, we could support:
- Google managed VPCs that are peered with PSA (such as Cloud SQL or Memorystore)
- Subnet IP ranges utilization
If you are interested in this and/or would like to contribute, please contact legranda@google.com.
If you are interested in this and/or would like to contribute, please contact legranda@google.com.

View File

@ -2,18 +2,18 @@
When managing GCP service accounts with Terraform, a common question is **how to avoid storing service account keys in the Terraform state?**
This example shows how to manage IAM Service Account Keys by manually generating a key pair and uploading the public part of the key to GCP. It has the following benefits:
This blueprint shows how to manage IAM Service Account Keys by manually generating a key pair and uploading the public part of the key to GCP. It has the following benefits:
- no [passing keys between users](https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys#pass-between-users) or systems
- no private keys stored in the terraform state (only public part of the key is in the state)
- let keys [expire automatically](https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys#key-expiryhaving)
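The core of the approach is uploading only the public half of a locally generated key pair, so the private key never reaches GCP or the Terraform state. A minimal sketch with hypothetical names and file path:

```hcl
# Upload only the public part of a locally generated key pair
# (hypothetical account name and file path).
resource "google_service_account" "data_uploader" {
  project    = "my-project-id"
  account_id = "data-uploader"
}

resource "google_service_account_key" "data_uploader" {
  service_account_id = google_service_account.data_uploader.name
  public_key_data    = filebase64("public-keys/data-uploader/public_key.pem")
}
```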
## Running the example
## Running the blueprint
Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fcloud-foundation-fabric&cloudshell_print=cloud-shell-readme.txt&cloudshell_working_dir=blueprints%2Fcloud-operations%2Fonprem-sa-key-management&cloudshell_open_in_editor=cloudshell_open%2Fcloud-foundation-fabric%2Fblueprints%2Fcloud-operations%2Fonprem-sa-key-management%2Fvariables.tf), then go through the following steps to create resources:
Cleaning up example keys
Cleaning up blueprint keys
```bash
rm -f /public-keys/data-uploader/
rm -f /public-keys/prisma-security/
@ -49,7 +49,7 @@ contents=$(jq --arg key "$(cat keys/data_uploader_private_key.pem)" '.private_ke
contents=$(jq --arg key "$(cat keys/prisma_security_private_key.pem)" '.private_key=$key' prisma-security.json) && echo "$contents" > prisma-security.json
```
## Testing the example
## Testing the blueprint
Validate that the service accounts' JSON credentials are valid
```bash
gcloud auth activate-service-account --key-file prisma-security.json

View File

@ -1,11 +1,11 @@
# Compute Image builder with Hashicorp Packer
This example shows how to deploy infrastructure for a Compute Engine image builder based on
This blueprint shows how to deploy infrastructure for a Compute Engine image builder based on
[Hashicorp's Packer tool](https://www.packer.io).
![High-level diagram](diagram.png "High-level diagram")
## Running the example
## Running the blueprint
Prerequisite: [Packer](https://www.packer.io/downloads) version >= v1.7.0
@ -24,18 +24,18 @@ Building Compute Engine image (Packer part):
## Using Packer's service account
The following example leverages [service account impersonation](https://cloud.google.com/iam/docs/impersonating-service-accounts)
The following blueprint leverages [service account impersonation](https://cloud.google.com/iam/docs/impersonating-service-accounts)
to execute any operations on GCP as a dedicated Packer service account. Depending on how you execute
the Packer tool, you need to grant your principal rights to impersonate Packer's service account.
Set `packer_account_users` variable in Terraform configuration to grant roles required to impersonate
Packer's service account to selected IAM principals.
Example: allow default [Cloud Build](https://cloud.google.com/build) service account to impersonate
For example, allow the default [Cloud Build](https://cloud.google.com/build) service account to impersonate
Packer SA: `packer_account_users=["serviceAccount:myProjectNumber@cloudbuild.gserviceaccount.com"]`.
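In practice this boils down to a single variable value; the fragment below is a sketch with placeholder principals:

```hcl
# terraform.tfvars (placeholder principals)
packer_account_users = [
  "serviceAccount:123456789012@cloudbuild.gserviceaccount.com",
  "group:image-builders@example.org",
]
```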
## Configuring Packer
Provided Packer build example uses [HCL2 configuration files](https://www.packer.io/guides/hcl) and
The provided Packer build blueprint uses [HCL2 configuration files](https://www.packer.io/guides/hcl) and
requires configuration of some input variables *(i.e. service accounts emails)*.
Values of those variables can be taken from the Terraform outputs.
@ -55,7 +55,7 @@ IP addresses only, communication with this VM has to either:
* originate from the network routable on Packer's VPC *(i.e. peered VPC, over VPN or interconnect)*
* use [Identity-Aware Proxy](https://cloud.google.com/iap/docs/using-tcp-forwarding) tunnel
By default, this example assumes that IAP tunnel is needed to communicate with the temporary VM.
By default, this blueprint assumes that IAP tunnel is needed to communicate with the temporary VM.
This might be changed by setting `use_iap` variable to `false` in Terraform and Packer
configurations respectively.
@ -63,7 +63,7 @@ configurations respectively.
## Accessing resources over the Internet
The following example assumes that provisioning of a Compute Engine VM requires access to
The blueprint assumes that provisioning of a Compute Engine VM requires access to
the resources over the Internet (e.g. to install OS packages). Since the Compute Engine VM has no public IP
address for security reasons, Internet connectivity is done with [Cloud NAT](https://cloud.google.com/nat/docs/overview).
<!-- BEGIN TFDOC -->

View File

@ -1,10 +1,10 @@
# Compute Engine quota monitoring
This example improves on the [GCE quota exporter tool](https://github.com/GoogleCloudPlatform/professional-services/tree/master/tools/gce-quota-sync) (by the same author of this example), and shows a practical way of collecting and monitoring [Compute Engine resource quotas](https://cloud.google.com/compute/quotas) via Cloud Monitoring metrics as an alternative to the recently released [built-in quota metrics](https://cloud.google.com/monitoring/alerts/using-quota-metrics).
This blueprint improves on the [GCE quota exporter tool](https://github.com/GoogleCloudPlatform/professional-services/tree/master/tools/gce-quota-sync) (by the same author of this blueprint), and shows a practical way of collecting and monitoring [Compute Engine resource quotas](https://cloud.google.com/compute/quotas) via Cloud Monitoring metrics as an alternative to the recently released [built-in quota metrics](https://cloud.google.com/monitoring/alerts/using-quota-metrics).
Compared to the built-in metrics, it offers a simpler representation of quotas and quota ratios, which is especially useful in charts; it allows filtering or combining quotas across different projects regardless of their monitoring workspace; and it creates a default alerting policy without the need to interact directly with the monitoring API.
Regardless of its specific purpose, this example is also useful in showing how to manipulate and write time series to cloud monitoring. The resources it creates are shown in the high level diagram below:
Regardless of its specific purpose, this blueprint is also useful in showing how to manipulate and write time series to cloud monitoring. The resources it creates are shown in the high level diagram below:
<img src="diagram.png" width="640px" alt="GCP resource diagram">
@ -16,7 +16,7 @@ Quota time series are stored using a [custom metric](https://cloud.google.com/mo
The solution also creates a basic monitoring alert policy, to demonstrate how to raise alerts when any of the tracked quota ratios go over a predefined threshold.
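For reference, an alert policy of this kind can be sketched as follows; the custom metric type, names and threshold below are hypothetical placeholders rather than the blueprint's actual values:

```hcl
# Alert when a tracked quota ratio exceeds 75% (hypothetical metric type).
resource "google_monitoring_alert_policy" "quota_ratio" {
  project      = "my-project-id"
  display_name = "Compute quota ratio over threshold"
  combiner     = "OR"

  conditions {
    display_name = "quota ratio > 0.75"
    condition_threshold {
      filter          = "metric.type=\"custom.googleapis.com/quota/gce_quota_ratio\" AND resource.type=\"global\""
      comparison      = "COMPARISON_GT"
      threshold_value = 0.75
      duration        = "0s"
      aggregations {
        alignment_period   = "300s"
        per_series_aligner = "ALIGN_MEAN"
      }
    }
  }
}
```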
## Running the example
## Running the blueprint
Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fcloud-foundation-fabric&cloudshell_print=cloud-shell-readme.txt&cloudshell_working_dir=blueprints%2Fcloud-operations%2Fquota-monitoring), then go through the following steps to create resources:

View File

@ -1,18 +1,18 @@
# Scheduled Cloud Asset Inventory Export to Bigquery
This example shows how to leverage [Cloud Asset Inventory Exporting to Bigquery](https://cloud.google.com/asset-inventory/docs/exporting-to-bigquery) feature to keep track of your project wide assets over time storing information in Bigquery.
This blueprint shows how to leverage the [Cloud Asset Inventory export to BigQuery](https://cloud.google.com/asset-inventory/docs/exporting-to-bigquery) feature to keep track of your project-wide assets over time, storing the information in BigQuery.
The data stored in Bigquery can then be used for different purposes:
- dashboarding
- analysis
The example uses export resources at the project level for ease of testing, in actual use a few changes are needed to operate at the resource hierarchy level:
The blueprint uses export resources at the project level for ease of testing; in actual use, a few changes are needed to operate at the resource hierarchy level:
- the export should be set at the folder or organization level
- the `roles/cloudasset.viewer` on the service account should be set at the folder or organization level
The resources created in this example are shown in the high level diagram below:
The resources created in this blueprint are shown in the high level diagram below:
<img src="diagram.png" width="640px">
@ -23,7 +23,7 @@ Ensure that you grant your account one of the following roles on your project, f
- Cloud Asset Viewer role (`roles/cloudasset.viewer`)
- Owner primitive role (`roles/owner`)
## Running the example
## Running the blueprint
Clone this repository, specify your variables in a `terraform.tfvars` file, and then go through the following steps to create resources:
@ -32,9 +32,9 @@ Clone this repository, specify your variables in a `terraform.tvars` and then go
Once done testing, you can clean up resources by running `terraform destroy`. To persist state, check out the `backend.tf.sample` file.
## Testing the example
## Testing the blueprint
Once resources are created, you can run queries on the data you exported on Bigquery. [Here](https://cloud.google.com/asset-inventory/docs/exporting-to-bigquery#querying_an_asset_snapshot) you can find some example of queries you can run.
Once resources are created, you can run queries on the data you exported to BigQuery. [Here](https://cloud.google.com/asset-inventory/docs/exporting-to-bigquery#querying_an_asset_snapshot) you can find some examples of queries you can run.
You can also create a dashboard connecting [Datalab](https://datastudio.google.com/) or any other BI tools of your choice to your Bigquery dataset.

View File

@ -1,10 +1,10 @@
# TCP healthcheck and restart for unmanaged GCE instances
This example shows how to leverage [Serverless VPC Access](https://cloud.google.com/vpc/docs/configure-serverless-vpc-access) and Cloud Functions to organize a highly performant TCP healthcheck for unmanaged GCE instances. Healthchecker Cloud Function uses [goroutines](https://gobyexample.com/goroutines) to achieve parallel healthchecking for multiple instances and handles up to 1 thousand VMs checked in less than a second execution time.
This blueprint shows how to leverage [Serverless VPC Access](https://cloud.google.com/vpc/docs/configure-serverless-vpc-access) and Cloud Functions to organize a highly performant TCP healthcheck for unmanaged GCE instances. The healthchecker Cloud Function uses [goroutines](https://gobyexample.com/goroutines) to check multiple instances in parallel, and can handle up to a thousand VMs in less than a second of execution time.
**_NOTE:_** [Managed Instance Groups](https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs) has autohealing functionality out of the box, current example is more applicable for standalone VMs or VMs in an unmanaged instance group.
**_NOTE:_** [Managed Instance Groups](https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs) have autohealing functionality out of the box; this blueprint is more applicable to standalone VMs or VMs in an unmanaged instance group.
The example contains the following components:
The blueprint contains the following components:
- [Cloud Scheduler](https://cloud.google.com/scheduler) to initiate a healthcheck on a schedule.
- [Serverless VPC Connector](https://cloud.google.com/vpc/docs/configure-serverless-vpc-access) to allow Cloud Functions TCP level access to private GCE instances.
@ -13,7 +13,7 @@ The example contains the following components:
- **Restarter** Cloud Function to perform GCE instance reset for instances which are failing TCP healthcheck.
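The scheduling leg of this chain can be sketched roughly as follows, with hypothetical names; the blueprint wires the equivalent through its own modules:

```hcl
# Trigger the healthchecker Cloud Function every 5 minutes via PubSub
# (hypothetical project, region and names).
resource "google_pubsub_topic" "healthcheck" {
  project = "my-project-id"
  name    = "healthcheck-trigger"
}

resource "google_cloud_scheduler_job" "healthcheck" {
  project  = "my-project-id"
  region   = "europe-west1"
  name     = "healthchecker-schedule"
  schedule = "*/5 * * * *"

  pubsub_target {
    topic_name = google_pubsub_topic.healthcheck.id
    data       = base64encode("{}")
  }
}
```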
The resources created in this example are shown in the high level diagram below:
The resources created in this blueprint are shown in the high level diagram below:
<img src="diagram.png" width="640px">
@ -31,7 +31,7 @@ Healthchecker cloud function has the following configuration options:
**_NOTE:_** In the current example `healthchecker` is used along with the `restarter` cloud function, but restarter can be replaced with another function like [Pubsub2Inbox](https://github.com/GoogleCloudPlatform/professional-services/tree/main/tools/pubsub2inbox) for email notifications.
## Running the example
## Running the blueprint
Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fcloud-foundation-fabric&cloudshell_print=cloud-shell-readme.txt&cloudshell_working_dir=blueprints%2Fcloud-operations%2Funmanaged-instances-healthcheck), then go through the following steps to create resources:
@ -40,7 +40,7 @@ Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/c
Once done testing, you can clean up resources by running `terraform destroy`. To persist state, check out the `backend.tf.sample` file.
## Testing the example
## Testing the blueprint
Configure `gcloud` with the project used for the deployment
```bash
gcloud config set project <MY-PROJECT-ID>

View File

@ -8,27 +8,27 @@ They are meant to be used as minimal but complete starting points to create migr
### M4CE on a single project
<a href="./single-project/" title="M4CE with single project"><img src="./single-project/diagram.png" align="left" width="280px"></a> This [example](./single-project/) implements a simple environment for Migrate for Compute Engine (v5) where both the API backend and the migration target environment are deployed on a single GCP project.
<a href="./single-project/" title="M4CE with single project"><img src="./single-project/diagram.png" align="left" width="280px"></a> This [blueprint](./single-project/) implements a simple environment for Migrate for Compute Engine (v5) where both the API backend and the migration target environment are deployed on a single GCP project.
This example represents the easiest scenario to implement a Migrate for Compute Engine (v5) environment suitable for small migrations on simple environments or for product showcases.
This blueprint represents the simplest scenario for implementing a Migrate for Compute Engine (v5) environment, suitable for small migrations in simple environments or for product showcases.
<br clear="left">
### M4CE with host and target projects
<a href="./host-target-projects/" title="M4CE with host and target projects"><img src="./host-target-projects/diagram.png" align="left" width="280px"></a> This [example](./host-target-projects/) implements a Migrate for Compute Engine (v5) host and target projects topology where the API backend and access grants are implemented on the host project while workloads are migrated on a different target project.
<a href="./host-target-projects/" title="M4CE with host and target projects"><img src="./host-target-projects/diagram.png" align="left" width="280px"></a> This [blueprint](./host-target-projects/) implements a Migrate for Compute Engine (v5) host and target projects topology where the API backend and access grants are implemented on the host project while workloads are migrated on a different target project.
This example shows a complex scenario where Migrate for Compute Engine (v5) can be deployed on top of and existing HUB and SPOKE topology and the migration target projects are deployed with platform foundations.
This blueprint shows a more complex scenario, where Migrate for Compute Engine (v5) can be deployed on top of an existing hub-and-spoke topology and the migration target projects are deployed with platform foundations.
<br clear="left">
### M4CE with Host and Target Projects and Shared VPC
<a href="./host-target-sharedvpc/" title="M4CE with host and target projects and shared VPC"><img src="./host-target-sharedvpc/diagram.png" align="left" width="280px"></a> This [example](./host-target-sharedvpc/) implements a Migrate for Compute Engine (v5) host and target projects topology as described above with the support of shared VPC.
<a href="./host-target-sharedvpc/" title="M4CE with host and target projects and shared VPC"><img src="./host-target-sharedvpc/diagram.png" align="left" width="280px"></a> This [blueprint](./host-target-sharedvpc/) implements a Migrate for Compute Engine (v5) host and target projects topology as described above with the support of shared VPC.
The example shows how to implement a Migrate for Compute Engine (v5) environment on top of an existing shared VPC topology where the shared VPC service projects are the target projects for the migration.
The blueprint shows how to implement a Migrate for Compute Engine (v5) environment on top of an existing shared VPC topology where the shared VPC service projects are the target projects for the migration.
<br clear="left">
### ESXi Connector
<a href="./esxi/" title="M4CE ESXi connector"><img src="./esxi/diagram.png" align="left" width="280px"></a> This [example](./esxi/) implements a Migrate for Compute Engine (v5) environment on a VMWare ESXi cluster as source for VM migrations.
<a href="./esxi/" title="M4CE ESXi connector"><img src="./esxi/diagram.png" align="left" width="280px"></a> This [blueprint](./esxi/) implements a Migrate for Compute Engine (v5) environment on a VMWare ESXi cluster as source for VM migrations.
The example shows how to deploy the Migrate for Compute Engine (v5) connector and implement all the security prerequisites for the migration to GCP.
The blueprint shows how to deploy the Migrate for Compute Engine (v5) connector and implement all the security prerequisites for the migration to GCP.

View File

@ -1,8 +1,8 @@
# M4CE(v5) - ESXi Connector
This example deploys a virtual machine from an OVA image and the security prerequisites to run the Migrate for Compute Engine (v5) [connector](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/migrate-connector) on VMWare ESXi.
This blueprint deploys a virtual machine from an OVA image and the security prerequisites to run the Migrate for Compute Engine (v5) [connector](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/migrate-connector) on VMWare ESXi.
The example is designed to deploy the M4CE (v5) connector on and existing VMWare environment. The [network configuration](https://cloud.google.com/migrate/compute-engine/docs/5.0/concepts/architecture#migration_architecture) required to allow the communication of the migrate connector to the GCP API is not included in this example.
The blueprint is designed to deploy the M4CE (v5) connector on an existing VMWare environment. The [network configuration](https://cloud.google.com/migrate/compute-engine/docs/5.0/concepts/architecture#migration_architecture) required to allow the migrate connector to communicate with the GCP APIs is not included in this blueprint.
This is the high level diagram:
@ -30,7 +30,7 @@ This sample creates several distinct groups of resources:
<!-- END TFDOC -->
## Manual Steps
Once this example is deployed a VCenter user has to be created and binded to the M4CE role in order to allow the connector access the VMWare resources.
Once this blueprint is deployed, a vCenter user has to be created and bound to the M4CE role in order to allow the connector to access the VMWare resources.
The user can be created manually through the vCenter web interface or through the `govc` command line if it is available:
```bash
export GOVC_URL=<VCENTER_URL> (eg. https://192.168.1.100/sdk)

View File

@ -1,8 +1,8 @@
# M4CE(v5) - Host and Target Projects
This example creates a Migrate for Compute Engine (v5) environment deployed on an host project with multiple [target projects](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#identifying_your_host_project).
This blueprint creates a Migrate for Compute Engine (v5) environment deployed on a host project with multiple [target projects](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#identifying_your_host_project).
The example is designed to implement a M4CE (v5) environment on-top of complex migration landing environments where VMs have to be migrated to multiple target projects. It also includes the IAM wiring needed to make such scenarios work.
The blueprint is designed to implement an M4CE (v5) environment on top of complex migration landing environments where VMs have to be migrated to multiple target projects. It also includes the IAM wiring needed to make such scenarios work.
This is the high level diagram:

View File

@ -1,8 +1,8 @@
# M4CE(v5) - Host and Target Projects with Shared VPC
This example creates a Migrate for Compute Engine (v5) environment deployed on an host project with multiple [target projects](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#identifying_your_host_project) and shared VPCs.
This blueprint creates a Migrate for Compute Engine (v5) environment deployed on a host project with multiple [target projects](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#identifying_your_host_project) and shared VPCs.
The example is designed to implement a M4CE (v5) environment on-top of complex migration landing environment where VMs have to be migrated to multiple target projects. In this example targets are also service projects for a shared VPC. It also includes the IAM wiring needed to make such scenarios work.
The blueprint is designed to implement an M4CE (v5) environment on top of complex migration landing environments where VMs have to be migrated to multiple target projects. In this blueprint the targets are also service projects for a shared VPC. It also includes the IAM wiring needed to make such scenarios work.
This is the high level diagram:
@ -41,4 +41,4 @@ This sample creates\update several distinct groups of resources:
<!-- END TFDOC -->
## Manual Steps
Once this example is deployed the M4CE [m4ce_gmanaged_service_account](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/target-sa-compute-engine#configuring_the_default_service_account) has to be configured to grant the access to the shared VPC and allow the deploy of Compute Engine instances as the result of the migration.
Once this blueprint is deployed, the M4CE [m4ce_gmanaged_service_account](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/target-sa-compute-engine#configuring_the_default_service_account) has to be configured to grant access to the shared VPC and allow the deployment of Compute Engine instances as the result of the migration.

View File

@ -1,8 +1,8 @@
# M4CE(v5) - Single Project
This sample creates a simple M4CE (v5) environment deployed on a single [host project](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#identifying_your_host_project).
This blueprint creates a simple M4CE (v5) environment deployed on a single [host project](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#identifying_your_host_project).
The example is designed for quick tests or product demos where it is required to setup a simple and minimal M4CE (v5) environment. It also includes the IAM wiring needed to make such scenarios work.
The blueprint is designed for quick tests or product demos where a simple and minimal M4CE (v5) environment needs to be set up. It also includes the IAM wiring needed to make such scenarios work.
This is the high level diagram:

View File

@ -7,7 +7,7 @@ The most straightforward way for workloads running outside of Google Cloud to ca
Workload identity federation enables applications running outside of Google Cloud to replace long-lived service account keys with short-lived access tokens. This is achieved by configuring Google Cloud to trust an external identity provider, so applications can use the credentials issued by the external identity provider to impersonate a service account.
This example shows how to set up everything, both in Azure and Google Cloud, so a workload in Azure can access Google Cloud resources without a service account key. This will be possible by configuring workload identity federation to trust access tokens generated for a specific application in an Azure Active Directory (AAD) tenant.
This blueprint shows how to set up everything, both in Azure and Google Cloud, so a workload in Azure can access Google Cloud resources without a service account key. This will be possible by configuring workload identity federation to trust access tokens generated for a specific application in an Azure Active Directory (AAD) tenant.
The following diagram illustrates how the VM will get a short-lived access token and use it to access a resource:
@ -31,14 +31,14 @@ The provided terraform configuration will set up the following architecture:
* A service account with the Viewer role granted on the project. The external identities in the workload identity pool would be assigned the Workload Identity User role on that service account.
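For reference, the pool, provider and service account binding can be sketched as follows; the tenant ID, application ID and account names are placeholders, not the values used by the blueprint:

```hcl
# Workload identity pool trusting an AAD tenant, plus the binding that lets
# its external identities impersonate a viewer service account
# (placeholder tenant ID, application ID and names).
resource "google_service_account" "viewer" {
  project    = "my-project-id"
  account_id = "azure-workload-viewer"
}

resource "google_iam_workload_identity_pool" "azure" {
  project                   = "my-project-id"
  workload_identity_pool_id = "azure-pool"
}

resource "google_iam_workload_identity_pool_provider" "azure" {
  project                            = "my-project-id"
  workload_identity_pool_id          = google_iam_workload_identity_pool.azure.workload_identity_pool_id
  workload_identity_pool_provider_id = "azure-provider"

  attribute_mapping = {
    "google.subject" = "assertion.sub"
  }

  oidc {
    issuer_uri        = "https://sts.windows.net/00000000-0000-0000-0000-000000000000/"
    allowed_audiences = ["api://11111111-1111-1111-1111-111111111111"]
  }
}

resource "google_service_account_iam_member" "wif_viewer" {
  service_account_id = google_service_account.viewer.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "principalSet://iam.googleapis.com/${google_iam_workload_identity_pool.azure.name}/*"
}
```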
## Running the example
## Running the blueprint
Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fcloud-foundation-fabric&cloudshell_print=cloud-shell-readme.txt&cloudshell_working_dir=blueprints%2Fcloud-operations%2Fworkload-identity-federation), then go through the following steps to create resources:
* `terraform init`
* `terraform apply -var project_id=my-project-id`
## Testing the example
## Testing the blueprint
Once the resources have been created, do the following to verify that everything works as expected:

View File

@ -8,34 +8,36 @@ They are meant to be used as minimal but complete starting points to create actu
### GCE and GCS CMEK via centralized Cloud KMS
<a href="./cmek-via-centralized-kms/" title="CMEK on Cloud Storage and Compute Engine via centralized Cloud KMS"><img src="./cmek-via-centralized-kms/diagram.png" align="left" width="280px"></a> This [example](./cmek-via-centralized-kms/) implements [CMEK](https://cloud.google.com/kms/docs/cmek) for GCS and GCE, via keys hosted in KMS running in a centralized project. The example shows the basic resources and permissions for the typical use case of application projects implementing encryption at rest via a centrally managed KMS service.
<a href="./cmek-via-centralized-kms/" title="CMEK on Cloud Storage and Compute Engine via centralized Cloud KMS"><img src="./cmek-via-centralized-kms/diagram.png" align="left" width="280px"></a> This [blueprint](./cmek-via-centralized-kms/) implements [CMEK](https://cloud.google.com/kms/docs/cmek) for GCS and GCE, via keys hosted in KMS running in a centralized project. The blueprint shows the basic resources and permissions for the typical use case of application projects implementing encryption at rest via a centrally managed KMS service.
<br clear="left">
### Cloud Storage to Bigquery with Cloud Dataflow with least privileges
<a href="./gcs-to-bq-with-least-privileges/" title="Cloud Storage to Bigquery with Cloud Dataflow with least privileges"><img src="./gcs-to-bq-with-least-privileges/diagram.png" align="left" width="280px"></a> This [example](./gcs-to-bq-with-least-privileges/) implements resources required to run GCS to BigQuery Dataflow pipelines. The solution rely on a set of Services account created with the least privileges principle.
<a href="./gcs-to-bq-with-least-privileges/" title="Cloud Storage to Bigquery with Cloud Dataflow with least privileges"><img src="./gcs-to-bq-with-least-privileges/diagram.png" align="left" width="280px"></a> This [blueprint](./gcs-to-bq-with-least-privileges/) implements resources required to run GCS to BigQuery Dataflow pipelines. The solution relies on a set of service accounts created following the least-privilege principle.
<br clear="left">
### Data Platform Foundations
<a href="./data-platform-foundations/" title="Data Platform Foundations"><img src="./data-platform-foundations/images/overview_diagram.png" align="left" width="280px"></a>
This [example](./data-platform-foundations/) implements a robust and flexible Data Foundation on GCP that provides opinionated defaults, allowing customers to build and scale out additional data pipelines quickly and reliably.
This [blueprint](./data-platform-foundations/) implements a robust and flexible Data Foundation on GCP that provides opinionated defaults, allowing customers to build and scale out additional data pipelines quickly and reliably.
<br clear="left">
### SQL Server Always On Availability Groups
<a href="./sqlserver-alwayson/" title="SQL Server Always On Availability Groups"><img src="https://cloud.google.com/compute/images/sqlserver-ag-architecture.svg" align="left" width="280px"></a>
This [example](./data-platform-foundations/) implements SQL Server Always On Availability Groups using Fabric modules. It builds a two node cluster with a fileshare witness instance in an existing VPC and adds the necessary firewalling. The actual setup process (apart from Active Directory operations) has been scripted, so that least amount of manual works needs to performed.
This [blueprint](./sqlserver-alwayson/) implements SQL Server Always On Availability Groups using Fabric modules. It builds a two-node cluster with a fileshare witness instance in an existing VPC and adds the necessary firewalling. The actual setup process (apart from Active Directory operations) has been scripted, so that the least amount of manual work needs to be performed.
<br clear="left">
### Cloud SQL instance with multi-region read replicas
<a href="./cloudsql-multiregion/" title="Cloud SQL instance with multi-region read replicas"><img src="./cloudsql-multiregion/images/diagram.png" align="left" width="280px"></a>
This [example](./cloudsql-multiregion/) creates a [Cloud SQL instance](https://cloud.google.com/sql) with multi-region read replicas as described in the [Cloud SQL for PostgreSQL disaster recovery](https://cloud.google.com/architecture/cloud-sql-postgres-disaster-recovery-complete-failover-fallback) article.
<a href="./cloudsql-multiregion/" title="Cloud SQL instance with multi-region read replicas"><img src="./cloudsql-multiregion/diagram.png" align="left" width="280px"></a>
This [blueprint](./cloudsql-multiregion/) creates a [Cloud SQL instance](https://cloud.google.com/sql) with multi-region read replicas as described in the [Cloud SQL for PostgreSQL disaster recovery](https://cloud.google.com/architecture/cloud-sql-postgres-disaster-recovery-complete-failover-fallback) article.
<br clear="left">
### Data Playground starter with Cloud Vertex AI Notebook and GCS
<a href="./data-playground/" title="Data Playground project with Cloud Vertex AI Notebook, BigQuery and GCS"><img src="./data-playground/diagram.png" align="left" width="280px"></a>
This [example](./data-playground/) creates a [Vertex AI Notebook](https://cloud.google.com/vertex-ai/docs/workbench/introduction) running on a VPC with a private IP and a dedicated Service Account. A GCS bucket and a BigQuery dataset are created to store inputs and outputs of data experiments.
This [blueprint](./data-playground/) creates a [Vertex AI
Notebook](https://cloud.google.com/vertex-ai/docs/workbench/introduction)
running on a VPC with a private IP and a dedicated Service Account. A GCS bucket and a BigQuery dataset are created to store inputs and outputs of data experiments.
<br clear="left">

View File

@ -2,7 +2,7 @@
From startups to enterprises, database disaster recovery planning is critical to providing continuity of processing. While Cloud SQL does provide high availability within a single region, regional failures or unavailability can occur due to anything from cyber attacks to natural disasters. Such incidents or outages lead to a quick domino effect for startups, making it difficult to recover from the loss of revenue and customers, which is especially true for bootstrapped or lean startups. It is critical that your database is regionally resilient and made available promptly in a secondary region. With Cloud SQL for PostgreSQL, you can configure cross-region read replicas for a complete DR failover and fallback process.
This example creates a [Cloud SQL instance](https://cloud.google.com/sql) with multi-region read replicas as described in the [Cloud SQL for PostgreSQL disaster recovery](https://cloud.google.com/architecture/cloud-sql-postgres-disaster-recovery-complete-failover-fallback) article.
This blueprint creates a [Cloud SQL instance](https://cloud.google.com/sql) with multi-region read replicas as described in the [Cloud SQL for PostgreSQL disaster recovery](https://cloud.google.com/architecture/cloud-sql-postgres-disaster-recovery-complete-failover-fallback) article.
The solution is resilient to a regional outage. To get familiar with the procedure needed in the unfortunate case of a disaster recovery, please follow steps described in [part two](https://cloud.google.com/architecture/cloud-sql-postgres-disaster-recovery-complete-failover-fallback#phase-2) of the aforementioned article.
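A hand-written equivalent of the primary plus one cross-region read replica might look like the sketch below; names, regions and tier are placeholders, and the blueprint builds the equivalent through the repository's modules rather than raw resources:

```hcl
# Primary instance plus one cross-region read replica
# (hypothetical names, regions and machine tier).
resource "google_sql_database_instance" "primary" {
  project          = "my-project-id"
  name             = "pg-primary"
  region           = "europe-west1"
  database_version = "POSTGRES_13"

  settings {
    tier = "db-g1-small"
    backup_configuration {
      enabled                        = true
      point_in_time_recovery_enabled = true
    }
  }
}

resource "google_sql_database_instance" "replica_ew3" {
  project              = "my-project-id"
  name                 = "pg-replica-europe-west3"
  region               = "europe-west3"
  database_version     = "POSTGRES_13"
  master_instance_name = google_sql_database_instance.primary.name

  settings {
    tier = "db-g1-small"
  }
}
```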
@ -29,7 +29,7 @@ If you're migrating from another Cloud Provider, refer to [this](https://cloud.g
## Requirements
This example will deploy all its resources into the project defined by the `project_id` variable. Please note that we assume this project already exists. However, if you provide the appropriate values to the `project_create` variable, the project will be created as part of the deployment.
This blueprint will deploy all its resources into the project defined by the `project_id` variable. Please note that we assume this project already exists. However, if you provide the appropriate values to the `project_create` variable, the project will be created as part of the deployment.
If `project_create` is left to `null`, the identity performing the deployment needs the `owner` role on the project defined by the `project_id` variable. Otherwise, the identity performing the deployment needs `resourcemanager.projectCreator` on the resource hierarchy node specified by `project_create.parent` and `billing.user` on the billing account specified by `project_create.billing_account_id`.
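Assuming the variable only needs the `parent` and `billing_account_id` attributes referenced above, a `terraform.tfvars` for the project-creation path could look like this (placeholder values):

```hcl
# terraform.tfvars (placeholder project, folder and billing account)
project_id = "my-project-id"
project_create = {
  parent             = "folders/1234567890"
  billing_account_id = "012345-ABCDEF-6789AB"
}
```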

View File

@ -8,7 +8,7 @@ The following diagram is a high-level reference of the resources created and man
![Data Platform architecture overview](./images/overview_diagram.png "Data Platform architecture overview")
A demo Airflow pipeline is also part of this example: it can be built and run on top of the foundational infrastructure to verify or test the setup quickly.
A demo Airflow pipeline is also part of this blueprint: it can be built and run on top of the foundational infrastructure to verify or test the setup quickly.
## Design overview and choices
@ -21,7 +21,7 @@ The approach adapts to different high-level requirements:
- least privilege principle
- rely on service account impersonation
The code in this example doesn't address Organization-level configurations (Organization policy, VPC-SC, centralized logs). We expect those elements to be managed by automation stages external to this script like those in [FAST](../../../fast).
The code in this blueprint doesn't address Organization-level configurations (Organization policy, VPC-SC, centralized logs). We expect those elements to be managed by automation stages external to this script like those in [FAST](../../../fast).
### Project structure
@ -66,7 +66,7 @@ Service account creation follows the least privilege principle, performing a sin
A full reference of IAM roles managed by the Data Platform [is available here](./IAM.md).
Using of service account keys within a data pipeline exposes to several security risks deriving from a credentials leak. This example shows how to leverage impersonation to avoid the need of creating keys.
Using service account keys within a data pipeline exposes you to several security risks deriving from a credentials leak. This blueprint shows how to leverage impersonation to avoid the need to create keys.
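Impersonation only requires an IAM binding on the service account itself; a minimal sketch, with a hypothetical account and group, is shown below:

```hcl
# Let a group impersonate the service account instead of downloading its key
# (hypothetical account and group).
resource "google_service_account" "orchestrator" {
  project    = "my-project-id"
  account_id = "orchestrator"
}

resource "google_service_account_iam_member" "token_creator" {
  service_account_id = google_service_account.orchestrator.name
  role               = "roles/iam.serviceAccountTokenCreator"
  member             = "group:gcp-data-engineers@example.com"
}
```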
### User groups
@ -90,13 +90,13 @@ You can configure groups via the `groups` variable.
### Virtual Private Cloud (VPC) design
As is often the case in real-world configurations, this example accepts as input an existing [Shared-VPC](https://cloud.google.com/vpc/docs/shared-vpc) via the `network_config` variable. Make sure that the GKE API (`container.googleapis.com`) is enabled in the VPC host project.
As is often the case in real-world configurations, this blueprint accepts as input an existing [Shared-VPC](https://cloud.google.com/vpc/docs/shared-vpc) via the `network_config` variable. Make sure that the GKE API (`container.googleapis.com`) is enabled in the VPC host project.
If the `network_config` variable is not provided, one VPC will be created in each project that supports network resources (load, transformation and orchestration).
### IP ranges and subnetting
To deploy this example with self-managed VPCs you need the following ranges:
To deploy this blueprint with self-managed VPCs you need the following ranges:
- one /24 for the load project VPC subnet used for Cloud Dataflow workers
- one /24 for the transformation VPC subnet used for Cloud Dataflow workers
@ -144,7 +144,7 @@ This step is optional and depends on customer policies and security best practic
We suggest using Cloud Data Loss Prevention to identify/mask/tokenize your confidential data.
While implementing a Data Loss Prevention strategy is out of scope for this example, we enable the service in two different projects so that [Cloud Data Loss Prevention templates](https://cloud.google.com/dlp/docs/concepts-templates) can be configured in one of two ways:
While implementing a Data Loss Prevention strategy is out of scope for this blueprint, we enable the service in two different projects so that [Cloud Data Loss Prevention templates](https://cloud.google.com/dlp/docs/concepts-templates) can be configured in one of two ways:
- during the ingestion phase, from Dataflow
- during the transformation phase, from [BigQuery](https://cloud.google.com/bigquery/docs/scan-with-dlp) or [Cloud Dataflow](https://cloud.google.com/architecture/running-automated-dataflow-pipeline-de-identify-pii-dataset)
@ -166,11 +166,11 @@ The default configuration will implement 3 tags:
Anything that is not tagged is available to all users who have access to the data warehouse.
For the purpose of the example no groups has access to tagged data. You can configure your tags and roles associated by configuring the `data_catalog_tags` variable. We suggest using the "[Best practices for using policy tags in BigQuery](https://cloud.google.com/bigquery/docs/best-practices-policy-tags)" article as a guide to designing your tags structure and access pattern.
For the purpose of the blueprint, no group has access to tagged data. You can configure your tags and the roles associated with them via the `data_catalog_tags` variable. We suggest using the "[Best practices for using policy tags in BigQuery](https://cloud.google.com/bigquery/docs/best-practices-policy-tags)" article as a guide to designing your tag structure and access pattern.
## How to run this script
To deploy this example on your GCP organization, you will need
To deploy this blueprint on your GCP organization, you will need
- a folder or organization where new projects will be created
- a billing account that will be associated with the new projects
@ -209,9 +209,9 @@ terraform init
terraform apply
```
## How to use this example from Terraform
## How to use this blueprint from Terraform
While this example can be used as a standalone deployment, it can also be called directly as a Terraform module by providing the variables values as show below:
While this blueprint can be used as a standalone deployment, it can also be called directly as a Terraform module by providing the variable values as shown below:
```hcl
module "data-platform" {

View File

@ -1,6 +1,6 @@
# Data Playground
This example creates a minimum viable architecture for a data experimentation project with the needed APIs enabled, VPC and Firewall set in place, BigQuesy dataset, GCS bucket and an AI notebook to get started.
This blueprint creates a minimum viable architecture for a data experimentation project, with the needed APIs enabled, a VPC and firewall rules set in place, a BigQuery dataset, a GCS bucket and an AI notebook to get started.
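For orientation, a hand-rolled sketch of two of these pieces (names and locations are placeholders; the blueprint itself wires everything through the repository's modules) might look like:

```hcl
# Placeholder names/locations for a scratch dataset and bucket.
resource "google_bigquery_dataset" "playground" {
  dataset_id = "playground"
  location   = "EU"
}

resource "google_storage_bucket" "playground" {
  name     = "my-playground-bucket-0" # bucket names must be globally unique
  location = "EU"
}
```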
This is the high level diagram:

View File

@ -1,6 +1,6 @@
## SQL Server Always On Groups example
## SQL Server Always On Groups blueprint
This is an example of building [SQL Server Always On Availability Groups](https://cloud.google.com/compute/docs/instances/sql-server/configure-availability)
This is a blueprint for building [SQL Server Always On Availability Groups](https://cloud.google.com/compute/docs/instances/sql-server/configure-availability)
using Fabric modules. It builds a two-node cluster with a fileshare witness instance in an existing VPC and adds the necessary firewalling.
![Architecture diagram](https://cloud.google.com/compute/images/sqlserver-ag-architecture.svg)
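As one hedged example of the kind of rule needed (network name and tag are placeholders), availability group listeners on Google Cloud are typically fronted by an internal load balancer, whose health-check probes must be allowed to reach the nodes:

```hcl
# Placeholder network/tag: lets Google Cloud health-check probes reach the SQL Server nodes.
resource "google_compute_firewall" "allow_health_checks" {
  name          = "sql-ag-allow-health-checks"
  network       = "my-existing-vpc"
  source_ranges = ["35.191.0.0/16", "130.211.0.0/22"]
  target_tags   = ["sql-ag-node"]
  allow {
    protocol = "tcp"
  }
}
```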

View File

@ -8,5 +8,5 @@ They are meant to be used as minimal but complete starting points to create actu
### Multitenant GKE fleet
<a href="./multitenant-fleet/" title="GKE multitenant fleet"><img src="./multitenant-fleet/diagram.png" align="left" width="280px"></a> This [example](./multitenant-fleet/) allows simple centralized management of similar sets of GKE clusters and their nodepools in a single project, and optional fleet management via GKE Hub templated configurations.
<a href="./multitenant-fleet/" title="GKE multitenant fleet"><img src="./multitenant-fleet/diagram.png" align="left" width="280px"></a> This [blueprint](./multitenant-fleet/) allows simple centralized management of similar sets of GKE clusters and their nodepools in a single project, and optional fleet management via GKE Hub templated configurations.
<br clear="left">

View File

@ -1,10 +1,10 @@
# GKE Multitenant Example
# GKE Multitenant Blueprint
This example presents an opinionated architecture to handle multiple homogeneous GKE clusters. The general idea behind this example is to deploy a single project hosting multiple clusters leveraging several useful GKE features.
This blueprint presents an opinionated architecture to handle multiple homogeneous GKE clusters. The general idea behind this blueprint is to deploy a single project hosting multiple clusters leveraging several useful GKE features.
The pattern used in this design is useful, for example, in cases where multiple clusters host/support the same workloads, such as in the case of a multi-regional deployment. Furthermore, combined with Anthos Config Sync and proper RBAC, this architecture can be used to host multiple tenants (e.g. teams, applications) sharing the clusters.
The pattern used in this design is useful, for example, in cases where multiple clusters host/support the same workloads, such as in the case of a multi-regional deployment. Furthermore, combined with Anthos Config Sync and proper RBAC, this architecture can be used to host multiple tenants (e.g. teams, applications) sharing the clusters.
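As a purely illustrative sketch of that tenancy model (names are hypothetical and this is not part of the blueprint's own code), each tenant maps to a namespace plus a scoped role binding:

```hcl
# Hypothetical tenant: a dedicated namespace and edit rights limited to it.
resource "kubernetes_namespace" "team_a" {
  metadata {
    name = "team-a"
  }
}

resource "kubernetes_role_binding" "team_a_edit" {
  metadata {
    name      = "team-a-edit"
    namespace = kubernetes_namespace.team_a.metadata[0].name
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "edit"
  }
  subject {
    kind      = "Group"
    name      = "team-a@example.com"
    api_group = "rbac.authorization.k8s.io"
  }
}
```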
This example is used as part of the [FAST GKE stage](../../../fast/stages/03-gke-multitenant/) but it can also be used independently if desired.
This blueprint is used as part of the [FAST GKE stage](../../../fast/stages/03-gke-multitenant/) but it can also be used independently if desired.
<p align="center">
<img src="diagram.png" alt="GKE multitenant">

View File

@ -1,11 +1,11 @@
# Decentralized firewall management
This sample shows how a decentralized firewall management can be organized using the [firewall factory](../../factories/net-vpc-firewall-yaml/README.md).
This example shows how decentralized firewall management can be organized using the [firewall factory](../../factories/net-vpc-firewall-yaml/README.md).
This approach is a good fit when Shared VPCs are used across multiple application/infrastructure teams. A central repository keeps environment/team-specific folders with firewall definitions in `yaml` format.
In the current example multiple teams can define their [VPC Firewall Rules](https://cloud.google.com/vpc/docs/firewalls)
In the current blueprint multiple teams can define their [VPC Firewall Rules](https://cloud.google.com/vpc/docs/firewalls)
for [dev](./firewall/dev) and [prod](./firewall/prod) environments using team-specific subfolders. Rules defined in the
[common](./firewall/common) folder are applied to both dev and prod environments.
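As a rough sketch of the mechanics (this is not the factory's actual interface; it assumes each YAML file maps rule names to a list of source ranges and TCP ports), the pattern boils down to decoding the per-team files and fanning them out into firewall rules:

```hcl
# Simplified stand-in for the factory; variable shape and YAML layout are assumptions.
locals {
  rule_files = fileset("${path.module}/firewall/dev", "*.yaml")
  rules = merge([
    for f in local.rule_files : yamldecode(file("${path.module}/firewall/dev/${f}"))
  ]...)
}

resource "google_compute_firewall" "dev" {
  for_each      = local.rules
  project       = "my-dev-host-project" # placeholder
  network       = "dev-shared-vpc"      # placeholder
  name          = each.key
  direction     = "INGRESS"
  source_ranges = each.value.source_ranges
  allow {
    protocol = "tcp"
    ports    = each.value.ports
  }
}
```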
@ -17,7 +17,7 @@ This is the high level diagram:
![High-level diagram](diagram.png "High-level diagram")
The rules can be validated either using an automated process or a manual process (or a combination of
the two). There is an example of a YAML-based validator using [Yamale](https://github.com/23andMe/Yamale)
the two). There is an example of a YAML-based validator using [Yamale](https://github.com/23andMe/Yamale)
in the [`validator/`](validator/) subdirectory, which can be integrated as part of a CI/CD pipeline.
<!-- BEGIN TFDOC -->

View File

@ -1,6 +1,6 @@
# Network filtering with Squid
This example shows how to deploy a filtering HTTP proxy to restrict Internet access. Here we show one way to do this using a VPC with two subnets:
This blueprint shows how to deploy a filtering HTTP proxy to restrict Internet access. Here we show one way to do this using a VPC with two subnets:
- The `apps` subnet hosts the VMs that will have their Internet access tightly controlled by a non-caching filtering forward proxy.
- The `proxy` subnet hosts a Cloud NAT instance and a [Squid](http://www.squid-cache.org/) server.
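For reference, a hedged sketch of the two subnets (names, region and ranges are placeholders; the blueprint defines them through the repository's modules) could be:

```hcl
# Placeholder values for the two subnets described above.
resource "google_compute_subnetwork" "apps" {
  name          = "apps"
  network       = "filtering-vpc" # placeholder VPC name
  region        = "europe-west1"
  ip_cidr_range = "10.0.0.0/24"
}

resource "google_compute_subnetwork" "proxy" {
  name          = "proxy"
  network       = "filtering-vpc"
  region        = "europe-west1"
  ip_cidr_range = "10.0.1.0/24"
}
```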

View File

@ -1,8 +1,8 @@
# Hub and Spoke via VPC Peering
This example creates a simple **Hub and Spoke** setup, where the VPC network connects satellite locations (spokes) through a single intermediary location (hub) via [VPC Peering](https://cloud.google.com/vpc/docs/vpc-peering).
This blueprint creates a simple **Hub and Spoke** setup, where the VPC network connects satellite locations (spokes) through a single intermediary location (hub) via [VPC Peering](https://cloud.google.com/vpc/docs/vpc-peering).
The example shows some of the limitations that need to be taken into account when using VPC Peering, mostly due to the lack of transitivity between peerings:
The blueprint shows some of the limitations that need to be taken into account when using VPC Peering, mostly due to the lack of transitivity between peerings:
- no mesh networking between the spokes
- complex support for managed services hosted in tenant VPCs connected via peering (Cloud SQL, GKE, etc.)
@ -11,7 +11,7 @@ One possible solution to the managed service limitation above is presented here,
One other topic that needs to be considered when using peering is the limit of 25 peerings in each peering group, which constrains the scalability of designs like the one presented here.
The example has been purposefully kept simple to show how to use and wire the VPC modules together, and so that it can be used as a basis for more complex scenarios. This is the high level diagram:
The blueprint has been purposefully kept simple to show how to use and wire the VPC modules together, and so that it can be used as a basis for more complex scenarios. This is the high level diagram:
![High-level diagram](diagram.png "High-level diagram")
@ -41,7 +41,7 @@ gcloud container clusters get-credentials cluster-1 --zone europe-west1-b
kubectl get all
```
The example configures the peering with the GKE master VPC to export routes for you, so that VPN routes are passed through the peering. You can disable by hand in the console or by editing the `peering_config` variable in the `gke-cluster` module, to test non-working configurations or switch to using the [GKE proxy](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies).
The blueprint configures the peering with the GKE master VPC to export routes for you, so that VPN routes are passed through the peering. You can disable this by hand in the console or by editing the `peering_config` variable in the `gke-cluster` module, to test non-working configurations or to switch to using the [GKE proxy](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies).
### Export routes via Terraform (recommended)
@ -73,7 +73,7 @@ Then connect via SSH to the spoke 1 instance and run the same commands you ran o
## Operational considerations
A single pre-existing project is used in this example to keep variables and complexity to a minimum, in a real world scenario each spoke would use a separate project (and Shared VPC).
A single pre-existing project is used in this blueprint to keep variables and complexity to a minimum; in a real-world scenario each spoke would use a separate project (and Shared VPC).
A few APIs need to be enabled in the project; if `apply` fails due to a service not being enabled, just click on the link in the error message to enable it for the project, then resume `apply`.

View File

@ -1,15 +1,15 @@
# Hub and Spoke via VPN
This example creates a simple **Hub and Spoke VPN** setup, where the VPC network connects satellite locations (spokes) through a single intermediary location (hub) via [IPsec HA VPN](https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview#ha-vpn).
This blueprint creates a simple **Hub and Spoke VPN** setup, where the VPC network connects satellite locations (spokes) through a single intermediary location (hub) via [IPsec HA VPN](https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview#ha-vpn).
A few additional features are also shown:
- [custom BGP advertisements](https://cloud.google.com/router/docs/how-to/advertising-overview) to implement transitivity between spokes
- [VPC Global Routing](https://cloud.google.com/network-connectivity/docs/router/how-to/configuring-routing-mode) to leverage a regional set of VPN gateways in different regions as next hops (used here for illustrative/study purposes, not usually done in real life)
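As a hedged illustration of both features (names, ASN and ranges are placeholders; the blueprint drives this through its VPC and VPN modules), the underlying settings look roughly like:

```hcl
# Placeholders throughout. Global routing lets VPN gateways in one region act as
# next hops from other regions; custom advertisements re-export spoke ranges so
# spokes can reach each other through the hub.
resource "google_compute_network" "hub" {
  name                    = "hub-vpc"
  auto_create_subnetworks = false
  routing_mode            = "GLOBAL"
}

resource "google_compute_router" "hub" {
  name    = "hub-vpn-router"
  network = google_compute_network.hub.id
  region  = "europe-west1"
  bgp {
    asn               = 64514
    advertise_mode    = "CUSTOM"
    advertised_groups = ["ALL_SUBNETS"]
    advertised_ip_ranges {
      range = "10.0.16.0/24" # spoke 1 range (placeholder)
    }
    advertised_ip_ranges {
      range = "10.0.32.0/24" # spoke 2 range (placeholder)
    }
  }
}
```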
The example has been purposefully kept simple to show how to use and wire the VPC and VPN-HA modules together, and so that it can be used as a basis for experimentation. For a more complex scenario that better reflects real-life usage, including [Shared VPC](https://cloud.google.com/vpc/docs/shared-vpc) and [DNS cross-project binding](https://cloud.google.com/dns/docs/zones/cross-project-binding) please refer to the [FAST network stage](../../../fast/stages/02-networking-vpn/).
The blueprint has been purposefully kept simple to show how to use and wire the VPC and VPN-HA modules together, and so that it can be used as a basis for experimentation. For a more complex scenario that better reflects real-life usage, including [Shared VPC](https://cloud.google.com/vpc/docs/shared-vpc) and [DNS cross-project binding](https://cloud.google.com/dns/docs/zones/cross-project-binding) please refer to the [FAST network stage](../../../fast/stages/02-networking-vpn/).
This is the high level diagram of this example:
This is the high level diagram of this blueprint:
![High-level diagram](diagram.png "High-level diagram")
@ -27,9 +27,9 @@ This sample creates several distinct groups of resources:
## Prerequisites
A single pre-existing project is used in this example to keep variables and complexity to a minimum, in a real world scenarios each spoke would probably use a separate project.
A single pre-existing project is used in this blueprint to keep variables and complexity to a minimum; in real-world scenarios each spoke would probably use a separate project.
The provided project needs a valid billing account, the Compute and DNS APIs are enabled by the example.
The provided project needs a valid billing account; the Compute and DNS APIs are enabled by the blueprint.
You can easily create such a project by turning on project creation in the project module contained in `main.tf`, as shown in this snippet:
@ -52,7 +52,7 @@ module "project" {
## Testing
Once the example is up, you can quickly test features by logging in to one of the test VMs:
Once the blueprint is up, you can quickly test features by logging in to one of the test VMs:
```bash
gcloud compute ssh hs-ha-lnd-test-r1