cloud-foundation-fabric/blueprints/networking/shared-vpc-gke

# Shared VPC with optional GKE cluster

This sample creates a basic Shared VPC setup using one host project and two service projects, each with a specific subnet in the shared VPC.

The setup also includes the specific IAM-level configurations needed for GKE on Shared VPC in one of the two service projects, and optionally creates a cluster with a single nodepool.

If you only need a basic Shared VPC, or prefer creating the cluster manually, set the `cluster_create` variable to `false`.
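For example, a minimal `terraform.tfvars` for a plain Shared VPC without the cluster might look like the following (all values are placeholders; the variable names match the Variables table below):

```hcl
# Required variables
billing_account_id = "012345-6789AB-CDEF01"       # placeholder billing account
root_node          = "folders/1234567890"          # or "organizations/org_id"
prefix             = "test"                        # prefix for unique resource names

# Skip GKE cluster and nodepool creation
cluster_create = false
```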

The sample has been purposefully kept simple so that it can be used as a basis for different Shared VPC configurations. This is the high level diagram:

![High-level diagram](diagram.png)

## Accessing the bastion instance and GKE cluster

The bastion VM has no public address, so access is mediated via IAP, which is supported transparently by the `gcloud compute ssh` command. Authentication is via OS Login, set as a project default.

Cluster access from the bastion can leverage the instance service account's `container.developer` role: the only configuration needed is to fetch cluster credentials via `gcloud container clusters get-credentials`, passing the correct cluster name, location and project via command options.

For convenience, Tinyproxy is installed on the bastion host, allowing `kubectl` use via IAP from an external client:

```bash
gcloud container clusters get-credentials "${CLUSTER_NAME}" \
  --zone "${CLUSTER_ZONE}" \
  --project "${CLUSTER_PROJECT_NAME}"

gcloud compute ssh "${BASTION_INSTANCE_NAME}" \
  --project "${CLUSTER_PROJECT_NAME}" \
  --zone "${CLUSTER_ZONE}" \
  -- -L 8888:localhost:8888 -N -q -f

# Run kubectl through the proxy
HTTPS_PROXY=localhost:8888 kubectl get pods
```

An alias can also be created. For example:

```bash
alias k='HTTPS_PROXY=localhost:8888 kubectl $@'
```

## Destroying

There's a minor glitch that can surface when running `terraform destroy`: the service project attachments to the Shared VPC sometimes do not get destroyed, even though the relevant API call succeeds. We are investigating the issue. In the meantime, if `terraform destroy` fails, manually remove the attachment in the Cloud Console or via the `gcloud beta compute shared-vpc associated-projects remove` command, then relaunch `terraform destroy`.
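As a sketch of the manual workaround, the removal command takes the service project id and the host project id (both placeholders below):

```bash
# Detach a service project from the Shared VPC host project,
# then re-run the destroy.
gcloud beta compute shared-vpc associated-projects remove \
  "${SERVICE_PROJECT_ID}" \
  --host-project "${HOST_PROJECT_ID}"

terraform destroy
```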

## Variables

| name | description | type | required | default |
|---|---|---|:---:|:---:|
| billing_account_id | Billing account id used as default for new projects. | string | ✓ | |
| prefix | Prefix used for resources that need unique names. | string | ✓ | |
| root_node | Hierarchy node where projects will be created, 'organizations/org_id' or 'folders/folder_id'. | string | ✓ | |
| cluster_create | Create GKE cluster and nodepool. | bool | | true |
| ip_ranges | Subnet IP CIDR ranges. | map(string) | | {…} |
| ip_secondary_ranges | Secondary IP CIDR ranges. | map(string) | | {…} |
| owners_gce | GCE project owners, in IAM format. | list(string) | | [] |
| owners_gke | GKE project owners, in IAM format. | list(string) | | [] |
| owners_host | Host project owners, in IAM format. | list(string) | | [] |
| private_service_ranges | Private service IP CIDR ranges. | map(string) | | {…} |
| project_services | Service APIs enabled by default in new projects. | list(string) | | […] |
| region | Region used. | string | | "europe-west1" |

## Outputs

| name | description | sensitive |
|---|---|:---:|
| gke_clusters | GKE clusters information. | |
| projects | Project ids. | |
| vms | GCE VMs. | |
| vpc | Shared VPC. | |