
# Shared VPC with GKE example

This sample creates a basic Shared VPC setup using one host project and two service projects, each with a specific subnet in the shared VPC. The setup also includes the specific IAM-level configurations needed for GKE on Shared VPC to enable cluster creation in one of the two service projects.

The sample has been purposefully kept simple so that it can be used as a basis for different Shared VPC configurations. This is the high level diagram:

*High-level diagram*

## Managed resources and services

This sample creates several distinct groups of resources:

- projects
  - host project
  - service project configured for GKE clusters
  - service project configured for GCE instances
- networking
  - the shared VPC network
  - one subnet with secondary ranges for GKE clusters
  - one subnet for GCE instances
  - firewall rules for SSH access via IAP and open communication within the VPC
  - Cloud NAT service
- IAM
  - one service account for the bastion GCE instance
  - one service account for the GKE nodes
  - optional owner role bindings on each project
  - optional OS Login role bindings on the GCE service project
  - role bindings to allow the GCE instance and GKE nodes logging and monitoring write access
  - role binding to allow the GCE instance cluster access
- DNS
  - one private zone
- GCE
  - one instance used to access the internal GKE cluster
- GKE
  - one private cluster with one nodepool

## Accessing the bastion instance and GKE cluster

The bastion VM has no public address, so access is mediated via IAP, which is supported transparently by the `gcloud compute ssh` command. Authentication uses OS Login, set as a project default.
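As a sketch, an SSH session to the bastion might look like this; the instance name, zone and project id are placeholders to be replaced with the values from your deployment:

```shell
# SSH to the bastion through IAP; gcloud falls back to IAP tunneling
# automatically for instances without a public address, and the
# --tunnel-through-iap flag forces it explicitly.
gcloud compute ssh bastion \
  --zone europe-west1-b \
  --project my-gce-project \
  --tunnel-through-iap
```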

Cluster access from the bastion can leverage the instance service account's `container.developer` role: the only configuration needed is fetching cluster credentials via `gcloud container clusters get-credentials`, passing the correct cluster name, location and project via command options.
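From a shell on the bastion, the credential fetch might look as follows; the cluster name, zone and project id are placeholders for those of your deployment:

```shell
# Fetch credentials for the private cluster into the local kubeconfig,
# then verify access with kubectl.
gcloud container clusters get-credentials cluster-1 \
  --zone europe-west1-b \
  --project my-gke-project
kubectl get nodes
```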

## Destroying

There's a minor glitch that can surface when running `terraform destroy`: the service project attachments to the Shared VPC do not get destroyed, even though the relevant API call succeeds. We are investigating the issue. In the meantime, if `terraform destroy` fails, manually remove the attachments in the Cloud Console or via the `gcloud beta compute shared-vpc associated-projects remove` command, then relaunch `terraform destroy`.
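The manual workaround might look like this; the service and host project ids are placeholders for those used in your deployment:

```shell
# Detach a service project from the Shared VPC host project, then
# re-run the destroy that previously failed.
gcloud beta compute shared-vpc associated-projects remove my-gke-project \
  --host-project my-host-project
terraform destroy
```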

## Variables

| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| billing_account_id | Billing account id used as default for new projects. | `string` | ✓ |  |
| prefix | Prefix used for resources that need unique names. | `string` | ✓ |  |
| root_node | Hierarchy node where projects will be created, 'organizations/org_id' or 'folders/folder_id'. | `string` | ✓ |  |
| ip_ranges | Subnet IP CIDR ranges. | `map(string)` |  | `...` |
| ip_secondary_ranges | Secondary IP CIDR ranges. | `map(string)` |  | `...` |
| owners_gce | GCE project owners, in IAM format. | `list(string)` |  | `[]` |
| owners_gke | GKE project owners, in IAM format. | `list(string)` |  | `[]` |
| owners_host | Host project owners, in IAM format. | `list(string)` |  | `[]` |
| private_service_ranges | Private service IP CIDR ranges. | `map(string)` |  | `...` |
| project_services | Service APIs enabled by default in new projects. | `list(string)` |  | `...` |
| region | Region used. | `string` |  | `europe-west1` |
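A minimal `terraform.tfvars` setting only the required variables might look like this; all values are illustrative placeholders, to be replaced with your billing account, hierarchy node and prefix:

```hcl
# terraform.tfvars -- illustrative values, substitute your own.
billing_account_id = "012345-6789AB-CDEF01"
root_node          = "folders/1234567890"
prefix             = "test"
```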

## Outputs

| name | description | sensitive |
|---|---|:---:|
| gke_clusters | GKE clusters information. |  |
| projects | Project ids. |  |
| service_accounts | GCE and GKE service accounts. |  |
| vms | GCE VMs. |  |
| vpc | Shared VPC. |  |