# GKE Multitenant Example
This example presents an opinionated architecture to handle multiple homogeneous GKE clusters. The general idea behind this example is to deploy a single project hosting multiple clusters, leveraging several useful GKE features.

The pattern used in this design is useful, for example, when multiple clusters host or support the same workloads, such as in a multi-regional deployment. Furthermore, combined with Anthos Config Sync and proper RBAC, this architecture can be used to host multiple tenants (e.g. teams, applications) sharing the clusters.

This example is used as part of the FAST GKE stage, but it can also be used independently if desired.
The overall architecture is based on the following design decisions:
- All clusters are assumed to be private, therefore only VPC-native clusters are supported.
- Logging and monitoring are configured to use Cloud Operations for system components and user workloads.
- GKE metering is enabled by default, with usage data stored in a BigQuery dataset created within the project.
- Optional GKE Fleet support with the possibility to enable any of the following features:
  - Support for Config Sync, Hierarchy Controller, and Policy Controller when using Anthos Config Management.
- Groups for GKE can be enabled to facilitate the creation of flexible RBAC policies referencing group principals.
- Support for application layer secret encryption.
- Support for customizing the peering configuration of the control plane VPC (e.g. to import/export routes to the peered network).
- Some features are enabled by default in all clusters; see the `cluster_defaults` variable for the full list of defaults.
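As a sketch of how this example might be instantiated as a module, the block below sets only the required variables from the table further down. The source path, ids, and names are illustrative assumptions, and the `clusters`, `nodepools`, and `vpc_config` values are elided since their full schemas live in `variables.tf`:

```hcl
# Hypothetical instantiation of this example; the source path and all
# values below are placeholders, not part of this example.
module "gke-multitenant" {
  source             = "./examples/gke-multitenant" # hypothetical path
  billing_account_id = "012345-6789AB-CDEF01"       # placeholder billing account
  folder_id          = "folders/1234567890"         # placeholder folder
  prefix             = "myco"
  project_id         = "my-gke-project"
  clusters           = {} # see the usage examples below
  nodepools          = {} # see the usage examples below
  # vpc_config is also required; its object schema is defined in variables.tf
}
```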
## Basic usage

The following example shows how to deploy a single cluster and a single node pool:
```hcl
clusters = {
  "mycluster" = {
    cluster_autoscaling = null
    description         = "mycluster"
    dns_domain          = null
    location            = "europe-west1"
    labels              = {}
    net = {
      master_range = "172.17.16.0/28"
      pods         = "pods"
      services     = "services"
      subnet       = "https://www.googleapis.com/compute/v1/projects/<MY_PROJECT>/regions/europe-west1/subnetworks/<MY_SUBNET>"
    }
    overrides = null
  }
}
nodepools = {
  "mycluster" = {
    "mynodepool" = {
      initial_node_count = 1
      node_count         = 1
      node_type          = "n2-standard-4"
      overrides          = null
      spot               = false
    }
  }
}
```
## Fleet configuration

Fleet functionality is controlled via the `fleet_features`, `fleet_workload_identity`, `fleet_configmanagement_templates`, and `fleet_configmanagement_clusters` variables described below; GKE Hub is enabled whenever fleet workload identity or any fleet feature is in use.

## Multi-tenant usage

This is an example that shows the use of the above variables:
```hcl
# the `cluster_defaults` variable defaults are used and not shown here
clusters = {
  "gke-00" = {
    cluster_autoscaling = null
    description         = "gke-00"
    dns_domain          = null
    location            = "europe-west1"
    labels              = {}
    net = {
      master_range = "172.17.16.0/28"
      pods         = "pods"
      services     = "services"
      subnet       = local.vpc.subnet_self_links["europe-west1/gke-dev-0"]
    }
    overrides = null
  }
  "gke-01" = {
    cluster_autoscaling = null
    description         = "gke-01"
    dns_domain          = null
    location            = "europe-west3"
    labels              = {}
    net = {
      master_range = "172.17.17.0/28"
      pods         = "pods"
      services     = "services"
      subnet       = local.vpc.subnet_self_links["europe-west3/gke-dev-0"]
    }
    overrides = {
      cloudrun_config                 = false
      database_encryption_key         = null
      gcp_filestore_csi_driver_config = true
      master_authorized_ranges = {
        rfc1918_1 = "10.0.0.0/8"
      }
      max_pods_per_node        = 64
      pod_security_policy      = true
      release_channel          = "STABLE"
      vertical_pod_autoscaling = false
    }
  }
}
nodepools = {
  "gke-00" = {
    "gke-00-000" = {
      initial_node_count = 1
      node_count         = 1
      node_type          = "n2-standard-4"
      overrides          = null
      spot               = false
    }
  }
  "gke-01" = {
    "gke-01-000" = {
      initial_node_count = 1
      node_count         = 1
      node_type          = "n2-standard-4"
      overrides = {
        image_type        = "UBUNTU_CONTAINERD"
        max_pods_per_node = 64
        node_locations    = []
        node_tags         = []
        node_taints       = []
      }
      spot = true
    }
  }
}
fleet_configmanagement_templates = {
  default = {
    binauthz = false
    config_sync = {
      git = {
        gcp_service_account_email = null
        https_proxy               = null
        policy_dir                = "configsync"
        secret_type               = "none"
        source_format             = "hierarchy"
        sync_branch               = "main"
        sync_repo                 = "https://github.com/.../..."
        sync_rev                  = null
        sync_wait_secs            = null
      }
      prevent_drift = true
      source_format = "hierarchy"
    }
    hierarchy_controller = null
    policy_controller    = null
    version              = "1.10.2"
  }
}
fleet_configmanagement_clusters = {
  default = ["gke-00", "gke-01"]
}
fleet_features = {
  appdevexperience             = false
  configmanagement             = false
  identityservice              = false
  multiclusteringress          = "gke-00"
  multiclusterservicediscovery = true
  servicemesh                  = false
}
```
## Files

| name | description | modules |
|---|---|---|
| gke-clusters.tf | None | gke-cluster |
| gke-hub.tf | None | gke-hub |
| gke-nodepools.tf | None | gke-nodepool |
| main.tf | Module-level locals and resources. | bigquery-dataset · project |
| outputs.tf | Output variables. | |
| variables.tf | Module variables. | |
## Variables

| name | description | type | required | default | producer |
|---|---|---|---|---|---|
| billing_account_id | Billing account id. | string | ✓ | | |
| clusters | | map(object({…})) | ✓ | | |
| folder_id | Folder used for the GKE project in folders/nnnnnnnnnnn format. | string | ✓ | | |
| nodepools | | map(map(object({…}))) | ✓ | | |
| prefix | Prefix used for resources that need unique names. | string | ✓ | | |
| project_id | ID of the project that will contain all the clusters. | string | ✓ | | |
| vpc_config | Shared VPC project and VPC details. | object({…}) | ✓ | | |
| authenticator_security_group | Optional group used for Groups for GKE. | string | | null | |
| cluster_defaults | Default values for optional cluster configurations. | object({…}) | | {…} | |
| dns_domain | Domain name used for clusters, prefixed by each cluster name. Leave null to disable Cloud DNS for GKE. | string | | null | |
| fleet_configmanagement_clusters | Config management features enabled on specific sets of member clusters, in config name => [cluster name] format. | map(list(string)) | | {} | |
| fleet_configmanagement_templates | Sets of config management configurations that can be applied to member clusters, in config name => {options} format. | map(object({…})) | | {} | |
| fleet_features | Enable and configure fleet features. Set to null to disable GKE Hub if fleet workload identity is not used. | object({…}) | | null | |
| fleet_workload_identity | Use Fleet Workload Identity for clusters. Enables GKE Hub if set to true. | bool | | true | |
| group_iam | Project-level IAM bindings for groups. Use group emails as keys, list of roles as values. | map(list(string)) | | {} | |
| iam | Project-level authoritative IAM bindings for users and service accounts in {ROLE => [MEMBERS]} format. | map(list(string)) | | {} | |
| labels | Project-level labels. | map(string) | | {} | |
| nodepool_defaults | | object({…}) | | {…} | |
| peering_config | Configure peering with the control plane VPC. Requires compute.networks.updatePeering. Set to null if you don't want to update the default peering configuration. | object({…}) | | {…} | |
| project_services | Additional project services to enable. | list(string) | | [] | |
## Outputs

| name | description | sensitive | consumers |
|---|---|---|---|
| cluster_ids | Cluster ids. | | |
| clusters | Cluster resources. | | |
| project_id | GKE project id. | | |
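Assuming the example is instantiated as a module (the `gke-multitenant` name below is hypothetical), the outputs above can be consumed from the calling configuration, e.g.:

```hcl
# Re-export the cluster ids from a hypothetical parent configuration.
output "dev_cluster_ids" {
  description = "IDs of the multitenant GKE clusters."
  value       = module.gke-multitenant.cluster_ids
}
```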