# Highly Available Kafka on GKE

## Introduction
This blueprint shows how to deploy a highly available Kafka instance on GKE using the Strimzi operator.
## Requirements
This blueprint assumes the GKE cluster already exists. We recommend using the accompanying Autopilot Cluster Pattern to deploy a cluster according to Google's best practices. Once you have the cluster up and running, you can use this blueprint to deploy Kafka in it.
The Kafka manifests reference container images hosted on external registries, which means the subnet where the GKE cluster is deployed needs Internet connectivity to download those images. If you're using the provided Autopilot Cluster Pattern, you can set the `enable_cloud_nat` option of the `vpc_create` variable.
### Cluster authentication
Once you have a cluster with Internet connectivity, create a `terraform.tfvars` file and set up the `credentials_config` variable. We recommend using Anthos Fleet to simplify accessing the control plane.
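If your cluster is registered to a fleet, `credentials_config` can point at the Connect Gateway instead of a local kubeconfig. This is a sketch: the `fleet_host` attribute name and the membership URL format are assumptions based on the accompanying cluster pattern, so adjust them to match your setup:

```hcl
# Authenticate through the Connect Gateway of a fleet-registered cluster.
# PROJECT_ID and CLUSTER_NAME are placeholders: replace them with your
# own project and membership name.
credentials_config = {
  fleet_host = "https://connectgateway.googleapis.com/v1/projects/PROJECT_ID/locations/global/gkeMemberships/CLUSTER_NAME"
}
```

A kubeconfig-based alternative is shown in the sample configuration below.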
## Kafka Configuration
This template exposes several variables to configure the Kafka instance:

- `namespace`, which controls the namespace used to deploy the Kafka instance
- `kafka_config`, to customize the configuration of the Kafka instance. The default configuration deploys version 3.6.0 with 3 replicas, a disk of 10Gi and 4096 MB of RAM
- `zookeeper_config`, to customize the configuration of the Zookeeper instance. The default configuration deploys 3 replicas, with a disk of 10Gi and 2048 MB of RAM
Any other configuration can be applied by directly modifying the YAML manifests under the `manifest-templates` directory.
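For example, to keep such customizations under version control you can copy the `manifest-templates` directory and point the blueprint at your copy via the `templates_path` variable (documented in the variables table below); the directory name here is just an illustrative example:

```hcl
# Read manifests from a local copy of manifest-templates instead of the
# built-in defaults. "./custom-manifests" is an example path; leave the
# variable at its default of null to use the bundled manifests.
templates_path = "./custom-manifests"
```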
## Sample Configuration
Use the following template as a starting point for your `terraform.tfvars`:
```hcl
credentials_config = {
  kubeconfig = {
    path = "~/.kube/config"
  }
}

kafka_config = {
  volume_claim_size = "15Gi"
  replicas          = 4
}

zookeeper_config = {
  volume_claim_size = "15Gi"
}
```
## Variables
| name | description | type | required | default |
|---|---|---|---|---|
| credentials_config | Configure how Terraform authenticates to the cluster. | object({…}) | ✓ | |
| kafka_config | Configure Kafka cluster statefulset parameters. | object({…}) | | {} |
| namespace | Namespace used for the Kafka cluster resources. | string | | "kafka" |
| templates_path | Path where manifest templates will be read from. Set to null to use the default manifests. | string | | null |
| zookeeper_config | Configure Zookeeper cluster statefulset parameters. | object({…}) | | {} |