# External Application Load Balancer Module

This module allows managing Global HTTP/HTTPS Classic Load Balancers (GLBs). It's designed to expose the full configuration of the underlying resources, and to facilitate common usage patterns by providing sensible defaults and optionally managing prerequisite resources like health checks, instance groups, etc.

Due to the complexity of the underlying resources, changes to the configuration that involve recreation of resources are best applied in stages, starting by disabling the configuration in the urlmap that references the resources that need recreation, then doing the same for the backend service, etc.

## Examples

- [Examples](#examples)
  - [Minimal HTTP Example](#minimal-http-example)
  - [Minimal HTTPS examples](#minimal-https-examples)
    - [HTTP backends](#http-backends)
    - [HTTPS backends](#https-backends)
    - [HTTP to HTTPS redirect](#http-to-https-redirect)
  - [Classic vs Non-classic](#classic-vs-non-classic)
  - [Health Checks](#health-checks)
  - [Backend Types and Management](#backend-types-and-management)
    - [Instance Groups](#instance-groups)
    - [Managed Instance Groups](#managed-instance-groups)
    - [Storage Buckets](#storage-buckets)
    - [Network Endpoint Groups (NEGs)](#network-endpoint-groups-negs)
    - [Zonal NEG creation](#zonal-neg-creation)
    - [Hybrid NEG creation](#hybrid-neg-creation)
    - [Internet NEG creation](#internet-neg-creation)
    - [Private Service Connect NEG creation](#private-service-connect-neg-creation)
    - [Serverless NEG creation](#serverless-neg-creation)
  - [URL Map](#url-map)
  - [SSL Certificates](#ssl-certificates)
  - [Complex example](#complex-example)
- [Files](#files)
- [Variables](#variables)
- [Outputs](#outputs)
- [Fixtures](#fixtures)

### Minimal HTTP Example

An HTTP load balancer with a backend service pointing to a GCE instance group:

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends = [
        { backend = module.compute-vm-group-b.group.id },
        { backend = module.compute-vm-group-c.group.id },
      ]
    }
  }
}
# tftest modules=3 resources=9 fixtures=fixtures/compute-vm-group-bc.tf inventory=minimal-http.yaml e2e
```

### Minimal HTTPS examples

#### HTTP backends

An HTTPS load balancer needs a certificate, and its backends can be HTTP or HTTPS. This is an example with HTTP backends and a managed certificate:

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends = [
        { backend = module.compute-vm-group-b.group.id },
        { backend = module.compute-vm-group-c.group.id },
      ]
      protocol = "HTTP"
    }
  }
  protocol = "HTTPS"
  ssl_certificates = {
    managed_configs = {
      default = {
        domains = ["glb-test-0.example.org"]
      }
    }
  }
}
# tftest modules=3 resources=10 fixtures=fixtures/compute-vm-group-bc.tf inventory=http-backends.yaml e2e
```

#### HTTPS backends

For HTTPS backends the backend service protocol needs to be set to `HTTPS`. If the port name is omitted it is inferred from the protocol, in this case `https`. The health check also needs to use HTTPS.
This is a complete example:

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends = [
        { backend = module.compute-vm-group-b.group.id },
        { backend = module.compute-vm-group-c.group.id },
      ]
      protocol = "HTTPS"
    }
  }
  health_check_configs = {
    default = {
      https = { port_specification = "USE_SERVING_PORT" }
    }
  }
  protocol = "HTTPS"
  ssl_certificates = {
    managed_configs = {
      default = {
        domains = ["glb-test-0.example.org"]
      }
    }
  }
}
# tftest modules=3 resources=10 fixtures=fixtures/compute-vm-group-bc.tf inventory=https-backends.yaml e2e
```

#### HTTP to HTTPS redirect

Redirect is implemented via an additional HTTP load balancer with a custom URL map, similar to how it's done via the GCP Console. The address shared by the two load balancers needs to be reserved.

```hcl
module "addresses" {
  source     = "./fabric/modules/net-address"
  project_id = var.project_id
  global_addresses = {
    "glb-test-0" = {}
  }
}

module "glb-test-0-redirect" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0-redirect"
  address = (
    module.addresses.global_addresses["glb-test-0"].address
  )
  health_check_configs = {}
  urlmap_config = {
    description = "URL redirect for glb-test-0."
    default_url_redirect = {
      https         = true
      response_code = "MOVED_PERMANENTLY_DEFAULT"
    }
  }
}

module "glb-test-0" {
  source              = "./fabric/modules/net-lb-app-ext"
  project_id          = var.project_id
  name                = "glb-test-0"
  use_classic_version = false
  address = (
    module.addresses.global_addresses["glb-test-0"].address
  )
  backend_service_configs = {
    default = {
      backends = [
        { backend = module.compute-vm-group-b.group.id },
      ]
      protocol = "HTTP"
    }
  }
  protocol = "HTTPS"
  ssl_certificates = {
    managed_configs = {
      default = {
        domains = ["glb-test.example.com"]
      }
    }
  }
}
# tftest modules=5 resources=14 fixtures=fixtures/compute-vm-group-bc.tf inventory=http-https-redirect.yaml e2e
```

### Classic vs Non-classic

The module uses a classic Global Load Balancer by default. To use the non-classic version set the `use_classic_version` variable to `false`, as in the following example. Note that the module does not enforce feature compatibility between the two versions:

```hcl
module "glb-0" {
  source              = "./fabric/modules/net-lb-app-ext"
  project_id          = var.project_id
  name                = "glb-test-0"
  use_classic_version = false
  backend_service_configs = {
    default = {
      backends = [
        { backend = module.compute-vm-group-b.group.id },
        { backend = module.compute-vm-group-c.group.id },
      ]
    }
  }
}
# tftest modules=3 resources=9 fixtures=fixtures/compute-vm-group-bc.tf inventory=classic-vs-non-classic.yaml e2e
```

### Health Checks

You can leverage externally defined health checks for backend services, or have the module create them for you.

By default a simple HTTP health check named `default` is created and used in backend services. If you need to override the default, simply define your own health check using the same key (`default`). For more complex configurations you can define your own health checks and reference them via keys in the backend service configurations.

Health checks created by this module are controlled via the `health_check_configs` variable, which behaves in a similar way to other LB modules in this repository.
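
As a rough sketch of what that implicit default amounts to, declaring it explicitly would look like the fragment below (illustrative only; the authoritative defaults live in `variables-health-check.tf`):

```hcl
# illustrative approximation of the implicit default health check; refer to
# variables-health-check.tf in this module for the authoritative defaults
health_check_configs = {
  default = {
    http = {
      port_specification = "USE_SERVING_PORT"
    }
  }
}
```
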
This is an example that overrides the default health check configuration using a TCP health check:

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends = [{ backend = module.compute-vm-group-b.group.id }]
      # no need to reference the hc explicitly when using the `default` key
      # health_checks = ["default"]
    }
  }
  health_check_configs = {
    default = {
      tcp = { port = 80 }
    }
  }
}
# tftest modules=3 resources=9 fixtures=fixtures/compute-vm-group-bc.tf inventory=health-check-1.yaml e2e
```

To leverage existing health checks without having the module create them, simply pass their self links to backend services and set the `health_check_configs` variable to an empty map:

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends      = [{ backend = module.compute-vm-group-b.group.id }]
      health_checks = ["projects/${var.project_id}/global/healthChecks/custom"]
    }
  }
  health_check_configs = {}
}
# tftest modules=3 resources=8 fixtures=fixtures/compute-vm-group-bc.tf inventory=health-check-2.yaml
```

### Backend Types and Management

#### Instance Groups

The module can optionally create unmanaged instance groups, which can then be referred to in backends via their key. This is the simple HTTP example above, but with instance group creation managed by the module:

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends = [
        { backend = "default-b" }
      ]
    }
  }
  group_configs = {
    default-b = {
      zone = "${var.region}-b"
      instances = [
        module.compute-vm-group-b.id
      ]
      named_ports = { http = 80 }
    }
  }
}
# tftest modules=3 resources=10 fixtures=fixtures/compute-vm-group-bc.tf inventory=instance-groups.yaml e2e
```

#### Managed Instance Groups

This example shows how to use the module with a managed instance group as a backend:

```hcl
module "win-template" {
  source          = "./fabric/modules/compute-vm"
  project_id      = var.project_id
  zone            = "${var.region}-a"
  name            = "win-template"
  instance_type   = "n2d-standard-2"
  create_template = true
  boot_disk = {
    initialize_params = {
      image = "projects/windows-cloud/global/images/windows-server-2019-dc-v20221214"
      size  = 70
    }
  }
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
}

module "win-mig" {
  source            = "./fabric/modules/compute-mig"
  project_id        = var.project_id
  location          = "${var.region}-a"
  name              = "win-mig"
  instance_template = module.win-template.template.self_link
  autoscaler_config = {
    max_replicas    = 3
    min_replicas    = 1
    cooldown_period = 30
    scaling_signals = {
      cpu_utilization = {
        target = 0.80
      }
    }
  }
  named_ports = {
    http = 80
  }
}

module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends = [
        { backend = module.win-mig.group_manager.instance_group }
      ]
    }
  }
}
# tftest modules=3 resources=8 inventory=managed-instance-groups.yaml e2e
```

#### Storage Buckets

GCS bucket backends can also be managed and used in this module in a similar way to regular backend services. Multiple GCS bucket backends can be defined and referenced in URL maps by their keys (or self links if defined externally) together with regular backend services, [an example is provided later in this document](#complex-example).
This is a simple example that defines a GCS backend as the default for the URL map:

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_buckets_config = {
    default = {
      bucket_name = var.bucket
    }
  }
  # with a single GCS backend the implied default health check is not needed
  health_check_configs = {}
}
# tftest modules=1 resources=4 inventory=storage.yaml e2e
```

#### Network Endpoint Groups (NEGs)

Supported Network Endpoint Groups (NEGs) can also be used as backends. Similarly to groups, you can pass the self link of an existing NEG or have the module manage NEGs for you. A simple example where the NEG is managed by the module and referenced in the backend via its key:

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends = [
        {
          backend        = "myneg-b"
          balancing_mode = "RATE"
          max_rate       = { per_endpoint = 10 }
        }
      ]
    }
  }
  neg_configs = {
    myneg-b = {
      hybrid = {
        network    = var.vpc.self_link
        subnetwork = var.subnet.self_link
        zone       = "${var.region}-b"
        endpoints  = {}
      }
    }
  }
}
# tftest modules=1 resources=6 inventory=network-endpoint-groups.yaml e2e
```

#### Zonal NEG creation

This example shows how to create and manage zonal NEGs using GCE VMs as endpoints:

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends = [
        {
          backend        = "neg-0"
          balancing_mode = "RATE"
          max_rate       = { per_endpoint = 10 }
        }
      ]
    }
  }
  neg_configs = {
    neg-0 = {
      gce = {
        network    = var.vpc.self_link
        subnetwork = var.subnet.self_link
        zone       = "${var.region}-b"
        endpoints = {
          e-0 = {
            instance   = "my-ig-b"
            ip_address = module.compute-vm-group-b.internal_ip
            port       = 80
          }
        }
      }
    }
  }
}
# tftest modules=3 resources=11 fixtures=fixtures/compute-vm-group-bc.tf inventory=zonal-neg-creation.yaml e2e
```

#### Hybrid NEG creation

This example shows how to create and manage hybrid NEGs:

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends = [
        {
          backend        = "neg-0"
          balancing_mode = "RATE"
          max_rate       = { per_endpoint = 10 }
        }
      ]
    }
  }
  neg_configs = {
    neg-0 = {
      hybrid = {
        network = var.vpc.self_link
        zone    = "${var.region}-b"
        endpoints = {
          e-0 = {
            ip_address = "10.0.0.10"
            port       = 80
          }
        }
      }
    }
  }
}
# tftest modules=1 resources=7 inventory=hybrid-neg.yaml e2e
```

#### Internet NEG creation

This example shows how to create and manage internet NEGs:

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends = [
        { backend = "neg-0" }
      ]
      health_checks = []
    }
  }
  # with a single internet NEG the implied default health check is not needed
  health_check_configs = {}
  neg_configs = {
    neg-0 = {
      internet = {
        use_fqdn = true
        endpoints = {
          e-0 = {
            destination = "www.example.org"
            port        = 80
          }
        }
      }
    }
  }
}
# tftest modules=1 resources=6 inventory=internet-neg.yaml e2e
```

#### Private Service Connect NEG creation

The module supports managing PSC NEGs if the non-classic version of the load balancer is used:

```hcl
module "glb-0" {
  source              = "./fabric/modules/net-lb-app-ext"
  project_id          = var.project_id
  name                = "glb-test-0"
  use_classic_version = false
  backend_service_configs = {
    default = {
      backends = [
        { backend = "neg-0" }
      ]
      health_checks = []
    }
  }
  # with a single PSC NEG the implied default health check is not needed
  health_check_configs = {}
  neg_configs = {
    neg-0 = {
      psc = {
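        # the PSC target here is a regional Google APIs service endpoint; a
        # producer service attachment URI can also be used as the target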
        region         = var.region
        target_service = "${var.region}-cloudkms.googleapis.com"
      }
    }
  }
}
# tftest modules=1 resources=5
```

#### Serverless NEG creation

The module supports managing serverless NEGs for Cloud Run and Cloud Functions. This is an example of a Cloud Run NEG:

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends = [
        { backend = "neg-0" }
      ]
      health_checks = []
    }
  }
  # with a single serverless NEG the implied default health check is not needed
  health_check_configs = {}
  neg_configs = {
    neg-0 = {
      cloudrun = {
        region = var.region
        target_service = {
          name = "hello"
        }
      }
    }
  }
}
# tftest modules=1 resources=5 inventory=serverless-neg.yaml e2e
```

Serverless NEGs don't use the port name, but it should still be set to `http`. An HTTPS frontend requires the load balancer protocol to be set to `HTTPS`; since an omitted port name would be inferred from that protocol, you need to set it explicitly:

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends = [
        { backend = "neg-0" }
      ]
      health_checks = []
      port_name     = "http"
    }
  }
  # with a single serverless NEG the implied default health check is not needed
  health_check_configs = {}
  neg_configs = {
    neg-0 = {
      cloudrun = {
        region = var.region
        target_service = {
          name = "hello"
        }
      }
    }
  }
  protocol = "HTTPS"
  ssl_certificates = {
    managed_configs = {
      default = {
        domains = ["glb-test-0.example.org"]
      }
    }
  }
}
# tftest modules=1 resources=6 inventory=https-sneg.yaml e2e
```

### URL Map

The module exposes the full URL map resource configuration, with some minor changes to the interface to decrease verbosity, and support for aliasing backend services via keys.

The default URL map configuration sets the `default` backend service as the default service for the load balancer as a convenience. Just override the `urlmap_config` variable to change the default behaviour:

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends = [{ backend = module.compute-vm-group-b.group.id }]
    }
    other = {
      backends = [{ backend = module.compute-vm-group-c.group.id }]
    }
  }
  urlmap_config = {
    default_service = "default"
    host_rules = [{
      hosts        = ["*"]
      path_matcher = "pathmap"
    }]
    path_matchers = {
      pathmap = {
        default_service = "default"
        path_rules = [{
          paths   = ["/other", "/other/*"]
          service = "other"
        }]
      }
    }
  }
}
# tftest modules=3 resources=10 fixtures=fixtures/compute-vm-group-bc.tf inventory=url-map.yaml e2e
```

### SSL Certificates

The module also allows managing Google-managed and self-managed SSL certificates via the `ssl_certificates` variable. Any certificate defined there will be added to the HTTPS proxy resource.

The [HTTPS example above](#minimal-https-examples) shows how to configure managed certificates; the following example shows how to use an unmanaged (or self-managed) certificate. The example uses Terraform resources for the key and certificate so that we don't depend on external files when running tests; in real use, the key and certificate are generally provided via external files read with the Terraform `file()` function.
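
When the certificate material lives on disk, the `create_configs` entry would typically look like the fragment below (the file paths are hypothetical):

```hcl
# hypothetical file paths, shown only to illustrate the file() variant
ssl_certificates = {
  create_configs = {
    default = {
      certificate = file("${path.module}/certs/glb-test-0.crt")
      private_key = file("${path.module}/certs/glb-test-0.key")
    }
  }
}
```

The self-contained example below instead generates the key and certificate with the `tls` provider, so the test does not depend on external files:
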
```hcl
resource "tls_private_key" "default" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "tls_self_signed_cert" "default" {
  private_key_pem = tls_private_key.default.private_key_pem
  subject {
    common_name  = "example.com"
    organization = "ACME Examples, Inc"
  }
  validity_period_hours = 720
  allowed_uses = [
    "key_encipherment",
    "digital_signature",
    "server_auth",
  ]
}

module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_service_configs = {
    default = {
      backends = [
        { backend = module.compute-vm-group-b.group.id },
        { backend = module.compute-vm-group-c.group.id },
      ]
      protocol = "HTTP"
    }
  }
  protocol = "HTTPS"
  ssl_certificates = {
    create_configs = {
      default = {
        # certificate and key could also be read via file() from external files
        certificate = tls_self_signed_cert.default.cert_pem
        private_key = tls_private_key.default.private_key_pem
      }
    }
  }
}
# tftest modules=3 resources=12 fixtures=fixtures/compute-vm-group-bc.tf inventory=ssl-certificates.yaml e2e
```

### Complex example

This example mixes group and NEG backends, and shows how to set HTTPS for specific backends.

```hcl
module "glb-0" {
  source     = "./fabric/modules/net-lb-app-ext"
  project_id = var.project_id
  name       = "glb-test-0"
  backend_buckets_config = {
    gcs-0 = {
      bucket_name = var.bucket
    }
  }
  backend_service_configs = {
    default = {
      backends = [
        { backend = "group-zone-b" },
        { backend = "group-zone-c" },
      ]
    }
    neg-gce-0 = {
      backends = [{
        balancing_mode = "RATE"
        backend        = "neg-zone-c"
        max_rate       = { per_endpoint = 10 }
      }]
    }
    neg-hybrid-0 = {
      backends = [{
        balancing_mode = "RATE"
        backend        = "neg-hello"
        max_rate       = { per_endpoint = 10 }
      }]
      health_checks = ["neg"]
      protocol      = "HTTPS"
    }
  }
  group_configs = {
    group-zone-b = {
      zone = "${var.region}-b"
      instances = [
        module.compute-vm-group-b.id
      ]
      named_ports = { http = 80 }
    }
    group-zone-c = {
      zone = "${var.region}-c"
      instances = [
        module.compute-vm-group-c.id
      ]
      named_ports = { http = 80 }
    }
  }
  health_check_configs = {
    default = {
      http = { port = 80 }
    }
    neg = {
      https = {
        host = "hello.example.com"
        port = 443
      }
    }
  }
  neg_configs = {
    neg-zone-c = {
      gce = {
        network    = var.vpc.self_link
        subnetwork = var.subnet.self_link
        zone       = "${var.region}-c"
        endpoints = {
          e-0 = {
            instance   = "my-ig-c"
            ip_address = module.compute-vm-group-c.internal_ip
            port       = 80
          }
        }
      }
    }
    neg-hello = {
      hybrid = {
        network = var.vpc.self_link
        zone    = "${var.region}-b"
        endpoints = {
          e-0 = {
            ip_address = "192.168.0.3"
            port       = 443
          }
        }
      }
    }
  }
  urlmap_config = {
    default_service = "default"
    host_rules = [
      {
        hosts        = ["*"]
        path_matcher = "gce"
      },
      {
        hosts        = ["hello.example.com"]
        path_matcher = "hello"
      },
      {
        hosts        = ["static.example.com"]
        path_matcher = "static"
      }
    ]
    path_matchers = {
      gce = {
        default_service = "default"
        path_rules = [
          {
            paths   = ["/gce-neg", "/gce-neg/*"]
            service = "neg-gce-0"
          }
        ]
      }
      hello = {
        default_service = "neg-hybrid-0"
      }
      static = {
        default_service = "gcs-0"
      }
    }
  }
}
# tftest modules=3 resources=19 fixtures=fixtures/compute-vm-group-bc.tf inventory=complex-example.yaml e2e
```

## Files

| name | description | resources |
|---|---|---|
| [backend-service.tf](./backend-service.tf) | Backend service resources. | google_compute_backend_service |
| [backends.tf](./backends.tf) | Backend groups and backend buckets resources. | google_compute_backend_bucket |
| [groups.tf](./groups.tf) | None | google_compute_instance_group |
| [health-check.tf](./health-check.tf) | Health check resource. | google_compute_health_check |
| [main.tf](./main.tf) | Module-level locals and resources. | google_compute_global_forwarding_rule · google_compute_managed_ssl_certificate · google_compute_ssl_certificate · google_compute_target_http_proxy · google_compute_target_https_proxy |
| [negs.tf](./negs.tf) | NEG resources. | google_compute_global_network_endpoint · google_compute_global_network_endpoint_group · google_compute_network_endpoint · google_compute_network_endpoint_group · google_compute_region_network_endpoint_group |
| [outputs.tf](./outputs.tf) | Module outputs. | |
| [urlmap.tf](./urlmap.tf) | URL map resources. | google_compute_url_map |
| [variables-backend-service.tf](./variables-backend-service.tf) | Backend services variables. | |
| [variables-health-check.tf](./variables-health-check.tf) | Health check variable. | |
| [variables-urlmap.tf](./variables-urlmap.tf) | URLmap variable. | |
| [variables.tf](./variables.tf) | Module variables. | |
| [versions.tf](./versions.tf) | Version pins. | |

## Variables

| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [name](variables.tf#L92) | Load balancer name. | string | ✓ | |
| [project_id](variables.tf#L194) | Project id. | string | ✓ | |
| [address](variables.tf#L17) | Optional IP address used for the forwarding rule. | string | | null |
| [backend_buckets_config](variables.tf#L23) | Backend buckets configuration. | map(object({…})) | | {} |
| [backend_service_configs](variables-backend-service.tf#L19) | Backend service level configuration. | map(object({…})) | | {} |
| [description](variables.tf#L56) | Optional description used for resources. | string | | "Terraform managed." |
| [group_configs](variables.tf#L62) | Optional unmanaged groups to create. Can be referenced in backends via key or outputs. | map(object({…})) | | {} |
| [health_check_configs](variables-health-check.tf#L19) | Optional auto-created health check configurations, use the output self-link to set it in the auto healing policy. Refer to examples for usage. | map(object({…})) | | {…} |
| [https_proxy_config](variables.tf#L74) | HTTPS proxy configuration. | object({…}) | | {} |
| [labels](variables.tf#L86) | Labels set on resources. | map(string) | | {} |
| [neg_configs](variables.tf#L97) | Optional network endpoint groups to create. Can be referenced in backends via key or outputs. | map(object({…})) | | {} |
| [ports](variables.tf#L188) | Optional ports for HTTP load balancer, valid ports are 80 and 8080. | list(string) | | null |
| [protocol](variables.tf#L199) | Protocol supported by this load balancer. | string | | "HTTP" |
| [ssl_certificates](variables.tf#L212) | SSL target proxy certificates (only if protocol is HTTPS) for existing, custom, and managed certificates. | object({…}) | | {} |
| [urlmap_config](variables-urlmap.tf#L19) | The URL map configuration. | object({…}) | | {…} |
| [use_classic_version](variables.tf#L229) | Use classic Global Load Balancer. | bool | | true |

## Outputs

| name | description | sensitive |
|---|---|:---:|
| [address](outputs.tf#L17) | Forwarding rule address. | |
| [backend_service_ids](outputs.tf#L22) | Backend service resources. | |
| [backend_service_names](outputs.tf#L29) | Backend service resource names. | |
| [forwarding_rule](outputs.tf#L36) | Forwarding rule resource. | |
| [global_neg_ids](outputs.tf#L41) | Autogenerated global network endpoint group ids. | |
| [group_ids](outputs.tf#L48) | Autogenerated instance group ids. | |
| [health_check_ids](outputs.tf#L55) | Autogenerated health check ids. | |
| [id](outputs.tf#L62) | Fully qualified forwarding rule id. | |
| [neg_ids](outputs.tf#L67) | Autogenerated network endpoint group ids. | |
| [psc_neg_ids](outputs.tf#L74) | Autogenerated PSC network endpoint group ids. | |
| [serverless_neg_ids](outputs.tf#L81) | Autogenerated serverless network endpoint group ids. | |

## Fixtures

- [compute-vm-group-bc.tf](../../tests/fixtures/compute-vm-group-bc.tf)