# F5 BigIP-VE HA active-active blueprint

This blueprint creates active/active private and/or public F5 BigIP-VE load balancers.

*Networking diagram*

## Design notes

- The blueprint supports two VPCs by default: a `dataplane` network and a `management` network.
- We don't use the `F5 Cloud Failover Extension (CFE)`: it would imply an active/passive architecture, limit the number of instances to two, rely on static routes, and require the F5 VMs' service accounts to have roles that let them configure routes.
- Instead, users can deploy as many active instances as they need, and we make them reachable through passthrough GCP load balancers.
- The blueprint lets you expose the F5 instances both externally and internally, using external and internal network passthrough load balancers. You can also expose the same F5 instances both externally and internally at the same time.
- The blueprint supports dual-stack (IPv4/IPv6).
- We deliberately use the original F5-BigIP `startup-script.tpl` file. We haven't changed it and we pass it the same variables, so it should be easier to swap it with custom scripts (copyright reported in the template file and further down in this readme).

## Access the F5 machines through IAP tunnels

F5 management IPs are private. If you haven't set up any hybrid connectivity (i.e. VPN/Interconnect) you can still access the VMs over SSH and reach their GUI by leveraging IAP tunnels. For example, you can first establish a tunnel:

```shell
gcloud compute ssh YOUR_F5_VM_NAME \
  --project YOUR_PROJECT \
  --zone europe-west8-a -- \
  -L 4431:127.0.0.1:8443 \
  -L 221:127.0.0.1:22 \
  -N -q -f
```

And then connect to:

- SSH: `127.0.0.1`, port `221`
- GUI: `127.0.0.1`, port `4431`

The default username is `admin` and the password is `MyFabricSecret123!`.

## F5 configuration

You won't be able to pass traffic through the F5 load balancers until you perform some further configuration. We hope to automate these configuration steps soon. **Contributions are welcome!**

- Disable traffic-group and (optionally) configure config-sync.
- Configure the secondary IP range addresses assigned to each machine as self IPs on each F5. These need to be self IPs, as opposed to NAT pools, so they will be different for each instance even if config sync is active.
- Enable `automap`, so that traffic is source-NATted using the configured self IPs before going to the backends.
- Create as many `virtual servers`/`irules` as you need, so you can match incoming traffic and redirect it to the backends.
- By default, Google load balancers' health checks query the F5 VMs on port `65535` from a set of [well-known IPs](https://cloud.google.com/load-balancing/docs/health-check-concepts#ip-ranges). We recommend creating a dedicated virtual server that answers on port `65535`. You can redirect the connection to the loopback interface.

## Examples

- [Design notes](#design-notes)
- [Access the F5 machines through IAP tunnels](#access-the-f5-machines-through-iap-tunnels)
- [F5 configuration](#f5-configuration)
- [Examples](#examples)
  - [Single instance](#single-instance)
  - [Active/active instances](#activeactive-instances)
  - [Change the shared instances configuration](#change-the-shared-instances-configuration)
  - [Public load F5 load balancers](#public-load-f5-load-balancers)
  - [Multiple forwarding rules and dual-stack (IPv4/IPv6)](#multiple-forwarding-rules-and-dual-stack-ipv4ipv6)
  - [Use the GCP secret manager](#use-the-gcp-secret-manager)
- [F5 code copyright](#f5-code-copyright)
- [Variables](#variables)
- [Outputs](#outputs)

### Single instance

The blueprint deploys one or more instances in a region. By default, these instances are behind an internal network passthrough (`L3_DEFAULT`) load balancer.
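The examples in this section reference pre-existing `dataplane` and `management` VPCs and subnets. As a hedged sketch of those prerequisites (resource names, CIDR ranges, and the secondary range are illustrative assumptions, not prescribed by the blueprint), they could be created with plain google provider resources:

```hcl
# Hypothetical prerequisite networks for the examples below.
# Names and CIDR ranges are illustrative.
resource "google_compute_network" "dataplane" {
  project                 = "my-project"
  name                    = "dataplane"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "dataplane" {
  project       = "my-project"
  name          = "dataplane"
  region        = "europe-west1"
  network       = google_compute_network.dataplane.id
  ip_cidr_range = "10.0.0.0/24"
  # Secondary range mirroring the alias_ip_range_name used in the examples.
  secondary_ip_range {
    range_name    = "ip-range-a"
    ip_cidr_range = "192.168.1.0/24"
  }
}

resource "google_compute_network" "management" {
  project                 = "my-project"
  name                    = "management"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "management" {
  project       = "my-project"
  name          = "management"
  region        = "europe-west1"
  network       = google_compute_network.management.id
  ip_cidr_range = "10.0.1.0/24"
}
```

One secondary range per F5 instance is needed on the dataplane subnet; adjust names and ranges to your environment.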
```hcl
module "f5-lb" {
  source     = "./fabric/blueprints/third-party-solutions/f5-bigip/f5-bigip-ha-active"
  project_id = "my-project"
  prefix     = "test"
  region     = "europe-west1"
  instance_dedicated_configs = {
    a = {
      license_key = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEEEE"
      network_config = {
        alias_ip_range_address = "192.168.1.0/24"
        alias_ip_range_name    = "ip-range-a"
      }
    }
  }
  vpc_config = {
    dataplane = {
      network    = "projects/my-project/global/networks/dataplane"
      subnetwork = "projects/my-project/regions/europe-west1/subnetworks/dataplane"
    }
    management = {
      network    = "projects/my-project/global/networks/management"
      subnetwork = "projects/my-project/regions/europe-west1/subnetworks/management"
    }
  }
}
# tftest modules=6 resources=8 inventory=single-instance.yaml
```

### Active/active instances

To add more than one instance, add items to the `instance_dedicated_configs` variable. Keys specify the zones where the instances are deployed.

```hcl
module "f5-lb" {
  source     = "./fabric/blueprints/third-party-solutions/f5-bigip/f5-bigip-ha-active"
  project_id = "my-project"
  prefix     = "test"
  region     = "europe-west1"
  instance_dedicated_configs = {
    a = {
      license_key = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEEEE"
      network_config = {
        alias_ip_range_address = "192.168.1.0/24"
        alias_ip_range_name    = "ip-range-a"
      }
    }
    b = {
      license_key = "XXXXX-YYYYY-WWWWW-ZZZZZ-PPPPPP"
      network_config = {
        alias_ip_range_address = "192.168.2.0/24"
        alias_ip_range_name    = "ip-range-b"
      }
    }
  }
  vpc_config = {
    dataplane = {
      network    = "projects/my-project/global/networks/dataplane"
      subnetwork = "projects/my-project/regions/europe-west1/subnetworks/dataplane"
    }
    management = {
      network    = "projects/my-project/global/networks/management"
      subnetwork = "projects/my-project/regions/europe-west1/subnetworks/management"
    }
  }
}
# tftest modules=7 resources=12 inventory=active-active-instances.yaml
```

### Change the shared instances configuration

You can change one or more properties shared by all instances, leveraging the `instance_shared_config` variable.

```hcl
module "f5-lb" {
  source     = "./fabric/blueprints/third-party-solutions/f5-bigip/f5-bigip-ha-active"
  project_id = "my-project"
  prefix     = "test"
  region     = "europe-west1"
  instance_dedicated_configs = {
    a = {
      license_key = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEEEE"
      network_config = {
        alias_ip_range_address = "192.168.1.0/24"
        alias_ip_range_name    = "ip-range-a"
      }
    }
    b = {
      license_key = "XXXXX-YYYYY-WWWWW-ZZZZZ-PPPPPP"
      network_config = {
        alias_ip_range_address = "192.168.2.0/24"
        alias_ip_range_name    = "ip-range-b"
      }
    }
  }
  instance_shared_config = {
    boot_disk = {
      size = 150
    }
    instance_type = "n2-standard-8"
    tags          = ["f5-lbs"]
    username      = "f5admin"
  }
  vpc_config = {
    dataplane = {
      network    = "projects/my-project/global/networks/dataplane"
      subnetwork = "projects/my-project/regions/europe-west1/subnetworks/dataplane"
    }
    management = {
      network    = "projects/my-project/global/networks/management"
      subnetwork = "projects/my-project/regions/europe-west1/subnetworks/management"
    }
  }
}
# tftest modules=7 resources=12 inventory=shared-config.yaml
```

### Public load F5 load balancers

You can configure the blueprint to deploy external network passthrough load balancers, so you can expose your F5 load balancer(s) on the Internet.
```hcl
module "f5-lb" {
  source     = "./fabric/blueprints/third-party-solutions/f5-bigip/f5-bigip-ha-active"
  project_id = "my-project"
  prefix     = "test"
  region     = "europe-west1"
  instance_dedicated_configs = {
    a = {
      license_key = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEEEE"
      network_config = {
        alias_ip_range_address = "192.168.1.0/24"
        alias_ip_range_name    = "ip-range-a"
      }
    }
    b = {
      license_key = "XXXXX-YYYYY-WWWWW-ZZZZZ-PPPPPP"
      network_config = {
        alias_ip_range_address = "192.168.2.0/24"
        alias_ip_range_name    = "ip-range-b"
      }
    }
  }
  forwarding_rules_config = {
    "ext-ipv4" = {
      external = true
    }
  }
  vpc_config = {
    dataplane = {
      network    = "projects/my-project/global/networks/dataplane"
      subnetwork = "projects/my-project/regions/europe-west1/subnetworks/dataplane"
    }
    management = {
      network    = "projects/my-project/global/networks/management"
      subnetwork = "projects/my-project/regions/europe-west1/subnetworks/management"
    }
  }
}
# tftest modules=7 resources=12 inventory=public-load-balancers.yaml
```

### Multiple forwarding rules and dual-stack (IPv4/IPv6)

You can configure the blueprint to expose both internal and external load balancers. Each load balancer can have multiple forwarding rules, possibly both IPv4 and IPv6.
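The dual-stack example below references an `ipv6_external` subnetwork for the external IPv6 forwarding rule. As a hedged sketch of such a prerequisite (names and ranges here are illustrative assumptions, not prescribed by the blueprint), the subnet could be created like this:

```hcl
# Hypothetical dual-stack subnet used by external IPv6 forwarding rules.
# Names and CIDR ranges are illustrative.
resource "google_compute_subnetwork" "ipv6_external" {
  project          = "my-project"
  name             = "ipv6-external" # GCP subnet names may not contain underscores
  region           = "europe-west1"
  network          = "projects/my-project/global/networks/dataplane"
  ip_cidr_range    = "10.0.2.0/24"
  stack_type       = "IPV4_IPV6"
  ipv6_access_type = "EXTERNAL"
}
```

`ipv6_access_type = "EXTERNAL"` makes GCP assign an externally routable IPv6 range to the subnet, which the external IPv6 forwarding rule draws from.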
```hcl
module "f5-lb" {
  source     = "./fabric/blueprints/third-party-solutions/f5-bigip/f5-bigip-ha-active"
  project_id = "my-project"
  prefix     = "test"
  region     = "europe-west1"
  instance_dedicated_configs = {
    a = {
      license_key = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEEEE"
      network_config = {
        alias_ip_range_address = "192.168.1.0/24"
        alias_ip_range_name    = "ip-range-a"
      }
    }
    b = {
      license_key = "XXXXX-YYYYY-WWWWW-ZZZZZ-PPPPPP"
      network_config = {
        alias_ip_range_address = "192.168.2.0/24"
        alias_ip_range_name    = "ip-range-b"
      }
    }
  }
  forwarding_rules_config = {
    "ext-ipv4" = {
      external = true
    }
    "ext-ipv6" = {
      external   = true
      ip_version = "IPV6"
      subnetwork = "projects/my-project/regions/europe-west1/subnetworks/ipv6_external"
    }
    "int-ipv4" = {}
    "int-ipv6" = {
      ip_version = "IPV6"
    }
  }
  vpc_config = {
    dataplane = {
      network    = "projects/my-project/global/networks/dataplane"
      subnetwork = "projects/my-project/regions/europe-west1/subnetworks/dataplane"
    }
    management = {
      network    = "projects/my-project/global/networks/management"
      subnetwork = "projects/my-project/regions/europe-west1/subnetworks/management"
    }
  }
}
# tftest modules=8 resources=20 inventory=multiple-fw-rules.yaml
```

### Use the GCP secret manager

By default, this blueprint (and the `startup-script.tpl`) stores the F5 admin password in plain text as metadata of the F5 VMs. Most administrators change this password in F5 soon after boot. This example shows how to leverage the GCP Secret Manager instead.

```hcl
module "f5-lb" {
  source     = "./fabric/blueprints/third-party-solutions/f5-bigip/f5-bigip-ha-active"
  project_id = "my-project"
  prefix     = "test"
  region     = "europe-west1"
  instance_shared_config = {
    secret = {
      is_gcp = true
      value  = "MyNewFabricSecret123!" # needs to be defined in the same project
    }
  }
  instance_dedicated_configs = {
    a = {
      license_key = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEEEE"
      network_config = {
        alias_ip_range_address = "192.168.1.0/24"
        alias_ip_range_name    = "ip-range-a"
      }
    }
  }
  vpc_config = {
    dataplane = {
      network    = "projects/my-project/global/networks/dataplane"
      subnetwork = "projects/my-project/regions/europe-west1/subnetworks/dataplane"
    }
    management = {
      network    = "projects/my-project/global/networks/management"
      subnetwork = "projects/my-project/regions/europe-west1/subnetworks/management"
    }
  }
}
# tftest modules=6 resources=8 inventory=secret-manager.yaml
```

## F5 code copyright

This repository uses code from the third-party project [terraform-gcp-bigip-module](https://github.com/F5Networks/terraform-gcp-bigip-module), which is also licensed under Apache 2.0. This is the original copyright notice from the third-party repository: `Copyright 2014-2019 F5 Networks Inc.`

## Variables

| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [instance_dedicated_configs](variables.tf#L43) | The F5 VMs configuration. The map keys are the zones where the VMs are deployed. | map(object({…})) | ✓ |  |
| [prefix](variables.tf#L78) | The name prefix used for resources. | string | ✓ |  |
| [project_id](variables.tf#L83) | The project id where we deploy the resources. | string | ✓ |  |
| [region](variables.tf#L88) | The region where we deploy the F5 IPs. | string | ✓ |  |
| [vpc_config](variables.tf#L93) | The dataplane and mgmt network and subnetwork self links. | object({…}) | ✓ |  |
| [forwarding_rules_config](variables.tf#L17) | The optional configurations of the GCP load balancers forwarding rules. | map(object({…})) |  | {…} |
| [health_check_config](variables.tf#L32) | The optional health check configuration. The variable types are enforced by the underlying module. | map(any) |  | {…} |
| [instance_shared_config](variables.tf#L56) | The F5 VMs shared configurations. | object({…}) |  | {} |

## Outputs

| name | description | sensitive |
|---|---|:---:|
| [f5_management_ips](outputs.tf#L17) | The F5 management interfaces IP addresses. |  |
| [forwarding_rules_configs](outputs.tf#L25) | The GCP forwarding rules configurations. |  |
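As a companion to the Secret Manager example above: the secret referenced by the `value` field must already exist in the same project. A hedged sketch of pre-creating it (this assumes `value` is interpreted as the name of the secret holding the admin password — check the blueprint's variables for the exact semantics; the secret id and password below are illustrative):

```hcl
# Hypothetical: pre-create the secret holding the F5 admin password.
resource "google_secret_manager_secret" "f5_admin" {
  project   = "my-project"
  secret_id = "f5-admin-password" # illustrative name
  replication {
    auto {}
  }
}

resource "google_secret_manager_secret_version" "f5_admin" {
  secret      = google_secret_manager_secret.f5_admin.id
  secret_data = "MyNewFabricSecret123!" # illustrative password
}
```

The F5 VMs' service accounts also need read access to the secret (e.g. `roles/secretmanager.secretAccessor`) for the startup script to fetch it.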