- By default, the blueprint supports two VPCs: a `dataplane` network and a `management` network.
- We don't use the F5 Cloud Failover Extension (CFE): it would imply an active/passive architecture, limit the number of instances to two, rely on static routes, and require the F5 VMs' service accounts to have roles that let them configure routes.
- Instead, users can deploy as many active instances as they need, and we make them reachable through passthrough GCP load balancers.
- The blueprint lets you expose the F5 instances externally, internally, or both at the same time, using external and internal network passthrough load balancers (see the sketch after this list).
- We deliberately use the original F5 BIG-IP `startup-script.tpl` file: we haven't changed it, and we pass it the same variables, so it should be easier to swap it with custom scripts (copyright reported in the template file and further down in this README).
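For example, a minimal `terraform.tfvars` sketch exposing the same instances both internally and externally might look like the following. The `int`/`ext` keys are illustrative; the available fields come from the `forwarding_rules_config` variable documented below.

```shell
# Hypothetical tfvars sketch: two forwarding rule configurations, one
# internal (the default) and one external, pointing at the same instances.
cat > terraform.tfvars <<'EOF'
forwarding_rules_config = {
  int = {}                  # internal passthrough NLB (external defaults to false)
  ext = { external = true } # external passthrough NLB
}
EOF
```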
## Access the F5 machines through IAP tunnels
F5 management IPs are private. If you haven't set up any hybrid connectivity (i.e. VPN/Interconnect), you can still access the VMs' SSH and GUI by leveraging IAP tunnels.
For example, you can first establish a tunnel:
```shell
gcloud compute ssh YOUR_F5_VM_NAME \
  --project YOUR_PROJECT \
  --zone europe-west8-a -- \
  -L 4431:127.0.0.1:8443 \
  -L 221:127.0.0.1:22 \
  -N -q -f
```
And then connect to:
- SSH: `127.0.0.1`, port `221`
- GUI: `127.0.0.1`, port `4431`
The default username is `admin` and the password is `MyFabricSecret123!`.
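Once the tunnel is up, you can use a plain `ssh` client against the local endpoint, and reach the GUI over HTTPS (expect a self-signed certificate warning):

```shell
# SSH to the F5 CLI through the local tunnel endpoint created above.
ssh -p 221 admin@127.0.0.1

# GUI: open https://127.0.0.1:4431 in a browser.
```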
## F5 configuration
You won't be able to pass traffic through the F5 load balancers until you perform some further configuration. We hope to automate these configuration steps soon. **Contributions are welcome!**
- Disable the traffic group and, optionally, configure config-sync.
- Configure the secondary-range alias IPs assigned to each machine as self IPs on each F5. These need to be self IPs (as opposed to SNAT pools), so they will be different for each instance, even if config-sync is active.
- Enable `automap` so that traffic is source-NATed using the configured self IPs before going to the backends.
- Create as many `virtual servers`/`iRules` as you need to match incoming traffic and redirect it to the backends.
- By default, Google load balancers' health checks query the F5 VMs on port `65535` from a set of [well-known IPs](https://cloud.google.com/load-balancing/docs/health-check-concepts#ip-ranges). We recommend creating a dedicated virtual server that answers on port `65535`; you can redirect the connection to the loopback interface (see the sketch after this list).
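The following `tmsh` sketch illustrates the self-IP, `automap`, and health check steps. All names, VLANs, addresses, and the `app-pool` pool are hypothetical; adapt them to your deployment.

```shell
# Register this instance's alias IP as a self IP (the address differs per instance).
tmsh create net self alias-self address 10.0.1.10/32 vlan external allow-service none

# Example virtual server matching HTTPS traffic and source-NATing it with
# automap before sending it to a (hypothetical) pre-created pool "app-pool".
tmsh create ltm virtual vs-app destination 0.0.0.0:443 ip-protocol tcp \
  profiles add { fastL4 } source-address-translation { type automap } pool app-pool

# Dedicated virtual server answering the GCP health checks on port 65535.
# A standard TCP profile makes the BIG-IP complete the handshake itself,
# which is enough for the load balancer's TCP health check.
tmsh create ltm virtual vs-gcp-hc destination 0.0.0.0:65535 ip-protocol tcp profiles add { tcp }

# Persist the configuration.
tmsh save sys config
```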
By default, the blueprint deploys one or more instances in a region. These instances are behind an internal network passthrough (`L3_DEFAULT`) load balancer.
By default, this blueprint (and the `startup-script.tpl`) stores the F5 admin password in plain text in the F5 VMs' metadata. Most administrators change this password on the F5 soon after boot.
The example shows how to leverage GCP Secret Manager instead.
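As a minimal sketch of the Secret Manager side, assuming a hypothetical secret name `f5-admin-password`, you could create and populate the secret with `gcloud` before referencing it from the blueprint:

```shell
# Create a secret to hold the F5 admin password (name is hypothetical).
gcloud secrets create f5-admin-password \
  --project YOUR_PROJECT \
  --replication-policy automatic

# Store the password as the first secret version.
echo -n 'MyFabricSecret123!' | \
  gcloud secrets versions add f5-admin-password \
    --project YOUR_PROJECT \
    --data-file=-
```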
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [forwarding_rules_config](variables.tf#L17) | The optional configurations of the GCP load balancers forwarding rules. | <code title='map(object({ address = optional(string) external = optional(bool, false) global_access = optional(bool, true) ip_version = optional(string, "IPV4") protocol = optional(string, "L3_DEFAULT") subnetwork = optional(string) # used for IPv6 NLBs }))'>map(object({…}))</code> |  | <code title='{ l4 = {} }'>{…}</code> |
| [health_check_config](variables.tf#L32) | The optional health check configuration. The variable types are enforced by the underlying module. | <code>map(any)</code> |  | <code title='{ tcp = { port = 65535 port_specification = "USE_FIXED_PORT" } }'>{…}</code> |