Update ILBaNH example (all protocols, symmetric hashing, multi-zone) (#277)

* update ILBaNH example (all protocols, symmetric hashing, multi-zone)

* update variables/outputs table in README

* update test
Ludovico Magnocavallo 2021-07-19 19:28:39 +02:00 committed by GitHub
parent c1631bfd97
commit 4fb953d83f
8 changed files with 88 additions and 48 deletions

View File

@ -10,9 +10,9 @@ Two ILBs are configured on the primary and secondary interfaces of gateway VMs w
## Testing
Since ILBs as next hops only forward TCP and UDP traffic, simple tests use `curl` on clients to send HTTP requests. To make this practical, test VMs on both VPCs have `nginx` pre-installed and active on port 80.
This setup can be used to test and verify new ILB features like [forwarding all protocols via ILBs as next hops](https://cloud.google.com/load-balancing/docs/internal/ilb-next-hop-overview#all-traffic) and [symmetric hashing](https://cloud.google.com/load-balancing/docs/internal/ilb-next-hop-overview#symmetric-hashing), using simple `curl` and `ping` tests on clients. To make this practical, test VMs on both VPCs have `nginx` pre-installed and active on port 80.
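For example, ICMP through the ILB next hop can be checked with a plain `ping` (the address below matches the `curl` example further down; adjust it to your own test VM):

```bash
# ping a test VM in the other VPC through the ILB next hop
ping -c 3 10.0.1.3
```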
On the gateways, `iftop` is installed by default to quickly monitor traffic passing forwarded across VPCs.
On the gateways, `iftop` and `tcpdump` are installed by default to quickly monitor traffic forwarded across VPCs.
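A quick way to watch forwarded traffic on a gateway (interface names assume the two NICs configured by this example, `ens4` and `ens5`):

```bash
# live bandwidth per connection on the primary interface
sudo iftop -i ens4
# capture ICMP and HTTP traffic on the secondary interface
sudo tcpdump -ni ens5 'icmp or tcp port 80'
```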
Session affinity on the ILB backend services can be changed using `gcloud compute backend-services update` on each of the ILBs, or by setting the `ilb_session_affinity` variable to update both ILBs.
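A minimal sketch of both approaches, assuming the default `ilb-test` prefix and `europe-west1` region (adjust names to your setup):

```bash
# change affinity on a single ILB backend service
gcloud compute backend-services update ilb-test-ilb-left \
  --region europe-west1 --session-affinity CLIENT_IP
# or change it on both ILBs via the Terraform variable
terraform apply -var ilb_session_affinity=CLIENT_IP
```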
@ -23,7 +23,9 @@ Some scenarios to test:
- short-lived connections with session affinity set to the default of `NONE`, then to `CLIENT_IP`
- long-lived connections, failing health checks on the active gateway while the connection is active (see the `iptables` sketch below)
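One simple way to fail health checks on the active gateway is to temporarily reject the Google health check ranges with `iptables`, the same approach used by the start/stop helper scripts removed from the cloud-init configuration below:

```bash
# on the active gateway: reject health check probes to simulate failure
sudo iptables -I INPUT -s 35.191.0.0/16 -j REJECT
sudo iptables -I INPUT -s 130.211.0.0/22 -j REJECT
# remove the rules to let health checks recover
sudo iptables -D INPUT -s 35.191.0.0/16 -j REJECT
sudo iptables -D INPUT -s 130.211.0.0/22 -j REJECT
```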
### Useful commands (adjust names and addresses to match)
### Useful commands
Basic commands to SSH to the VMs and monitor backend health are provided in the Terraform outputs, and they are built from the input variables so that names, zones, etc. are already correct. Other testing commands are provided below; adjust names and addresses to match your setup.
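For example, after applying this example the ready-made commands can be printed directly from the outputs:

```bash
# SSH commands for the gateway VMs
terraform output ssh_gw
# health status command for the left ILB backends
terraform output backend_health_left
```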
Create a large file on a destination VM (e.g. `ilb-test-vm-right-1`) to test long-running connections.
@ -33,7 +35,7 @@ dd if=/dev/zero of=/var/www/html/test.txt bs=10M count=100 status=progress
Run `curl` from a source VM (e.g. `ilb-test-vm-left-1`) to send requests to a destination VM, artificially rate-limiting the transfer so the connection stays open.
```
```bash
curl -0 --output /dev/null --limit-rate 10k 10.0.1.3/test.txt
```
@ -70,7 +72,16 @@ A sample testing session using `tmux`:
| *prefix* | Prefix used for resource names. | <code title="">string</code> | | <code title="">ilb-test</code> |
| *project_create* | Create project instead of using an existing one. | <code title="">bool</code> | | <code title="">false</code> |
| *region* | Region used for resources. | <code title="">string</code> | | <code title="">europe-west1</code> |
| *zones* | Zone suffixes used for instances. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">["b", "c"]</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| addresses | IP addresses. | |
| backend_health_left | Command-line health status for left ILB backends. | |
| backend_health_right | Command-line health status for right ILB backends. | |
| ssh_gw | Command-line login to gateway VMs. | |
| ssh_vm_left | Command-line login to left VMs. | |
| ssh_vm_right | Command-line login to right VMs. | |
<!-- END TFDOC -->

View File

@ -21,46 +21,8 @@ write_files:
content: |
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
# https://tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.kernel.rpf.html
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.ens4.rp_filter = 2
net.ipv4.conf.ens5.rp_filter = 2
- path: /etc/netplan/99-nic2-routing.yaml
permissions: "0644"
owner: root
content: |
network:
ethernets:
ens5:
dhcp4: true
routes:
- to: 0.0.0.0/0
via: ${gw_right}
table: 102
routing-policy:
- from: ${ip_cidr_right}
to: 35.191.0.0/16
table: 102
- from: ${ip_cidr_right}
to: 130.211.0.0/22
table: 102
version: 2
- path: /root/start.sh
permissions: "0755"
owner: root
content: |
#!/bin/bash
iptables -D INPUT -s 35.191.0.0/16 -j REJECT
iptables -D INPUT -s 130.211.0.0/22 -j REJECT
- path: /root/stop.sh
permissions: "0755"
owner: root
content: |
#!/bin/bash
iptables -I INPUT -s 35.191.0.0/16 -j REJECT
iptables -I INPUT -s 130.211.0.0/22 -j REJECT
package_update: true
package_upgrade: true
package_reboot_if_required: true
@ -69,4 +31,6 @@ packages:
- tcpdump
runcmd:
- sysctl -p
- netplan apply
- ip rule add from ${ip_cidr_right} to 35.191.0.0/16 lookup 102
- ip rule add from ${ip_cidr_right} to 130.211.0.0/22 lookup 102
- ip route add default via ${cidrhost(ip_cidr_right, 1)} dev ens5 proto static onlink table 102
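Once a gateway has booted, the resulting policy routing can be verified with standard `iproute2` commands (a quick check, not part of this commit):

```bash
# show the policy rules added by cloud-init and the contents of table 102
ip rule show
ip route show table 102
```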

View File

@ -18,6 +18,7 @@ module "gw" {
source = "../../modules/compute-vm"
project_id = module.project.project_id
region = var.region
zones = local.zones
name = "${local.prefix}gw"
instance_type = "f1-micro"
@ -73,9 +74,9 @@ module "ilb-left" {
timeout_sec = null
connection_draining_timeout_sec = null
}
backends = [{
backends = [for zone, group in module.gw.groups : {
failover = false
group = values(module.gw.groups)[0].self_link
group = group.self_link
balancing_mode = "CONNECTION"
}]
health_check_config = {
@ -97,9 +98,9 @@ module "ilb-right" {
timeout_sec = null
connection_draining_timeout_sec = null
}
backends = [{
backends = [for zone, group in module.gw.groups : {
failover = false
group = values(module.gw.groups)[0].self_link
group = group.self_link
balancing_mode = "CONNECTION"
}]
health_check_config = {

View File

@ -20,6 +20,7 @@ locals {
trimprefix(k, local.prefix) => v.address
}
prefix = var.prefix == null || var.prefix == "" ? "" : "${var.prefix}-"
zones = [for z in var.zones : "${var.region}-${z}"]
}
module "project" {

View File

@ -13,3 +13,58 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
output "addresses" {
description = "IP addresses."
value = {
gw = module.gw.internal_ips
ilb-left = module.ilb-left.forwarding_rule_address
ilb-right = module.ilb-right.forwarding_rule_address
vm-left = module.vm-left.internal_ips
vm-right = module.vm-right.internal_ips
}
}
output "backend_health_left" {
description = "Command-line health status for left ILB backends."
value = <<-EOT
gcloud compute backend-services get-health ${local.prefix}ilb-left \
--region ${var.region} \
--flatten status.healthStatus \
--format "value(status.healthStatus.ipAddress, status.healthStatus.healthState)"
EOT
}
output "backend_health_right" {
description = "Command-line health status for right ILB backends."
value = <<-EOT
gcloud compute backend-services get-health ${local.prefix}ilb-right \
--region ${var.region} \
--flatten status.healthStatus \
--format "value(status.healthStatus.ipAddress, status.healthStatus.healthState)"
EOT
}
output "ssh_gw" {
description = "Command-line login to gateway VMs."
value = [
for name, instance in module.gw.instances :
"gcloud compute ssh ${instance.name} --project ${var.project_id} --zone ${instance.zone}"
]
}
output "ssh_vm_left" {
description = "Command-line login to left VMs."
value = [
for name, instance in module.vm-left.instances :
"gcloud compute ssh ${instance.name} --project ${var.project_id} --zone ${instance.zone}"
]
}
output "ssh_vm_right" {
description = "Command-line login to right VMs."
value = [
for name, instance in module.vm-right.instances :
"gcloud compute ssh ${instance.name} --project ${var.project_id} --zone ${instance.zone}"
]
}

View File

@ -57,3 +57,9 @@ variable "region" {
type = string
default = "europe-west1"
}
variable "zones" {
description = "Zone suffixes used for instances."
type = list(string)
default = ["b", "c"]
}

View File

@ -26,6 +26,7 @@ module "vm-left" {
source = "../../modules/compute-vm"
project_id = module.project.project_id
region = var.region
zones = local.zones
name = "${local.prefix}vm-left"
instance_type = "f1-micro"
network_interfaces = [
@ -53,6 +54,7 @@ module "vm-right" {
project_id = module.project.project_id
region = var.region
name = "${local.prefix}vm-right"
zones = local.zones
instance_type = "f1-micro"
network_interfaces = [
{

View File

@ -24,4 +24,4 @@ def test_resources(e2e_plan_runner):
"Test that plan works and the numbers of resources is as expected."
modules, resources = e2e_plan_runner(FIXTURES_DIR)
assert len(modules) == 14
assert len(resources) == 41
assert len(resources) == 42
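To run this check locally, something along these lines should work from the repository's tests directory (the exact invocation depends on the repository layout):

```bash
# run only this example's end-to-end plan test
pytest -k test_resources
```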