# VPC Connectivity Lab
This blueprint creates a networking playground showing a number of different VPC connectivity options:
- Hub and spoke via HA VPN
- Hub and spoke via VPC peering
- Interconnecting two networks via a network virtual appliance (aka NVA)
On top of that, this blueprint implements Policy Based Routing (aka PBR) to show how to force all traffic within a VPC to be funneled through an internal network passthrough load balancer, to implement an Intrusion Prevention System (IPS). PBR is enabled in the hub VPC, matching all traffic originating from within that VPC.
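The PBR setup described above can be sketched with the provider's policy based route resource. This is a minimal, illustrative example rather than the blueprint's actual configuration: the network link, ILB address, and IP ranges below are placeholders.

```hcl
# Minimal sketch of a policy based route steering traffic originating in the
# hub VPC to the internal passthrough load balancer fronting the NVAs.
# All names, addresses, and ranges are placeholder values.
resource "google_network_connectivity_policy_based_route" "hub_to_ips" {
  name     = "hub-to-ips" # hypothetical name
  network  = "projects/my-project/global/networks/hub"
  priority = 100

  filter {
    protocol_version = "IPV4"
    src_range        = "10.0.0.0/16" # traffic originating within the hub VPC
    dest_range       = "0.0.0.0/0"
  }

  # IP address of the forwarding rule of the internal passthrough NLB
  next_hop_ilb_ip = "10.0.0.10"
}
```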
The blueprint has been purposefully kept simple to show how to use and wire VPCs together, and so that it can be used as a basis for experimentation.
This is the high level diagram of this blueprint:

![High level diagram](diagram.png)
## Prerequisites

This blueprint is contained within a single project to keep complexity to a minimum, even though in a real world scenario each spoke would probably use a separate project.

The blueprint can either create a new project or consume an existing one. If the variable `var.project_create_config` is populated, the blueprint will create a new project named `var.project_id`; otherwise, the blueprint will use an existing project with the same id.
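As an example, a `terraform.tfvars` that triggers project creation could look like the following. The attribute names mirror the Test snippet at the end of this README; the ids are placeholders to replace with your own:

```hcl
# Example terraform.tfvars -- placeholder ids, replace with your own
project_id = "net-test-02"
project_create_config = {
  billing_account_id = "123456-123456-123456"
  parent_id          = "folders/123456789"
}
```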
## Testing reachability

After running Terraform, if `var.test_vms` is set to `true` (as it is by default), a set of ping commands will be printed to check the reachability of all VMs. The blueprint is configured to ensure that all VMs can ping each other, so you can simply SSH to each one and run the generated ping commands, e.g.:
```shell
ping -c 1 ext.example
ping -c 1 hub-a.example
ping -c 1 hub-b.example
ping -c 1 spoke-peering-a.example
ping -c 1 spoke-peering-b.example
ping -c 1 spoke-vpn-a.example
ping -c 1 spoke-vpn-b.example
```
## Testing IPS/NVA

Per the blueprint setup, not all traffic will flow through the deployed test NVAs. You should expect the following flows to be routed through them:
- `ext` to {`hub`, `peering-a`, `peering-b`, `vpn-a`, `vpn-b`}
- `peering-a` to {`ext`, `peering-b`}
- `peering-b` to {`ext`, `peering-a`}
- `vpn-a` to {`ext`}
- `vpn-b` to {`ext`}
Additional PBR routes could be configured to force all traffic coming from `vpn-{a,b}` to go through the NVA. However, traffic coming from `peering-{a,b}` can NOT be subjected to PBR routes, due to product constraints.
In order to see the actual traffic flow, you'll want to manually stop one of the NVA instances (to force all traffic to be sent through a single VM), SSH to the active NVA instance and run the following commands:
```shell
# Setting the toolbox up might take a while since we're using cheap instances :)
$ toolbox
# Once inside the toolbox
$ tcpdump -i any icmp -n
```
## Files

| name | description | modules | resources |
|---|---|---|---|
| dns-hub.tf | DNS setup. | dns | |
| main.tf | Project setup. | project | |
| nva.tf | None | compute-vm · simple-nva | google_compute_instance_group |
| outputs.tf | Module outputs. | | |
| test-resources.tf | None | | |
| variables.tf | Module variables. | | |
| vpc-ext.tf | External VPC. | net-address · net-cloudnat · net-lb-int · net-vpc · net-vpc-firewall | google_compute_route |
| vpc-hub.tf | Internal Hub VPC. | net-address · net-lb-int · net-vpc · net-vpc-firewall · net-vpc-peering · net-vpn-ha | google_compute_route |
| vpc-peering-a.tf | None | net-vpc · net-vpc-firewall | |
| vpc-peering-b.tf | None | net-vpc · net-vpc-firewall | |
| vpc-vpn-a.tf | None | net-vpc · net-vpc-firewall · net-vpn-ha | |
| vpc-vpn-b.tf | None | net-vpc · net-vpc-firewall · net-vpn-ha | |
## Variables

| name | description | type | required | default |
|---|---|---|---|---|
| prefix | Prefix used for resource names. | string | ✓ | |
| ip_ranges | Subnet/Routes IP CIDR ranges. | map(string) | | {…} |
| project_create_config | Populate with billing account id to trigger project creation. | object({…}) | | null |
| project_id | Project id for all resources. | string | | "net-test-02" |
| region | Region used to deploy resources. | string | | "europe-west8" |
| test_vms | Enable the creation of test resources. | bool | | true |
## Outputs

| name | description | sensitive |
|---|---|---|
| ping_commands | Ping commands that can be run to check VPC reachability. | |
## Test

```hcl
module "test" {
  source = "./fabric/blueprints/networking/vpc-connectivity-lab"
  project_create_config = {
    billing_account_id = "123456-123456-123456"
    parent_id          = "folders/123456789"
  }
  project_id = "net-test-04"
  prefix     = "fast-sr0-sbox"
}
# tftest modules=35 resources=131
```