# Networking and infrastructure examples

The examples in this folder implement typical network topologies like hub and spoke, or end-to-end scenarios that allow testing specific features like on-premises DNS policies and Private Google Access.

They are meant to be used as minimal but complete starting points to create actual infrastructure, and as playgrounds to experiment with specific Google Cloud features.

## Examples

### Hub and Spoke via Peering

This example implements a hub and spoke topology via VPC peering, a common design where a landing zone VPC (hub) is connected to on-premises, and then peered with satellite VPCs (spokes) to further partition the infrastructure.

The sample highlights the lack of transitivity in peering: the absence of connectivity between spokes, and the need to create workarounds for private service access to managed services. One such workaround is shown for private GKE, allowing access from the hub and all spokes to GKE masters via a dedicated VPN.
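The core of the design can be sketched with plain provider resources. The snippet below is a minimal illustration, assuming hypothetical project and network names rather than the modules the example actually uses; it also shows why spokes cannot reach each other directly, since only the hub peers with each spoke.

```hcl
# Minimal sketch of the hub/spoke peering pattern; project id and network
# names are illustrative placeholders.
resource "google_compute_network" "hub" {
  project                 = "my-project"
  name                    = "hub"
  auto_create_subnetworks = false
}

resource "google_compute_network" "spoke_1" {
  project                 = "my-project"
  name                    = "spoke-1"
  auto_create_subnetworks = false
}

# Peering is not transitive: spoke-1 can reach the hub, but not other spokes.
resource "google_compute_network_peering" "hub_to_spoke_1" {
  name                 = "hub-to-spoke-1"
  network              = google_compute_network.hub.self_link
  peer_network         = google_compute_network.spoke_1.self_link
  export_custom_routes = true # advertise on-premises routes learned by the hub
}

resource "google_compute_network_peering" "spoke_1_to_hub" {
  name                 = "spoke-1-to-hub"
  network              = google_compute_network.spoke_1.self_link
  peer_network         = google_compute_network.hub.self_link
  import_custom_routes = true
}
```

Exporting custom routes from the hub (and importing them in the spokes) is what lets spokes reach on-premises ranges learned by the hub over VPN.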

### Hub and Spoke via Dynamic VPN

This example implements a hub and spoke topology via dynamic VPN tunnels, a common design where peering cannot be used due to limitations on the number of spokes or connectivity to managed services.

The example shows how to implement spoke transitivity via BGP advertisements and how to expose hub DNS zones to spokes via DNS peering, and allows easy testing of different VPN and BGP configurations.
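Two of these building blocks can be sketched directly with provider resources: custom BGP advertisements on the hub's VPN router (so each spoke learns the other spokes' ranges through the hub), and a DNS peering zone in a spoke pointing at the hub VPC. This is only an illustrative sketch, assuming placeholder project, network names and ranges instead of the example's actual variables.

```hcl
# Custom advertisements on the hub VPN router: besides the hub subnets,
# advertise the other spokes' ranges so that spoke-to-spoke traffic
# transits the hub (all values are placeholders).
resource "google_compute_router" "hub_vpn" {
  project = "my-project"
  name    = "hub-vpn-router"
  region  = "europe-west1"
  network = "hub"
  bgp {
    asn               = 64514
    advertise_mode    = "CUSTOM"
    advertised_groups = ["ALL_SUBNETS"]
    advertised_ip_ranges {
      range = "10.0.16.0/24" # spoke 1
    }
    advertised_ip_ranges {
      range = "10.0.32.0/24" # spoke 2
    }
  }
}

# DNS peering zone in a spoke, delegating resolution of the private zone
# to the hub VPC where the zone is actually defined.
resource "google_dns_managed_zone" "spoke_1_peering" {
  project    = "my-project"
  name       = "spoke-1-peering"
  dns_name   = "example.com."
  visibility = "private"
  private_visibility_config {
    networks {
      network_url = "projects/my-project/global/networks/spoke-1"
    }
  }
  peering_config {
    target_network {
      network_url = "projects/my-project/global/networks/hub"
    }
  }
}
```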

### DNS and Private Access for On-premises

This example uses an emulated on-premises environment, running in Docker containers on a GCE instance, to allow testing specific features like DNS policies, DNS forwarding zones across VPN, and Private Google Access for on-premises hosts.

The emulated on-premises environment can be used to test access to different services from outside Google Cloud, by implementing a VPN connection and BGP to Google Cloud via strongSwan and BIRD.
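As a rough sketch of what the example wires up on the cloud side, the resources below define a private `googleapis.com` zone resolving to the `private.googleapis.com` VIPs (so on-premises hosts can use Private Google Access over the VPN), and a forwarding zone that sends on-premises names to the emulated on-premises DNS server. Project, network and resolver addresses are placeholders, not the example's actual values.

```hcl
# Private zone mapping googleapis.com to the Private Google Access VIPs,
# visible to the VPC connected to on-premises (names are placeholders).
resource "google_dns_managed_zone" "googleapis" {
  project    = "my-project"
  name       = "googleapis"
  dns_name   = "googleapis.com."
  visibility = "private"
  private_visibility_config {
    networks {
      network_url = "projects/my-project/global/networks/to-onprem"
    }
  }
}

resource "google_dns_record_set" "private_vip" {
  project      = "my-project"
  managed_zone = google_dns_managed_zone.googleapis.name
  name         = "private.googleapis.com."
  type         = "A"
  ttl          = 300
  rrdatas      = ["199.36.153.8", "199.36.153.9", "199.36.153.10", "199.36.153.11"]
}

resource "google_dns_record_set" "wildcard" {
  project      = "my-project"
  managed_zone = google_dns_managed_zone.googleapis.name
  name         = "*.googleapis.com."
  type         = "CNAME"
  ttl          = 300
  rrdatas      = ["private.googleapis.com."]
}

# Forwarding zone for on-premises names, resolved by the emulated
# on-premises DNS server reachable across the VPN tunnel.
resource "google_dns_managed_zone" "onprem" {
  project    = "my-project"
  name       = "onprem"
  dns_name   = "onprem.example.com."
  visibility = "private"
  private_visibility_config {
    networks {
      network_url = "projects/my-project/global/networks/to-onprem"
    }
  }
  forwarding_config {
    target_name_servers {
      ipv4_address = "10.0.16.2" # placeholder on-premises resolver address
    }
  }
}
```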

### Shared VPC with GKE and per-subnet support

This example shows how to configure a Shared VPC, including the specific IAM configurations needed for GKE, and how to grant different levels of access to the VPC subnets to different identities.

It is meant to be used as a starting point for most Shared VPC configurations, and to be integrated with the examples above where Shared VPC is needed in more complex network topologies.
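The GKE-specific part boils down to a handful of IAM bindings on the host project and its subnets. The sketch below shows them with plain provider resources, assuming placeholder project ids, subnet name and service agent email; the example itself derives these values from its module outputs.

```hcl
# Attach a hypothetical service project to a hypothetical host project.
resource "google_compute_shared_vpc_host_project" "host" {
  project = "net-host-project"
}

resource "google_compute_shared_vpc_service_project" "gke" {
  host_project    = google_compute_shared_vpc_host_project.host.project
  service_project = "gke-service-project"
}

# Per-subnet access: the GKE service agent needs networkUser on the
# subnets used by the cluster (the Cloud Services account of the service
# project typically needs the same binding).
resource "google_compute_subnetwork_iam_member" "gke_network_user" {
  project    = "net-host-project"
  region     = "europe-west1"
  subnetwork = "gke"
  role       = "roles/compute.networkUser"
  member     = "serviceAccount:service-123456789@container-engine-robot.iam.gserviceaccount.com"
}

# The GKE service agent also needs the Host Service Agent User role on the
# host project to manage cluster firewall rules.
resource "google_project_iam_member" "gke_host_agent" {
  project = "net-host-project"
  role    = "roles/container.hostServiceAgentUser"
  member  = "serviceAccount:service-123456789@container-engine-robot.iam.gserviceaccount.com"
}
```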

### ILB as next hop

This example allows testing ILB as next hop using simple Linux gateway VMs between two VPCs, to emulate virtual appliances. An optional additional ILB can be enabled to test multiple load balancer configurations and hashing.
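The mechanism under test is a custom route whose next hop is the forwarding rule of an internal load balancer fronting the gateway VMs. A minimal sketch, with placeholder names and a placeholder forwarding rule reference:

```hcl
# Route traffic for the destination range through the internal load
# balancer in front of the gateway VMs (all values are placeholders).
resource "google_compute_route" "ilb_next_hop" {
  project    = "my-project"
  name       = "to-right-via-ilb"
  network    = "vpc-left"
  dest_range = "10.0.1.0/24"
  priority   = 1000
  # next hop is the ILB forwarding rule; the reference below stands in for
  # the forwarding rule the example creates
  next_hop_ilb = "projects/my-project/regions/europe-west1/forwardingRules/gw-ilb"
}
```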