# On-prem DNS and Google Private Access
This example leverages the on-prem in a box module to bootstrap an emulated on-premises environment on GCP, then connects it via VPN and configures BGP and DNS so that the following features can be tested:
- Cloud DNS forwarding zone to on-prem
- DNS forwarding from on-prem via a Cloud DNS inbound policy
- Private Access for on-premises hosts
The example has been purposefully kept simple to show how to use and wire the on-prem module, but it lends itself well to experimentation, and can be combined with the other examples in this repository to test different GCP networking patterns in connection with on-prem. This is the high-level diagram:

![High-level diagram](diagram.png)
## Managed resources and services
This sample creates several distinct groups of resources:
- one VPC
- one set of firewall rules
- one Cloud NAT configuration
- one test instance
- one service account for the test instance
- one service account for the onprem instance
- one dynamic VPN gateway with a single tunnel
- two DNS zones (private and forwarding) and a DNS inbound policy
- one emulated on-premises environment in a single GCP instance
## Cloud DNS inbound forwarder entry point
The Cloud DNS inbound policy reserves an IP address in the VPC, which is used by the on-prem DNS server to forward queries to Cloud DNS. This address needs to be explicitly set in the on-prem DNS configuration (see below for details), but since there's currently no way for Terraform to discover the exact address (cf. the Google provider issue), the following manual workaround needs to be applied.
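For reference, an inbound policy of this kind is declared with the Google provider's `google_dns_policy` resource; the snippet below is an illustrative sketch (resource and network names are hypothetical), not this example's actual code, which lives in `main.tf`:

```hcl
# Illustrative sketch: a Cloud DNS policy with inbound forwarding enabled
# reserves a DNS_RESOLVER address in each network it is attached to.
resource "google_dns_policy" "inbound" {
  name                      = "inbound-policy"
  enable_inbound_forwarding = true
  networks {
    network_url = google_compute_network.vpc.self_link
  }
}
```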
### Find out the forwarder entry point address
Run this gcloud command to [find out the address assigned to the inbound forwarder](https://cloud.google.com/dns/docs/policies#list-in-entrypoints):

```bash
gcloud compute addresses list --project [your project id]
```
In the list of addresses, look for the address with purpose `DNS_RESOLVER` in the subnet `to-onprem-default`. If its IP address is `10.0.0.2`, it matches the default value of the Terraform `forwarder_address` variable, which means you're all set. If it's different, proceed to the next step.
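On a live project, the same lookup can be scripted with `gcloud compute addresses list --filter="purpose=DNS_RESOLVER" --format="value(address)"`. As a self-contained illustration, the sketch below applies the equivalent filter to a captured sample of the command's tabular output (the sample values are made up):

```bash
# Extract the DNS_RESOLVER address from a captured `gcloud compute
# addresses list` listing (the sample data below is illustrative).
cat > /tmp/addresses.txt <<'EOF'
NAME     ADDRESS/RANGE  TYPE      PURPOSE       SUBNET             STATUS
dns-in   10.0.0.2       INTERNAL  DNS_RESOLVER  to-onprem-default  RESERVED
nat-ip   34.76.10.21    EXTERNAL                                   IN_USE
EOF
awk '$4 == "DNS_RESOLVER" { print $2 }' /tmp/addresses.txt  # prints 10.0.0.2
```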
### Update the forwarder address variable and recreate on-prem
If the forwarder address does not match the Terraform variable, set the correct value in your `terraform.tfvars` (or change the default value in `variables.tf`), then taint the onprem instance and apply to recreate it with the correct address in its DNS configuration:

```bash
tf apply
tf taint 'module.vm-onprem.google_compute_instance.default["onprem-1"]'
tf apply
```
## CoreDNS configuration for on-premises
The on-prem module uses a CoreDNS container to expose its DNS service, configured with four distinct blocks:

- the `onprem` block, serving static records for the `onprem.example.com` zone that map to each of the on-prem containers
- the forwarding block for the `gcp.example.com` zone and for Google Private Access, which forwards to the IP address of the Cloud DNS inbound policy
- the `google.internal` block, which exposes to containers a name for the instance metadata address
- the default block, which forwards to the Google public DNS resolvers
This is the CoreDNS configuration:

```
onprem.example.com {
  root /etc/coredns
  hosts onprem.hosts
  log
  errors
}
gcp.example.com googleapis.com {
  forward . ${resolver_address}
  log
  errors
}
google.internal {
  hosts {
    169.254.169.254 metadata.google.internal
  }
}
. {
  forward . 8.8.8.8
  log
  errors
}
```
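The `onprem` block above serves its static records from a hosts-format file (`onprem.hosts` under `/etc/coredns`), which the module generates for each container; a minimal hand-written equivalent would look like this (addresses are illustrative):

```
10.0.16.1 gw.onprem.example.com
10.0.16.2 www.onprem.example.com
```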
## Testing

### Onprem to cloud
```bash
# connect to the onprem instance
gcloud compute ssh onprem-1
# check that the BGP session works and the advertised routes are set
sudo docker exec -it onprem_bird_1 ip route | grep bird
10.0.0.0/24 via 169.254.1.1 dev vti0 proto bird src 10.0.16.2
35.199.192.0/19 via 169.254.1.1 dev vti0 proto bird src 10.0.16.2
199.36.153.4/30 via 169.254.1.1 dev vti0 proto bird src 10.0.16.2
199.36.153.8/30 via 169.254.1.1 dev vti0 proto bird src 10.0.16.2
# get a shell on the toolbox container
sudo docker exec -it onprem_toolbox_1 sh
# test pinging the IP address of the test instance (check outputs for it)
ping 10.0.0.3
# note: if you are able to ping the IP but the DNS tests below do not work,
# refer to the sections above on configuring the DNS inbound forwarder IP
# test forwarding from CoreDNS via the Cloud DNS inbound policy
dig test-1.gcp.example.com +short
10.0.0.3
# test that Private Access is configured correctly
dig compute.googleapis.com +short
private.googleapis.com.
199.36.153.8
199.36.153.9
199.36.153.10
199.36.153.11
# issue an API call via Private Access
gcloud config set project [your project id]
gcloud compute instances list
```
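When automating this check, the advertised Private Access ranges can be asserted against the routing table; the sketch below runs the same check against a captured copy of the `ip route` output shown above (in a live setup you would capture it via the `docker exec` command):

```bash
# Check that both Private Access ranges (restricted and private VIPs)
# were learned via BGP, using a captured copy of the route listing.
cat > /tmp/routes.txt <<'EOF'
10.0.0.0/24 via 169.254.1.1 dev vti0 proto bird src 10.0.16.2
35.199.192.0/19 via 169.254.1.1 dev vti0 proto bird src 10.0.16.2
199.36.153.4/30 via 169.254.1.1 dev vti0 proto bird src 10.0.16.2
199.36.153.8/30 via 169.254.1.1 dev vti0 proto bird src 10.0.16.2
EOF
for range in 199.36.153.4/30 199.36.153.8/30; do
  grep -qF "$range via" /tmp/routes.txt && echo "$range OK"
done
```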
### Cloud to onprem
```bash
# connect to the test instance
gcloud compute ssh test-1
# test forwarding from Cloud DNS to onprem CoreDNS (address may differ)
dig gw.onprem.example.com +short
10.0.16.1
# test a request to the onprem web server
curl www.onprem.example.com -s | grep h1
<h1>On Prem in a Box</h1>
```
## Operational considerations

A single pre-existing project is used in this example to keep variables and complexity to a minimum; in a real-world scenario each spoke would probably use a separate project.

The VPN used to connect to the on-premises environment does not account for HA; upgrading to HA VPN is reasonably simple by using the relevant module.
## Variables

| name | description | type | required | default |
|---|---|---|---|---|
| project_id | Project id for all resources. | string | ✓ | |
| bgp_asn | BGP ASNs. | map(number) | | ... |
| bgp_interface_ranges | BGP interface IP CIDR ranges. | map(string) | | ... |
| dns_forwarder_address | Address of the DNS server used to forward queries from on-premises. | string | | 10.0.0.2 |
| forwarder_address | GCP DNS inbound policy forwarder address. | string | | 10.0.0.2 |
| ip_ranges | IP CIDR ranges. | map(string) | | ... |
| region | VPC region. | string | | europe-west1 |
| ssh_source_ranges | IP CIDR ranges that will be allowed to connect via SSH to the onprem instance. | list(string) | | ["0.0.0.0/0"] |
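Since only `project_id` is required, a minimal `terraform.tfvars` can be as short as the sketch below (values are illustrative); any of the other variables can be added to override their defaults:

```hcl
# Minimal configuration: everything except project_id has a default.
project_id = "my-project-id"

# Optional overrides, e.g. after checking the inbound forwarder address:
# forwarder_address     = "10.0.0.3"
# dns_forwarder_address = "10.0.0.3"
# region                = "europe-west4"
```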
## Outputs

| name | description | sensitive |
|---|---|---|
| onprem-instance | Onprem instance details. | |
| test-instance | Test instance details. | |