Updating hub-and-spoke peering blueprint to use HA VPN.

Mark Schlagenhauf 2023-06-07 22:53:45 +00:00
parent ae73274bfb
commit 359b30c141
2 changed files with 316 additions and 256 deletions


@@ -1,13 +1,16 @@
# Hub and Spoke using VPC Network Peering
This blueprint creates a simple **Hub and Spoke** setup, where the VPC network connects satellite locations (spokes) through a single intermediary location (hub) via [VPC Network Peering](https://cloud.google.com/vpc/docs/vpc-peering).
Since VPC Network Peering does not provide transitive routing, some things
don't work without additional configuration. By default, spokes cannot
talk to other spokes, and managed services in tenant networks can only be
reached from the attached spoke.
To get around these limitations, this blueprint uses [Cloud
VPN](https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview)
to provide transitive routing and to establish connectivity to the
Google Kubernetes Engine (GKE) masters in the tenant project ([courtesy of
@drebes](https://github.com/drebes/tf-samples/blob/master/gke-master-from-hub/main.tf#L10)).
Other solutions typically involve the use of proxies, as [described in this
GKE article](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies).
Another topic to consider when using peering is the limit of 25 peerings in each peering group, which constrains the scalability of designs like the one presented here.
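You can check how close the hub is to this limit by listing its peerings; a quick sketch, assuming the hub network is named `prefix-hub` (the actual name depends on your `prefix` variable):

```sh
# list the peerings in the hub's peering group (max 25 per group)
gcloud compute networks peerings list --network prefix-hub
```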
@@ -15,22 +18,25 @@ The blueprint has been purposefully kept simple to show how to use and wire the
![High-level diagram](diagram.png "High-level diagram")
## Managed resources and services
This sample creates several distinct groups of resources:
- three VPC networks, one each for the hub and spokes, each with one subnet
- VPC Network Peering configurations between the hub network and each spoke
- a Compute Engine VM instance for each VPC network, created with an
  accompanying service account
- a private GKE cluster with a single node pool in the spoke-2 VPC network.
  The GKE nodes have an accompanying service account
- one set of firewall rules for each VPC network
- one Cloud NAT configuration for each network
- HA VPN gateways in the hub and spoke-2 networks with accompanying tunnels.
  These tunnels allow the Cloud Routers to exchange transitive routes, so that
  resources in spoke-1 and spoke-2 can reach each other, and so that resources
  in the hub network can reach the control plane of the GKE cluster hosted in
  spoke-2.
## Testing GKE access from the hub
As mentioned above, VPN tunnels are used to provide transitive routing so that
the hub network can connect to the GKE master. This diagram illustrates the solution:
![Network-level diagram](diagram-network.png "Network-level diagram")
@@ -41,21 +47,22 @@ gcloud container clusters get-credentials cluster-1 --zone europe-west1-b
kubectl get all
```
The blueprint configures the peering with the GKE master VPC network to export routes for you, so that VPN routes are passed through the peering. You can disable this by hand in the console or by editing the `peering_config` variable in the `gke-cluster` module, to test non-working configurations or to switch to using the [GKE proxy](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies).
### Export routes via Terraform (recommended)
Change the GKE cluster module and add the `peering_config` attribute inside `private_cluster_config`:
```tfvars
peering_config = {
  export_routes = true
  import_routes = false
}
```
If you added the variable after applying, simply apply Terraform again.
### Export routes via gcloud (alternative)
If you prefer to use `gcloud` to export routes on the peering, first identify the peering (it has a name like `gke-xxxxxxxxxxxxxxxxxxxx-xxxx-xxxx-peer`) in the Cloud Console from the *VPC network peering* page, or using `gcloud`, then configure it to export routes:
@@ -64,12 +71,12 @@ If you prefer to use `gcloud` to export routes on the peering, first identify th
gcloud compute networks peerings list
# find the gke-xxxxxxxxxxxxxxxxxxxx-xxxx-xxxx-peer in the spoke-2 network
gcloud compute networks peerings update [peering name from above] \
  --network spoke-2 --export-custom-routes
```
### Test routes
Then connect via SSH to the hub VM instance and run the same commands you ran on the spoke 2 instance above. You should be able to run `kubectl` commands against the cluster. To test the default situation with no supporting VPN, just comment out the two VPN modules in `main.tf` and run `terraform apply` to bring down the VPN gateways and tunnels. GKE should only become accessible from spoke 2.
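As a concrete sketch, the test from the hub could look like the following; the instance and cluster names assume the default `prefix` value and the `europe-west1` region used elsewhere in this document, so adjust them to your variables:

```sh
# hypothetical names derived from this blueprint's defaults
gcloud compute ssh prefix-hub --zone europe-west1-b
# on the hub VM: fetch credentials and query the private control plane
gcloud container clusters get-credentials prefix-cluster-1 --zone europe-west1-b
kubectl get all
```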
## Operational considerations
@@ -77,7 +84,9 @@ A single pre-existing project is used in this blueprint to keep variables and co
A few APIs need to be enabled in the project. If `apply` fails due to a service not being enabled, just click on the link in the error message to enable it for the project, then resume `apply`.
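If you prefer to enable them up front, the two services required by this blueprint (see the `services` list in the project module) can be enabled in one command:

```sh
# enable the required APIs before the first terraform apply
gcloud services enable compute.googleapis.com container.googleapis.com
```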
You can connect the hub to on-premises networks using Cloud Interconnect or HA VPN. On-premises networks would be able to reach the hub and all spokes, and the hub and all spokes would be able to reach on-premises, assuming the on-premises network is configured to allow access.
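A minimal sketch of the HA VPN option, reusing the same [HA VPN module](../../../modules/net-vpn-ha) as the hub and spoke-2 gateways; the module name, peer address, ASNs and secret below are illustrative placeholders, not part of the blueprint:

```hcl
# hypothetical on-premises leg; adjust addresses, ASNs and secret to your setup
module "vpn-hub-onprem" {
  source     = "../../../modules/net-vpn-ha"
  project_id = module.project.project_id
  region     = var.region
  network    = module.vpc-hub.name
  name       = "${var.prefix}-hub-onprem"
  peer_gateways = {
    default = { external = {
      redundancy_type = "SINGLE_IP_INTERNALLY_REDUNDANT"
      interfaces      = ["203.0.113.1"] # placeholder on-prem gateway address
    } }
  }
  router_config = { asn = 64516 }
  tunnels = {
    onprem-0 = {
      bgp_peer                        = { address = "169.254.3.1", asn = 64520 }
      bgp_session_range               = "169.254.3.2/30"
      shared_secret                   = "replace-with-a-real-secret"
      peer_external_gateway_interface = 0
      vpn_gateway_interface           = 0
    }
  }
}
```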
You can add additional spokes to the architecture. These spokes will have networking similar to spoke-1: they will have connectivity to the hub and to spoke-2, but not to each other unless you also create VPN tunnels for the new spokes.
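For example, a hypothetical spoke-3 would be wired to the hub exactly like the existing spokes; the sketch below assumes a `vpc-spoke-3` module defined along the same lines as `vpc-spoke-1`:

```hcl
# hypothetical third spoke peering, mirroring hub-to-spoke-1/2
module "hub-to-spoke-3-peering" {
  source                     = "../../../modules/net-vpc-peering"
  local_network              = module.vpc-hub.self_link
  peer_network               = module.vpc-spoke-3.self_link # assumed spoke-3 VPC
  export_local_custom_routes = true
  export_peer_custom_routes  = false
  # peerings on the same network are created sequentially
  depends_on = [module.hub-to-spoke-2-peering]
}
```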
<!-- BEGIN TFDOC -->
## Variables
@@ -96,8 +105,8 @@ The VPN used to connect the GKE masters VPC does not account for HA, upgrading t
| name | description | sensitive |
|---|---|:---:|
| [project](outputs.tf#L15) | Project ID. | |
| [vms](outputs.tf#L20) | Compute Engine VMs. | |
<!-- END TFDOC -->
@@ -105,15 +114,15 @@ The VPN used to connect the GKE masters VPC does not account for HA, upgrading t
```hcl
module "test" {
source = "./fabric/blueprints/networking/hub-and-spoke-peering"
prefix = "prefix"
project_create = {
billing_account = "123456-123456-123456"
oslogin = true
parent = "folders/123456789"
}
project_id = "project-1"
source = "./fabric/blueprints/networking/hub-and-spoke-peering"
prefix = "prefix"
project_create = {
billing_account = "123456-123456-123456"
oslogin = true
parent = "folders/123456789"
}
project_id = "project-1"
}
# tftest modules=22 resources=61
```


@@ -13,15 +13,15 @@
# limitations under the License.
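# instances used by the connectivity tests, and the startup script that
# installs kubectl on them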
locals {
  vm-instances = [
    module.vm-hub.instance,
    module.vm-spoke-1.instance,
    module.vm-spoke-2.instance
  ]
  vm-startup-script = join("\n", [
    "#! /bin/bash",
    "apt-get update && apt-get install -y bash-completion dnsutils kubectl"
  ])
}
###############################################################################
@@ -29,16 +29,16 @@ locals {
###############################################################################
module "project" {
source = "../../../modules/project"
project_create = var.project_create != null
billing_account = try(var.project_create.billing_account, null)
oslogin = try(var.project_create.oslogin, false)
parent = try(var.project_create.parent, null)
name = var.project_id
services = [
"compute.googleapis.com",
"container.googleapis.com"
]
source = "../../../modules/project"
project_create = var.project_create != null
billing_account = try(var.project_create.billing_account, null)
oslogin = try(var.project_create.oslogin, false)
parent = try(var.project_create.parent, null)
name = var.project_id
services = [
"compute.googleapis.com",
"container.googleapis.com"
]
}
################################################################################
@@ -46,34 +46,34 @@ module "project" {
################################################################################
module "vpc-hub" {
source = "../../../modules/net-vpc"
project_id = module.project.project_id
name = "${var.prefix}-hub"
subnets = [
{
ip_cidr_range = var.ip_ranges.hub
name = "${var.prefix}-hub-1"
region = var.region
}
]
source = "../../../modules/net-vpc"
project_id = module.project.project_id
name = "${var.prefix}-hub"
subnets = [
{
ip_cidr_range = var.ip_ranges.hub
name = "${var.prefix}-hub-1"
region = var.region
}
]
}
module "nat-hub" {
source = "../../../modules/net-cloudnat"
project_id = module.project.project_id
region = var.region
name = "${var.prefix}-hub"
router_name = "${var.prefix}-hub"
router_network = module.vpc-hub.self_link
source = "../../../modules/net-cloudnat"
project_id = module.project.project_id
region = var.region
name = "${var.prefix}-hub"
router_name = "${var.prefix}-hub"
router_network = module.vpc-hub.self_link
}
module "vpc-hub-firewall" {
source = "../../../modules/net-vpc-firewall"
project_id = var.project_id
network = module.vpc-hub.name
default_rules_config = {
admin_ranges = values(var.ip_ranges)
}
source = "../../../modules/net-vpc-firewall"
project_id = var.project_id
network = module.vpc-hub.name
default_rules_config = {
admin_ranges = values(var.ip_ranges)
}
}
################################################################################
@@ -81,42 +81,42 @@ module "vpc-hub-firewall" {
################################################################################
module "vpc-spoke-1" {
source = "../../../modules/net-vpc"
project_id = module.project.project_id
name = "${var.prefix}-spoke-1"
subnets = [
{
ip_cidr_range = var.ip_ranges.spoke-1
name = "${var.prefix}-spoke-1-1"
region = var.region
}
]
source = "../../../modules/net-vpc"
project_id = module.project.project_id
name = "${var.prefix}-spoke-1"
subnets = [
{
ip_cidr_range = var.ip_ranges.spoke-1
name = "${var.prefix}-spoke-1-1"
region = var.region
}
]
}
module "vpc-spoke-1-firewall" {
source = "../../../modules/net-vpc-firewall"
project_id = module.project.project_id
network = module.vpc-spoke-1.name
default_rules_config = {
admin_ranges = values(var.ip_ranges)
}
source = "../../../modules/net-vpc-firewall"
project_id = module.project.project_id
network = module.vpc-spoke-1.name
default_rules_config = {
admin_ranges = values(var.ip_ranges)
}
}
module "nat-spoke-1" {
source = "../../../modules/net-cloudnat"
project_id = module.project.project_id
region = var.region
name = "${var.prefix}-spoke-1"
router_name = "${var.prefix}-spoke-1"
router_network = module.vpc-spoke-1.self_link
source = "../../../modules/net-cloudnat"
project_id = module.project.project_id
region = var.region
name = "${var.prefix}-spoke-1"
router_name = "${var.prefix}-spoke-1"
router_network = module.vpc-spoke-1.self_link
}
module "hub-to-spoke-1-peering" {
source = "../../../modules/net-vpc-peering"
local_network = module.vpc-hub.self_link
peer_network = module.vpc-spoke-1.self_link
export_local_custom_routes = true
export_peer_custom_routes = false
source = "../../../modules/net-vpc-peering"
local_network = module.vpc-hub.self_link
peer_network = module.vpc-spoke-1.self_link
export_local_custom_routes = true
export_peer_custom_routes = false
}
################################################################################
@@ -124,47 +124,47 @@ module "hub-to-spoke-1-peering" {
################################################################################
module "vpc-spoke-2" {
source = "../../../modules/net-vpc"
project_id = module.project.project_id
name = "${var.prefix}-spoke-2"
subnets = [
{
ip_cidr_range = var.ip_ranges.spoke-2
name = "${var.prefix}-spoke-2-1"
region = var.region
secondary_ip_ranges = {
pods = var.ip_secondary_ranges.spoke-2-pods
services = var.ip_secondary_ranges.spoke-2-services
}
}
]
source = "../../../modules/net-vpc"
project_id = module.project.project_id
name = "${var.prefix}-spoke-2"
subnets = [
{
ip_cidr_range = var.ip_ranges.spoke-2
name = "${var.prefix}-spoke-2-1"
region = var.region
secondary_ip_ranges = {
pods = var.ip_secondary_ranges.spoke-2-pods
services = var.ip_secondary_ranges.spoke-2-services
}
}
]
}
module "vpc-spoke-2-firewall" {
source = "../../../modules/net-vpc-firewall"
project_id = module.project.project_id
network = module.vpc-spoke-2.name
default_rules_config = {
admin_ranges = values(var.ip_ranges)
}
source = "../../../modules/net-vpc-firewall"
project_id = module.project.project_id
network = module.vpc-spoke-2.name
default_rules_config = {
admin_ranges = values(var.ip_ranges)
}
}
module "nat-spoke-2" {
source = "../../../modules/net-cloudnat"
project_id = module.project.project_id
region = var.region
name = "${var.prefix}-spoke-2"
router_name = "${var.prefix}-spoke-2"
router_network = module.vpc-spoke-2.self_link
source = "../../../modules/net-cloudnat"
project_id = module.project.project_id
region = var.region
name = "${var.prefix}-spoke-2"
router_name = "${var.prefix}-spoke-2"
router_network = module.vpc-spoke-2.self_link
}
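# peering operations on the same VPC network cannot run concurrently, so the
# second peering explicitly depends on the first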
module "hub-to-spoke-2-peering" {
source = "../../../modules/net-vpc-peering"
local_network = module.vpc-hub.self_link
peer_network = module.vpc-spoke-2.self_link
export_local_custom_routes = true
export_peer_custom_routes = false
depends_on = [module.hub-to-spoke-1-peering]
source = "../../../modules/net-vpc-peering"
local_network = module.vpc-hub.self_link
peer_network = module.vpc-spoke-2.self_link
export_local_custom_routes = true
export_peer_custom_routes = false
depends_on = [module.hub-to-spoke-1-peering]
}
################################################################################
@@ -172,67 +172,68 @@ module "hub-to-spoke-2-peering" {
################################################################################
module "vm-hub" {
source = "../../../modules/compute-vm"
project_id = module.project.project_id
zone = "${var.region}-b"
name = "${var.prefix}-hub"
network_interfaces = [{
network = module.vpc-hub.self_link
subnetwork = module.vpc-hub.subnet_self_links["${var.region}/${var.prefix}-hub-1"]
nat = false
addresses = null
}]
metadata = { startup-script = local.vm-startup-script }
service_account = module.service-account-gce.email
service_account_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
tags = ["ssh"]
source = "../../../modules/compute-vm"
project_id = module.project.project_id
zone = "${var.region}-b"
name = "${var.prefix}-hub"
network_interfaces = [{
network = module.vpc-hub.self_link
subnetwork = module.vpc-hub.subnet_self_links["${var.region}/${var.prefix}-hub-1"]
nat = false
addresses = null
}]
metadata = { startup-script = local.vm-startup-script }
service_account = module.service-account-gce.email
service_account_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
tags = ["ssh"]
}
module "vm-spoke-1" {
source = "../../../modules/compute-vm"
project_id = module.project.project_id
zone = "${var.region}-b"
name = "${var.prefix}-spoke-1"
network_interfaces = [{
network = module.vpc-spoke-1.self_link
subnetwork = module.vpc-spoke-1.subnet_self_links["${var.region}/${var.prefix}-spoke-1-1"]
nat = false
addresses = null
}]
metadata = { startup-script = local.vm-startup-script }
service_account = module.service-account-gce.email
service_account_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
tags = ["ssh"]
source = "../../../modules/compute-vm"
project_id = module.project.project_id
zone = "${var.region}-b"
name = "${var.prefix}-spoke-1"
network_interfaces = [{
network = module.vpc-spoke-1.self_link
subnetwork = module.vpc-spoke-1.subnet_self_links["${var.region}/${var.prefix}-spoke-1-1"]
nat = false
addresses = null
}]
metadata = { startup-script = local.vm-startup-script }
service_account = module.service-account-gce.email
service_account_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
tags = ["ssh"]
}
module "vm-spoke-2" {
source = "../../../modules/compute-vm"
project_id = module.project.project_id
zone = "${var.region}-b"
name = "${var.prefix}-spoke-2"
network_interfaces = [{
network = module.vpc-spoke-2.self_link
subnetwork = module.vpc-spoke-2.subnet_self_links["${var.region}/${var.prefix}-spoke-2-1"]
nat = false
addresses = null
}]
metadata = { startup-script = local.vm-startup-script }
service_account = module.service-account-gce.email
service_account_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
tags = ["ssh"]
source = "../../../modules/compute-vm"
project_id = module.project.project_id
zone = "${var.region}-b"
name = "${var.prefix}-spoke-2"
network_interfaces = [{
network = module.vpc-spoke-2.self_link
subnetwork = module.vpc-spoke-2.subnet_self_links["${var.region}/${var.prefix}-spoke-2-1"]
nat = false
addresses = null
}]
metadata = { startup-script = local.vm-startup-script }
service_account = module.service-account-gce.email
service_account_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
tags = ["ssh"]
}
module "service-account-gce" {
source = "../../../modules/iam-service-account"
project_id = module.project.project_id
name = "${var.prefix}-gce-test"
iam_project_roles = {
(var.project_id) = [
"roles/container.developer",
"roles/logging.logWriter",
"roles/monitoring.metricWriter",
]
}
source = "../../../modules/iam-service-account"
project_id = module.project.project_id
name = "${var.prefix}-gce-test"
iam_project_roles = {
(var.project_id) = [
"roles/container.developer",
"roles/logging.logWriter",
"roles/monitoring.metricWriter",
]
}
}
################################################################################
@@ -240,55 +241,55 @@ module "service-account-gce" {
################################################################################
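# the cluster control plane lives in a Google-managed tenant VPC reached via
# peering; peering_config below exports custom routes over that peering so
# VPN-learned routes can reach the control plane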
module "cluster-1" {
source = "../../../modules/gke-cluster-standard"
name = "${var.prefix}-cluster-1"
project_id = module.project.project_id
location = "${var.region}-b"
vpc_config = {
network = module.vpc-spoke-2.self_link
subnetwork = module.vpc-spoke-2.subnet_self_links["${var.region}/${var.prefix}-spoke-2-1"]
master_authorized_ranges = {
for name, range in var.ip_ranges : name => range
}
master_ipv4_cidr_block = var.private_service_ranges.spoke-2-cluster-1
}
max_pods_per_node = 32
labels = {
environment = "test"
}
private_cluster_config = {
enable_private_endpoint = true
master_global_access = true
peering_config = {
export_routes = true
import_routes = false
}
}
source = "../../../modules/gke-cluster-standard"
name = "${var.prefix}-cluster-1"
project_id = module.project.project_id
location = "${var.region}-b"
vpc_config = {
network = module.vpc-spoke-2.self_link
subnetwork = module.vpc-spoke-2.subnet_self_links["${var.region}/${var.prefix}-spoke-2-1"]
master_authorized_ranges = {
for name, range in var.ip_ranges : name => range
}
master_ipv4_cidr_block = var.private_service_ranges.spoke-2-cluster-1
}
max_pods_per_node = 32
labels = {
environment = "test"
}
private_cluster_config = {
enable_private_endpoint = true
master_global_access = true
peering_config = {
export_routes = true
import_routes = false
}
}
}
module "cluster-1-nodepool-1" {
source = "../../../modules/gke-nodepool"
name = "${var.prefix}-nodepool-1"
project_id = module.project.project_id
location = module.cluster-1.location
cluster_name = module.cluster-1.name
service_account = {
email = module.service-account-gke-node.email
}
source = "../../../modules/gke-nodepool"
name = "${var.prefix}-nodepool-1"
project_id = module.project.project_id
location = module.cluster-1.location
cluster_name = module.cluster-1.name
service_account = {
email = module.service-account-gke-node.email
}
}
# roles assigned via this module use non-authoritative IAM bindings at the
# project level, with no risk of conflicts with pre-existing roles
module "service-account-gke-node" {
source = "../../../modules/iam-service-account"
project_id = module.project.project_id
name = "${var.prefix}-gke-node"
iam_project_roles = {
(var.project_id) = [
"roles/logging.logWriter", "roles/monitoring.metricWriter",
]
}
source = "../../../modules/iam-service-account"
project_id = module.project.project_id
name = "${var.prefix}-gke-node"
iam_project_roles = {
(var.project_id) = [
"roles/logging.logWriter", "roles/monitoring.metricWriter",
]
}
}
################################################################################
@@ -296,35 +297,85 @@ module "service-account-gke-node" {
################################################################################
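# each HA VPN gateway exposes two interfaces; one tunnel per interface, each
# with its own /30 link-local BGP session, gives the redundant pair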
module "vpn-hub" {
source = "../../../modules/net-vpn-static"
project_id = module.project.project_id
region = var.region
network = module.vpc-hub.name
name = "${var.prefix}-hub"
remote_ranges = values(var.private_service_ranges)
tunnels = {
spoke-2 = {
peer_ip = module.vpn-spoke-2.address
shared_secret = ""
traffic_selectors = { local = ["0.0.0.0/0"], remote = null }
}
}
source = "../../../modules/net-vpn-ha"
project_id = module.project.project_id
region = var.region
network = module.vpc-hub.name
name = "${var.prefix}-hub"
peer_gateways = {
default = { gcp = module.vpn-spoke-2.self_link }
}
router_config = {
asn = 64516
custom_advertise = {
all_subnets = true
all_vpc_subnets = true
all_peer_vpc_subnets = true
ip_ranges = {
"10.0.0.0/8" = "default"
}
}
}
tunnels = {
remote-0 = {
bgp_peer = {
address = "169.254.1.1"
asn = 64515
}
bgp_session_range = "169.254.1.2/30"
vpn_gateway_interface = 0
}
remote-1 = {
bgp_peer = {
address = "169.254.2.1"
asn = 64515
}
bgp_session_range = "169.254.2.2/30"
vpn_gateway_interface = 1
}
}
}
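# spoke-2 additionally advertises the GKE control plane range, so the hub
# (and, through it, the other spokes) can reach the cluster master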
module "vpn-spoke-2" {
source = "../../../modules/net-vpn-static"
project_id = module.project.project_id
region = var.region
network = module.vpc-spoke-2.name
name = "${var.prefix}-spoke-2"
# use an aggregate of the remote ranges, so as to be less specific than the
# routes exchanged via peering
remote_ranges = ["10.0.0.0/8"]
tunnels = {
hub = {
peer_ip = module.vpn-hub.address
shared_secret = module.vpn-hub.random_secret
traffic_selectors = { local = ["0.0.0.0/0"], remote = null }
}
}
}
source = "../../../modules/net-vpn-ha"
project_id = module.project.project_id
region = var.region
network = module.vpc-spoke-2.name
name = "${var.prefix}-spoke-2"
router_config = {
asn = 64515
custom_advertise = {
all_subnets = true
all_vpc_subnets = true
all_peer_vpc_subnets = true
ip_ranges = {
"10.0.0.0/8" = "default"
"${var.private_service_ranges.spoke-2-cluster-1}" = "access to control plane"
}
}
}
peer_gateways = {
default = { gcp = module.vpn-hub.self_link }
}
tunnels = {
remote-0 = {
bgp_peer = {
address = "169.254.1.2"
asn = 64516
}
bgp_session_range = "169.254.1.1/30"
shared_secret = module.vpn-hub.random_secret
vpn_gateway_interface = 0
}
remote-1 = {
bgp_peer = {
address = "169.254.2.2"
asn = 64516
}
bgp_session_range = "169.254.2.1/30"
shared_secret = module.vpn-hub.random_secret
vpn_gateway_interface = 1
}
}
}