Net LB App Internal Cross-Region recipe (#2214)

* reorder tfdoc methods

* add support for recipes to tfdoc

* fix repo url in tfdoc

* update module README

* validated untested recipe

* validated untested refactored recipe

* add optional proxy subnet creation, outputs, fixes

* tested

* tfdoc fix

* fix README

* exclude examples from test collector
Ludovico Magnocavallo 2024-04-14 18:38:05 +02:00 committed by GitHub
parent e7f2ca7edc
commit fbc7e891db
12 changed files with 921 additions and 230 deletions


@@ -19,6 +1,7 @@ Due to the complexity of the underlying resources, changes to the configuration
- [Private Service Connect NEG creation](#private-service-connect-neg-creation)
- [URL Map](#url-map)
- [Complex example](#complex-example)
- [Recipes](#recipes)
- [Files](#files)
- [Variables](#variables)
- [Outputs](#outputs)
@@ -721,6 +722,10 @@ module "ilb-l7" {
<!-- TFDOC OPTS files:1 -->
<!-- BEGIN TFDOC -->
## Recipes
- [Cross-region internal Application Load Balancer with VM instance group backends](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/blob/master/modules/net-lb-app-int-cross-region/recipe-cross-reg-int-app-lb-vm-dns)
## Files
| name | description | resources |


@@ -0,0 +1,175 @@
# Cross-region internal Application Load Balancer with VM instance group backends
This recipe shows an actual usage scenario for the [cross-region internal application load balancer](../README.md) by implementing the [example provided in the GCP documentation](https://cloud.google.com/load-balancing/docs/l7-internal/setting-up-l7-cross-reg-internal).
<p align="center">
<img src="https://cloud.google.com/static/load-balancing/images/cross-reg-int-vm.svg" alt="Scenario diagram.">
</p>
<!-- BEGIN TOC -->
- [Prerequisites](#prerequisites)
- [Proxy-only global subnets](#proxy-only-global-subnets)
- [Firewall rules](#firewall-rules)
- [Variable configuration](#variable-configuration)
- [VPC configuration options](#vpc-configuration-options)
- [Instance configuration options](#instance-configuration-options)
- [DNS configuration](#dns-configuration)
- [Testing](#testing)
- [Files](#files)
- [Variables](#variables)
- [Outputs](#outputs)
<!-- END TOC -->
## Prerequisites
To run this recipe, you need:
- an existing GCP project with the `compute` API enabled
- the `roles/compute.admin` role or equivalent (e.g. `roles/editor`) assigned on the project
- an existing VPC in the same project
- one regular subnet in the same VPC for each region where you want to deploy the load balancer
- an organization policy configuration that allows creation of internal application load balancers (the default configuration is fine)
- access to the Docker Registry from the instances (e.g. via Cloud NAT)
### Proxy-only global subnets
The load balancer needs one proxy-only global subnet in each of its regions. If the subnets already exist, the load balancer will consume them. If you need to create them, either do so manually or configure the module to do it for you, as explained in the [Variable configuration](#variable-configuration) section below.
### Firewall rules
For the load balancer to work, you need to allow ingress to the instances from the health check ranges and from the load balancer proxy ranges. You can create the firewall rules manually or configure the module to do it for you, as explained in the [Variable configuration](#variable-configuration) section below.
## Variable configuration
With all the requirements in place, the only variables you need are those that configure the project and VPC details. Note that the VPC configuration requires ids or self links, not names (Shared VPC configurations are also supported).
This is a simple minimal configuration:
```tfvars
project_id = "my-project"
vpc_config = {
  network = "projects/my-project/global/networks/test"
  subnets = {
    europe-west1 = "projects/my-project/regions/europe-west1/subnetworks/default"
    europe-west8 = "projects/my-project/regions/europe-west8/subnetworks/default"
  }
}
# tftest modules=5 resources=15
```
### VPC configuration options
The VPC configuration also allows creating instances in different subnets, and auto-creating proxy subnets and firewall rules. This is a complete configuration with all options set.
```tfvars
project_id = "my-project"
vpc_config = {
  network = "projects/my-project/global/networks/test"
  subnets = {
    europe-west1 = "projects/my-project/regions/europe-west1/subnetworks/default"
    europe-west8 = "projects/my-project/regions/europe-west8/subnetworks/default"
  }
  # only specify this to use different subnets for instances
  subnets_instances = {
    europe-west1 = "projects/my-project/regions/europe-west1/subnetworks/vms"
    europe-west8 = "projects/my-project/regions/europe-west8/subnetworks/vms"
  }
  # create proxy subnets
  proxy_subnets_config = {
    europe-west1 = "172.16.193.0/24"
    europe-west8 = "172.16.192.0/24"
  }
  # create firewall rules
  firewall_config = {
    proxy_subnet_ranges = [
      "172.16.193.0/24",
      "172.16.192.0/24"
    ]
    enable_health_check = true
    enable_iap_ssh      = true
  }
}
# tftest skip
```
### Instance configuration options
The instance type and the number of zones can be configured via the `instances_config` variable:
```tfvars
project_id = "my-project"
vpc_config = {
  network = "projects/my-project/global/networks/test"
  subnets = {
    europe-west1 = "projects/my-project/regions/europe-west1/subnetworks/default"
    europe-west8 = "projects/my-project/regions/europe-west8/subnetworks/default"
  }
}
instances_config = {
  # both attributes are optional
  machine_type = "e2-small"
  zones        = ["b", "c"]
}
# tftest modules=5 resources=15
```
### DNS configuration
The DNS zone used for the load balancer record can be configured via the `dns_config` variable:
```tfvars
project_id = "my-project"
vpc_config = {
  network = "projects/my-project/global/networks/test"
  subnets = {
    europe-west1 = "projects/my-project/regions/europe-west1/subnetworks/default"
    europe-west8 = "projects/my-project/regions/europe-west8/subnetworks/default"
  }
}
dns_config = {
  # all attributes are optional
  client_networks = [
    "projects/my-project/global/networks/test",
    "projects/my-other-project/global/networks/test"
  ]
  domain   = "foo.example."
  hostname = "lb-test"
}
# tftest modules=5 resources=15
```
## Testing
To test the load balancer behaviour, you can simply disable the service on the backend instances by connecting via SSH and issuing the `sudo systemctl stop nginx` command.
If the backends are reported as unhealthy even though the necessary firewall rules are in place, check that the Docker containers started successfully on the instances by connecting via SSH and issuing the `sudo systemctl status nginx` command.
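When scripting these checks, it can help to parse the backend responses programmatically. The sketch below assumes the plain-text `Key: value` reply format emitted by the `nginxdemos/hello:plain-text` image the instances run; the sample body and field names are illustrative:

```python
def parse_hello_response(body: str) -> dict:
    """Parse the plain-text reply of the hello backend into a dict.

    Each line has the form 'Key: value'; lines without a ': '
    separator are ignored.
    """
    fields = {}
    for line in body.strip().splitlines():
        key, sep, value = line.partition(": ")
        if sep:
            fields[key] = value
    return fields


# Example: identify which backend instance served a request.
sample = (
    "Server address: 10.0.16.4:80\n"
    "Server name: lb-xr-00-ew1-b\n"
    "URI: /\n"
)
backend = parse_hello_response(sample)["Server name"]
```

Polling the load balancer hostname in a loop and tallying `Server name` values is a simple way to observe geo routing and failover between regions.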
<!-- TFDOC OPTS files:1 -->
<!-- BEGIN TFDOC -->
## Files
| name | description | modules |
|---|---|---|
| [instances.tf](./instances.tf) | Instance-related locals and resources. | <code>compute-vm</code> · <code>iam-service-account</code> |
| [main.tf](./main.tf) | Load balancer and VPC resources. | <code>dns</code> · <code>net-lb-app-int-cross-region</code> · <code>net-vpc</code> · <code>net-vpc-firewall</code> |
| [outputs.tf](./outputs.tf) | Module outputs. | |
| [region_shortnames.tf](./region_shortnames.tf) | Region shortnames via locals. | |
| [variables.tf](./variables.tf) | Module variables. | |
## Variables
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [project_id](variables.tf#L49) | Project used to create resources. | <code>string</code> | ✓ | |
| [vpc_config](variables.tf#L55) | VPC configuration for load balancer and instances. Subnets are keyed by region. | <code title="object&#40;&#123;&#10; network &#61; string&#10; subnets &#61; map&#40;string&#41;&#10; subnets_instances &#61; optional&#40;map&#40;string&#41;&#41;&#10; firewall_config &#61; optional&#40;object&#40;&#123;&#10; proxy_subnet_ranges &#61; list&#40;string&#41;&#10; client_allowed_ranges &#61; optional&#40;list&#40;string&#41;&#41;&#10; enable_health_check &#61; optional&#40;bool, true&#41;&#10; enable_iap_ssh &#61; optional&#40;bool, false&#41;&#10; &#125;&#41;&#41;&#10; proxy_subnets_config &#61; optional&#40;map&#40;string&#41;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | ✓ | |
| [dns_config](variables.tf#L17) | DNS configuration. | <code title="object&#40;&#123;&#10; client_networks &#61; optional&#40;list&#40;string&#41;&#41;&#10; domain &#61; optional&#40;string, &#34;gce.example.&#34;&#41;&#10; hostname &#61; optional&#40;string&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>&#123;&#125;</code> |
| [instances_config](variables.tf#L28) | Configuration for instances. | <code title="object&#40;&#123;&#10; machine_type &#61; optional&#40;string, &#34;e2-micro&#34;&#41;&#10; zones &#61; optional&#40;list&#40;string&#41;, &#91;&#34;b&#34;&#93;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>&#123;&#125;</code> |
| [prefix](variables.tf#L42) | Prefix used for resource names. | <code>string</code> | | <code>&#34;lb-xr-00&#34;</code> |
## Outputs
| name | description | sensitive |
|---|---|:---:|
| [instances](outputs.tf#L17) | Instances details. | |
| [lb](outputs.tf#L34) | Load balancer details. | |
<!-- END TFDOC -->


@@ -0,0 +1,77 @@
/**
* Copyright 2024 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
# tfdoc:file:description Instance-related locals and resources.
locals {
# use lb subnets for instances if instance subnets are not defined
subnets_instances = coalesce(
var.vpc_config.subnets_instances, var.vpc_config.subnets
)
# derive instance names/attributes from permutation of regions and zones
instances = {
for t in setproduct(local.regions, var.instances_config.zones) :
"${var.prefix}-${local.region_shortnames[t.0]}-${t.1}" => {
region = t.0
zone = "${t.0}-${t.1}"
}
}
}
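The instance-name permutation above can be mirrored in plain Python to see how names are derived from regions and zones; this is an illustrative sketch, with the region shortname map passed in explicitly:

```python
from itertools import product


def instance_names(prefix, region_shortnames, regions, zones):
    """Derive instance names and attributes from the regions x zones
    permutation, mirroring the setproduct() expression in the locals."""
    return {
        f"{prefix}-{region_shortnames[region]}-{zone}": {
            "region": region,
            "zone": f"{region}-{zone}",
        }
        for region, zone in product(regions, zones)
    }
```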
module "instance-sa" {
source = "../../iam-service-account"
project_id = var.project_id
name = "vm-default"
prefix = var.prefix
display_name = "Cross-region LB instances service account"
iam_project_roles = {
(var.project_id) = [
"roles/logging.logWriter",
"roles/monitoring.metricWriter"
]
}
}
module "instances" {
source = "../../compute-vm"
for_each = local.instances
project_id = var.project_id
zone = each.value.zone
name = each.key
instance_type = var.instances_config.machine_type
boot_disk = {
initialize_params = {
image = "projects/cos-cloud/global/images/family/cos-stable"
}
}
network_interfaces = [{
network = var.vpc_config.network
subnetwork = local.subnets_instances[each.value.region]
}]
tags = [var.prefix]
metadata = {
user-data = file("${path.module}/nginx-cloud-config.yaml")
}
service_account = {
email = module.instance-sa.email
}
group = {
named_ports = {
http = 80
https = 443
}
}
}


@@ -0,0 +1,134 @@
/**
* Copyright 2024 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
# tfdoc:file:description Load balancer and VPC resources.
locals {
# define regions for both instances and lb for easy access
regions = keys(var.vpc_config.subnets)
}
module "vpc" {
source = "../../net-vpc"
count = var.vpc_config.proxy_subnets_config == null ? 0 : 1
project_id = regex("projects/([^/]+)/", var.vpc_config.network)[0]
name = regex("global/networks/([^/]+)$", var.vpc_config.network)[0]
vpc_create = false
subnets_proxy_only = [
for k, v in var.vpc_config.proxy_subnets_config : {
ip_cidr_range = v
name = "${var.prefix}-proxy-${local.region_shortnames[k]}"
region = k
active = true
global = true
}
]
}
module "load-balancer" {
source = "../../net-lb-app-int-cross-region"
name = var.prefix
project_id = var.project_id
backend_service_configs = {
default = {
port_name = "http"
backends = [
for k, v in module.instances : { group = v.group.id }
]
}
}
vpc_config = {
network = var.vpc_config.network
subnetworks = var.vpc_config.subnets
}
depends_on = [module.vpc]
}
module "dns" {
source = "../../dns"
project_id = var.project_id
name = var.prefix
zone_config = {
domain = var.dns_config.domain
private = {
client_networks = (
var.dns_config.client_networks != null
? var.dns_config.client_networks
: [var.vpc_config.network]
)
}
}
recordsets = {
"A ${coalesce(var.dns_config.hostname, var.prefix)}" = {
geo_routing = [
for k in local.regions : {
location = k
health_checked_targets = [
{
load_balancer_type = "globalL7ilb"
ip_address = module.load-balancer.addresses[k]
port = "80"
ip_protocol = "tcp"
network_url = var.vpc_config.network
project = var.project_id
}
] }
]
}
}
}
module "firewall" {
source = "../../net-vpc-firewall"
count = var.vpc_config.firewall_config == null ? 0 : 1
project_id = var.project_id
network = var.vpc_config.network
default_rules_config = { disabled = true }
ingress_rules = merge(
{
"ingress-${var.prefix}-proxies" = {
description = "Allow load balancer proxy traffic to instances."
source_ranges = var.vpc_config.firewall_config.proxy_subnet_ranges
targets = [var.prefix]
rules = [{ protocol = "tcp", ports = [80, 443] }]
}
},
var.vpc_config.firewall_config.client_allowed_ranges == null ? {} : {
"ingress-${var.prefix}-http-clients" = {
description = "Allow client HTTP traffic to instances."
source_ranges = var.vpc_config.firewall_config.client_allowed_ranges
targets = [var.prefix]
rules = [{ protocol = "tcp", ports = [80, 443] }]
}
},
var.vpc_config.firewall_config.enable_health_check != true ? {} : {
"ingress-${var.prefix}-health-checks" = {
description = "Allow health check traffic to instances."
source_ranges = ["35.191.0.0/16", "130.211.0.0/22"]
targets = [var.prefix]
rules = [{ protocol = "tcp", ports = [80, 443] }]
}
},
var.vpc_config.firewall_config.enable_iap_ssh != true ? {} : {
"ingress-${var.prefix}-iap-ssh" = {
description = "Allow SSH traffic to instances via IAP."
source_ranges = ["35.235.240.0/20"]
targets = [var.prefix]
rules = [{ protocol = "tcp", ports = [22] }]
}
}
)
}
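The `merge()` of conditional rule maps above, where each feature flag contributes an optional set of rules, can be mirrored in Python to make the control flow explicit; this is an illustrative sketch, not the module API:

```python
def build_ingress_rules(prefix: str, firewall_config: dict) -> dict:
    """Mirror the merge() of conditional rule maps in the firewall call."""
    # the proxy rule is always present
    rules = {
        f"ingress-{prefix}-proxies": {
            "source_ranges": firewall_config["proxy_subnet_ranges"],
            "ports": [80, 443],
        }
    }
    # client rule only when allowed ranges are configured
    if firewall_config.get("client_allowed_ranges") is not None:
        rules[f"ingress-{prefix}-http-clients"] = {
            "source_ranges": firewall_config["client_allowed_ranges"],
            "ports": [80, 443],
        }
    # health check rule defaults to enabled
    if firewall_config.get("enable_health_check", True):
        rules[f"ingress-{prefix}-health-checks"] = {
            "source_ranges": ["35.191.0.0/16", "130.211.0.0/22"],
            "ports": [80, 443],
        }
    # IAP SSH rule defaults to disabled
    if firewall_config.get("enable_iap_ssh", False):
        rules[f"ingress-{prefix}-iap-ssh"] = {
            "source_ranges": ["35.235.240.0/20"],
            "ports": [22],
        }
    return rules
```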


@@ -0,0 +1,79 @@
#cloud-config
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# https://hub.docker.com/r/nginx/nginx/
# https://nginx.io/manual/toc/#installation
users:
- name: nginx
uid: 2000
write_files:
- path: /var/lib/docker/daemon.json
permissions: 0644
owner: root
content: |
{
"live-restore": true,
"storage-driver": "overlay2",
"log-opts": {
"max-size": "1024m"
}
}
- path: /etc/nginx/nginx.conf
permissions: 0644
owner: root
content: |
server {
listen 80;
listen 8080;
listen [::]:80;
listen [::]:8080;
location / {
default_type text/plain;
expires -1;
return 200 'Server address: $server_addr:$server_port\nRemote address: $remote_addr\nServer name: $hostname\nDate: $time_local\nURI: $request_uri\nRequest ID: $request_id\n';
}
}
# nginx container service
- path: /etc/systemd/system/nginx.service
permissions: 0644
owner: root
content: |
[Unit]
Description=Start nginx container
After=gcr-online.target docker.socket
Wants=gcr-online.target docker.socket docker-events-collector.service
[Service]
Environment="HOME=/home/nginx"
ExecStartPre=/usr/bin/docker-credential-gcr configure-docker
ExecStart=/usr/bin/docker run --rm --name=nginx \
--network host \
-v /etc/nginx:/etc/nginx/conf.d \
nginxdemos/hello:plain-text
ExecStop=/usr/bin/docker stop nginx
bootcmd:
- systemctl start node-problem-detector
runcmd:
- iptables -I INPUT 1 -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
- iptables -I INPUT 1 -p tcp -m tcp --dport 8080 -m state --state NEW,ESTABLISHED -j ACCEPT
- systemctl daemon-reload
- systemctl start nginx


@@ -0,0 +1,39 @@
/**
* Copyright 2024 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
output "instances" {
description = "Instances details."
value = {
addresses = {
for k, v in module.instances : k => v.internal_ip
}
commands = {
for k, v in module.instances : k => (
"gssh ${nonsensitive(v.instance.name)} --project ${var.project_id} --zone ${nonsensitive(v.instance.zone)}"
)
}
groups = {
for k, v in module.instances : k => v.group.id
}
}
}
output "lb" {
description = "Load balancer details."
value = {
addresses = module.load-balancer.addresses
}
}


@@ -0,0 +1,43 @@
/**
* Copyright 2024 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
# tfdoc:file:description Region shortnames via locals.
# adapted from FAST networking stages
locals {
# used when the first character would not work
_region_cardinal = {
southeast = "se"
}
_region_geo = {
australia = "o"
}
# split in [geo, cardinal, number] tokens
_region_tokens = {
for v in local.regions : v => regexall("(?:[a-z]+)|(?:[0-9]+)", v)
}
region_shortnames = {
for k, v in local._region_tokens : k => join("", [
# first token via geo alias map or first character
lookup(local._region_geo, v.0, substr(v.0, 0, 1)),
# first token via cardinal alias map or first character
lookup(local._region_cardinal, v.1, substr(v.1, 0, 1)),
# region number as is
v.2
])
}
}
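The same shortname derivation can be written as a small Python function, which makes it easy to check what a given region maps to; a sketch mirroring the locals above:

```python
import re

# aliases used when the first character alone would be ambiguous
GEO_ALIASES = {"australia": "o"}
CARDINAL_ALIASES = {"southeast": "se"}


def region_shortname(region: str) -> str:
    """Shorten a GCP region name, e.g. 'europe-west1' -> 'ew1'."""
    # split the region into [geo, cardinal, number] tokens
    geo, cardinal, number = re.findall(r"[a-z]+|[0-9]+", region)
    return (GEO_ALIASES.get(geo, geo[0])
            + CARDINAL_ALIASES.get(cardinal, cardinal[0])
            + number)
```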


@@ -0,0 +1,103 @@
/**
* Copyright 2024 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "dns_config" {
description = "DNS configuration."
type = object({
client_networks = optional(list(string))
domain = optional(string, "gce.example.")
hostname = optional(string)
})
nullable = false
default = {}
}
variable "instances_config" {
description = "Configuration for instances."
type = object({
machine_type = optional(string, "e2-micro")
zones = optional(list(string), ["b"])
})
nullable = false
default = {}
validation {
condition = length(var.instances_config.zones) > 0
error_message = "At least one zone is required for instances."
}
}
variable "prefix" {
description = "Prefix used for resource names."
type = string
nullable = false
default = "lb-xr-00"
}
variable "project_id" {
description = "Project used to create resources."
type = string
nullable = false
}
variable "vpc_config" {
description = "VPC configuration for load balancer and instances. Subnets are keyed by region."
type = object({
network = string
subnets = map(string)
subnets_instances = optional(map(string))
firewall_config = optional(object({
proxy_subnet_ranges = list(string)
client_allowed_ranges = optional(list(string))
enable_health_check = optional(bool, true)
enable_iap_ssh = optional(bool, false)
}))
proxy_subnets_config = optional(map(string))
})
nullable = false
validation {
condition = try(regex("/", var.vpc_config.network), null) != null
error_message = "Network must be a network id or self link, not a name."
}
validation {
condition = alltrue([
for k, v in var.vpc_config.subnets : try(regex("/", v), null) != null
])
error_message = "Subnet values must be ids or self links, not names."
}
validation {
condition = (
var.vpc_config.subnets_instances == null
||
keys(var.vpc_config.subnets) == keys(coalesce(var.vpc_config.subnets_instances, {}))
)
error_message = "Instance subnet regions must match load balancer regions if defined."
}
validation {
condition = (
var.vpc_config.proxy_subnets_config == null
||
keys(var.vpc_config.subnets) == keys(coalesce(var.vpc_config.proxy_subnets_config, {}))
)
error_message = "Proxy subnet regions must match load balancer regions if defined."
}
validation {
condition = alltrue([
for k, v in coalesce(var.vpc_config.subnets_instances, {}) :
try(regex("/", v), null) != null
])
error_message = "Instance subnet values must be ids or self links, not names."
}
}
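The validation blocks above can be condensed into a Python sketch that returns the same error messages, which helps when reasoning about which inputs the recipe rejects; the function name and dict shape are hypothetical, only the conditions mirror the Terraform:

```python
def validate_vpc_config(vpc_config: dict) -> list[str]:
    """Return the errors the Terraform validation blocks would raise."""
    errors = []
    # network and subnet values must be ids or self links, not names
    if "/" not in vpc_config["network"]:
        errors.append("Network must be a network id or self link, not a name.")
    for value in vpc_config["subnets"].values():
        if "/" not in value:
            errors.append("Subnet values must be ids or self links, not names.")
    # optional subnet maps must cover the same regions as the lb subnets
    lb_regions = sorted(vpc_config["subnets"])
    for key, message in (
        ("subnets_instances",
         "Instance subnet regions must match load balancer regions."),
        ("proxy_subnets_config",
         "Proxy subnet regions must match load balancer regions."),
    ):
        value = vpc_config.get(key)
        if value is not None and sorted(value) != lb_regions:
            errors.append(message)
    return errors
```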


@@ -1,4 +1,4 @@
# Copyright 2023 Google LLC
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -25,7 +25,8 @@ FABRIC_ROOT = Path(__file__).parents[2]
FILE_TEST_RE = re.compile(r'# tftest-file +id=([\w_.-]+) +path=([\S]+)')
FIXTURE_TEST_RE = re.compile(r'# tftest-fixture +id=([\w_.-]+)')
Example = collections.namedtuple('Example', 'name code module files fixtures')
Example = collections.namedtuple('Example',
'name code module files fixtures type')
File = collections.namedtuple('File', 'path content')
@@ -74,9 +75,11 @@ def pytest_generate_tests(metafunc, test_group='example',
index += 1
code = child.children[0].children
tftest_tag = get_tftest_directive(code)
if tftest_tag is None:
continue
if tftest_tag and not filter_tests(tftest_tag):
continue
if child.lang == 'hcl':
if child.lang in ('hcl', 'tfvars'):
path = module.relative_to(FABRIC_ROOT)
name = f'{path}:{last_header}'
if index > 1:
@@ -88,7 +91,8 @@
# see: https://pytest-xdist.readthedocs.io/en/latest/distribution.html
marks = [pytest.mark.xdist_group("serial")
] if 'serial' in tftest_tag else []
example = Example(name, code, path, files[last_header], fixtures)
example = Example(name, code, path, files[last_header], fixtures,
child.lang)
examples.append(pytest.param(example, marks=marks))
elif isinstance(child, marko.block.Heading):
last_header = child.children[0].children


@@ -1,4 +1,4 @@
# Copyright 2023 Google LLC
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,6 +15,8 @@
import re
import subprocess
import yaml
import shutil
import tempfile
from pathlib import Path
BASE_PATH = Path(__file__).parent
@@ -47,49 +49,59 @@ def prepare_files(example, test_path, files, fixtures):
destination.write_text(example.fixtures[f])
def test_example(plan_validator, tmp_path, example):
def test_example(plan_validator, example):
if match := COUNT_TEST_RE.search(example.code):
(tmp_path / 'fabric').symlink_to(BASE_PATH.parents[1])
(tmp_path / 'variables.tf').symlink_to(BASE_PATH / 'variables.tf')
(tmp_path / 'main.tf').write_text(example.code)
assets_path = BASE_PATH.parent / str(example.module).replace('-',
'_') / 'assets'
if assets_path.exists():
(tmp_path / 'assets').symlink_to(assets_path)
# for tfvars-based tests, create the temporary directory with the
# same parent as the original module
directory = example.module.parent if example.type == 'tfvars' else None
prefix = f'pytest-{example.module.name}'
with tempfile.TemporaryDirectory(prefix=prefix, dir=directory) as tmp_path:
tmp_path = Path(tmp_path)
tf_var_files = []
if example.type == 'hcl':
(tmp_path / 'fabric').symlink_to(BASE_PATH.parents[1])
(tmp_path / 'variables.tf').symlink_to(BASE_PATH / 'variables.tf')
(tmp_path / 'main.tf').write_text(example.code)
assets_path = (BASE_PATH.parent /
str(example.module).replace('-', '_') / 'assets')
if assets_path.exists():
(tmp_path / 'assets').symlink_to(assets_path)
expected_modules = int(match.group('modules'))
expected_resources = int(match.group('resources'))
prepare_files(example, tmp_path, match.group('files'),
match.group('fixtures'))
elif example.type == 'tfvars':
(tmp_path / 'terraform.auto.tfvars').write_text(example.code)
shutil.copytree(example.module, tmp_path, dirs_exist_ok=True)
tf_var_files = [(tmp_path / 'terraform.auto.tfvars').resolve()]
prepare_files(example, tmp_path, match.group('files'),
match.group('fixtures'))
expected_modules = int(match.group('modules'))
expected_resources = int(match.group('resources'))
inventory = []
if match.group('inventory') is not None:
python_test_path = str(example.module).replace('-', '_')
inventory = BASE_PATH.parent / python_test_path / 'examples'
inventory = inventory / match.group('inventory')
inventory = []
if match.group('inventory') is not None:
python_test_path = str(example.module).replace('-', '_')
inventory = BASE_PATH.parent / python_test_path / 'examples'
inventory = inventory / match.group('inventory')
# TODO: force plan_validator to never copy files (we're already
# running from a temp dir)
summary = plan_validator(module_path=tmp_path, inventory_paths=inventory,
tf_var_files=[])
summary = plan_validator(module_path=tmp_path, inventory_paths=inventory,
tf_var_files=tf_var_files)
print('\n')
print(yaml.dump({'values': summary.values}))
print(yaml.dump({'counts': summary.counts}))
print(yaml.dump({'outputs': summary.outputs}))
print('\n')
print(yaml.dump({'values': summary.values}))
print(yaml.dump({'counts': summary.counts}))
print(yaml.dump({'outputs': summary.outputs}))
counts = summary.counts
num_modules, num_resources = counts['modules'], counts['resources']
assert expected_modules == num_modules, 'wrong number of modules'
assert expected_resources == num_resources, 'wrong number of resources'
counts = summary.counts
num_modules, num_resources = counts['modules'], counts['resources']
assert expected_modules == num_modules, 'wrong number of modules'
assert expected_resources == num_resources, 'wrong number of resources'
# TODO(jccb): this should probably be done in check_documentation
# but we already have all the data here.
result = subprocess.run(
'terraform fmt -check -diff -no-color main.tf'.split(), cwd=tmp_path,
stdout=subprocess.PIPE, encoding='utf-8')
assert result.returncode == 0, f'terraform code not formatted correctly\n{result.stdout}'
# TODO(jccb): this should probably be done in check_documentation
# but we already have all the data here.
result = subprocess.run(
'terraform fmt -check -diff -no-color main.tf'.split(), cwd=tmp_path,
stdout=subprocess.PIPE, encoding='utf-8')
assert result.returncode == 0, f'terraform code not formatted correctly\n{result.stdout}'
else:
assert False, "can't find tftest directive"


@@ -1,4 +1,4 @@
# Copyright 2023 Google LLC
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -34,10 +34,9 @@ PlanSummary = collections.namedtuple('PlanSummary', 'values counts outputs')
def _prepare_root_module(path):
"""Context manager to prepare a terraform module to be tested.
If the TFTEST_COPY environment variable is set, `path` is copied to
a temporary directory and a few terraform files (e.g.
terraform.tfvars) are deleted to ensure a clean test environment.
Otherwise, `path` is simply returned untouched.
`path` is copied to a temporary directory and a few terraform files
(e.g. terraform.tfvars) are deleted to ensure a clean test
environment.
"""
# if we're copying the module, we might as well ignore files and
# directories that are automatically read by terraform. Useful
@@ -50,31 +49,18 @@
'.terraform.lock.hcl',
'terraform.tfvars', '.terraform')
if os.environ.get('TFTEST_COPY'):
# if the TFTEST_COPY is set, create temp dir and copy the root
# module there
with tempfile.TemporaryDirectory(dir=path.parent) as tmp_path:
tmp_path = Path(tmp_path)
with tempfile.TemporaryDirectory(dir=path.parent) as tmp_path:
tmp_path = Path(tmp_path)
# Running tests in a copy made with symlinks=True makes them run
# ~20% slower than when run in a copy made with symlinks=False.
shutil.copytree(path, tmp_path, dirs_exist_ok=True, symlinks=False,
ignore=ignore_patterns)
lockfile = _REPO_ROOT / 'tools' / 'lockfile' / '.terraform.lock.hcl'
if lockfile.exists():
shutil.copy(lockfile, tmp_path / '.terraform.lock.hcl')
# Running tests in a copy made with symlinks=True makes them run
# ~20% slower than when run in a copy made with symlinks=False.
shutil.copytree(path, tmp_path, dirs_exist_ok=True, symlinks=False,
ignore=ignore_patterns)
lockfile = _REPO_ROOT / 'tools' / 'lockfile' / '.terraform.lock.hcl'
if lockfile.exists():
shutil.copy(lockfile, tmp_path / '.terraform.lock.hcl')
yield tmp_path
else:
# check if any ignore_patterns files are present in path
if unwanted_files := ignore_patterns(path, os.listdir(path=path)):
# prevent shooting yourself in the foot (unexpected test results) when ignored files are present
raise RuntimeError(
f'Test in path {path} contains {", ".join(unwanted_files)} which may affect '
f'test results. Please run tests with TFTEST_COPY=1 environment variable'
)
# if TFTEST_COPY is not set, just return the same path
yield path
yield tmp_path
def plan_summary(module_path, basedir, tf_var_files=None, extra_files=None,


@@ -49,16 +49,15 @@ import urllib.parse
import click
import marko
# manipulate path to import COUNT_TEST_RE from tests/examples/test_plan.py
REPO_ROOT = os.path.dirname(os.path.dirname(__file__))
sys.path.append(os.path.join(REPO_ROOT, 'tests'))
from examples.test_plan import COUNT_TEST_RE
__version__ = '2.1.0'
# COUNT_TEST_RE copied from tests/examples/test_plan.py
COUNT_TEST_RE = re.compile(
r'# tftest +modules=(?P<modules>\d+) +resources=(?P<resources>\d+)' +
r'(?: +files=(?P<files>[\w@,_-]+))?' +
r'(?: +fixtures=(?P<fixtures>[\w@,_/.-]+))?' +
r'(?: +inventory=(?P<inventory>[\w\-.]+))?')
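The copied directive regex can be exercised on its own; a quick sketch showing how it parses the `# tftest` markers embedded in the recipe's tfvars examples:

```python
import re

# same pattern as the COUNT_TEST_RE definition above
COUNT_TEST_RE = re.compile(
    r'# tftest +modules=(?P<modules>\d+) +resources=(?P<resources>\d+)'
    r'(?: +files=(?P<files>[\w@,_-]+))?'
    r'(?: +fixtures=(?P<fixtures>[\w@,_/.-]+))?'
    r'(?: +inventory=(?P<inventory>[\w\-.]+))?')

# the directive used by the minimal recipe example
match = COUNT_TEST_RE.search("# tftest modules=5 resources=15")
```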
# TODO(ludomagno): decide if we want to support variables*.tf and outputs*.tf
FILE_DESC_DEFAULTS = {
'main.tf': 'Module-level locals and resources.',
'outputs.tf': 'Module outputs.',
@@ -87,6 +86,9 @@ OUT_RE = re.compile(r'''(?smx)
(?:^(.*?)$)
''')
OUT_TEMPLATE = ('description', 'value', 'sensitive')
RECIPE_RE = re.compile(r'(?sm)^#\s*(.*?)$')
REPO_ROOT = os.path.dirname(os.path.dirname(__file__))
REPO_URL = 'https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/blob/master'
TAG_RE = re.compile(r'(?sm)^\s*#\stfdoc:([^:]+:\S+)\s+(.*?)\s*$')
TOC_BEGIN = '<!-- BEGIN TOC -->'
TOC_END = '<!-- END TOC -->'
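`RECIPE_RE` is used by `parse_recipes` further down to lift a recipe's title from the first Markdown heading of its README. A self-contained check of the pattern (the README body is invented):

```python
import re

# same pattern as the RECIPE_RE constant above
RECIPE_RE = re.compile(r'(?sm)^#\s*(.*?)$')

readme = '# Cross-region internal Application Load Balancer\n\nRecipe body.\n'
title = RECIPE_RE.search(readme).group(1)
print(title)  # Cross-region internal Application Load Balancer
```

With `(?m)` the lazy group stops at the first end-of-line, so only the H1 text is captured even though `(?s)` lets `.` cross newlines.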
@@ -109,14 +111,21 @@ VAR_RE = re.compile(r'''(?smx)
VAR_RE_TYPE = re.compile(r'([\(\{\}\)])')
VAR_TEMPLATE = ('default', 'description', 'type', 'nullable')
Document = collections.namedtuple('Document',
'content files variables outputs recipes',
defaults=[None])
File = collections.namedtuple('File', 'name description modules resources')
Output = collections.namedtuple(
'Output', 'name description sensitive consumers file line')
Recipe = collections.namedtuple('Recipe', 'path title')
Variable = collections.namedtuple(
'Variable',
'name description type default required nullable source file line')
# parsing functions
def _escape(s):
'Basic, minimal HTML escaping'
return ''.join(c if c in UNESCAPED else ('&#%s;' % ord(c)) for c in s)
def _extract_tags(body):
@@ -158,112 +167,56 @@ def _parse(body, enum=VAR_ENUM, re=VAR_RE, template=VAR_TEMPLATE):
item[context].append(data)
def create_toc(readme):
  'Create a Markdown table of contents for a README.'
  doc = marko.parse(readme)
  lines = []
  headings = [x for x in doc.children if x.get_type() == 'Heading']
  for h in headings[1:]:
    title = h.children[0].children
    slug = title.lower().strip()
    slug = re.sub(r'[^\w\s-]', '', slug)
    slug = re.sub(r'[-\s]+', '-', slug)
    link = f'- [{title}](#{slug})'
    indent = '  ' * (h.level - 2)
    lines.append(f'{indent}{link}')
  return "\n".join(lines)
# formatting functions
def create_tfref(module_path, files=False, show_extra=False, exclude_files=None,
readme=None):
'Return tfdoc mark and generated content.'
if readme:
# check for overrides in doc
opts = get_tfref_opts(readme)
files = opts.get('files', files)
show_extra = opts.get('show_extra', show_extra)
abspath = os.path.abspath(module_path)
try:
if os.path.dirname(abspath).endswith('/modules'):
mod_recipes = list(
parse_recipes(module_path,
f'{REPO_URL}/modules/{os.path.basename(abspath)}'))
else:
mod_recipes = None
mod_files = list(parse_files(module_path, exclude_files)) if files else []
mod_variables = list(parse_variables(module_path, exclude_files))
mod_outputs = list(parse_outputs(module_path, exclude_files))
mod_fixtures = list(parse_fixtures(module_path, readme))
except (IOError, OSError) as e:
raise SystemExit(e)
doc = format_tfref(mod_outputs, mod_variables, mod_files, mod_fixtures,
mod_recipes, show_extra)
return Document(doc, mod_files, mod_variables, mod_outputs, mod_recipes)
def format_tfref(outputs, variables, files, fixtures, recipes=None,
show_extra=False):
'Return formatted document.'
buffer = []
if recipes:
buffer += ['', '## Recipes', '']
buffer += list(format_tfref_recipes(recipes))
if files:
buffer += ['', '## Files', '']
buffer += list(format_tfref_files(files))
@@ -301,6 +254,12 @@ def format_tfref_files(items):
f' {resources} |' if num_resources else '')
def format_tfref_fixtures(items):
'Format fixtures table.'
for x in items:
yield f"- [{os.path.basename(x)}]({x})"
def format_tfref_outputs(items, show_extra=True):
'Format outputs table.'
if not items:
@@ -319,6 +278,14 @@ def format_tfref_outputs(items, show_extra=True):
yield format
def format_tfref_recipes(recipes):
'Format recipes list.'
if not recipes:
return
for r in recipes:
yield f'- [{r.title}]({r.path})'
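`format_tfref_recipes` emits one Markdown list item per recipe, which is how the `## Recipes` section in the module README is produced. A self-contained run with an invented recipe entry:

```python
import collections

Recipe = collections.namedtuple('Recipe', 'path title')

def format_tfref_recipes(recipes):
  # one Markdown bullet per recipe, linking title to its repo path
  if not recipes:
    return
  for r in recipes:
    yield f'- [{r.title}]({r.path})'

items = [Recipe('https://example.com/modules/demo/recipe-basic', 'Basic recipe')]
lines = list(format_tfref_recipes(items))
print('\n'.join(lines))
```

Passing `None` (the `Document` default when a path is not under `modules/`) yields nothing, so non-module READMEs get no `## Recipes` section.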
def format_tfref_variables(items, show_extra=True):
'Format variables table.'
if not items:
@@ -352,29 +319,12 @@ def format_tfref_variables(items, show_extra=True):
yield format
# replace functions
def get_tfref_parts(readme):
@@ -385,14 +335,6 @@ def get_tfref_parts(readme):
return {'doc': m.group(1).strip(), 'start': m.start(), 'end': m.end()}
def get_tfref_opts(readme):
'Check if README file is setting options via a mark, and return options.'
m = MARK_OPTS_RE.search(readme)
@@ -408,31 +350,123 @@ def get_tfref_opts(readme):
return opts
def get_toc_parts(readme):
'Check if README file is marked, and return current toc.'
t = re.search('(?sm)%s(.*)%s' % (TOC_BEGIN, TOC_END), readme)
if not t:
return
return {'toc': t.group(1).strip(), 'start': t.start(), 'end': t.end()}
def get_readme(readme_path):
'Open and return README.md in module.'
try:
return open(readme_path, "r", encoding="utf-8").read()
except (IOError, OSError) as e:
raise SystemExit(f'Error opening README {readme_path}: {e}')
def parse_files(basepath, exclude_files=None):
'Return a list of File named tuples in root module at basepath.'
exclude_files = exclude_files or []
for name in glob.glob(os.path.join(basepath, '*tf')):
if os.path.islink(name):
continue
shortname = os.path.basename(name)
if shortname in exclude_files:
continue
try:
with open(name, encoding='utf-8') as file:
body = file.read()
except (IOError, OSError) as e:
raise SystemExit(f'Cannot read file {name}: {e}')
tags = _extract_tags(body)
description = tags.get('file:description',
FILE_DESC_DEFAULTS.get(shortname))
modules = set(
os.path.basename(urllib.parse.urlparse(m).path)
for m in FILE_RE_MODULES.findall(body))
resources = set(FILE_RE_RESOURCES.findall(body))
yield File(shortname, description, modules, resources)
def parse_fixtures(basepath, readme):
'Return a list of file paths of all the unique fixtures used in the module.'
doc = marko.parse(readme)
used_fixtures = set()
for child in doc.children:
if isinstance(child, marko.block.FencedCode):
if child.lang == 'hcl':
code = child.children[0].children
if match := COUNT_TEST_RE.search(code):
if fixtures := match.group('fixtures'):
for fixture in fixtures.split(','):
fixture_full = os.path.join(REPO_ROOT, 'tests', fixture)
if not os.path.exists(fixture_full):
raise SystemExit(f'Unknown fixture: {fixture}')
fixture_relative = os.path.relpath(fixture_full, basepath)
used_fixtures.add(fixture_relative)
yield from sorted(used_fixtures)
def parse_outputs(basepath, exclude_files=None):
'Return a list of Output named tuples for root module outputs*.tf.'
exclude_files = exclude_files or []
names = glob.glob(os.path.join(basepath, 'outputs*tf'))
names += glob.glob(os.path.join(basepath, 'local-*outputs*tf'))
for name in names:
shortname = os.path.basename(name)
if shortname in exclude_files:
continue
try:
with open(name, encoding='utf-8') as file:
body = file.read()
except (IOError, OSError) as e:
raise SystemExit(f'Cannot open outputs file {shortname}.')
for item in _parse(body, enum=OUT_ENUM, re=OUT_RE, template=OUT_TEMPLATE):
description = ''.join(item['description'])
sensitive = item['sensitive'] != []
consumers = item['tags'].get('output:consumers', '')
yield Output(name=item['name'], description=description,
sensitive=sensitive, consumers=consumers, file=shortname,
line=item['line'])
def parse_recipes(module_path, module_url):
'Find and return module recipes.'
for dirpath, dirnames, filenames in os.walk(module_path):
name = os.path.basename(dirpath)
if name.startswith('recipe-') and 'README.md' in filenames:
try:
with open(os.path.join(dirpath, 'README.md'), encoding='utf-8') as f:
match = RECIPE_RE.search(f.read())
if match:
yield Recipe(f'{module_url}/{name}', match.group(1))
else:
raise SystemExit(f'No title for recipe {dirpath}')
      except (IOError, OSError) as e:
        raise SystemExit(f'Error opening recipe {dirpath}: {e}')
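`parse_recipes` walks the module tree for `recipe-*` directories containing a titled `README.md`. A self-contained sketch of that discovery rule against a temporary layout (directory names and URL are invented):

```python
import os
import re
import shutil
import tempfile

RECIPE_RE = re.compile(r'(?sm)^#\s*(.*?)$')

def find_recipes(module_path, module_url):
  # same rule as parse_recipes: recipe-* dirs holding a titled README.md
  for dirpath, dirnames, filenames in os.walk(module_path):
    name = os.path.basename(dirpath)
    if name.startswith('recipe-') and 'README.md' in filenames:
      with open(os.path.join(dirpath, 'README.md'), encoding='utf-8') as f:
        match = RECIPE_RE.search(f.read())
      if match:
        yield f'{module_url}/{name}', match.group(1)

tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, 'recipe-demo'))
with open(os.path.join(tmp, 'recipe-demo', 'README.md'), 'w') as f:
  f.write('# Demo recipe\n')
recipes = list(find_recipes(tmp, 'https://example.com/modules/demo'))
print(recipes)  # [('https://example.com/modules/demo/recipe-demo', 'Demo recipe')]
shutil.rmtree(tmp)
```

This is why each recipe README must start with an H1 title: it becomes the link text in the generated `## Recipes` section.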
def parse_variables(basepath, exclude_files=None):
'Return a list of Variable named tuples for root module variables*.tf.'
exclude_files = exclude_files or []
names = glob.glob(os.path.join(basepath, 'variables*tf'))
names += glob.glob(os.path.join(basepath, 'local-*variables*tf'))
for name in names:
shortname = os.path.basename(name)
if shortname in exclude_files:
continue
try:
with open(name, encoding='utf-8') as file:
body = file.read()
except (IOError, OSError) as e:
raise SystemExit(f'Cannot open variables file {shortname}.')
for item in _parse(body):
description = (''.join(item['description'])).replace('|', '\\|')
vtype = '\n'.join(item['type'])
default = HEREDOC_RE.sub(r'\1', '\n'.join(item['default']))
required = not item['default']
nullable = item.get('nullable') != ['false']
source = item['tags'].get('variable:source', '')
if not required and default != 'null' and vtype == 'string':
default = f'"{default}"'
yield Variable(name=item['name'], description=description, type=vtype,
default=default, required=required, source=source,
file=shortname, line=item['line'], nullable=nullable)
def render_tfref(readme, doc):