Endpoints in Service Directory can be *associated* with a
VPC. In this case, they can be used by supported Google
Cloud products to send requests directly to resources inside
a VPC. This feature is called Private Network Access.
The `google_service_directory_endpoint` resource supports
this configuration via a new `network` argument.
Note that this argument has an unusual format: it looks
like a standard VPC resource name, but it expects the
project number instead of the project ID.
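A minimal sketch of what this might look like, assuming a service named `google_service_directory_service.example` already exists; the resource name, address, and project number (`123456789`) are illustrative:

```hcl
resource "google_service_directory_endpoint" "example" {
  provider    = google-beta
  endpoint_id = "example-endpoint"
  service     = google_service_directory_service.example.id
  address     = "10.0.0.2"
  port        = 8080
  # Project number, not project ID, in the network reference.
  network     = "projects/123456789/locations/global/networks/example-network"
}
```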
You can't provide just one workload size (such as `web_server` or `triggerer`),
because no defaults are applied for the missing ones:
```
module.composer.google_composer_environment.env: Modifying... [id=***]
╷
│ Error: googleapi: Error 400: Found 6 problems:
│ 1) You have to specify Scheduler CPUs not lower than 0.5.
│ 2) You have to specify number of schedulers larger than 0.
│ 3) You have to specify Web Server CPUs not lower than 0.5.
│ 4) You have to specify Worker CPUs not lower than 0.5.
│ 5) You have to specify minimum number of workers larger than 0.
│ 6) Triggerer memory must be between 1.00GB and 6.50GB for given vCpu
```
So the defaults are provided when `workloads_config` is set to `null`.
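To satisfy the validation above, every workload block has to be sized explicitly. A hedged sketch of a complete `workloads_config` block for `google_composer_environment`; the sizes are illustrative, not recommended values:

```hcl
resource "google_composer_environment" "env" {
  name   = "example-env" # illustrative name
  region = "europe-west1"
  config {
    workloads_config {
      scheduler {
        cpu        = 0.5
        memory_gb  = 2
        storage_gb = 1
        count      = 1
      }
      web_server {
        cpu        = 0.5
        memory_gb  = 2
        storage_gb = 1
      }
      worker {
        cpu        = 0.5
        memory_gb  = 2
        storage_gb = 1
        min_count  = 1
        max_count  = 3
      }
      triggerer {
        cpu       = 1
        memory_gb = 1 # must stay between 1.00GB and 6.50GB per vCPU
        count     = 1
      }
    }
  }
}
```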
* Allow terraform fixtures for examples
* Allow defining multiple fixtures, and named fixtures under tests/fixtures/
* Enable e2e for wiktorn
* Fix prepare_files call for e2e
* Move fixture to separate file, fix test
* Revert shallow-copying symlinks (performance penalty ~20%)
* Update tfdoc.py to list used fixtures
---------
Co-authored-by: Wiktor Niesiobędzki <wiktorn@google.com>
This PR changes the `region` variable's default value in example tests to a real region value.
Some of the modules parse the region name to decide whether to create regional or zonal resources.
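One way such parsing can work, sketched here as an assumption about the modules' logic (not taken from their actual code): a zone name like `europe-west1-b` has one more dash-separated part than a region name like `europe-west1`, so a placeholder default like `region-1` would be misclassified.

```hcl
variable "region" {
  type    = string
  default = "europe-west1" # a real region, not a placeholder
}

locals {
  # Regions have two dash-separated parts, zones have three.
  is_zonal = length(split("-", var.region)) > 2
}
```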
This blueprint creates a networking playground showing a number of different VPC connectivity options:
* Hub and spoke via HA VPN
* Hub and spoke via VPC peering
* Interconnecting two networks via a network virtual appliance (aka NVA)
On top of that, this blueprint implements Policy Based Routing (aka PBR) to show how to force all traffic within a VPC through an internal network passthrough load balancer, implementing an Intrusion Prevention System (IPS). PBR is enabled in the hub VPC and matches all traffic originating from within that VPC.
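A minimal sketch of such a policy-based route, assuming a hub network `google_compute_network.hub` exists; the route name, ranges, and load balancer address are illustrative, not taken from the blueprint:

```hcl
resource "google_network_connectivity_policy_based_route" "ips" {
  name     = "pbr-to-ips"
  network  = google_compute_network.hub.id
  priority = 1000

  # IP of the internal passthrough load balancer fronting the IPS appliances.
  next_hop_ilb_ip = "10.0.0.10"

  filter {
    protocol_version = "IPV4"
    src_range        = "10.0.0.0/8" # traffic originating inside the hub VPC
    dest_range       = "0.0.0.0/0"
  }
}
```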