Add support for Cloud Run v2 jobs
* create separate files for service creation (`service.tf`) and job creation (`job.tf`), for easy comparison
* add E2E tests where possible
* remove default value for input variable `region`
* fix subnet range in the VPC Access Connector example
* add creation of a service account for the audit logs call (the trigger requires a service account)
* use the provided trigger service account email in `local.trigger_sa_email`, so an explicitly provided SA is passed to the trigger
* set a default value for `vpc_connector_create.throughput.max` to match what the GCP API sets, as the provider uses a wrong default of 300, which results in a permanent diff
* create inventory files for all examples
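The `vpc_connector_create.throughput.max` fix above can be sketched roughly as follows. This is a hypothetical variable shape, not the module's exact schema; the point is that the default is aligned with the value the GCP API actually sets (1000), so plans no longer show a permanent diff against the provider's wrong default of 300:

```hcl
variable "vpc_connector_create" {
  description = "Populate to create a Serverless VPC Access connector."
  type = object({
    throughput = optional(object({
      # Default aligned with what the GCP API sets, instead of the
      # provider default of 300 which causes a perma-diff.
      max = optional(number, 1000)
      min = optional(number, 200)
    }), {})
  })
  default = null
}
```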
Global changes
* (tests) add input variable `project_number`, to allow assigning IAM permissions to Service Accounts in fixtures
* (tests) fix missing path output when an object is not found in the inventory
* (tests) fix `create_e2e_sandbox.sh` - now it properly finds the root of the repo
Secret Manager
* added `version_versions` output, to allow specifying versions in other modules. `versions` is sensitive, which makes it unsuitable for `for_each` values
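A minimal sketch of such an output, assuming the module manages versions via a `google_secret_manager_secret_version.default` resource keyed by version name (hypothetical resource address, and the `version` attribute is the version id). Because only the version ids are exposed, the map is not marked sensitive and Terraform accepts it as a `for_each` argument:

```hcl
output "version_versions" {
  description = "Version ids keyed by version name, safe for for_each."
  value = {
    for k, v in google_secret_manager_secret_version.default : k => v.version
  }
}
```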
New test fixtures
* `pubsub.tf` - creating one topic
* `secret-credential.tf` - creating Secret Manager `credential` secret
* `shared-vpc.tf` - creating two projects (host and service), and a VPC in the host project
* `vpc-connector.tf` - creating VPC Access Connector instance
* Enhanced this blueprint to add a second producer, and modularized the producer.
* Fixed terraform formatting
* Updating README.md with tfdoc
* Fixed test case conditions & module variable passing
* var definitions
* skeleton, untested
* fix errors, test with existing cluster
* test vpc creation, todo notes
* initial variables for AR and image
* Add support for remote repositories to artifact-registry
* Add support for virtual repositories to artifact-registry
* Add support for extra config options to artifact-registry
* artifact registry module: add validation and precondition, fix tests
* ar module id/name
* registry
* service account and roles
* fetch pods, remove image prefix
* small changes
* use additive IAM at project level
* configmaps
* manifests
* fix statefulset manifest
* service manifest
* fix configmap mode
* add todo
* job (broken)
* job
* wait on manifest, endpoints datasource
* fix job
* Fix local
* sa
* Update README.md
* Restructure gke bp
* refactor tree and infra variables
* no create test
* simplify cluster SA
* test cluster and vpc creation
* project creation fixes
* use iam_members variable
* nits
* readme with examples
* outputs
* variables, provider configuration
* variables, manifests
* start cluster job
* fix redis cluster creation
Co-authored-by: Julio Castillo <juliocc@users.noreply.github.com>
* Revert changes in autopilot cluster
* Default templates path, use namespace for node names
* Update readmes
* Fix IAM bindings
* Make STABLE the default release channel
* Use Cloud DNS as default DNS provider
* Allow optional Cloud NAT creation
* Allow backup agent and proxy only subnet
* Work around terraform not short-circuiting logical operators
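The short-circuit workaround above refers to the fact that Terraform evaluates both operands of `&&`, so an expression like `var.subnet != null && var.subnet.proxy_only` errors when `var.subnet` is null. An illustrative sketch (hypothetical variable name, not the blueprint's exact code) of the usual fix, via a nested conditional or `try()`:

```hcl
variable "subnet" {
  type    = object({ proxy_only = bool })
  default = null
}

locals {
  # Nested conditional: the attribute is only read when the object exists.
  proxy_only = var.subnet == null ? false : var.subnet.proxy_only
  # Equivalent form: try() falls back to the default on any error.
  proxy_only_alt = try(var.subnet.proxy_only, false)
}
```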
* Rename create variables to be more consistent with other blueprints
* Add basic features
* Update variable names
* Initial kafka JS
* Move providers to a new file
* Kafka / Strimzi
* First possibly working version for MySQL (with a lot of TODOs left)
* Explicitly use proxy repo + some other fixes
* Strimzi draft
* Refactor variables, use ClusterIP as pointer for mysql-router for bootstrapping
* Validate number of replicas, autoscale required number of running nodes to n/2+1
* Use a separate service for bootstrap; do not recreate all resources on a change of the replica count, as the config is preserved in the PV
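The replica validation and n/2+1 sizing mentioned above can be sketched as follows (hypothetical variable and local names; the blueprint's actual schema may differ). The quorum rule is that a strict majority of replicas must be running:

```hcl
variable "replicas" {
  description = "Number of MySQL replicas; odd values keep a clean quorum."
  type        = number
  default     = 3
  validation {
    condition     = var.replicas >= 3 && var.replicas % 2 == 1
    error_message = "Replica count must be an odd number >= 3."
  }
}

locals {
  # Majority quorum: n/2+1 nodes must be running, e.g. 2 of 3, 3 of 5.
  min_running_nodes = floor(var.replicas / 2) + 1
}
```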
* Test dual chart kafka
* Update chart for kafka
* Expose basic kafka configuration options
* Remove unused manifest
* Added batch blueprint
* Added README
* switch to kubectl_manifest
* Add README and support for static IP address
* Move namespace creation to helm
* Interpolate kafka variables
* Rename kafka-strimzi to kafka
* Added TUTORIAL for cloudshell for batch blueprint
* deleted tutorial
* Remove commented replace trigger
* Move to helm chart
* WIP of Cloud Shell tutorial for MySQL
* Rename folders
* Fix rename
* Update paths
* Unify styles
* Update paths
* Add Readme links
* Update mysql tutorial
* Fix path according to self-link
* Use relative path to cwd
* Fix service_account variable location
* Fix tfvars creation
* Restore some fixes for helm deployment
* Add cluster deletion_prevention
* Fixes for tutorial
* Update cluster docs
* Fixes to batch tutorial
* Bare bones readme for batch
* Update batch readme
* README fixes
* Fix README title for redis
* Fix Typos
* Make it easy to pass variables from autopilot-cluster to other modules
* Add connectivity test and bastion host
* updates to readme, and gpu fix
* Add versions.tf and README updates
* Fix typo
* Kafka and Redis README updates
* Update versions.tf
* Fixes
* Add boilerplate
* Fix linting
* Move mysql to separate branch
* Update cloud shell links
* Fix broken link
---------
Co-authored-by: Ludo <ludomagno@google.com>
Co-authored-by: Daniel Marzini <44803752+danielmarzini@users.noreply.github.com>
Co-authored-by: Wiktor Niesiobędzki <wiktorn@google.com>
Co-authored-by: Miren Esnaola <mirene@google.com>
Due to the `disk_type` validation for the auto-provisioned node pool,
this module always forced creation of a GKE Standard cluster
with an auto-provisioned node pool. This is not desirable if
you manage pools separately, e.g. with the `gke-nodepool` module.
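One way to relax such a validation is to apply it only when node auto-provisioning is actually configured. A hedged sketch with hypothetical variable names, not the module's exact schema:

```hcl
variable "cluster_autoscaling" {
  description = "Node auto-provisioning settings; null disables NAP."
  type = object({
    disk_type = optional(string)
  })
  default = null
  validation {
    # Skip the disk_type check entirely when NAP is not configured, so
    # the module no longer forces an auto-provisioned node pool.
    condition = (
      var.cluster_autoscaling == null
      ? true
      : contains(
        ["pd-standard", "pd-balanced", "pd-ssd"],
        coalesce(var.cluster_autoscaling.disk_type, "pd-balanced")
      )
    )
    error_message = "Invalid disk type for auto-provisioned node pools."
  }
}
```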