Fix external documentation links (#1347)

Bob Idle 2023-05-02 07:26:58 +02:00 committed by GitHub
parent 1547a55e1f
commit 94589efc34
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
16 changed files with 28 additions and 28 deletions

View File

@@ -24,7 +24,7 @@ Building Compute Engine image (Packer part):
## Using Packer's service account
The following blueprint leverages [service account impersonation](https://cloud.google.com/iam/docs/impersonating-service-accounts)
The following blueprint leverages [service account impersonation](https://cloud.google.com/iam/docs/service-account-overview#impersonation)
to execute any operations on GCP as a dedicated Packer service account. Depending on how you execute
the Packer tool, you need to grant your principal rights to impersonate Packer's service account.
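As a rough sketch of that grant (the service account and principal names below are placeholders, not values from this blueprint), it could look like:

```bash
# Allow a principal to mint tokens for Packer's service account.
# Project, service account and user are placeholders; substitute your own.
gcloud iam service-accounts add-iam-policy-binding \
  packer@my-project.iam.gserviceaccount.com \
  --member="user:you@example.com" \
  --role="roles/iam.serviceAccountTokenCreator"
```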

View File

@@ -5,7 +5,7 @@ The image is provisioned with sample shell scripts to update OS packages and i
The example uses following GCP features:
* [service account impersonation](https://cloud.google.com/iam/docs/impersonating-service-accounts)
* [service account impersonation](https://cloud.google.com/iam/docs/service-account-overview#impersonation)
* [Identity-Aware Proxy](https://cloud.google.com/iap/docs/using-tcp-forwarding) tunnel
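As a minimal sketch of the IAP tunnel in use (instance name and zone are placeholders), an SSH session through the tunnel could look like:

```bash
# SSH to an instance with no external IP, tunneled through Identity-Aware Proxy.
# Instance name and zone are placeholders.
gcloud compute ssh packer-target --zone=europe-west1-b --tunnel-through-iap
```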
<!-- BEGIN TFDOC -->

View File

@@ -36,13 +36,13 @@ Once done testing, you can clean up resources by running `terraform destroy`. To
Once resources are created, you can run queries on the data you exported to BigQuery. [Here](https://cloud.google.com/asset-inventory/docs/exporting-to-bigquery#querying_an_asset_snapshot) you can find some sample queries you can run.
You can also create a dashboard by connecting [Datalab](https://datastudio.google.com/) or any other BI tool of your choice to your BigQuery dataset.
You can also create a dashboard by connecting [Looker Studio](https://lookerstudio.google.com/) or any other BI tool of your choice to your BigQuery dataset.
## File exporter for JSON, CSV (optional)
This is an optional part.
Regular file-based exports of data from Cloud Asset Inventory may be useful, e.g. for scale-out network dependency discovery tools like [Planet Exporter](https://github.com/williamchanrico/planet-exporter), or to update legacy workload tracking or configuration management systems. BigQuery supports multiple [export formats](https://cloud.google.com/bigquery/docs/exporting-data#export_formats_and_compression_types) and the provided Cloud Function can upload objects to a Storage Bucket. Specify `job.DestinationFormat` as defined in the [documentation](https://googleapis.dev/python/bigquery/latest/generated/google.cloud.bigquery.job.DestinationFormat.html), e.g. `NEWLINE_DELIMITED_JSON`.
Regular file-based exports of data from Cloud Asset Inventory may be useful, e.g. for scale-out network dependency discovery tools like [Planet Exporter](https://github.com/williamchanrico/planet-exporter), or to update legacy workload tracking or configuration management systems. BigQuery supports multiple [export formats](https://cloud.google.com/bigquery/docs/exporting-data#export_formats_and_compression_types) and the provided Cloud Function can upload objects to a Storage Bucket. Specify `job.DestinationFormat` as defined in the [documentation](https://cloud.google.com/python/docs/reference/bigquery/latest/google.cloud.bigquery.job.DestinationFormat), e.g. `NEWLINE_DELIMITED_JSON`.
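As an illustrative sketch (project, dataset, table and bucket names are made up), the same format identifier also works with the `bq` CLI:

```bash
# Export a BigQuery table to GCS as newline-delimited JSON.
# Table and bucket names are placeholders.
bq extract \
  --destination_format=NEWLINE_DELIMITED_JSON \
  'my-project:cai_export.assets' \
  'gs://my-export-bucket/assets-*.json'
```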
It helps to create a custom [scheduled query](https://cloud.google.com/bigquery/docs/scheduling-queries#console) from the CAI export tables and write the results into a dedicated table (with overwrites). Define the query's output columns to comply with downstream systems' field requirements, and time query execution after the CAI export into BQ for freshness. See [sample queries](https://cloud.google.com/asset-inventory/docs/exporting-to-bigquery-sample-queries).
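A hedged sketch of such a scheduled query created from the CLI (all names and the SQL itself are placeholders; verify the flags against the scheduling docs linked above):

```bash
# Create a scheduled query that flattens the CAI export into a dedicated table,
# overwriting it on each run. Names and SQL are placeholders.
bq query \
  --use_legacy_sql=false \
  --destination_table='cai_export.assets_flat' \
  --display_name='Daily CAI flatten' \
  --schedule='every 24 hours' \
  --replace=true \
  'SELECT name, asset_type FROM `my-project.cai_export.assets`'
```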

View File

@@ -1,6 +1,6 @@
# M4CE (v5) - ESXi Connector
This blueprint deploys a virtual machine from an OVA image and the security prerequisites to run the Migrate for Compute Engine (v5) [connector](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/migrate-connector) on VMware ESXi.
This blueprint deploys a virtual machine from an OVA image and the security prerequisites to run the Migrate for Compute Engine (v5) [connector](https://cloud.google.com/migrate/virtual-machines/docs/5.0/how-to/migrate-connector) on VMware ESXi.
The blueprint is designed to deploy the M4CE (v5) connector on an existing VMware environment. The [network configuration](https://cloud.google.com/migrate/compute-engine/docs/5.0/concepts/architecture#migration_architecture) required to allow the migrate connector to communicate with the GCP API is not included in this blueprint.
@@ -13,9 +13,9 @@ This is the high-level diagram:
This sample creates several distinct groups of resources:
- virtual machine
- [M4CE migrate connector](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/migrate-connector#installing_the_migrate_connector)
- [M4CE migrate connector](https://cloud.google.com/migrate/virtual-machines/docs/5.0/how-to/migrate-connector#step-InstallMigrateConnector)
- IAM
- [vCenter user role](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/migrate-connector#step-1)
- [vCenter user role](https://cloud.google.com/migrate/virtual-machines/docs/5.0/how-to/migrate-connector#step-1)
<!-- BEGIN TFDOC -->
## Variables

View File

@@ -16,9 +16,9 @@ This sample creates/updates several distinct groups of resources:
- Deploy M4CE host project with [required services](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#enabling_required_services_on_the_host_project) on a new or existing project.
- M4CE target project prerequisites deployed on existing projects.
- IAM
- Create a [service account](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/migrate-connector#step-3) used at runtime by the M4CE connector for data replication
- Grant [migration admin roles](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#using_predefined_roles) to provided user accounts
- Grant [migration viewer role](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#using_predefined_roles) to provided user accounts
- Create a [service account](https://cloud.google.com/migrate/virtual-machines/docs/5.0/how-to/migrate-connector#step-3) used at runtime by the M4CE connector for data replication
- Grant [migration admin roles](https://cloud.google.com/migrate/virtual-machines/docs/5.0/how-to/enable-services#using_predefined_roles) to provided user accounts
- Grant [migration viewer role](https://cloud.google.com/migrate/virtual-machines/docs/5.0/how-to/enable-services#using_predefined_roles) to provided user accounts
<!-- BEGIN TFDOC -->
## Variables

View File

@@ -16,10 +16,10 @@ This sample creates/updates several distinct groups of resources:
- M4CE host project with [required services](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#enabling_required_services_on_the_host_project) deployed on a new or existing project.
- M4CE target project prerequisites deployed on existing projects.
- IAM
- Create a [service account](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/migrate-connector#step-3) used at runtime by the M4CE connector for data replication
- Grant [migration admin roles](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#using_predefined_roles) to provided user accounts.
- Grant [migration viewer role](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#using_predefined_roles) to provided user accounts.
- Grant [roles on shared VPC](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/target-project#configure-permissions) to migration admins
- Create a [service account](https://cloud.google.com/migrate/virtual-machines/docs/5.0/how-to/migrate-connector#step-3) used at runtime by the M4CE connector for data replication
- Grant [migration admin roles](https://cloud.google.com/migrate/virtual-machines/docs/5.0/how-to/enable-services#using_predefined_roles) to provided user accounts.
- Grant [migration viewer role](https://cloud.google.com/migrate/virtual-machines/docs/5.0/how-to/enable-services#using_predefined_roles) to provided user accounts.
- Grant [roles on shared VPC](https://cloud.google.com/migrate/virtual-machines/docs/5.0/how-to/target-project#configure-permissions) to migration admins
<!-- BEGIN TFDOC -->
## Variables
@@ -41,7 +41,7 @@ This sample creates/updates several distinct groups of resources:
<!-- END TFDOC -->
## Manual Steps
Once this blueprint is deployed, the M4CE [m4ce_gmanaged_service_account](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/target-sa-compute-engine#configuring_the_default_service_account) has to be configured to grant access to the shared VPC and allow the deployment of Compute Engine instances resulting from the migration.
Once this blueprint is deployed, the M4CE [m4ce_gmanaged_service_account](https://cloud.google.com/migrate/virtual-machines/docs/5.0/how-to/target-sa-compute-engine#configuring_the_default_service_account) has to be configured to grant access to the shared VPC and allow the deployment of Compute Engine instances resulting from the migration.
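A possible shape of that configuration from the CLI (the service agent address and role below are assumptions to verify against the linked documentation, and all IDs are placeholders):

```bash
# Grant the M4CE managed service account network access on the shared VPC host project.
# Service agent format and role are assumptions; project ID and number are placeholders.
gcloud projects add-iam-policy-binding host-project-id \
  --member="serviceAccount:service-123456789012@gcp-sa-vmmigration.iam.gserviceaccount.com" \
  --role="roles/compute.networkUser"
```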
## Test

View File

@@ -17,9 +17,9 @@ This sample creates several distinct groups of resources:
- networking
- Default VPC network
- IAM
- One [service account](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/migrate-connector#step-3) used at runtime by the M4CE connector for data replication
- Grant [migration admin roles](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#using_predefined_roles) to admin user accounts
- Grant [migration viewer role](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#using_predefined_roles) to viewer user accounts
- One [service account](https://cloud.google.com/migrate/virtual-machines/docs/5.0/how-to/migrate-connector#step-3) used at runtime by the M4CE connector for data replication
- Grant [migration admin roles](https://cloud.google.com/migrate/virtual-machines/docs/5.0/how-to/enable-services#using_predefined_roles) to admin user accounts
- Grant [migration viewer role](https://cloud.google.com/migrate/virtual-machines/docs/5.0/how-to/enable-services#using_predefined_roles) to viewer user accounts
<!-- BEGIN TFDOC -->
## Variables

View File

@@ -30,7 +30,7 @@ The main components that we would be setting up are (to learn more about these p
* [Cloud Storage (GCS) bucket](https://cloud.google.com/storage/): data lake solution to store extracted raw data that must undergo some kind of transformation.
* [Cloud Dataflow pipeline](https://cloud.google.com/dataflow): to build fully managed batch and streaming pipelines to transform data stored in GCS buckets ready for processing in the Data Warehouse using Apache Beam.
* [BigQuery datasets and tables](https://cloud.google.com/bigquery): to store the transformed data in and query it using SQL, use it to make reports or begin training [machine learning](https://cloud.google.com/bigquery-ml/docs/introduction) models without having to take your data out.
* [Service accounts](https://cloud.google.com/iam/docs/service-accounts) (__created with least privilege on each resource__): one for uploading data into the GCS bucket, one for orchestration, one for Dataflow instances and one for the BigQuery tables. You can also grant users or groups of users a viewer role on the created resources, along with the ability to impersonate service accounts to test the Dataflow pipelines before automating them with a tool like [Cloud Composer](https://cloud.google.com/composer).
* [Service accounts](https://cloud.google.com/iam/docs/service-account-overview) (__created with least privilege on each resource__): one for uploading data into the GCS bucket, one for orchestration, one for Dataflow instances and one for the BigQuery tables. You can also grant users or groups of users a viewer role on the created resources, along with the ability to impersonate service accounts to test the Dataflow pipelines before automating them with a tool like [Cloud Composer](https://cloud.google.com/composer).
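As a sketch of that impersonation pattern (bucket, file and service account names are invented), a user with Token Creator rights could test the upload path like this:

```bash
# Upload a file as the dedicated uploader service account rather than your own identity.
# Bucket and service account names are placeholders.
gcloud storage cp ./data.csv gs://my-landing-bucket/ \
  --impersonate-service-account=gcs-uploader@my-project.iam.gserviceaccount.com
```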
For a full list of the resources that will be created, please refer to the [GitHub repository](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/tree/master/blueprints/data-solutions/gcs-to-bq-with-least-privileges) for this project. If you're migrating from another cloud provider, refer to [this](https://cloud.google.com/free/docs/aws-azure-gcp-service-comparison) documentation to see equivalent services and comparisons in Microsoft Azure and Amazon Web Services.

View File

@@ -8,7 +8,7 @@ This blueprint shows how to deploy a filtering HTTP proxy to restrict Internet a
The reason for using Private Service Connect in this setup is to have a common proxy setup between all environments without having to share a VPC between projects. This allows us to enforce the `compute.vmExternalIpAccess` [organization policy](https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints), which prevents the service projects from having external IPs, thus forcing all outbound Internet connections through the proxy.
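For reference, a minimal sketch of enforcing that constraint on a folder (the folder ID is a placeholder):

```bash
# Deny external IPs for all VMs under the folder. Folder ID is a placeholder.
cat > policy.yaml <<'EOF'
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allValues: DENY
EOF
gcloud resource-manager org-policies set-policy policy.yaml --folder=123456789
```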
To allow Internet connectivity to the proxy subnet, a Cloud NAT instance is configured to allow usage from [that subnet only](https://cloud.google.com/nat/docs/using-nat#specify_subnet_ranges_for_nat). All other subnets are not allowed to use the Cloud NAT instance.
To allow Internet connectivity to the proxy subnet, a Cloud NAT instance is configured to allow usage from [that subnet only](https://cloud.google.com/nat/docs/set-up-manage-network-address-translation#specify_subnet_ranges_for_nat). All other subnets are not allowed to use the Cloud NAT instance.
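A minimal sketch of such a subnet-scoped NAT configuration (router, region and subnet names are placeholders):

```bash
# Create a NAT that serves only the proxy subnet's ranges. Names are placeholders.
gcloud compute routers nats create proxy-nat \
  --router=proxy-router \
  --region=europe-west1 \
  --nat-custom-subnet-ip-ranges=proxy-subnet \
  --auto-allocate-nat-external-ips
```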
To simplify the usage of the proxy, a Cloud DNS private zone is created in each consumer VPC and the IP address of the proxy is exposed with the FQDN `proxy.internal`. In addition, system-wide `http_proxy` and `https_proxy` environment variables and an APT configuration are rolled out via a [startup script](startup.sh).
<!-- BEGIN TFDOC -->

View File

@@ -7,7 +7,7 @@ This blueprint shows how to deploy a filtering HTTP proxy to restrict Internet a
The VPC is a Shared VPC and all the service projects will be located under a folder enforcing the `compute.vmExternalIpAccess` [organization policy](https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints). This prevents the service projects from having external IPs, thus forcing all outbound Internet connections through the proxy.
To allow Internet connectivity to the proxy subnet, a Cloud NAT instance is configured to allow usage from [that subnet only](https://cloud.google.com/nat/docs/using-nat#specify_subnet_ranges_for_nat). All other subnets are not allowed to use the Cloud NAT instance.
To allow Internet connectivity to the proxy subnet, a Cloud NAT instance is configured to allow usage from [that subnet only](https://cloud.google.com/nat/docs/set-up-manage-network-address-translation#specify_subnet_ranges_for_nat). All other subnets are not allowed to use the Cloud NAT instance.
To simplify the usage of the proxy, a Cloud DNS private zone is created and the IP address of the proxy is exposed with the FQDN `proxy.internal`.

View File

@@ -7,7 +7,7 @@ The blueprint shows some of the limitations that need to be taken into account w
- no mesh networking between the spokes
- complex support for managed services hosted in tenant VPCs connected via peering (Cloud SQL, GKE, etc.)
One possible solution to the managed service limitation above is presented here, using a static VPN to establish connectivity to the GKE masters in the tenant project ([courtesy of @drebes](https://github.com/drebes/tf-samples/blob/master/gke-master-from-hub/main.tf#L10)). Other solutions typically involve the use of proxies, as [described in this GKE article](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies).
One possible solution to the managed service limitation above is presented here, using a static VPN to establish connectivity to the GKE masters in the tenant project ([courtesy of @drebes](https://github.com/drebes/tf-samples/blob/master/gke-master-from-hub/main.tf#L10)). Other solutions typically involve the use of proxies, as [described in this GKE article](https://cloud.google.com/kubernetes-engine/docs/archive/creating-kubernetes-engine-private-clusters-with-net-proxies).
One other topic that needs to be considered when using peering is the limit of 25 peerings in each peering group, which constrains the scalability of designs like the one presented here.
@@ -41,7 +41,7 @@ gcloud container clusters get-credentials cluster-1 --zone europe-west1-b
kubectl get all
```
The blueprint configures the peering with the GKE master VPC to export routes for you, so that VPN routes are passed through the peering. You can disable this by hand in the console or by editing the `peering_config` variable in the `gke-cluster` module, to test non-working configurations or to switch to using the [GKE proxy](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies).
The blueprint configures the peering with the GKE master VPC to export routes for you, so that VPN routes are passed through the peering. You can disable this by hand in the console or by editing the `peering_config` variable in the `gke-cluster` module, to test non-working configurations or to switch to using the [GKE proxy](https://cloud.google.com/kubernetes-engine/docs/archive/creating-kubernetes-engine-private-clusters-with-net-proxies).
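For instance, a hedged sketch of that manual toggle from the command line (peering and network names are placeholders):

```bash
# Stop exporting custom routes over the peering, to test the non-working configuration.
# Peering and network names are placeholders.
gcloud compute networks peerings update gke-master-peering \
  --network=hub-vpc \
  --no-export-custom-routes
```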
### Export routes via Terraform (recommended)

View File

@@ -4,7 +4,7 @@ This blueprint creates a simple **Hub and Spoke VPN** setup, where the VPC netwo
A few additional features are also shown:
- [custom BGP advertisements](https://cloud.google.com/router/docs/how-to/advertising-overview) to implement transitivity between spokes
- [custom BGP advertisements](https://cloud.google.com/network-connectivity/docs/router/how-to/advertising-overview) to implement transitivity between spokes
- [VPC Global Routing](https://cloud.google.com/network-connectivity/docs/router/how-to/configuring-routing-mode) to leverage a regional set of VPN gateways in different regions as next hops (used here for illustrative/study purposes, not usually done in real life)
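As a sketch of what a custom advertisement looks like on a Cloud Router BGP peer (router, peer name, region and ranges are placeholders):

```bash
# Advertise spoke ranges on the hub's BGP peer so they are reachable from other spokes.
# Router, peer name, region and ranges are placeholders.
gcloud compute routers update-bgp-peer hub-router \
  --peer-name=spoke-1-peer \
  --region=europe-west1 \
  --advertisement-mode=CUSTOM \
  --set-advertisement-ranges=10.0.16.0/24,10.0.32.0/24
```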
The blueprint has been purposefully kept simple to show how to use and wire the VPC and VPN-HA modules together, and so that it can be used as a basis for experimentation. For a more complex scenario that better reflects real-life usage, including [Shared VPC](https://cloud.google.com/vpc/docs/shared-vpc) and [DNS cross-project binding](https://cloud.google.com/dns/docs/zones/cross-project-binding) please refer to the [FAST network stage](../../../fast/stages/2-networking-b-vpn/).

View File

@@ -3,7 +3,7 @@
This module allows creating a Cloud Identity group and assigning members.
## Usage
To use this module you must either run Terraform as a user that has the Groups Admin role in Cloud Identity or [enable domain-wide delegation](https://developers.google.com/admin-sdk/directory/v1/guides/delegation) to the service account used by Terraform. If you use a service account, you must also grant that service account the Groups Admin role in Cloud Identity.
To use this module you must either run Terraform as a user that has the Groups Admin role in Cloud Identity or [enable domain-wide delegation](https://developers.google.com/identity/protocols/oauth2/service-account#delegatingauthority) to the service account used by Terraform. If you use a service account, you must also grant that service account the Groups Admin role in Cloud Identity.
Please note that the underlying Terraform resources only allow the creation of groups with members that are part of the organization. If you want to create memberships for identities outside your own organization, you have to manually allow members outside your organization in the Cloud Identity admin console.

View File

@@ -1,6 +1,6 @@
# Google Cloud Logging Buckets Module
This module manages [logging buckets](https://cloud.google.com/logging/docs/storage#logs-buckets) for a project, folder, organization or billing account.
This module manages [logging buckets](https://cloud.google.com/logging/docs/routing/overview#buckets) for a project, folder, organization or billing account.
Note that the logging buckets automatically created for a given folder, project, organization or billing account cannot be deleted. Creating a resource of this type will acquire and update the resource that already exists at the desired location. Since these buckets cannot be removed, deleting this resource will remove the bucket configuration from your Terraform state but will leave the logging bucket unchanged. The buckets that are currently automatically created are "_Default" and "_Required".
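As a small usage sketch against one of those built-in buckets (project ID and retention value are placeholders):

```bash
# Update retention on the automatically created _Default bucket; it can be
# reconfigured but not deleted. Project and retention are placeholders.
gcloud logging buckets update _Default \
  --location=global \
  --retention-days=60 \
  --project=my-project
```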

View File

@@ -1,6 +1,6 @@
# Cloud HA VPN Module
This module makes it easy to deploy either GCP-to-GCP or GCP-to-On-prem [Cloud HA VPN](https://cloud.google.com/vpn/docs/concepts/overview#ha-vpn).
This module makes it easy to deploy either GCP-to-GCP or GCP-to-On-prem [Cloud HA VPN](https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview#ha-vpn).
## Examples

View File

@@ -26,7 +26,7 @@ IAM is managed via several variables that implement different levels of control:
- `group_iam` and `iam` configure authoritative bindings that manage individual roles exclusively, mapping to the [`google_project_iam_binding`](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_project_iam#google_project_iam_binding) resource
- `iam_additive` and `iam_additive_members` configure additive bindings that only manage individual role/member pairs, mapping to the [`google_project_iam_member`](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_project_iam#google_project_iam_member) resource
Be mindful about service identity roles when using authoritative IAM, as you might inadvertently remove a role from a [service identity](https://cloud.google.com/iam/docs/service-accounts#google-managed) or default service account. For example, using `roles/editor` with `iam` or `group_iam` will remove the default permissions for the Cloud Services identity. A simple workaround for these scenarios is described below.
Be mindful about service identity roles when using authoritative IAM, as you might inadvertently remove a role from a [service identity](https://cloud.google.com/iam/docs/service-account-types#google-managed) or default service account. For example, using `roles/editor` with `iam` or `group_iam` will remove the default permissions for the Cloud Services identity. A simple workaround for these scenarios is described below.
### Authoritative IAM