Update to multiple READMEs (#740)

* Update CONTRIBUTING.md

* Update README.md

* fix gcs to bq example readme

* tfdoc

Co-authored-by: Ludovico Magnocavallo <ludomagno@google.com>
Alejandro Leal 2022-08-11 03:40:55 -04:00 committed by GitHub
parent c0e17f4732
commit a85f3aa907
17 changed files with 25 additions and 26 deletions

View File

@@ -49,7 +49,7 @@ If you changed variables or outputs you need to regenerate the relevant tables i
./tools/tfdoc.py modules/my-changed-module
```
If the folder contains files which won't be pushed to the repository, for example provider files used in FAST stages, you need to change the command above to specifically exclude them from `tfdoc` geerated output.
If the folder contains files which won't be pushed to the repository, for example provider files used in FAST stages, you need to change the command above to specifically exclude them from `tfdoc` generated output.
```bash
# exclude a local provider file from the generated documentation
@@ -106,7 +106,7 @@ This is probably our oldest and most important design principle. When designing
It's a radically different approach from designing by product or feature, where boundaries are drawn around a single GCP functionality.
Our modules -- and in a much broader sense our FAST stages -- are all designed to encapsulate a set of functionally related resources and their configurations. This achieves two main goals: to dramatically improve readabiity by using a single block of code -- a module declaration -- for a logical component; and to allow consumers to rely on outputs without having to worry about the dependency chain, as all related resources and configurations are managed internally in the module or stage.
Our modules -- and in a much broader sense our FAST stages -- are all designed to encapsulate a set of functionally related resources and their configurations. This achieves two main goals: to dramatically improve readability by using a single block of code -- a module declaration -- for a logical component; and to allow consumers to rely on outputs without having to worry about the dependency chain, as all related resources and configurations are managed internally in the module or stage.
Taking IAM as an example, we do not offer a single module to centrally manage role bindings (the product/feature based approach) but implement it instead in each module (the logical component approach) since:
@@ -335,7 +335,7 @@ module "simple-vm-example" {
#### Depend outputs on internal resources
We mentioned this principle when discussing encapsulation above but it's worth repating it explicitly: **set explicit dependencies in outputs so consumers will wait for full resource configuration**.
We mentioned this principle when discussing encapsulation above but it's worth repeating it explicitly: **set explicit dependencies in outputs so consumers will wait for full resource configuration**.
As an example, users can safely reference the project module's `project_id` output from other modules, knowing that the dependency tree for project configurations (service activation, IAM, etc.) has already been defined inside the module itself. In this particular example the output is also interpolated instead of derived from the resource, so as to avoid issues when used in `for_each` keys.
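A minimal sketch of this pattern is shown below; the resource names and the prefix/name interpolation are illustrative assumptions, not the project module's actual code.
```hcl
# Hedged sketch: the output value is interpolated (known at plan time, safe in
# for_each keys) and explicitly depends on the module's internal configuration
# resources, so consumers wait for service activation and IAM to be in place.
# Resource and variable names below are assumptions for illustration.
output "project_id" {
  description = "Project id, safe to reference from other modules."
  value       = "${var.prefix}-${var.name}"
  depends_on = [
    google_project_service.services,
    google_project_iam_member.bindings
  ]
}
```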
@@ -410,7 +410,7 @@ variable "folder_ids" {
}
```
When creating a new stage or adding a feature to an existing one, always try to leverage the existing interfaces when some of the information you produce needs to cross the stage boundary, so as to minimize impact on producers and consumers lofically dependent on your stage.
When creating a new stage or adding a feature to an existing one, always try to leverage the existing interfaces when some of the information you produce needs to cross the stage boundary, so as to minimize impact on producers and consumers logically dependent on your stage.
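As a hypothetical illustration (the module source, map key and inputs below are assumptions, not taken from an actual stage), a downstream stage would consume the existing `folder_ids` interface instead of introducing a new variable:
```hcl
# Hypothetical consumer of the existing folder_ids interface; source path,
# map key and inputs are illustrative only.
module "data-platform-project" {
  source = "../../../modules/project"
  name   = "data-platform-0"
  parent = var.folder_ids["data-platform"]
}
```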
#### Output files
@@ -566,7 +566,7 @@ We run two GitHub workflows on PRs:
The linting workflow tests:
- that the correct copyright boilerplate is present in all files, using `tools/check_boilerplate.py`
- that all Teraform code is linted via `terraform fmt`
- that all Terraform code is linted via `terraform fmt`
- that all README files have up to date outputs, variables, and files (where relevant) tables, via `tools/check_documentation.py`
- that all links in README files are syntactically correct and valid if internal, via `tools/check_links.py`
- that resource names used in FAST stages stay within a length limit, via `tools/check_names.py`

View File

@@ -6,7 +6,7 @@ The examples in this folder show how to wire together different Google Cloud ser
<a href="./asset-inventory-feed-remediation" title="Resource tracking and remediation via Cloud Asset feeds"><img src="./asset-inventory-feed-remediation/diagram.png" align="left" width="280px"></a> This [example](./asset-inventory-feed-remediation) shows how to leverage [Cloud Asset Inventory feeds](https://cloud.google.com/asset-inventory/docs/monitoring-asset-changes) to stream resource changes in real time, and how to programmatically use the feed change notifications for alerting or remediation, via a Cloud Function wired to the feed PubSub queue.
The example's feed tracks changes to Google Compute instances, and the Cloud Function enforces policy compliance on each change so that tags match a set of simple rules. The obious use case is when instance tags are used to scope firewall rules, but the example can easily be adapted to suit different use cases.
The example's feed tracks changes to Google Compute instances, and the Cloud Function enforces policy compliance on each change so that tags match a set of simple rules. The obvious use case is when instance tags are used to scope firewall rules, but the example can easily be adapted to suit different use cases.
<br clear="left">

View File

@@ -13,7 +13,7 @@ Terraform:
Ansible:
- Installs the required Windows features and joins the computer to the AD domain.
- Provisions some tests users, groups and group memberships in AD. The data to provision is in the ifles directory of the ad-provisioning ansible role. There is script available in the scripts/ad-provisioning folder that you can use to generate an alternative users or memberships file.
- Provisions some tests users, groups and group memberships in AD. The data to provision is in the files directory of the ad-provisioning ansible role. There is script available in the scripts/ad-provisioning folder that you can use to generate an alternative users or memberships file.
- Installs AD FS
In addition to this, we also include a Powershell script that facilitates the configuration required for Anthos when authenticating users with AD FS as IdP.
@@ -43,7 +43,7 @@ Once the resources have been created, do the following:
https://adfs.my-domain.org/adfs/ls/IdpInitiatedSignOn.aspx
2. Enter the username and password of one of the users provisioned. The username has to be in the format: username@my-domain.org
3. Verify that you have successfuly signed in.
3. Verify that you have successfully signed in.
Once done testing, you can clean up resources by running `terraform destroy`.
<!-- BEGIN TFDOC -->

View File

@@ -10,7 +10,7 @@ They are meant to be used as minimal but complete starting points to create migr
<a href="./single-project/" title="M4CE with single project"><img src="./single-project/diagram.png" align="left" width="280px"></a> This [example](./single-project/) implements a simple environment for Migrate for Compute Engine (v5) where both the API backend and the migration target environment are deployed on a single GCP project.
This example represents the easiest sceario to implement a Migrate for Compute Engine (v5) enviroment suitable for small migrations on simple enviroments or for product showcases.
This example represents the easiest scenario to implement a Migrate for Compute Engine (v5) environment suitable for small migrations on simple environments or for product showcases.
<br clear="left">
### M4CE with host and target projects

View File

@@ -2,7 +2,7 @@
This example deploys a virtual machine from an OVA image and the security prerequisites to run the Migrate for Compute Engine (v5) [connector](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/migrate-connector) on VMWare ESXi.
The example is designed to deploy the M4CE (v5) connector on and existing VMWare environment. The [network configuration](https://cloud.google.com/migrate/compute-engine/docs/5.0/concepts/architecture#migration_architecture) required to allow the communication of the migrate connetor to the GCP API is not included in this example.
The example is designed to deploy the M4CE (v5) connector on and existing VMWare environment. The [network configuration](https://cloud.google.com/migrate/compute-engine/docs/5.0/concepts/architecture#migration_architecture) required to allow the communication of the migrate connector to the GCP API is not included in this example.
This is the high level diagram:
@@ -31,7 +31,7 @@ This sample creates several distinct groups of resources:
<!-- END TFDOC -->
## Manual Steps
Once this example is deployed a VCenter user has to be created and binded to the M4CE role in order to allow the connector access the VMWare resources.
The user can be created manually through the VCenter web interface or througt GOV commandline if it is available:
The user can be created manually through the VCenter web interface or through GOV commandline if it is available:
```bash
export GOVC_URL=<VCENTER_URL> (eg. https://192.168.1.100/sdk)
export GOVC_USERNAME=<VCENTER_ADMIN_USER> (eg. administrator@example.local)

View File

@@ -2,7 +2,7 @@
This example creates a Migrate for Compute Engine (v5) environment deployed on an host project with multiple [target projects](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#identifying_your_host_project) and shared VPCs.
The example is designed to implement a M4CE (v5) environment on-top of complex migration landing environment where VMs have to be migrated to multiple target projects. In this example targets are alse service projects for a shared VPC. It also includes the IAM wiring needed to make such scenarios work.
The example is designed to implement a M4CE (v5) environment on-top of complex migration landing environment where VMs have to be migrated to multiple target projects. In this example targets are also service projects for a shared VPC. It also includes the IAM wiring needed to make such scenarios work.
This is the high level diagram:

View File

@@ -1,6 +1,6 @@
# M4CE(v5) - Single Project
This sample creates a simple M4CE (v5) environment deployed on a signle [host project](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#identifying_your_host_project).
This sample creates a simple M4CE (v5) environment deployed on a single [host project](https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services#identifying_your_host_project).
The example is designed for quick tests or product demos where it is required to setup a simple and minimal M4CE (v5) environment. It also includes the IAM wiring needed to make such scenarios work.

View File

@@ -23,8 +23,8 @@ This sample creates several distinct groups of resources:
- One service account for the GGE instance
- KMS
- One key ring
- One crypto key (Procection level: softwere) for Cloud Engine
- One crypto key (Protection level: softwere) for Cloud Storage
- One crypto key (Protection level: software) for Cloud Engine
- One crypto key (Protection level: software) for Cloud Storage
- GCE
- One instance encrypted with a CMEK Cryptokey hosted in Cloud KMS
- GCS

View File

@@ -165,7 +165,7 @@ The default configuration will implement 3 tags:
Anything that is not tagged is available to all users who have access to the data warehouse.
For the porpuse of the example no groups has access to tagged data. You can configure your tags and roles associated by configuring the `data_catalog_tags` variable. We suggest useing the "[Best practices for using policy tags in BigQuery](https://cloud.google.com/bigquery/docs/best-practices-policy-tags)" article as a guide to designing your tags structure and access pattern.
For the purpose of the example no groups has access to tagged data. You can configure your tags and roles associated by configuring the `data_catalog_tags` variable. We suggest using the "[Best practices for using policy tags in BigQuery](https://cloud.google.com/bigquery/docs/best-practices-policy-tags)" article as a guide to designing your tags structure and access pattern.
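As a purely illustrative sketch (tag names, the role layout and the group below are assumptions; the authoritative type of `data_catalog_tags` is in the example's `variables.tf`), granting one group fine-grained read access to a tag could look like this in `terraform.tfvars`:
```hcl
# Hypothetical terraform.tfvars values -- adjust tag names and principals to your
# own taxonomy; tags set to null carry no additional access grants here.
data_catalog_tags = {
  "3_Confidential" = {
    "roles/datacatalog.categoryFineGrainedReader" = [
      "group:confidential-data-readers@example.com"
    ]
  }
  "2_Private"   = null
  "1_Sensitive" = null
}
```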
## How to run this script
@@ -223,7 +223,6 @@ To do this, you need to remove IAM binging at project-level for the `data-analys
The application layer is out of scope of this script. As a demo purpuse only, several Cloud Composer DAGs are provided. Demos will import data from the `drop off` area to the `Data Warehouse Confidential` dataset suing different features.
You can find examples in the `[demo](./demo)` folder.
<!-- BEGIN TFDOC -->
## Variables

View File

@@ -20,7 +20,7 @@ Below you can find a description of each example:
- Import data with Policy Tags: [`datapipeline_dc_tags.py`](./datapipeline.py) imports provided data from `drop off` bucket to the Data Hub Confidential layer protecting sensitive data using Data Catalog policy Tags.
- Delete tables: [`delete_table.py`](./delete_table.py) deletes BigQuery tables created by import pipelines.
## Runnin the demo
## Running the demo
To run demo examples, please follow the following steps:
- 01: copy sample data to the `drop off` Cloud Storage bucket impersonating the `load` service account.

View File

@@ -23,7 +23,7 @@ Whether you're transferring from another Cloud Service Provider or you're ta
## Architecture
![GCS to Biquery High-level diagram](diagram.png "GCS to Biquery High-level diagram")
![GCS to BigQuery High-level diagram](diagram.png "GCS to BigQuery High-level diagram")
The main components that we would be setting up are (to learn more about these products, click on the hyperlinks):

View File

@@ -24,7 +24,7 @@ Terraform natively supports YaML, JSON and CSV parsing - however Fabric has deci
- YaML is easier to parse for a human, and allows for comments and nested, complex structures
- JSON and CSV can't include comments, which can be used to document configurations, but are often useful to bridge from other systems in automated pipelines
- JSON is more verbose (reads: longer) and harder to parse visually for humans
- CSV isn't often expressive enough (e.g. dit oesn't allow for nested structures)
- CSV isn't often expressive enough (e.g. dit doesn't allow for nested structures)
If needed, converting factories to consume JSON is a matter of switching from `yamldecode()` to `jsondecode()` in the right place on each module.
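The sketch below shows the general shape of that switch; the directory layout and local names are assumptions rather than a specific factory module's code.
```hcl
# Minimal factory-style parsing sketch: read every YAML file under a data
# directory into a map keyed by file name. A JSON-based factory would only need
# "**/*.json" and jsondecode() here.
locals {
  factory_path = "${path.module}/data/projects"
  factory_data = {
    for f in fileset(local.factory_path, "**/*.yaml") :
    trimsuffix(f, ".yaml") => yamldecode(file("${local.factory_path}/${f}"))
  }
}
```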

View File

@@ -1,6 +1,6 @@
# Business-units based organizational sample
This sample creates an organizational layout with two folder levels, where the first level is usually mapped to one business unit or team (infra, data, analytics) and the second level represents enviroments (prod, test). It also sets up all prerequisites for automation (GCS state buckets, service accounts, etc.), and the correct roles on those to enforce separation of duties at the environment level.
This sample creates an organizational layout with two folder levels, where the first level is usually mapped to one business unit or team (infra, data, analytics) and the second level represents environments (prod, test). It also sets up all prerequisites for automation (GCS state buckets, service accounts, etc.), and the correct roles on those to enforce separation of duties at the environment level.
This layout is well suited for medium-sized infrastructures managed by different sets of teams, and in cases where the core infrastructure is managed centrally, as the top-level automation service accounts for each environment allow cross-team management of the base resources (projects, IAM, etc.).
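For illustration only (the bucket, project and service account names below are made up, and the sample may wire credentials differently), each environment's root module would then point at its own state bucket and automation service account, building on the separation of duties described above:
```hcl
# Hypothetical per-environment backend/provider wiring; names are examples and
# impersonation is one possible way to use the sample's service accounts and
# state buckets.
terraform {
  backend "gcs" {
    bucket                      = "prod-terraform-state"
    impersonate_service_account = "prod-automation@my-automation-project.iam.gserviceaccount.com"
  }
}

provider "google" {
  impersonate_service_account = "prod-automation@my-automation-project.iam.gserviceaccount.com"
}
```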

View File

@@ -8,9 +8,9 @@ They are meant to be used as minimal but complete starting points to create actu
### Hub and Spoke via Peering
<a href="./hub-and-spoke-peering/" title="Hub and spoke via peering example"><img src="./hub-and-spoke-peering/diagram.png" align="left" width="280px"></a> This [example](./hub-and-spoke-peering/) implements a hub and spoke topology via VPC peering, a common design where a landing zone VPC (hub) is conncted to on-premises, and then peered with satellite VPCs (spokes) to further partition the infrastructure.
<a href="./hub-and-spoke-peering/" title="Hub and spoke via peering example"><img src="./hub-and-spoke-peering/diagram.png" align="left" width="280px"></a> This [example](./hub-and-spoke-peering/) implements a hub and spoke topology via VPC peering, a common design where a landing zone VPC (hub) is connected to on-premises, and then peered with satellite VPCs (spokes) to further partition the infrastructure.
The sample highlights the lack of transitivity in peering: the absence of connectivity between spokes, and the need create workarounds for private service access to managed services. One such workarund is shown for private GKE, allowing access from hub and all spokes to GKE masters via a dedicated VPN.
The sample highlights the lack of transitivity in peering: the absence of connectivity between spokes, and the need create workarounds for private service access to managed services. One such workaround is shown for private GKE, allowing access from hub and all spokes to GKE masters via a dedicated VPN.
<br clear="left">
### Hub and Spoke via Dynamic VPN

View File

@@ -2,7 +2,7 @@
This example creates a simple **Hub and Spoke** setup, where the VPC network connects satellite locations (spokes) through a single intermediary location (hub) via [VPC Peering](https://cloud.google.com/vpc/docs/vpc-peering).
The example shows some of the limitations that need to be taken into account when using VPC Peering, mostly due to the lack of transivity between peerings:
The example shows some of the limitations that need to be taken into account when using VPC Peering, mostly due to the lack of transitivity between peerings:
- no mesh networking between the spokes
- complex support for managed services hosted in tenant VPCs connected via peering (Cloud SQL, GKE, etc.)

View File

@@ -33,7 +33,7 @@ Create a large file on a destination VM (eg `ilb-test-vm-right-1`) to test long-
dd if=/dev/zero of=/var/www/html/test.txt bs=10M count=100 status=progress
```
Run curl from a source VM (eg `ilb-test-vm-left-1`) to send requests to a destination VM artifically slowing traffic.
Run curl from a source VM (eg `ilb-test-vm-left-1`) to send requests to a destination VM artificially slowing traffic.
```bash
curl -0 --output /dev/null --limit-rate 10k 10.0.1.3/test.txt

View File

@@ -40,7 +40,7 @@ In the list of addresses, look for the address with purpose `DNS_RESOLVER` in th
### Update the forwarder address variable and recreate on-prem
If the forwader address does not match the Terraform variable, add the correct value in your `terraform.tfvars` (or change the default value in `variables.tf`), then taint the onprem instance and apply to recreate it with the correct value in the DNS configuration:
If the forwarder address does not match the Terraform variable, add the correct value in your `terraform.tfvars` (or change the default value in `variables.tf`), then taint the onprem instance and apply to recreate it with the correct value in the DNS configuration:
```bash
tf apply