Update multi region cloud SQL markdown file (#746)

* initial update commit readme HA SQL

* fix(readme cloud sql ha): remove cost section

* feat(sql ha readme): upload images

* fix(read me sql ha): move images to dir

* feat(sql ha readme): add images to readme file

* fix(readme sql ha): broken button

* fix(ha sql readme): fix button

* fix(sql ha readme): com/brand fix

* fix(read me sql ha): headers fix

* fix(sql ha readme): update button link

* linting

* linting

Co-authored-by: Ludovico Magnocavallo <ludomagno@google.com>
bensadikgoogle 2022-07-20 21:13:56 +02:00 committed by GitHub
parent 93fb171854
commit a5536f890c
6 changed files with 72 additions and 18 deletions

@@ -1,63 +1,117 @@
# Cloud SQL instance with multi-region read replicas
From startups to enterprises, database disaster recovery planning is critical to ensure continuity of processing. While Cloud SQL provides high availability within a single region, regional failures or unavailability can still occur, caused by anything from cyber attacks to natural disasters. For startups, such outages quickly snowball, and recovering from the resulting loss of revenue and customers is difficult, especially for bootstrapped or lean teams. It is therefore critical that your database is regionally resilient and can be made available promptly in a secondary region. With Cloud SQL for PostgreSQL, you can configure cross-region read replicas for a complete DR failover and fallback process.
This example creates a [Cloud SQL instance](https://cloud.google.com/sql) with multi-region read replicas as described in the [Cloud SQL for PostgreSQL disaster recovery](https://cloud.google.com/architecture/cloud-sql-postgres-disaster-recovery-complete-failover-fallback) article.
The solution is resilient to a regional outage. To get familiar with the procedure needed in the unfortunate case of a disaster recovery, follow the steps described in [part two](https://cloud.google.com/architecture/cloud-sql-postgres-disaster-recovery-complete-failover-fallback#phase-2) of the aforementioned article.
Use cases:
Configuring the Cloud SQL instance for DR can be done in the following steps (a purely illustrative `gcloud` equivalent is sketched after the list):
- Create an HA Cloud SQL for PostgreSQL instance.
- Deploy a cross-region read replica on Google Cloud using Cloud SQL for PostgreSQL.
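The Terraform in this example provisions these resources for you. Purely as an illustration of what those two steps involve, a rough `gcloud` equivalent might look like the sketch below (instance names, regions, versions and tiers are hypothetical placeholders):
```shell
# Illustrative only: this example's Terraform creates the equivalent resources.
# All names, regions, versions and tiers below are hypothetical placeholders.

# 1. Create an HA (regional) Cloud SQL for PostgreSQL primary instance.
gcloud sql instances create pg-primary \
  --database-version=POSTGRES_13 \
  --tier=db-g1-small \
  --region=europe-west1 \
  --availability-type=REGIONAL

# 2. Add a cross-region read replica of the primary.
gcloud sql instances create pg-replica \
  --master-instance-name=pg-primary \
  --region=europe-west4
```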
The solution will use:
- [VPC](https://cloud.google.com/vpc) with Private Service Access to deploy the instances and VM
- [Cloud SQL for PostgreSQL](https://cloud.google.com/sql/pricing) instance with private IP
- [Google Cloud Storage](https://cloud.google.com/storage/) bucket to handle database import/export
- [Google Compute Engine](https://cloud.google.com/compute) instance to connect to the PostgreSQL instance
- [Cloud NAT](https://cloud.google.com/nat/docs/overview) to access internet resources
This is the high-level diagram:
![Cloud SQL multi-region.](images/diagram.png "Cloud SQL multi-region")
If you're migrating from another Cloud Provider, refer to [this](https://cloud.google.com/free/docs/aws-azure-gcp-service-comparison) documentation to see equivalent services and comparisons in Microsoft Azure and Amazon Web Services.
## Requirements
This example will deploy all its resources into the project defined by the `project_id` variable. Please note that we assume this project already exists. However, if you provide the appropriate values to the `project_create` variable, the project will be created as part of the deployment.
If `project_create` is left as `null`, the identity performing the deployment needs the `owner` role on the project defined by the `project_id` variable. Otherwise, the identity performing the deployment needs `resourcemanager.projectCreator` on the resource hierarchy node specified by `project_create.parent` and `billing.user` on the billing account specified by `project_create.billing_account_id`.
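Purely as an illustration (nothing is written to disk here), the project creation block you would merge into your `terraform.tfvars` could look roughly like this; all values are hypothetical placeholders, and the exact attribute names should be checked against the example's `variables.tf`:
```shell
# Print an illustrative project_create block; values are placeholders.
cat <<'EOF'
project_create = {
  billing_account_id = "012345-67890A-BCDEF0"
  parent             = "folders/1234567890"
}
EOF
```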
## Deployment
### Step 0: Cloning the repository
Click on the image below, sign in if required, and when the prompt appears, click on “confirm”.
[<p align="center"> <img alt="Open Cloudshell" width = "300px" src="images/button.png" /> </p>](https://goo.gle/GoCloudSQL)
This will clone the repository to your cloud shell and a screen like this one will appear:
![Import package](images/image1.png)
Before you deploy the architecture, make sure you run the following command to move your Cloud Shell session into your service project: `gcloud config set project [SERVICE_PROJECT_ID]`
Before we deploy the architecture, you will need the following information:
- The service project ID.
- A unique prefix that you want all the deployed resources to have (for example: cloudsql-multiregion-hpjy). This must be a string with no spaces or tabs.
Once you can see your service project ID in the yellow parentheses, you're ready to start.
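For example, assuming a placeholder project ID, you can set and then verify the active project like this:
```shell
# Point the Cloud Shell session at the service project (placeholder ID).
gcloud config set project my-service-project-id
# Confirm which project is currently active.
gcloud config get-value project
```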
### Step 1: Deploy resources
Once you have the required information, head back to the cloud shell editor. Make sure you're in the following directory: `cloudshell_open/cloud-foundation-fabric/examples/data-solutions/cloudsql-multiregion/`
Configure the Terraform variables in your `terraform.tfvars` file. You need to specify at least the `project_id` and `prefix` variables. See [`terraform.tfvars.sample`](terraform.tfvars.sample) as a starting point.
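As a minimal sketch using the placeholder values from above, the file could look like this:
```shell
# Write a minimal terraform.tfvars; both values are placeholders.
cat > terraform.tfvars <<'EOF'
# Required variables (placeholder values).
project_id = "my-service-project-id"
prefix     = "cloudsql-multiregion-hpjy"
# Optionally add a project_create block here if the project should be
# created as part of the deployment (see the Requirements section).
EOF
```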
![Deploy resources](images/image2.png)
Run Terraform init and apply:
```shell
terraform init
terraform apply
```
You should see the output of the Terraform script with the resources created and some commands that you'll need in the steps below.
The resource creation will take a few minutes; at the end, this is the output you should expect for successful completion, along with a list of the created resources:
![Resources installed](images/image3.png)
## Moving to a real use case
This implementation is intentionally minimal and easy to read. A real-world use case should consider:
- Using a Shared VPC
- Using VPC-SC to mitigate data exfiltration
## Test your environment
We assume all these steps are run as a user listed in `data_eng_principals`. You can authenticate as that user with the following commands:
```shell
gcloud init
gcloud auth application-default login
```
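To double-check that the active account is indeed one of the users listed in `data_eng_principals`, you can run:
```shell
# Show the authenticated accounts and which one is currently active.
gcloud auth list
```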
Below you can find commands to connect to the VM instance and Cloud SQL instance.
```shell
$ gcloud compute ssh sql-test --project PROJECT_ID --zone ZONE
sql-test:~$ cloud_sql_proxy -instances=CLOUDSQL_INSTANCE=tcp:5432
sql-test:~$ psql 'host=127.0.0.1 port=5432 sslmode=disable dbname=DATABASE user=USER'
```
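The example also includes a Cloud Storage bucket to handle database import/export (see the components list above). Purely as a hedged sketch, assuming the Cloud SQL service account has the required access on the bucket, an export could look like this (instance, bucket and database names are placeholders):
```shell
# Export a database to the Cloud Storage bucket (all names are placeholders).
gcloud sql export sql INSTANCE_NAME gs://BUCKET_NAME/export.sql --database=DATABASE
```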
You can find precomputed commands in the Terraform `demo_commands` output.
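For example, to print those commands again at any time from the example's directory:
```shell
# Print the ready-to-use commands computed by Terraform.
terraform output demo_commands
```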
## How to recover your initial deployment by using a fallback
To implement a fallback to your original region (R1) after it becomes available, you can follow the same process that is described in the above section. The process is summarized [here](https://cloud.google.com/architecture/cloud-sql-postgres-disaster-recovery-complete-failover-fallback#phase_3_implementing_a_fallback).
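As a reference point only, a central step in both the failover and the fallback procedures described in that article is promoting a cross-region read replica to a standalone primary instance; with `gcloud` (the replica name below is a placeholder) this looks like:
```shell
# Promote the read replica in the surviving region to a standalone primary.
# The replica name is a hypothetical placeholder.
gcloud sql instances promote-replica pg-replica
```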
## Clean up your environment
The easiest way to remove all the deployed resources is to run the following command in Cloud Shell:
```shell
terraform destroy
```
The above command will delete the associated resources, so there will be no billable charges afterwards.
<!-- BEGIN TFDOC -->
## Variables

The remaining changed files are binary images under `images/` (the architecture diagram and the screenshots referenced above); contents not shown.