From startups to enterprises, database disaster recovery planning is critical to ensuring continuity of operations. While Cloud SQL provides high availability within a single region, regional failures or unavailability can still occur, caused by anything from cyber attacks to natural disasters. Such incidents can trigger a quick domino effect for startups, especially bootstrapped or lean ones, making it difficult to recover from the resulting loss of revenue and customers. It is therefore critical that your database is regionally resilient and can be made available promptly in a secondary region. With Cloud SQL for PostgreSQL, you can configure cross-region read replicas for a complete DR failover and fallback process.
This blueprint creates a [Cloud SQL instance](https://cloud.google.com/sql) with multi-region read replicas as described in the [Cloud SQL for PostgreSQL disaster recovery](https://cloud.google.com/architecture/cloud-sql-postgres-disaster-recovery-complete-failover-fallback) article.
The solution is resilient to a regional outage. To get familiar with the procedure required in the unfortunate event of a disaster, follow the steps described in [part two](https://cloud.google.com/architecture/cloud-sql-postgres-disaster-recovery-complete-failover-fallback#phase-2) of the aforementioned article.
If you're migrating from another cloud provider, refer to [this documentation](https://cloud.google.com/free/docs/aws-azure-gcp-service-comparison) for a comparison with equivalent services on Microsoft Azure and Amazon Web Services.
This blueprint will deploy all of its resources into the project defined by the `project_id` variable. Note that this project is assumed to already exist. However, if you provide the appropriate values in the `project_create` variable, the project will be created as part of the deployment.
If `project_create` is left as `null`, the identity performing the deployment needs the `owner` role on the project defined by the `project_id` variable. Otherwise, the identity performing the deployment needs `resourcemanager.projectCreator` on the resource hierarchy node specified by `project_create.parent` and `billing.user` on the billing account specified by `project_create.billing_account_id`.
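For illustration, if you want the blueprint to create the project for you, a hypothetical `terraform.tfvars` entry for `project_create` might look like the sketch below; the billing account ID and parent folder are placeholder values, not real ones.

```tfvars
# Hypothetical placeholder values: replace with your own billing account and folder/organization.
project_create = {
  billing_account_id = "012345-ABCDEF-678901"
  parent             = "folders/1234567890"
}
```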
This will clone the repository to your Cloud Shell, and a screen like this one will appear:
![Import package](images/image1.png)
Before you deploy the architecture, make sure you run the following command to move your Cloud Shell session into your service project: `gcloud config set project [SERVICE_PROJECT_ID]`
You will also need the following information:
- The service project ID.
- A unique prefix that you want all the deployed resources to have (for example: cloudsql-multiregion-hpjy). This must be a string with no spaces or tabs.
Once you can see your service project ID in the yellow parentheses of the Cloud Shell prompt, you're ready to start.
Once you have the required information, head back to the Cloud Shell editor. Make sure you're in the following directory: `cloudshell_open/cloud-foundation-fabric/blueprints/data-solutions/cloudsql-multiregion/`
Configure the Terraform variables in your `terraform.tfvars` file. You need to specify at least the `project_id` and `prefix` variables. See [`terraform.tfvars.sample`](terraform.tfvars.sample) as a starting point.
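For illustration only, a minimal `terraform.tfvars` could look like the sketch below; the project ID is a placeholder, and the prefix simply reuses the example value mentioned earlier.

```tfvars
# Placeholder values: use your own service project ID and a unique prefix.
project_id = "my-service-project"
prefix     = "cloudsql-multiregion-hpjy"
```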
Once you run `terraform init` and `terraform apply`, resource creation takes a few minutes. On successful completion, you should expect output similar to the following, along with a list of the created resources:
In order to run the example and deploy Cloud SQL on a Shared VPC, the identity running Terraform must have the following IAM role on the Shared VPC host project.
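For context, pointing the blueprint at a Shared VPC is done through the `network_config` variable documented below. A hypothetical sketch, where the host project, self links, and private service access range are all placeholders to adapt to your environment, could look like this:

```tfvars
# Hypothetical Shared VPC settings: every value below is a placeholder.
network_config = {
  host_project       = "my-host-project"
  network_self_link  = "https://www.googleapis.com/compute/v1/projects/my-host-project/global/networks/my-shared-vpc"
  subnet_self_link   = "https://www.googleapis.com/compute/v1/projects/my-host-project/regions/europe-west1/subnetworks/my-subnet"
  cloudsql_psa_range = "10.60.0.0/24"
}
```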
## How to recover your initial deployment by using a fallback
To implement a fallback to your original region (R1) after it becomes available, you can follow the same process that is described in the above section. The process is summarized [here](https://cloud.google.com/architecture/cloud-sql-postgres-disaster-recovery-complete-failover-fallback#phase_3_implementing_a_fallback).
## Clean up your environment
The easiest way to remove all the deployed resources is to run the following command in Cloud Shell:
```shell
terraform destroy
```
The above command will destroy all the resources deployed by this blueprint, so that no billable charges are incurred afterwards.
## Variables

| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [prefix](variables.tf#L45) | Unique prefix used for resource names. Not used for project if 'project_create' is null. | <code>string</code> | ✓ | |
| [project_id](variables.tf#L59) | Project id, references existing project if `project_create` is null. | <code>string</code> | ✓ | |
| [data_eng_principals](variables.tf#L17) | Groups with Service Account Token Creator role on service accounts, in IAM format; only users are supported on Cloud SQL, e.g. 'user@domain.com'. | <code>list(string)</code> | | <code>[]</code> |
| [network_config](variables.tf#L23) | Shared VPC network configurations to use. If null, networks will be created in projects with preconfigured values. | <code title="object({ host_project = string network_self_link = string subnet_self_link = string cloudsql_psa_range = string })">object({…})</code> | | <code>null</code> |
| [project_create](variables.tf#L50) | Provide values if project creation is needed, uses existing project if null. Parent is in 'folders/nnn' or 'organizations/nnn' format. | <code title="object({ billing_account_id = string parent = string })">object({…})</code> | | <code>null</code> |
| [regions](variables.tf#L64) | Map of instance_name => location where instances will be deployed. | <code>map(string)</code> | | <code title="{ primary = &quot;europe-west1&quot; replica = &quot;europe-west3&quot; }">{…}</code> |
| [service_encryption_keys](variables.tf#L77) | Cloud KMS keys to use to encrypt resources. Provide a key for each region configured. | <code>map(string)</code> | | <code>null</code> |
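As an example of overriding the defaults above, a hypothetical `regions` map deploying the primary instance and its replica to two different locations could be:

```tfvars
# Hypothetical override of the default regions map; any two Cloud SQL regions can be used.
regions = {
  primary = "us-central1"
  replica = "us-east1"
}
```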