This blueprint shows how to use the [Cloud Asset Inventory export to BigQuery](https://cloud.google.com/asset-inventory/docs/exporting-to-bigquery) feature to keep track of your project-wide assets over time, storing the information in BigQuery.
The blueprint uses export resources at the project level for ease of testing; in actual use, a few changes are needed to operate at the resource hierarchy level.
Once the resources are created, you can run queries on the data you exported to BigQuery. [Here](https://cloud.google.com/asset-inventory/docs/exporting-to-bigquery#querying_an_asset_snapshot) you can find some example queries.
You can also create a dashboard by connecting [Looker Studio](https://lookerstudio.google.com/) or any other BI tool of your choice to your BigQuery dataset.
Regular file-based exports of data from Cloud Asset Inventory may be useful for scale-out network dependency discovery tools like [Planet Exporter](https://github.com/williamchanrico/planet-exporter), or for updating legacy workload tracking or configuration management systems. BigQuery supports multiple [export formats](https://cloud.google.com/bigquery/docs/exporting-data#export_formats_and_compression_types), and objects can be uploaded to a Cloud Storage bucket using the provided Cloud Function. Specify `job.DestinationFormat` as defined in the [documentation](https://cloud.google.com/python/docs/reference/bigquery/latest/google.cloud.bigquery.job.DestinationFormat), e.g. `NEWLINE_DELIMITED_JSON`.
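In this blueprint the file export is driven by the `file_config` variable described below; a minimal `terraform.tfvars` sketch could look like this (the project, bucket, dataset, and table names are illustrative, not real resources):

```hcl
# terraform.tfvars -- sample values; all names below are illustrative
project_id = "my-project-id"

file_config = {
  bucket     = "my-export-bucket"       # destination Cloud Storage bucket
  filename   = "assets.json"            # object name written by the Cloud Function
  format     = "NEWLINE_DELIMITED_JSON" # any supported job.DestinationFormat value
  bq_dataset = "my_cai_dataset"         # BigQuery dataset holding the export table
  bq_table   = "my_cai_table"           # BigQuery table to export as a file
}
```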
It helps to create a custom [scheduled query](https://cloud.google.com/bigquery/docs/scheduling-queries#console) from the CAI export tables, writing the results out to a dedicated table (with overwrites). Define the query's output columns to comply with the downstream systems' field requirements, and time the query execution after the CAI export into BigQuery for freshness. See the [sample queries](https://cloud.google.com/asset-inventory/docs/exporting-to-bigquery-sample-queries).
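Such a scheduled query can also be managed in Terraform via the `google_bigquery_data_transfer_config` resource. A hedged sketch follows; the display name, schedule, dataset, table, and column names are assumptions to adapt to your own export tables:

```hcl
# Sketch: materialize a downstream-friendly view of the CAI export into a
# dedicated table, overwritten on each run. All names and the schedule are
# examples; align the schedule with your CAI export timing for freshness.
resource "google_bigquery_data_transfer_config" "cai_downstream" {
  display_name           = "cai-downstream-export"
  data_source_id         = "scheduled_query"
  schedule               = "every day 09:00" # run after the CAI export lands
  destination_dataset_id = "my_cai_dataset"
  params = {
    destination_table_name_template = "latest_assets"
    write_disposition               = "WRITE_TRUNCATE" # overwrite each run
    query                           = <<-SQL
      SELECT name, asset_type, update_time
      FROM `my-project-id.my_cai_dataset.my_cai_table`
    SQL
  }
}
```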
| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [project_id](variables.tf#L101) | Project id that references existing project. | <code>string</code> | ✓ | |
| [billing_account](variables.tf#L17) | Billing account id used as default for new projects. | <code>string</code> | | <code>null</code> |
| [bundle_path](variables.tf#L23) | Path used to write the intermediate Cloud Function code bundle. | <code>string</code> | | <code>"./bundle.zip"</code> |
| [bundle_path_cffile](variables.tf#L30) | Path used to write the intermediate Cloud Function code bundle. | <code>string</code> | | <code>"./bundle_cffile.zip"</code> |
| [file_config](variables.tf#L54) | Optional BQ table as a file export function config. | <code title="object({ bucket = string filename = string format = string bq_dataset = string bq_table = string })">object({…})</code> | | <code title="{ bucket = null filename = null format = null bq_dataset = null bq_table = null }">{…}</code> |
| [location](variables.tf#L73) | App Engine location used in the example. | <code>string</code> | | <code>"europe-west"</code> |
| [name](variables.tf#L80) | Arbitrary string used to name created resources. | <code>string</code> | | <code>"asset-inventory"</code> |
| [name_cffile](variables.tf#L88) | Arbitrary string used to name created resources. | <code>string</code> | | <code>"cffile-exporter"</code> |
| [region](variables.tf#L106) | Compute region used in the example. | <code>string</code> | | <code>"europe-west1"</code> |
| [root_node](variables.tf#L112) | The resource name of the parent folder or organization for project creation, in 'folders/folder_id' or 'organizations/org_id' format. | <code>string</code> | | <code>null</code> |