Networking dashboard to display per-VPC and per-VPC-peering-group limits that are not shown in the console

This commit is contained in:
Aurélien Legrand 2022-03-08 18:36:02 +01:00 committed by Julio Castillo
parent b0fcc94b1d
commit 971726224f
10 changed files with 1847 additions and 0 deletions


@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,37 @@
# Networking Dashboard
This repository provides an end-to-end solution to gather some GCP Networking quotas and limits (that cannot be seen in the GCP console today) and display them in a dashboard.
The goal is to allow for better visibility of these limits, facilitating capacity planning and avoiding hitting these limits.
## Usage
Clone this repository, then go through the following steps to create resources:
- Create a `terraform.tfvars` file with the following content:
  - `organization_id = "[YOUR-ORG-ID]"`
  - `billing_account = "[YOUR-BILLING-ACCOUNT]"`
  - `monitoring_project_id = "project-0"` # Monitoring project where the dashboard will be created and the solution deployed
  - `monitored_projects_list = ["project-1", "project-2"]` # Projects to be monitored by the solution
- `terraform init`
- `terraform apply`
Once the resources are deployed, go to the following page to see the dashboard: https://console.cloud.google.com/monitoring/dashboards?project=<YOUR-MONITORING-PROJECT>.
A dashboard called "quotas_utilization" should be created.
The Cloud Function runs every 5 minutes by default, so data points should start appearing after a few minutes.
You can change this frequency by modifying the `schedule_cron` variable in `variables.tf`.
Once done testing, you can clean up resources by running `terraform destroy`.
## Supported limits and quotas
The Cloud Function currently tracks usage, limit and utilization of:
- active VPC peerings per VPC
- VPC peerings per VPC
- instances per VPC
- instances per VPC peering group
- subnet IP ranges per VPC peering group
- internal forwarding rules for internal L4 load balancers per VPC
- internal forwarding rules for internal L7 load balancers per VPC
- internal forwarding rules for internal L4 load balancers per VPC peering group
- internal forwarding rules for internal L7 load balancers per VPC peering group
It writes these values to custom metrics in Cloud Monitoring and creates a dashboard to visualize their current utilization.
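A minimal sketch of how these custom metrics can be consumed outside the dashboard (for example to build alerts), assuming the `google-cloud-monitoring` client library and a placeholder monitoring project id that you replace with your own:

```python
# Minimal sketch: read back one of the custom utilization metrics written by the Cloud Function.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
monitoring_project = "projects/YOUR-MONITORING-PROJECT"  # replace with your monitoring project id

# Look at the last hour of data points.
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}})

results = client.list_time_series(request={
    "name": monitoring_project,
    "filter": 'metric.type = "custom.googleapis.com/internal_forwarding_rules_l4_ppg_utilization"',
    "interval": interval,
    "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
})
for series in results:
    network = series.metric.labels["network_name"]
    latest = series.points[0].value.double_value if series.points else None
    print(network, latest)
```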


@@ -0,0 +1,588 @@
#
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from google.cloud import monitoring_v3
from googleapiclient import discovery
from google.api import metric_pb2 as ga_metric
import time
import os
import google.api_core
import re
import random
monitored_projects_list = os.environ.get("monitored_projects_list").split(",") # list of projects from which the function will gather quota information
monitoring_project_id = os.environ.get("monitoring_project_id") # project where the metrics and dashboards will be created
monitoring_project_link = f"projects/{monitoring_project_id}"
service = discovery.build('compute', 'v1')
# DEFAULT LIMITS
limit_vpc_peer = os.environ.get("LIMIT_VPC_PEER").split(",") # 25
limit_l4 = os.environ.get("LIMIT_L4").split(",") # 75
limit_l7 = os.environ.get("LIMIT_L7").split(",") # 75
limit_instances = os.environ.get("LIMIT_INSTANCES").split(",") # ["default_value", "15000"]
limit_instances_ppg = os.environ.get("LIMIT_INSTANCES_PPG").split(",") # 15000
limit_subnets = os.environ.get("LIMIT_SUBNETS").split(",") # 400
limit_l4_ppg = os.environ.get("LIMIT_L4_PPG").split(",") # 175
limit_l7_ppg = os.environ.get("LIMIT_L7_PPG").split(",") # 175
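# Each limit variable above is a comma-separated list alternating network names and limit values,
# with "default_value" as the fallback key. Illustrative example (hypothetical network name):
# "default_value,75,my-vpc,100" sets a default limit of 75 and a limit of 100 for the network "my-vpc".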
# Cloud Function entry point
def quotas(request):
global client, interval
client, interval = create_client()
# Instances per VPC
instance_metric = create_gce_instances_metrics()
get_gce_instances_data(instance_metric)
# Number of VPC peerings per VPC
vpc_peering_active_metric, vpc_peering_metric = create_vpc_peering_metrics()
get_vpc_peering_data(vpc_peering_active_metric, vpc_peering_metric)
# Internal L4 Forwarding Rules per VPC
forwarding_rules_metric = create_l4_forwarding_rules_metric()
get_l4_forwarding_rules_data(forwarding_rules_metric)
# Internal L4 Forwarding Rules per VPC peering group
# Existing GCP Monitoring metrics for L4 Forwarding Rules per Network
l4_forwarding_rules_usage = "compute.googleapis.com/quota/internal_lb_forwarding_rules_per_vpc_network/usage"
l4_forwarding_rules_limit = "compute.googleapis.com/quota/internal_lb_forwarding_rules_per_vpc_network/limit"
l4_forwarding_rules_ppg_metric = create_l4_forwarding_rules_ppg_metric()
get_pgg_data(l4_forwarding_rules_ppg_metric, l4_forwarding_rules_usage, l4_forwarding_rules_limit, limit_l4_ppg)
# Internal L7 Forwarding Rules per VPC peering group
# Existing GCP Monitoring metrics for L7 Forwarding Rules per Network
l7_forwarding_rules_usage = "compute.googleapis.com/quota/internal_managed_forwarding_rules_per_vpc_network/usage"
l7_forwarding_rules_limit = "compute.googleapis.com/quota/internal_managed_forwarding_rules_per_vpc_network/limit"
l7_forwarding_rules_ppg_metric = create_l7_forwarding_rules_ppg_metric()
get_pgg_data(l7_forwarding_rules_ppg_metric, l7_forwarding_rules_usage, l7_forwarding_rules_limit, limit_l7_ppg)
# Subnet ranges per VPC peering group
# Existing GCP Monitoring metrics for Subnet Ranges per Network
subnet_ranges_usage = "compute.googleapis.com/quota/subnet_ranges_per_vpc_network/usage"
subnet_ranges_limit = "compute.googleapis.com/quota/subnet_ranges_per_vpc_network/limit"
subnet_ranges_ppg_metric = create_subnet_ranges_ppg_metric()
get_pgg_data(subnet_ranges_ppg_metric, subnet_ranges_usage, subnet_ranges_limit, limit_subnets)
# GCE Instances per VPC peering group
# Existing GCP Monitoring metrics for GCE per Network
gce_instances_usage = "compute.googleapis.com/quota/instances_per_vpc_network/usage"
gce_instances_limit = "compute.googleapis.com/quota/instances_per_vpc_network/limit"
gce_instances_metric = create_gce_instances_ppg_metric()
get_pgg_data(gce_instances_metric, gce_instances_usage, gce_instances_limit, limit_instances_ppg)
return 'Function executed successfully'
def create_client():
try:
client = monitoring_v3.MetricServiceClient()
now = time.time()
seconds = int(now)
nanos = int((now - seconds) * 10 ** 9)
interval = monitoring_v3.TimeInterval(
{
"end_time": {"seconds": seconds, "nanos": nanos},
"start_time": {"seconds": (seconds - 86400), "nanos": nanos},
})
return (client, interval)
except Exception as e:
raise Exception("Error occurred creating the client: {}".format(e))
# Creates usage, limit and utilization Cloud Monitoring metrics for GCE instances
# Returns a dictionary with the names and descriptions of the created metrics
def create_gce_instances_metrics():
instance_metric = {}
instance_metric["usage_name"] = "number_of_instances_usage"
instance_metric["limit_name"] = "number_of_instances_limit"
instance_metric["utilization_name"] = "number_of_instances_utilization"
instance_metric["usage_description"] = "Number of instances per VPC network - usage."
instance_metric["limit_description"] = "Number of instances per VPC network - effective limit."
instance_metric["utilization_description"] = "Number of instances per VPC network - utilization."
create_metric(instance_metric["usage_name"], instance_metric["usage_description"])
create_metric(instance_metric["limit_name"], instance_metric["limit_description"])
create_metric(instance_metric["utilization_name"], instance_metric["utilization_description"])
return instance_metric
# Creates a Cloud Monitoring metric descriptor with the given name and description, if it does not already exist
def create_metric(metric_name, description):
client = monitoring_v3.MetricServiceClient()
metric_link = f"custom.googleapis.com/{metric_name}"
types = []
for desc in client.list_metric_descriptors(name=monitoring_project_link):
types.append(desc.type)
# If the metric doesn't exist yet, then we create it
if metric_link not in types:
descriptor = ga_metric.MetricDescriptor()
descriptor.type = f"custom.googleapis.com/{metric_name}"
descriptor.metric_kind = ga_metric.MetricDescriptor.MetricKind.GAUGE
descriptor.value_type = ga_metric.MetricDescriptor.ValueType.DOUBLE
descriptor.description = description
descriptor = client.create_metric_descriptor(name=monitoring_project_link, metric_descriptor=descriptor)
print("Created {}.".format(descriptor.name))
def get_gce_instances_data(instance_metric):
# Existing GCP Monitoring metrics for GCE instances
metric_instances_usage = "compute.googleapis.com/quota/instances_per_vpc_network/usage"
metric_instances_limit = "compute.googleapis.com/quota/instances_per_vpc_network/limit"
for project in monitored_projects_list:
network_dict = get_networks(project)
current_quota_usage = get_quota_current_usage(f"projects/{project}", metric_instances_usage)
current_quota_limit = get_quota_current_limit(f"projects/{project}", metric_instances_limit)
current_quota_usage_view = customize_quota_view(current_quota_usage)
current_quota_limit_view = customize_quota_view(current_quota_limit)
for net in network_dict:
set_usage_limits(net, current_quota_usage_view, current_quota_limit_view, limit_instances)
write_data_to_metric(project, net['usage'], instance_metric["usage_name"], net['network name'])
write_data_to_metric(project, net['limit'], instance_metric["limit_name"], net['network name'])
write_data_to_metric(project, net['usage']/ net['limit'], instance_metric["utilization_name"], net['network name'])
print(f"Wrote number of instances to metric for projects/{project}")
# Creates 2 sets of metrics (usage, limit, utilization): VPC peerings per VPC and active VPC peerings per VPC
def create_vpc_peering_metrics():
vpc_peering_active_metric = {}
vpc_peering_active_metric["usage_name"] = "number_of_active_vpc_peerings_usage"
vpc_peering_active_metric["limit_name"] = "number_of_active_vpc_peerings_limit"
vpc_peering_active_metric["utilization_name"] = "number_of_active_vpc_peerings_utilization"
vpc_peering_active_metric["usage_description"] = "Number of active VPC Peerings per VPC - usage."
vpc_peering_active_metric["limit_description"] = "Number of active VPC Peerings per VPC - effective limit."
vpc_peering_active_metric["utilization_description"] = "Number of active VPC Peerings per VPC - utilization."
vpc_peering_metric = {}
vpc_peering_metric["usage_name"] = "number_of_vpc_peerings_usage"
vpc_peering_metric["limit_name"] = "number_of_vpc_peerings_limit"
vpc_peering_metric["utilization_name"] = "number_of_vpc_peerings_utilization"
vpc_peering_metric["usage_description"] = "Number of VPC Peerings per VPC - usage."
vpc_peering_metric["limit_description"] = "Number of VPC Peerings per VPC - effective limit."
vpc_peering_metric["utilization_description"] = "Number of VPC Peerings per VPC - utilization."
create_metric(vpc_peering_active_metric["usage_name"], vpc_peering_active_metric["usage_description"])
create_metric(vpc_peering_active_metric["limit_name"], vpc_peering_active_metric["limit_description"])
create_metric(vpc_peering_active_metric["utilization_name"], vpc_peering_active_metric["utilization_description"])
create_metric(vpc_peering_metric["usage_name"], vpc_peering_metric["usage_description"])
create_metric(vpc_peering_metric["limit_name"], vpc_peering_metric["limit_description"])
create_metric(vpc_peering_metric["utilization_name"], vpc_peering_metric["utilization_description"])
return vpc_peering_active_metric, vpc_peering_metric
# Populates data for VPC peerings per VPC and active VPC peerings per VPC
def get_vpc_peering_data(vpc_peering_active_metric, vpc_peering_metric):
for project in monitored_projects_list:
active_vpc_peerings, vpc_peerings = gather_vpc_peerings_data(project, limit_vpc_peer)
for peering in active_vpc_peerings:
write_data_to_metric(project, peering['active_peerings'], vpc_peering_active_metric["usage_name"], peering['network name'])
write_data_to_metric(project, peering['network_limit'], vpc_peering_active_metric["limit_name"], peering['network name'])
write_data_to_metric(project, peering['active_peerings'] / peering['network_limit'], vpc_peering_active_metric["utilization_name"], peering['network name'])
print("Wrote number of active VPC peerings to custom metric for project:", project)
for peering in vpc_peerings:
write_data_to_metric(project, peering['peerings'], vpc_peering_metric["usage_name"], peering['network name'])
write_data_to_metric(project, peering['network_limit'], vpc_peering_metric["limit_name"], peering['network name'])
write_data_to_metric(project, peering['peerings'] / peering['network_limit'], vpc_peering_metric["utilization_name"], peering['network name'])
print("Wrote number of VPC peerings to custom metric for project:", project)
# Gathers the number of VPC peerings, active VPC peerings and their limits; returns 2 lists of dictionaries: active_vpc_peerings and vpc_peerings (the latter includes both active and inactive peerings)
def gather_vpc_peerings_data(project_id, limit_list):
active_peerings_dict = []
peerings_dict = []
request = service.networks().list(project=project_id)
response = request.execute()
if 'items' in response:
for network in response['items']:
if 'peerings' in network:
STATE = network['peerings'][0]['state']
if STATE == "ACTIVE":
active_peerings_count = len(network['peerings'])
else:
active_peerings_count = 0
peerings_count = len(network['peerings'])
else:
peerings_count = 0
active_peerings_count = 0
active_d = {'project_id': project_id,'network name':network['name'],'active_peerings':active_peerings_count,'network_limit': get_limit(network['name'], limit_list)}
active_peerings_dict.append(active_d)
d = {'project_id': project_id,'network name':network['name'],'peerings':peerings_count,'network_limit': get_limit(network['name'], limit_list)}
peerings_dict.append(d)
return active_peerings_dict, peerings_dict
# Checks if the VPC has a specific limit for a specific metric; if so, returns that limit, otherwise returns the default limit (or 0 if no default is defined)
def get_limit(network_name, limit_list):
if network_name in limit_list:
return int(limit_list[limit_list.index(network_name) + 1])
else:
if 'default_value' in limit_list:
return int(limit_list[limit_list.index('default_value') + 1])
else:
return 0
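# Illustrative example (hypothetical values): with limit_list = ["default_value", "25", "my-vpc", "50"],
# get_limit("my-vpc", limit_list) returns 50 and get_limit("any-other-vpc", limit_list) returns 25.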
# Creates the custom metrics for L4 internal forwarding Rules
def create_l4_forwarding_rules_metric():
forwarding_rules_metric = {}
forwarding_rules_metric["usage_name"] = "internal_forwarding_rules_l4_usage"
forwarding_rules_metric["limit_name"] = "internal_forwarding_rules_l4_limit"
forwarding_rules_metric["utilization_name"] = "internal_forwarding_rules_l4_utilization"
forwarding_rules_metric["usage_description"] = "Number of Internal Forwarding Rules for Internal L4 Load Balancers - usage."
forwarding_rules_metric["limit_description"] = "Number of Internal Forwarding Rules for Internal L4 Load Balancers - effective limit."
forwarding_rules_metric["utilization_description"] = "Number of Internal Forwarding Rules for Internal L4 Load Balancers - utilization."
create_metric(forwarding_rules_metric["usage_name"], forwarding_rules_metric["usage_description"])
create_metric(forwarding_rules_metric["limit_name"], forwarding_rules_metric["limit_description"])
create_metric(forwarding_rules_metric["utilization_name"], forwarding_rules_metric["utilization_description"])
return forwarding_rules_metric
def get_l4_forwarding_rules_data(forwarding_rules_metric):
# Existing GCP Monitoring metrics for L4 Forwarding Rules
l4_forwarding_rules_usage = "compute.googleapis.com/quota/internal_lb_forwarding_rules_per_vpc_network/usage"
l4_forwarding_rules_limit = "compute.googleapis.com/quota/internal_lb_forwarding_rules_per_vpc_network/limit"
for project in monitored_projects_list:
network_dict = get_networks(project)
current_quota_usage = get_quota_current_usage(f"projects/{project}", l4_forwarding_rules_usage)
current_quota_limit = get_quota_current_limit(f"projects/{project}", l4_forwarding_rules_limit)
current_quota_usage_view = customize_quota_view(current_quota_usage)
current_quota_limit_view = customize_quota_view(current_quota_limit)
for net in network_dict:
set_usage_limits(net, current_quota_usage_view, current_quota_limit_view, limit_l4)
write_data_to_metric(project, net['usage'], forwarding_rules_metric["usage_name"], net['network name'])
write_data_to_metric(project, net['limit'], forwarding_rules_metric["limit_name"], net['network name'])
write_data_to_metric(project, net['usage']/ net['limit'], forwarding_rules_metric["utilization_name"], net['network name'])
print(f"Wrote number of L4 forwarding rules to metric for projects/{project}")
# Creates the custom metrics for L4 internal forwarding Rules per VPC Peering Group
def create_l4_forwarding_rules_ppg_metric():
forwarding_rules_metric = {}
forwarding_rules_metric["usage_name"] = "internal_forwarding_rules_l4_ppg_usage"
forwarding_rules_metric["limit_name"] = "internal_forwarding_rules_l4_ppg_limit"
forwarding_rules_metric["utilization_name"] = "internal_forwarding_rules_l4_ppg_utilization"
forwarding_rules_metric["usage_description"] = "Number of Internal Forwarding Rules for Internal l4 Load Balancers per peering group - usage."
forwarding_rules_metric["limit_description"] = "Number of Internal Forwarding Rules for Internal l4 Load Balancers per peering group - effective limit."
forwarding_rules_metric["utilization_description"] = "Number of Internal Forwarding Rules for Internal l4 Load Balancers per peering group - utilization."
create_metric(forwarding_rules_metric["usage_name"], forwarding_rules_metric["usage_description"])
create_metric(forwarding_rules_metric["limit_name"], forwarding_rules_metric["limit_description"])
create_metric(forwarding_rules_metric["utilization_name"], forwarding_rules_metric["utilization_description"])
return forwarding_rules_metric
# Creates the custom metrics for L7 internal forwarding Rules per VPC Peering Group
def create_l7_forwarding_rules_ppg_metric():
forwarding_rules_metric = {}
forwarding_rules_metric["usage_name"] = "internal_forwarding_rules_l7_ppg_usage"
forwarding_rules_metric["limit_name"] = "internal_forwarding_rules_l7_ppg_limit"
forwarding_rules_metric["utilization_name"] = "internal_forwarding_rules_l7_ppg_utilization"
forwarding_rules_metric["usage_description"] = "Number of Internal Forwarding Rules for Internal l7 Load Balancers per peering group - usage."
forwarding_rules_metric["limit_description"] = "Number of Internal Forwarding Rules for Internal l7 Load Balancers per peering group - effective limit."
forwarding_rules_metric["utilization_description"] = "Number of Internal Forwarding Rules for Internal l7 Load Balancers per peering group - utilization."
create_metric(forwarding_rules_metric["usage_name"], forwarding_rules_metric["usage_description"])
create_metric(forwarding_rules_metric["limit_name"], forwarding_rules_metric["limit_description"])
create_metric(forwarding_rules_metric["utilization_name"], forwarding_rules_metric["utilization_description"])
return forwarding_rules_metric
def create_subnet_ranges_ppg_metric():
metric = {}
metric["usage_name"] = "number_of_subnet_IP_ranges_usage"
metric["limit_name"] = "number_of_subnet_IP_ranges_effective_limit"
metric["utilization_name"] = "number_of_subnet_IP_ranges_utilization"
metric["usage_description"] = "Number of Subnet Ranges per peering group - usage."
metric["limit_description"] = "Number of Subnet Ranges per peering group - effective limit."
metric["utilization_description"] = "Number of Subnet Ranges per peering group - utilization."
create_metric(metric["usage_name"], metric["usage_description"])
create_metric(metric["limit_name"], metric["limit_description"])
create_metric(metric["utilization_name"], metric["utilization_description"])
return metric
def create_gce_instances_ppg_metric():
metric = {}
metric["usage_name"] = "number_of_instances_ppg_usage"
metric["limit_name"] = "number_of_instances_ppg_limit"
metric["utilization_name"] = "number_of_instances_ppg_utilization"
metric["usage_description"] = "Number of instances per peering group - usage."
metric["limit_description"] = "Number of instances per peering group - effective limit."
metric["utilization_description"] = "Number of instances per peering group - utilization."
create_metric(metric["usage_name"], metric["usage_description"])
create_metric(metric["limit_name"], metric["limit_description"])
create_metric(metric["utilization_name"], metric["utilization_description"])
return metric
# Populates usage, limit and utilization data for the per-VPC-peering-group custom metrics (L4/L7 forwarding rules, subnet ranges or instances, depending on the metrics passed in)
def get_pgg_data(forwarding_rules_ppg_metric, usage_metric, limit_metric, limit_ppg):
for project in monitored_projects_list:
network_dict_list = gather_peering_data(project)
# network_dict_list is a list of dictionaries (one per network)
# For each network, the dictionary contains:
# project_id, network_name, network_id and peerings (list of peered networks); usage and limit are added below
# peerings is a list of dictionaries (one per peered network), each containing:
# project_id, network_name, network_id (their usage and limit are also added below)
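# Illustrative shape of one entry (hypothetical values), before usage and limit are filled in:
# {'project_id': 'project-1', 'network_name': 'vpc-hub', 'network_id': '123',
#  'peerings': [{'project_id': 'project-2', 'network_name': 'vpc-prod', 'network_id': '456'}]}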
# For each network in this GCP project
for network_dict in network_dict_list:
current_quota_usage = get_quota_current_usage(f"projects/{project}", usage_metric)
current_quota_limit = get_quota_current_limit(f"projects/{project}", limit_metric)
current_quota_usage_view = customize_quota_view(current_quota_usage)
current_quota_limit_view = customize_quota_view(current_quota_limit)
usage, limit = get_usage_limit(network_dict, current_quota_usage_view, current_quota_limit_view, limit_ppg)
# Here we add usage and limit to the network dictionary
network_dict["usage"] = usage
network_dict["limit"] = limit
# For every peered network, get usage and limits
for peered_network in network_dict['peerings']:
peering_project_usage = customize_quota_view(get_quota_current_usage(f"projects/{peered_network['project_id']}", usage_metric))
peering_project_limit = customize_quota_view(get_quota_current_limit(f"projects/{peered_network['project_id']}", limit_metric))
usage, limit = get_usage_limit(peered_network, peering_project_usage, peering_project_limit, limit_ppg)
# Here we add usage and limit to the peered network dictionary
peered_network["usage"] = usage
peered_network["limit"] = limit
count_effective_limit(project, network_dict, forwarding_rules_ppg_metric["usage_name"], forwarding_rules_ppg_metric["limit_name"], forwarding_rules_ppg_metric["utilization_name"], limit_ppg)
print(f"Wrote {forwarding_rules_ppg_metric['usage_name']} to metric for peering group {network_dict['network_name']} in {project}")
# Calculates the effective limit (using the algorithm in the link below) for each peering group and writes data (usage, limit, utilization) to the custom metrics
# https://cloud.google.com/vpc/docs/quota#vpc-peering-effective-limit
def count_effective_limit(project_id, network_dict, usage_metric_name, limit_metric_name, utilization_metric_name, limit_ppg):
if network_dict['peerings'] == []:
return
# Get usage: Sums usage for current network + all peered networks
peering_group_usage = network_dict['usage']
for peered_network in network_dict['peerings']:
peering_group_usage += peered_network['usage']
# Calculates effective limit: Step 1: max(per network limit, per network_peering_group limit)
limit_step1 = max(network_dict['limit'], get_limit(network_dict['network_name'], limit_ppg))
# Calculates effective limit: Step 2: List of max(per network limit, per network_peering_group limit) for each peered network
limit_step2 = []
for peered_network in network_dict['peerings']:
limit_step2.append(max(peered_network['limit'], get_limit(peered_network['network_name'], limit_ppg)))
# Calculates effective limit: Step 3: Find minimum from the list created by Step 2
limit_step3 = min(limit_step2)
# Calculates effective limit: Step 4: Find maximum from step 1 and step 3
effective_limit = max(limit_step1, limit_step3)
utilization = peering_group_usage / effective_limit
write_data_to_metric(project_id, peering_group_usage, usage_metric_name, network_dict['network_name'])
write_data_to_metric(project_id, effective_limit, limit_metric_name, network_dict['network_name'])
write_data_to_metric(project_id, utilization, utilization_metric_name, network_dict['network_name'])
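# Worked example with hypothetical values: network A (limit 75) is peered with B (limit 300) and C (limit 200),
# and the per-peering-group limit is 175 for every network:
#   Step 1: max(75, 175) = 175
#   Step 2: [max(300, 175), max(200, 175)] = [300, 200]
#   Step 3: min([300, 200]) = 200
#   Step 4: effective_limit = max(175, 200) = 200
# The peering group usage is the sum of the usage of A, B and C.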
# Takes a project id (and uses the global service object for the GCP API call) and returns a list of dictionaries, one per network in that project
def get_networks(project_id):
request = service.networks().list(project=project_id)
response = request.execute()
network_dict = []
if 'items' in response:
for network in response['items']:
NETWORK = network['name']
ID = network['id']
d = {'project_id':project_id,'network name':NETWORK,'network id':ID}
network_dict.append(d)
return network_dict
# Gathers peering data for all the networks in the given project_id
def gather_peering_data(project_id):
request = service.networks().list(project=project_id)
response = request.execute()
# list of networks in that project
network_list = []
if 'items' in response:
for network in response['items']:
net = {'project_id':project_id,'network_name':network['name'],'network_id':network['id'], 'peerings':[]}
if 'peerings' in network:
STATE = network['peerings'][0]['state']
if STATE == "ACTIVE":
for peered_network in network['peerings']: # "projects/{project_name}/global/networks/{network_name}"
start = peered_network['network'].find("projects/") + len('projects/')
end = peered_network['network'].find("/global")
peered_project = peered_network['network'][start:end]
peered_network_name = peered_network['network'].split("networks/")[1]
peered_net = {'project_id': peered_project, 'network_name':peered_network_name, 'network_id': get_network_id(peered_project, peered_network_name)}
net["peerings"].append(peered_net)
network_list.append(net)
return network_list
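# Illustrative example (hypothetical value): for a peering whose 'network' field is
# "https://www.googleapis.com/compute/v1/projects/project-2/global/networks/vpc-prod",
# peered_project is "project-2" and peered_network_name is "vpc-prod".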
def get_network_id(project_id, network_name):
request = service.networks().list(project=project_id)
response = request.execute()
network_id = 0
if 'items' in response:
for network in response['items']:
if network['name'] == network_name:
network_id = network['id']
break
if network_id == 0:
print(f"Error: network_id not found for {network_name} in {project_id}")
return network_id
# Retrieves the current usage time series for the given metric "type" in project_link (an empty list if no data is available, treated as 0 when comparing against limits)
def get_quota_current_usage(project_link, type):
results = client.list_time_series(request={
"name": project_link,
"filter": f'metric.type = "{type}"',
"interval": interval,
"view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL
})
results_list = list(results)
return (results_list)
# Retrieves the current limit time series for the given metric "type" in project_link
def get_quota_current_limit(project_link, type):
results = client.list_time_series(request={
"name": project_link,
"filter": f'metric.type = "{type}"',
"interval": interval,
"view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL
})
results_list = list(results)
return (results_list)
# Flattens the quota time series into a list of dictionaries combining resource labels, metric labels and the point value
def customize_quota_view(quota_results):
quotaViewList = []
for result in quota_results:
quotaViewJson = {}
quotaViewJson.update(dict(result.resource.labels))
quotaViewJson.update(dict(result.metric.labels))
for val in result.points:
quotaViewJson.update({'value': val.value.int64_value})
quotaViewList.append(quotaViewJson)
return (quotaViewList)
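# Illustrative output element (hypothetical values), merging resource labels, metric labels and the point value:
# {'project_id': 'project-1', 'network_id': '123', ..., 'value': 5}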
# Takes a network dictionary and updates it with the quota usage and limits values
def set_usage_limits(network, quota_usage, quota_limit, limit_list):
if quota_usage:
for net in quota_usage:
if net['network_id'] == network['network id']: # if network ids in GCP quotas and in dictionary (using API) are the same
network['usage'] = net['value'] # set network usage in dictionary
break
else:
network['usage'] = 0 # if network does not appear in GCP quotas
else:
network['usage'] = 0 # if quotas does not appear in GCP quotas
if quota_limit:
for net in quota_limit:
if net['network_id'] == network['network id']: # if network ids in GCP quotas and in dictionary (using API) are the same
network['limit'] = net['value'] # set network limit in dictionary
break
else:
if network['network name'] in limit_list: # if network limit is in the environmental variables
network['limit'] = int(limit_list[limit_list.index(network['network name']) + 1])
else:
network['limit'] = int(limit_list[limit_list.index('default_value') + 1]) # set default value
else: # if quotas does not appear in GCP quotas
if network['network name'] in limit_list:
network['limit'] = int(limit_list[limit_list.index(network['network name']) + 1]) # ["default", 100, "networkname", 200]
else:
network['limit'] = int(limit_list[limit_list.index('default_value') + 1])
# Takes a network dictionary (with at least network_id and network_name) and returns usage and limit for that network
def get_usage_limit(network, quota_usage, quota_limit, limit_list):
usage = 0
limit = 0
if quota_usage:
for net in quota_usage:
if net['network_id'] == network['network_id']: # if network ids in GCP quotas and in dictionary (using API) are the same
usage = net['value'] # set network usage in dictionary
break
if quota_limit:
for net in quota_limit:
if net['network_id'] == network['network_id']: # if network ids in GCP quotas and in dictionary (using API) are the same
limit = net['value'] # set network limit in dictionary
break
else:
if network['network_name'] in limit_list: # if network limit is in the environmental variables
limit = int(limit_list[limit_list.index(network['network_name']) + 1])
else:
limit = int(limit_list[limit_list.index('default_value') + 1]) # set default value
else: # if quotas does not appear in GCP quotas
if network['network_name'] in limit_list:
limit = int(limit_list[limit_list.index(network['network_name']) + 1]) # ["default", 100, "networkname", 200]
else:
limit = int(limit_list[limit_list.index('default_value') + 1])
return usage, limit
# Writes a data point to a Cloud Monitoring custom metric
# Note that the metrics are always written to the monitoring project (monitoring_project_link),
# while monitored_project_id identifies the monitored project containing the network and its resources
def write_data_to_metric(monitored_project_id, value, metric_name, network_name):
series = monitoring_v3.TimeSeries()
series.metric.type = f"custom.googleapis.com/{metric_name}"
series.resource.type = "global"
series.metric.labels["network_name"] = network_name
series.metric.labels["project"] = monitored_project_id
now = time.time()
seconds = int(now)
nanos = int((now - seconds) * 10 ** 9)
interval = monitoring_v3.TimeInterval({"end_time": {"seconds": seconds, "nanos": nanos}})
point = monitoring_v3.Point({"interval": interval, "value": {"double_value": value}})
series.points = [point]
client.create_time_series(name=monitoring_project_link, time_series=[series])
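# Example call (hypothetical values):
# write_data_to_metric("project-1", 0.4, "number_of_instances_utilization", "vpc-hub")
# writes a 40% utilization data point for the network "vpc-hub" of monitored project "project-1".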


@@ -0,0 +1,8 @@
regex
google-api-python-client
google-auth
google-auth-httplib2
google-cloud-logging
google-cloud-monitoring
oauth2client
google-api-core


@@ -0,0 +1,351 @@
{
"displayName": "quotas_utilization",
"mosaicLayout": {
"columns": 12,
"tiles": [
{
"height": 4,
"widget": {
"title": "internal_forwarding_rules_l4_utilization",
"xyChart": {
"chartOptions": {
"mode": "COLOR"
},
"dataSets": [
{
"minAlignmentPeriod": "3600s",
"plotType": "LINE",
"targetAxis": "Y1",
"timeSeriesQuery": {
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/internal_forwarding_rules_l4_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "1800s",
"perSeriesAligner": "ALIGN_MEAN"
}
}
}
}
],
"timeshiftDuration": "0s",
"yAxis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
},
"width": 6
},
{
"height": 4,
"widget": {
"title": "internal_forwarding_rules_l7_utilization",
"xyChart": {
"chartOptions": {
"mode": "COLOR"
},
"dataSets": [
{
"minAlignmentPeriod": "3600s",
"plotType": "LINE",
"targetAxis": "Y1",
"timeSeriesQuery": {
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/internal_forwarding_rules_l7_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "60s",
"perSeriesAligner": "ALIGN_MEAN"
}
}
}
}
],
"timeshiftDuration": "0s",
"yAxis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
},
"width": 6,
"xPos": 6
},
{
"height": 4,
"widget": {
"title": "number_of_instances_utilization",
"xyChart": {
"chartOptions": {
"mode": "COLOR"
},
"dataSets": [
{
"minAlignmentPeriod": "3600s",
"plotType": "LINE",
"targetAxis": "Y1",
"timeSeriesQuery": {
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/number_of_instances_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "60s",
"perSeriesAligner": "ALIGN_MEAN"
}
}
}
}
],
"timeshiftDuration": "0s",
"yAxis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
},
"width": 6,
"yPos": 8
},
{
"height": 4,
"widget": {
"title": "number_of_vpc_peerings_utilization",
"xyChart": {
"chartOptions": {
"mode": "COLOR"
},
"dataSets": [
{
"minAlignmentPeriod": "3600s",
"plotType": "LINE",
"targetAxis": "Y1",
"timeSeriesQuery": {
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/number_of_vpc_peerings_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "60s",
"perSeriesAligner": "ALIGN_MEAN"
}
}
}
}
],
"timeshiftDuration": "0s",
"yAxis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
},
"width": 6,
"xPos": 6,
"yPos": 4
},
{
"height": 4,
"widget": {
"title": "number_of_active_vpc_peerings_utilization",
"xyChart": {
"chartOptions": {
"mode": "COLOR"
},
"dataSets": [
{
"minAlignmentPeriod": "3600s",
"plotType": "LINE",
"targetAxis": "Y1",
"timeSeriesQuery": {
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/number_of_active_vpc_peerings_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "60s",
"perSeriesAligner": "ALIGN_INTERPOLATE"
}
}
}
}
],
"timeshiftDuration": "0s",
"yAxis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
},
"width": 6,
"yPos": 4
},
{
"height": 4,
"widget": {
"title": "number_of_subnet_IP_ranges_utilization",
"xyChart": {
"chartOptions": {
"mode": "COLOR"
},
"dataSets": [
{
"minAlignmentPeriod": "3600s",
"plotType": "LINE",
"targetAxis": "Y1",
"timeSeriesQuery": {
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/number_of_subnet_IP_ranges_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "3600s",
"perSeriesAligner": "ALIGN_MEAN"
}
}
}
}
],
"timeshiftDuration": "0s",
"yAxis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
},
"width": 6,
"xPos": 6,
"yPos": 8
},
{
"height": 4,
"widget": {
"title": "internal_forwarding_rules_l4_ppg_utilization",
"xyChart": {
"chartOptions": {
"mode": "COLOR"
},
"dataSets": [
{
"minAlignmentPeriod": "3600s",
"plotType": "LINE",
"targetAxis": "Y1",
"timeSeriesQuery": {
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/internal_forwarding_rules_l4_ppg_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "3600s",
"perSeriesAligner": "ALIGN_MEAN"
}
}
}
}
],
"timeshiftDuration": "0s",
"yAxis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
},
"width": 6,
"yPos": 12
},
{
"height": 4,
"widget": {
"title": "internal_forwarding_rules_l7_ppg_utilization",
"xyChart": {
"chartOptions": {
"mode": "COLOR"
},
"dataSets": [
{
"minAlignmentPeriod": "3600s",
"plotType": "LINE",
"targetAxis": "Y1",
"timeSeriesQuery": {
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/internal_forwarding_rules_l7_ppg_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "60s",
"perSeriesAligner": "ALIGN_MEAN"
}
}
}
}
],
"timeshiftDuration": "0s",
"yAxis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
},
"width": 6,
"xPos": 6,
"yPos": 12
},
{
"height": 4,
"widget": {
"title": "number_of_instances_ppg_utilization",
"xyChart": {
"chartOptions": {
"mode": "COLOR"
},
"dataSets": [
{
"minAlignmentPeriod": "3600s",
"plotType": "LINE",
"targetAxis": "Y1",
"timeSeriesQuery": {
"timeSeriesFilter": {
"aggregation": {
"alignmentPeriod": "3600s",
"perSeriesAligner": "ALIGN_NEXT_OLDER"
},
"filter": "metric.type=\"custom.googleapis.com/number_of_instances_ppg_utilization\" resource.type=\"global\"",
"secondaryAggregation": {
"alignmentPeriod": "60s"
}
}
}
}
],
"timeshiftDuration": "0s",
"yAxis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
},
"width": 6,
"yPos": 16
}
]
}
}


@@ -0,0 +1,183 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
locals {
project_id_list = toset(var.monitored_projects_list)
projects = join(",", local.project_id_list)
limit_subnets_list = tolist(var.limit_subnets)
limit_subnets = join(",", local.limit_subnets_list)
limit_instances_list = tolist(var.limit_instances)
limit_instances = join(",", local.limit_instances_list)
limit_instances_ppg_list = tolist(var.limit_instances_ppg)
limit_instances_ppg = join(",", local.limit_instances_ppg_list)
limit_vpc_peer_list = tolist(var.limit_vpc_peer)
limit_vpc_peer = join(",", local.limit_vpc_peer_list)
limit_l4_list = tolist(var.limit_l4)
limit_l4 = join(",", local.limit_l4_list)
limit_l7_list = tolist(var.limit_l7)
limit_l7 = join(",", local.limit_l7_list)
limit_l4_ppg_list = tolist(var.limit_l4_ppg)
limit_l4_ppg = join(",", local.limit_l4_ppg_list)
limit_l7_ppg_list = tolist(var.limit_l7_ppg)
limit_l7_ppg = join(",", local.limit_l7_ppg_list)
}
################################################
# Monitoring project creation #
################################################
module "project-monitoring" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/project"
name = "monitoring"
parent = "organizations/${var.organization_id}"
prefix = var.prefix
billing_account = var.billing_account
services = var.project_monitoring_services
}
################################################
# Service account creation and IAM permissions #
################################################
module "service-account-function" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/iam-service-account"
project_id = module.project-monitoring.project_id
name = "sa-dash"
generate_key = false
# Required IAM permissions for this service account are:
# 1) compute.networkViewer on the projects to be monitored (granted at organization level here for simplicity)
# 2) monitoring.viewer on the projects to be monitored (granted at organization level here for simplicity)
iam_organization_roles = {
"${var.organization_id}" = [
"roles/compute.networkViewer",
"roles/monitoring.viewer",
]
}
iam_project_roles = {
"${module.project-monitoring.project_id}" = [
"roles/monitoring.metricWriter"
]
}
}
################################################
# Cloud Function configuration (& Scheduler) #
################################################
# Create an app engine application (required for Cloud Scheduler)
resource "google_app_engine_application" "scheduler_app" {
project = module.project-monitoring.project_id
# "europe-west1" is called "europe-west" and "us-central1" is "us-central" for App Engine, see https://cloud.google.com/appengine/docs/locations
location_id = var.region == "europe-west1" || var.region == "us-central1" ? substr(var.region, 0, length(var.region) - 1) : var.region
}
# Create a storage bucket for the Cloud Function's code
resource "google_storage_bucket" "bucket" {
name = "net-quotas-bucket"
location = "EU"
project = module.project-monitoring.project_id
}
data "archive_file" "file" {
type = "zip"
source_dir = "cloud-function"
output_path = "cloud-function.zip"
depends_on = [google_storage_bucket.bucket]
}
resource "google_storage_bucket_object" "archive" {
# md5 hash in the bucket object name to redeploy the Cloud Function when the code is modified
name = format("cloud-function#%s", data.archive_file.file.output_md5)
bucket = google_storage_bucket.bucket.name
source = "cloud-function.zip"
depends_on = [data.archive_file.file]
}
resource "google_cloudfunctions_function" "function_quotas" {
name = "function-quotas"
project = module.project-monitoring.project_id
region = var.region
description = "Function which creates metrics to show usage, limits and utilization."
runtime = "python39"
available_memory_mb = 512
source_archive_bucket = google_storage_bucket.bucket.name
source_archive_object = google_storage_bucket_object.archive.name
service_account_email = module.service-account-function.email
timeout = 180
entry_point = "quotas"
trigger_http = true
environment_variables = {
monitored_projects_list = local.projects
monitoring_project_id = module.project-monitoring.project_id
LIMIT_SUBNETS = local.limit_subnets
LIMIT_INSTANCES = local.limit_instances
LIMIT_INSTANCES_PPG = local.limit_instances_ppg
LIMIT_VPC_PEER = local.limit_vpc_peer
LIMIT_L4 = local.limit_l4
LIMIT_L7 = local.limit_l7
LIMIT_L4_PPG = local.limit_l4_ppg
LIMIT_L7_PPG = local.limit_l7_ppg
}
}
resource "google_cloud_scheduler_job" "job" {
name = "scheduler-net-dash"
project = module.project-monitoring.project_id
region = var.region
description = "Cloud Scheduler job to trigger the Networking Dashboard Cloud Function"
schedule = var.schedule_cron
retry_config {
retry_count = 1
}
http_target {
http_method = "POST"
uri = google_cloudfunctions_function.function_quotas.https_trigger_url
# We could pass useful data in the body later
body = base64encode("{\"foo\":\"bar\"}")
}
}
# TODO: How to secure the Cloud Function invocation? Not member = "allUsers" but a specific Cloud Scheduler service account?
# Maybe "service-YOUR_PROJECT_NUMBER@gcp-sa-cloudscheduler.iam.gserviceaccount.com"?
resource "google_cloudfunctions_function_iam_member" "invoker" {
project = module.project-monitoring.project_id
region = var.region
cloud_function = google_cloudfunctions_function.function_quotas.name
role = "roles/cloudfunctions.invoker"
member = "allUsers"
}
################################################
# Cloud Monitoring Dashboard creation #
################################################
resource "google_monitoring_dashboard" "dashboard" {
dashboard_json = file("${path.module}/dashboards/quotas-utilization.json")
project = module.project-monitoring.project_id
}


@@ -0,0 +1 @@
Resources created here are used to test the Cloud Function and to ensure the metrics are correctly populated.


@@ -0,0 +1,279 @@
# Creating test infrastructure
resource "google_folder" "test-net-dash" {
display_name = "test-net-dash"
parent = "organizations/${var.organization_id}"
}
##### Creating host projects, VPCs, service projects #####
module "project-hub" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/project"
name = "test-host-hub"
parent = google_folder.test-net-dash.name
prefix = var.prefix
billing_account = var.billing_account
services = var.project_vm_services
shared_vpc_host_config = {
enabled = true
service_projects = [] # defined later
}
}
module "vpc-hub" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/net-vpc"
project_id = module.project-hub.project_id
name = "vpc-hub"
subnets = [
{
ip_cidr_range = "10.0.10.0/24"
name = "subnet-hub-1"
region = var.region
secondary_ip_range = {}
}
]
}
module "project-svc-hub" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/project"
parent = google_folder.test-net-dash.name
billing_account = var.billing_account
prefix = var.prefix
name = "test-svc-hub"
services = var.project_vm_services
shared_vpc_service_config = {
attach = true
host_project = module.project-hub.project_id
}
}
module "project-prod" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/project"
name = "test-host-prod"
parent = google_folder.test-net-dash.name
prefix = var.prefix
billing_account = var.billing_account
services = var.project_vm_services
shared_vpc_host_config = {
enabled = true
service_projects = [] # defined later
}
}
module "vpc-prod" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/net-vpc"
project_id = module.project-prod.project_id
name = "vpc-prod"
subnets = [
{
ip_cidr_range = "10.0.20.0/24"
name = "subnet-prod-1"
region = var.region
secondary_ip_range = {}
}
]
}
module "project-svc-prod" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/project"
parent = google_folder.test-net-dash.name
billing_account = var.billing_account
prefix = var.prefix
name = "test-svc-prod"
services = var.project_vm_services
shared_vpc_service_config = {
attach = true
host_project = module.project-prod.project_id
}
}
module "project-dev" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/project"
name = "test-host-dev"
parent = google_folder.test-net-dash.name
prefix = var.prefix
billing_account = var.billing_account
services = var.project_vm_services
shared_vpc_host_config = {
enabled = true
service_projects = [] # defined later
}
}
module "vpc-dev" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/net-vpc"
project_id = module.project-dev.project_id
name = "vpc-dev"
subnets = [
{
ip_cidr_range = "10.0.30.0/24"
name = "subnet-dev-1"
region = var.region
secondary_ip_range = {}
}
]
}
module "project-svc-dev" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/project"
parent = google_folder.test-net-dash.name
billing_account = var.billing_account
prefix = var.prefix
name = "test-svc-dev"
services = var.project_vm_services
shared_vpc_service_config = {
attach = true
host_project = module.project-dev.project_id
}
}
##### Creating VPC peerings #####
module "hub-to-prod-peering" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/net-vpc-peering"
local_network = module.vpc-hub.self_link
peer_network = module.vpc-prod.self_link
}
module "prod-to-hub-peering" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/net-vpc-peering"
local_network = module.vpc-prod.self_link
peer_network = module.vpc-hub.self_link
depends_on = [module.hub-to-prod-peering]
}
module "hub-to-dev-peering" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/net-vpc-peering"
local_network = module.vpc-hub.self_link
peer_network = module.vpc-dev.self_link
}
module "dev-to-hub-peering" {
source = "github.com/GoogleCloudPlatform/cloud-foundation-fabric///modules/net-vpc-peering"
local_network = module.vpc-dev.self_link
peer_network = module.vpc-hub.self_link
depends_on = [module.hub-to-dev-peering]
}
##### Creating VMs #####
resource "google_compute_instance" "test-vm-prod1" {
project = module.project-svc-prod.project_id
name = "test-vm-prod1"
machine_type = "f1-micro"
zone = var.zone
  tags = [var.region]
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
subnetwork = module.vpc-prod.subnet_self_links["${var.region}/subnet-prod-1"]
subnetwork_project = module.project-prod.project_id
}
allow_stopping_for_update = true
}
resource "google_compute_instance" "test-vm-prod2" {
project = module.project-prod.project_id
name = "test-vm-prod2"
machine_type = "f1-micro"
zone = var.zone
  tags = [var.region]
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
subnetwork = module.vpc-prod.subnet_self_links["${var.region}/subnet-prod-1"]
subnetwork_project = module.project-prod.project_id
}
allow_stopping_for_update = true
}
resource "google_compute_instance" "test-vm-dev1" {
count = 10
project = module.project-svc-dev.project_id
name = "test-vm-dev${count.index}"
machine_type = "f1-micro"
zone = var.zone
  tags = [var.region]
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
subnetwork = module.vpc-dev.subnet_self_links["${var.region}/subnet-dev-1"]
subnetwork_project = module.project-dev.project_id
}
allow_stopping_for_update = true
}
resource "google_compute_instance" "test-vm-hub1" {
project = module.project-svc-hub.project_id
name = "test-vm-hub1"
machine_type = "f1-micro"
zone = var.zone
  tags = [var.region]
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
subnetwork = module.vpc-hub.subnet_self_links["${var.region}/subnet-hub-1"]
subnetwork_project = module.project-hub.project_id
}
allow_stopping_for_update = true
}
# Forwarding Rules
resource "google_compute_forwarding_rule" "forwarding-rule-dev" {
name = "forwarding-rule-dev"
project = module.project-svc-dev.project_id
network = module.vpc-dev.self_link
subnetwork = module.vpc-dev.subnet_self_links["${var.region}/subnet-dev-1"]
region = var.region
backend_service = google_compute_region_backend_service.test-backend.id
ip_protocol = "TCP"
load_balancing_scheme = "INTERNAL"
all_ports = true
allow_global_access = true
}
# backend service
resource "google_compute_region_backend_service" "test-backend" {
name = "test-backend"
region = var.region
project = module.project-svc-dev.project_id
protocol = "TCP"
load_balancing_scheme = "INTERNAL"
}

View File

@@ -0,0 +1,49 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "organization_id" {
  description = "Organization id (numeric only) under which the test folder and projects are created."
  type = string
}
variable "billing_account" {
  description = "ID of the billing account to associate the test projects with."
  type = string
}
variable "prefix" {
  description = "Prefix used for resource names."
  default = "net-dash"
variable "project_vm_services" {
description = "Service APIs enabled by default in new projects."
default = [
"cloudbilling.googleapis.com",
"compute.googleapis.com",
"logging.googleapis.com",
"monitoring.googleapis.com",
"servicenetworking.googleapis.com",
]
}
variable "region" {
description = "Region used to deploy subnets"
default = "europe-west1"
}
variable "zone" {
  description = "Zone used to deploy VMs"
default = "europe-west1-b"
}

View File

@@ -0,0 +1,150 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
variable "organization_id" {
  description = "Organization id (numeric only)."
  type = string
}
variable "billing_account" {
  description = "ID of the billing account to associate the created projects with."
  type = string
}
variable "prefix" {
  description = "Prefix used for resource names."
  default = "net-dash"
}
# Not used for now, as the monitoring project is created in this module's main.tf
variable "monitoring_project_id" {
  type = string
  description = "ID of the monitoring project, where the Cloud Function and dashboards will be deployed."
}
# TODO: support a folder instead of a list of projects?
variable "monitored_projects_list" {
  type = list(string)
  description = "IDs of the projects to be monitored (where limits and quotas data will be pulled from)."
}
variable "schedule_cron" {
description = "Cron format schedule to run the Cloud Function. Default is every 5 minutes."
default = "*/5 * * * *"
}
variable "project_monitoring_services" {
description = "Service APIs enabled by default in new projects."
default = [
"cloudbilling.googleapis.com",
"cloudbuild.googleapis.com",
"cloudresourcemanager.googleapis.com",
"cloudscheduler.googleapis.com",
"compute.googleapis.com",
"cloudfunctions.googleapis.com",
"iam.googleapis.com",
"iamcredentials.googleapis.com",
"logging.googleapis.com",
"monitoring.googleapis.com",
"oslogin.googleapis.com",
"servicenetworking.googleapis.com",
"serviceusage.googleapis.com",
]
}
variable "project_vm_services" {
description = "Service APIs enabled by default in new projects."
default = [
"cloudbilling.googleapis.com",
"compute.googleapis.com",
"logging.googleapis.com",
"monitoring.googleapis.com",
"servicenetworking.googleapis.com",
]
}
variable "region" {
description = "Region used to deploy subnets"
default = "europe-west1"
}
variable "zone" {
  description = "Zone used to deploy VMs"
default = "europe-west1-b"
}
variable "limit_l4" {
description = "Maximum number of forwarding rules for Internal TCP/UDP Load Balancing per network."
type = list(string)
default = [
"default_value", "75",
]
}
variable "limit_l7" {
description = "Maximum number of forwarding rules for Internal HTTP(S) Load Balancing per network."
type = list(string)
default = [
"default_value", "75",
]
}
variable "limit_subnets" {
  description = "Maximum number of subnet IP ranges (primary and secondary) per peering group."
type = list(string)
default = [
"default_value", "400",
]
}
variable "limit_instances" {
description = "Maximum number of instances per network"
type = list(string)
default = [
"default_value", "15000",
]
}
variable "limit_instances_ppg" {
description = "Maximum number of instances per peering group."
type = list(string)
default = [
"default_value", "15000",
]
}
variable "limit_vpc_peer" {
  description = "Maximum number of VPC peerings per network."
  type = list(string)
  default = [
    "default_value", "25",
    "test-vpc", "40",
  ]
}
variable "limit_l4_ppg" {
  description = "Maximum number of forwarding rules for Internal TCP/UDP Load Balancing per peering group."
  type = list(string)
  default = [
    "default_value", "175",
  ]
}
variable "limit_l7_ppg" {
  description = "Maximum number of forwarding rules for Internal HTTP(S) Load Balancing per peering group."
  type = list(string)
  default = [
    "default_value", "175",
  ]
}
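The default for limit_vpc_peer suggests the convention used by these limit variables: each list is a flat sequence of name/value pairs, where the "default_value" entry appears to set the limit applied to any network not listed explicitly, and a network name used as a key overrides the limit for that network. Assuming that reading is correct, a minimal override for the hub VPC created in the test setup could look like the sketch below (the numeric values are examples only):

# terraform.tfvars (example overrides only)
limit_vpc_peer = [
  "default_value", "25",
  "vpc-hub", "50",
]
limit_l4 = [
  "default_value", "75",
  "vpc-hub", "100",
]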