cloud-foundation-fabric/modules/compute-vm

Google Compute Engine VM module

This module can operate in two distinct modes:

  • instance creation, with optional unmanaged group
  • instance template creation

In both modes, an optional service account can be created and assigned to the instances or the template. If you need a managed instance group when using the module in template mode, refer to the compute-mig module.

Examples

Instance using defaults

The simplest example leverages defaults for the boot disk image and size, and uses a service account created by the module.

module "simple-vm-example" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "europe-west1-b"
  name       = "test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  service_account_create = true
}
# tftest modules=1 resources=2 inventory=simple.yaml

Service account management

VM service accounts can be managed in three different ways:

  • You can let the module create a service account for you by setting service_account_create = true
  • You can use an existing service account by setting service_account_create = false (the default value) and passing the full email address of the service account to the service_account variable. This is useful, for example, if you want to reuse the service account from another previously created instance, or if you want to create the service account manually with the iam-service-account module. In this case, you probably also want to set service_account_scopes to cloud-platform.
  • Lastly, you can use the default compute service account by setting service_account_create = false and leaving the service_account variable null. Please note that using the default compute service account is not recommended.

module "vm-managed-sa-example" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "europe-west1-b"
  name       = "test1"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  service_account_create = true
}

module "vm-managed-sa-example2" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "europe-west1-b"
  name       = "test2"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  service_account        = module.vm-managed-sa-example.service_account_email
  service_account_scopes = ["cloud-platform"]
}

# not recommended
module "vm-default-sa-example2" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "europe-west1-b"
  name       = "test3"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  service_account_create = false
}

# tftest modules=3 resources=4 inventory=sas.yaml

Disk management

Disk sources

Attached disks can be created and optionally initialized from a pre-existing source, or attached to VMs when pre-existing. The source and source_type attributes of the attached_disks variable allow several modes of operation:

  • source_type = "image" can be used with zonal disks in instances and templates, set source to the image name or self link
  • source_type = "snapshot" can be used with instances only, set source to the snapshot name or self link
  • source_type = "attach" can be used for both instances and templates to attach an existing disk, set source to the name (for zonal disks) or self link (for regional disks) of the existing disk to attach; no disk will be created
  • source_type = null can be used where an empty disk is needed, source becomes irrelevant and can be left null
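
Where an empty disk is needed (the last mode above), a minimal sketch reusing the variables from the previous examples; the module instance and disk names here are illustrative, and source_type and source default to null so they can simply be omitted:

module "vm-empty-disk-example" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "europe-west1-b"
  name       = "test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  attached_disks = [{
    # empty disk: source_type and source left null
    name = "data"
    size = 10
  }]
  service_account_create = true
}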

This is an example of attaching a pre-existing regional PD to a new instance:

module "vm-disks-example" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "${var.region}-b"
  name       = "test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  attached_disks = [{
    name        = "repd-1"
    size        = 10
    source_type = "attach"
    source      = "regions/${var.region}/disks/repd-test-1"
    options = {
      replica_zone = "${var.region}-c"
    }
  }]
  service_account_create = true
}
# tftest modules=1 resources=2

And the same example for an instance template (where not using the full self link of the disk triggers recreation of the template):

module "vm-disks-example" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "${var.region}-b"
  name       = "test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  attached_disks = [{
    name        = "repd"
    size        = 10
    source_type = "attach"
    source      = "https://www.googleapis.com/compute/v1/projects/${var.project_id}/regions/${var.region}/disks/repd-test-1"
    options = {
      replica_zone = "${var.region}-c"
    }
  }]
  service_account_create = true
  create_template        = true
}
# tftest modules=1 resources=2

Disk types and options

The attached_disks variable exposes an options attribute that can be used to fine-tune the configuration of each disk. The following example shows a VM with multiple disks:

module "vm-disk-options-example" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "europe-west1-b"
  name       = "test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  attached_disks = [
    {
      name        = "data1"
      size        = "10"
      source_type = "image"
      source      = "image-1"
      options = {
        auto_delete  = false
        replica_zone = "europe-west1-c"
      }
    },
    {
      name        = "data2"
      size        = "20"
      source_type = "snapshot"
      source      = "snapshot-2"
      options = {
        type = "pd-ssd"
        mode = "READ_ONLY"
      }
    }
  ]
  service_account_create = true
}
# tftest modules=1 resources=4 inventory=disk-options.yaml

Boot disk as an independent resource

To create the boot disk as an independent resource instead of as part of the instance creation flow, set boot_disk.use_independent_disk to true and optionally configure boot_disk.initialize_params.

This will create the boot disk as its own resource and attach it to the instance, allowing the instance to be recreated from Terraform while preserving the boot disk.

module "simple-vm-example" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "europe-west1-b"
  name       = "test"
  boot_disk = {
    initialize_params    = {}
    use_independent_disk = true
  }
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  service_account_create = true
}
# tftest modules=1 resources=3 inventory=independent-boot-disk.yaml

Network interfaces

Internal and external IPs

By default VMs are created with automatically assigned IP addresses, but you can change this through the addresses and nat attributes of the network_interfaces variable:

module "vm-internal-ip" {
  source     = "./fabric/modules/compute-vm"
  project_id = "my-project"
  zone       = "europe-west1-b"
  name       = "vm-internal-ip"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    addresses  = { internal = "10.0.0.2" }
  }]
}

module "vm-external-ip" {
  source     = "./fabric/modules/compute-vm"
  project_id = "my-project"
  zone       = "europe-west1-b"
  name       = "vm-external-ip"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = true
    addresses  = { external = "8.8.8.8" }
  }]
}
# tftest modules=2 resources=2 inventory=ips.yaml

Using Alias IPs

This example shows how to add additional Alias IPs to your VM.

module "vm-with-alias-ips" {
  source     = "./fabric/modules/compute-vm"
  project_id = "my-project"
  zone       = "europe-west1-b"
  name       = "test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    alias_ips = {
      alias1 = "10.16.0.10/32"
    }
  }]
}
# tftest modules=1 resources=1 inventory=alias-ips.yaml

Using gVNIC

This example shows how to enable gVNIC on your VM by customizing a cos image. Since gVNIC needs to be enabled both in the instance configuration and in the guest OS configuration, you'll need to supply a bootable disk image with guest_os_features=GVNIC. SEV_CAPABLE, UEFI_COMPATIBLE and VIRTIO_SCSI_MULTIQUEUE are enabled implicitly in the cos, rhel, centos and other images.

resource "google_compute_image" "cos-gvnic" {
  project      = "my-project"
  name         = "my-image"
  source_image = "https://www.googleapis.com/compute/v1/projects/cos-cloud/global/images/cos-89-16108-534-18"

  guest_os_features {
    type = "GVNIC"
  }
  guest_os_features {
    type = "SEV_CAPABLE"
  }
  guest_os_features {
    type = "UEFI_COMPATIBLE"
  }
  guest_os_features {
    type = "VIRTIO_SCSI_MULTIQUEUE"
  }
}

module "vm-with-gvnic" {
  source     = "./fabric/modules/compute-vm"
  project_id = "my-project"
  zone       = "europe-west1-b"
  name       = "test"
  boot_disk = {
    initialize_params = {
      image = google_compute_image.cos-gvnic.self_link
      type  = "pd-ssd"
    }
  }
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nic_type   = "GVNIC"
  }]
  service_account_create = true
}
# tftest modules=1 resources=3 inventory=gvnic.yaml

Metadata

You can define labels and custom metadata values. Metadata can be leveraged, for example, to define a custom startup script.

module "vm-metadata-example" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "europe-west1-b"
  name       = "nginx-server"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  labels = {
    env    = "dev"
    system = "crm"
  }
  metadata = {
    startup-script = <<-EOF
      #! /bin/bash
      apt-get update
      apt-get install -y nginx
    EOF
  }
  service_account_create = true
}
# tftest modules=1 resources=2 inventory=metadata.yaml

IAM

Like most modules, you can assign IAM roles to the instance using the iam variable.

module "vm-iam-example" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "europe-west1-b"
  name       = "webserver"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  iam = {
    "roles/compute.instanceAdmin" = [
      "group:webserver@example.com",
      "group:admin@example.com"
    ]
  }
}
# tftest modules=1 resources=2 inventory=iam.yaml

Spot VM

Spot VMs are ephemeral compute instances suitable for batch jobs and fault-tolerant workloads. Spot VMs provide new features that preemptible instances do not support, such as the absence of a maximum runtime.

module "spot-vm-example" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "europe-west1-b"
  name       = "test"
  options = {
    spot               = true
    termination_action = "STOP"
  }
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
}
# tftest modules=1 resources=1 inventory=spot.yaml

Confidential compute

You can enable confidential compute with the confidential_compute variable, which can be used for standalone instances or for instance templates.

module "vm-confidential-example" {
  source               = "./fabric/modules/compute-vm"
  project_id           = var.project_id
  zone                 = "europe-west1-b"
  name                 = "confidential-vm"
  confidential_compute = true
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
}

module "template-confidential-example" {
  source               = "./fabric/modules/compute-vm"
  project_id           = var.project_id
  zone                 = "europe-west1-b"
  name                 = "confidential-template"
  confidential_compute = true
  create_template      = true
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
}

# tftest modules=2 resources=2 inventory=confidential.yaml

Disk encryption with Cloud KMS

This example shows how to control disk encryption via the encryption variable, in this case passing the self link of a KMS CryptoKey that will be used to encrypt the boot and attached disks. Managing the key with the ../kms module is of course possible, but is not shown here.

module "kms-vm-example" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "europe-west1-b"
  name       = "kms-test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  attached_disks = [{
    name = "attached-disk"
    size = 10
  }]
  service_account_create = true
  encryption = {
    encrypt_boot      = true
    kms_key_self_link = var.kms_key.self_link
  }
}
# tftest modules=1 resources=3 inventory=cmek.yaml

Instance template

This example shows how to use the module to manage an instance template that defines an additional attached disk for each instance, and overrides defaults for the boot disk image and service account.

module "cos-test" {
  source     = "./fabric/modules/compute-vm"
  project_id = "my-project"
  zone       = "europe-west1-b"
  name       = "test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  boot_disk = {
    initialize_params = {
      image = "projects/cos-cloud/global/images/family/cos-stable"
    }
  }
  attached_disks = [
    {
      name = "disk-1"
      size = 10
    }
  ]
  service_account = "vm-default@my-project.iam.gserviceaccount.com"
  create_template = true
}
# tftest modules=1 resources=1 inventory=template.yaml

Instance group

If an instance group is needed when operating in instance mode, simply set the group variable to a non-null map. The map can contain named port declarations, or be empty if named ports are not needed.

locals {
  cloud_config = "my cloud config"
}

module "instance-group" {
  source     = "./fabric/modules/compute-vm"
  project_id = "my-project"
  zone       = "europe-west1-b"
  name       = "ilb-test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  boot_disk = {
    image = "projects/cos-cloud/global/images/family/cos-stable"
  }
  service_account        = var.service_account.email
  service_account_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
  metadata = {
    user-data = local.cloud_config
  }
  group = { named_ports = {} }
}
# tftest modules=1 resources=2 inventory=group.yaml

Instance Schedule

Instance start and stop schedules can be defined via an existing or auto-created resource policy.

To use an existing policy pass its id to the instance_schedule variable:

module "instance" {
  source     = "./fabric/modules/compute-vm"
  project_id = "my-project"
  zone       = "europe-west1-b"
  name       = "schedule-test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  boot_disk = {
    image = "projects/cos-cloud/global/images/family/cos-stable"
  }
  instance_schedule = {
    resource_policy_id = "projects/my-project/regions/europe-west1/resourcePolicies/test"
  }
}
# tftest modules=1 resources=1 inventory=instance-schedule-id.yaml

To create a new policy, set its configuration in the instance_schedule variable. When removing the policy, follow a two-step process: first set active = false in the schedule configuration, which will detach the policy, then remove the variable so the policy is destroyed.

module "instance" {
  source     = "./fabric/modules/compute-vm"
  project_id = "my-project"
  zone       = "europe-west1-b"
  name       = "schedule-test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  boot_disk = {
    image = "projects/cos-cloud/global/images/family/cos-stable"
  }
  instance_schedule = {
    create_config = {
      vm_start = "0 8 * * *"
      vm_stop  = "0 17 * * *"
    }
  }
}
# tftest modules=1 resources=2 inventory=instance-schedule-create.yaml

Snapshot Schedules

Snapshot policies can be attached to disks with optional creation managed by the module.

module "instance" {
  source     = "./fabric/modules/compute-vm"
  project_id = "my-project"
  zone       = "europe-west1-b"
  name       = "schedule-test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  boot_disk = {
    image             = "projects/cos-cloud/global/images/family/cos-stable"
    snapshot_schedule = "boot"
  }
  attached_disks = [
    {
      name              = "disk-1"
      size              = 10
      snapshot_schedule = "generic-vm"
    }
  ]
  snapshot_schedules = {
    boot = {
      schedule = {
        daily = {
          days_in_cycle = 1
          start_time    = "03:00"
        }
      }
    }
  }
}
# tftest modules=1 resources=5 inventory=snapshot-schedule-create.yaml

Variables

name description type required default
name Instance name. string
network_interfaces Network interfaces configuration. Use self links for Shared VPC, set addresses to null if not needed. list(object({…}))
project_id Project id. string
zone Compute zone. string
attached_disk_defaults Defaults for attached disks options. object({…}) {…}
attached_disks Additional disks, if options is null defaults will be used in its place. Source type is one of 'image' (zonal disks in vms and template), 'snapshot' (vm), 'attach', and null. list(object({…})) []
boot_disk Boot disk properties. object({…}) {…}
can_ip_forward Enable IP forwarding. bool false
confidential_compute Enable Confidential Compute for these instances. bool false
create_template Create instance template instead of instances. bool false
description Description of a Compute Instance. string "Managed by the compute-vm Terraform module."
enable_display Enable virtual display on the instances. bool false
encryption Encryption options. Only one of kms_key_self_link and disk_encryption_key_raw may be set. If needed, you can specify to encrypt or not the boot disk. object({…}) null
group Define this variable to create an instance group for instances. Disabled for template use. object({…}) null
hostname Instance FQDN name. string null
iam IAM bindings in {ROLE => [MEMBERS]} format. map(list(string)) {}
instance_schedule Assign or create and assign an instance schedule policy. Either resource policy id or create_config must be specified if not null. Set active to null to detach a policy from the VM before destroying. object({…}) null
instance_type Instance type. string "f1-micro"
labels Instance labels. map(string) {}
metadata Instance metadata. map(string) {}
min_cpu_platform Minimum CPU platform. string null
options Instance options. object({…}) {…}
scratch_disks Scratch disks configuration. object({…}) {…}
service_account Service account email. Unused if service account is auto-created. string null
service_account_create Auto-create service account. bool false
service_account_scopes Scopes applied to service account. list(string) []
shielded_config Shielded VM configuration of the instances. object({…}) null
snapshot_schedules Snapshot schedule resource policies that can be attached to disks. map(object({…})) {}
tag_bindings Tag bindings for this instance, in key => tag value id format. map(string) null
tags Instance network tags for firewall rule targets. list(string) []

Outputs

name description sensitive
external_ip Instance main interface external IP addresses.
group Instance group resource.
id Fully qualified instance id.
instance Instance resource.
internal_ip Instance main interface internal IP address.
internal_ips Instance interfaces internal IP addresses.
self_link Instance self links.
service_account Service account resource.
service_account_email Service account email.
service_account_iam_email IAM-format service account email.
template Template resource.
template_name Template name.
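
Module outputs can be consumed from other resources like any Terraform value. As a sketch (assuming the simple-vm-example module instance from the first example, with an auto-created service account), the service_account_iam_email output is already in the serviceAccount: prefixed format expected by IAM member attributes:

resource "google_project_iam_member" "vm_sa_logging" {
  project = var.project_id
  role    = "roles/logging.logWriter"
  # IAM-format email, e.g. "serviceAccount:tf-vm-test@..."
  member  = module.simple-vm-example.service_account_iam_email
}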

TODO

  • add support for instance groups