google.cloud.gcp_compute_autoscaler module – Creates a GCP Autoscaler
Note
This module is part of the google.cloud collection (version 1.3.0).
You might already have this collection installed if you are using the ansible package. It is not included in ansible-core.
To check whether it is installed, run ansible-galaxy collection list.
To install it, use: ansible-galaxy collection install google.cloud.
You need further requirements to be able to use this module; see Requirements for details.
To use it in a playbook, specify: google.cloud.gcp_compute_autoscaler.
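For example, a minimal task might look like the sketch below (the resource name, zone, project, credential file, and target instance group manager are illustrative placeholders, not values supplied by the module):

- name: create an autoscaler for an existing managed instance group
  google.cloud.gcp_compute_autoscaler:
    name: example-autoscaler
    zone: us-central1-a
    target: "{{ igm }}"
    autoscaling_policy:
      max_num_replicas: 3
    project: my-gcp-project
    auth_kind: serviceaccount
    service_account_file: /tmp/auth.pem
    state: present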
Synopsis
Represents an Autoscaler resource.
Autoscalers allow you to automatically scale virtual machine instances in managed instance groups according to an autoscaling policy that you define.
Requirements
The below requirements are needed on the host that executes this module.
python >= 2.6
requests >= 2.18.4
google-auth >= 1.3.0
Parameters
Parameter | Comments
---|---
access_token | An OAuth2 access token if credential type is accesstoken.
auth_kind | The type of credential used. Choices: "application", "machineaccount", "serviceaccount", "accesstoken".
autoscaling_policy | The configuration parameters for the autoscaling algorithm. You can define one or more of the policies for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%.
autoscaling_policy / cool_down_period_sec | The number of seconds that the autoscaler should wait before it starts collecting information from a new instance. This prevents the autoscaler from collecting information when the instance is initializing, during which the collected usage would not be reliable. The default time the autoscaler waits is 60 seconds. Virtual machine initialization times might vary because of numerous factors. We recommend that you test how long an instance may take to initialize. To do this, create an instance and time the startup process. Default: 60
autoscaling_policy / cpu_utilization | Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
autoscaling_policy / cpu_utilization / predictive_method | Indicates whether predictive autoscaling based on the CPU metric is enabled. Valid values are: NONE (default), where no predictive method is used and the autoscaler scales the group to meet current demand based on real-time metrics; and OPTIMIZE_AVAILABILITY, where predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand. Default: "NONE"
autoscaling_policy / cpu_utilization / utilization_target | The target CPU utilization that the autoscaler should maintain. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales down the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales up until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.
autoscaling_policy / custom_metric_utilizations | Configuration parameters of autoscaling based on a custom metric.
autoscaling_policy / custom_metric_utilizations / metric | The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE.
autoscaling_policy / custom_metric_utilizations / utilization_target | The target value of the metric that the autoscaler should maintain. This must be a positive value. A utilization metric scales the number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilizationTarget is www.googleapis.com/compute/instance/network/received_bytes_count. The autoscaler will work to keep this value constant for each of the instances.
autoscaling_policy / custom_metric_utilizations / utilization_target_type | Defines how the target utilization value is expressed for a Stackdriver Monitoring metric. Some valid choices include: "GAUGE", "DELTA_PER_SECOND", "DELTA_PER_MINUTE".
autoscaling_policy / load_balancing_utilization | Configuration parameters of autoscaling based on a load balancer.
autoscaling_policy / load_balancing_utilization / utilization_target | Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler should maintain. Must be a positive float value. If not defined, the default is 0.8.
autoscaling_policy / max_num_replicas | The maximum number of instances that the autoscaler can scale up to. This is required when creating or updating an autoscaler. The maximum number of replicas should not be lower than the minimal number of replicas.
autoscaling_policy / min_num_replicas | The minimum number of replicas that the autoscaler can scale down to. This cannot be less than 0. If not provided, the autoscaler will choose a default value depending on the maximum number of instances allowed.
autoscaling_policy / mode | Defines the operating mode for this policy. Some valid choices include: "OFF", "ONLY_UP", "ON". Default: "ON"
autoscaling_policy / scale_in_control | Defines scale-in controls to reduce the risk of response latency and outages due to abrupt scale-in events.
autoscaling_policy / scale_in_control / max_scaled_in_replicas | A nested object resource.
autoscaling_policy / scale_in_control / max_scaled_in_replicas / fixed | Specifies a fixed number of VM instances. This must be a positive integer.
autoscaling_policy / scale_in_control / max_scaled_in_replicas / percent | Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%.
autoscaling_policy / scale_in_control / time_window_sec | How far back autoscaling should look when computing recommendations to include directives regarding slower scale-in, as described above.
description | An optional description of this resource.
env_type | Specifies which Ansible environment you're running this module within. This should not be set unless you know what you're doing. This only alters the User Agent string for any API requests.
name | Name of the resource. The name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?`, which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
project | The Google Cloud Platform project to use.
scopes | Array of scopes to be used.
service_account_contents | The contents of a Service Account JSON file, either in a dictionary or as a JSON string that represents it.
service_account_email | An optional service account email address if machineaccount is selected and the user does not wish to use the default email.
service_account_file | The path of a Service Account JSON file if serviceaccount is selected as type.
state | Whether the given object should exist in GCP. Choices: "present", "absent". Default: "present"
target | URL of the managed instance group that this autoscaler will scale. This field represents a link to an InstanceGroupManager resource in GCP. It can be specified in two ways. First, you can place a dictionary with key 'selfLink' and value of your resource's selfLink. Alternatively, you can add `register: name-of-resource` to a gcp_compute_instance_group_manager task and then set this target field to "{{ name-of-resource }}".
zone | URL of the zone where the instance group resides.
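As a sketch of how the nested autoscaling_policy options above combine, the following task autoscales on the custom metric suggested in the parameter description and adds scale-in controls (the replica counts, time window, target type, project, credentials, and target instance group manager are illustrative placeholders, not defaults):

- name: autoscale on a custom metric with scale-in controls
  google.cloud.gcp_compute_autoscaler:
    name: custom-metric-autoscaler
    zone: us-central1-a
    target: "{{ igm }}"
    autoscaling_policy:
      min_num_replicas: 1
      max_num_replicas: 10
      cool_down_period_sec: 90
      mode: "ON"
      custom_metric_utilizations:
      - metric: www.googleapis.com/compute/instance/network/received_bytes_count
        utilization_target: 200
        utilization_target_type: DELTA_PER_SECOND
      scale_in_control:
        max_scaled_in_replicas:
          percent: 30
        time_window_sec: 300
    project: "{{ gcp_project }}"
    auth_kind: serviceaccount
    service_account_file: "{{ gcp_cred_file }}"
    state: present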
Notes
Note
API Reference: https://cloud.google.com/compute/docs/reference/rest/v1/autoscalers
Autoscaling Groups of Instances: https://cloud.google.com/compute/docs/autoscaler/
For authentication, you can set service_account_file using the GCP_SERVICE_ACCOUNT_FILE env variable.
For authentication, you can set service_account_contents using the GCP_SERVICE_ACCOUNT_CONTENTS env variable.
For authentication, you can set service_account_email using the GCP_SERVICE_ACCOUNT_EMAIL env variable.
For authentication, you can set access_token using the GCP_ACCESS_TOKEN env variable.
For authentication, you can set auth_kind using the GCP_AUTH_KIND env variable.
For authentication, you can set scopes using the GCP_SCOPES env variable.
Environment variable values will only be used if the playbook values are not set.
The service_account_email and service_account_file options are mutually exclusive.
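As a sketch of the environment-variable fallback described in these notes, the credentials can be supplied through the task environment instead of module arguments (the file path, names, and target instance group manager are placeholders; this assumes the variables are visible to the process running the module):

- name: create an autoscaler using environment-variable authentication
  google.cloud.gcp_compute_autoscaler:
    name: env-auth-autoscaler
    zone: us-central1-a
    target: "{{ igm }}"
    autoscaling_policy:
      max_num_replicas: 3
    project: "{{ gcp_project }}"
    state: present
  environment:
    GCP_AUTH_KIND: serviceaccount
    GCP_SERVICE_ACCOUNT_FILE: /tmp/auth.pem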
Examples
- name: create a network
  google.cloud.gcp_compute_network:
    name: network-instancetemplate
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    state: present
  register: network

- name: create an address
  google.cloud.gcp_compute_address:
    name: address-instancetemplate
    region: us-central1
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    state: present
  register: address

- name: create an instance template
  google.cloud.gcp_compute_instance_template:
    name: "{{ resource_name }}"
    properties:
      disks:
      - auto_delete: 'true'
        boot: 'true'
        initialize_params:
          source_image: projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts
      machine_type: n1-standard-1
      network_interfaces:
      - network: "{{ network }}"
        access_configs:
        - name: test-config
          type: ONE_TO_ONE_NAT
          nat_ip: "{{ address }}"
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    state: present
  register: instancetemplate

- name: create an instance group manager
  google.cloud.gcp_compute_instance_group_manager:
    name: "{{ resource_name }}"
    base_instance_name: test1-child
    instance_template: "{{ instancetemplate }}"
    target_size: 3
    zone: us-central1-a
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    state: present
  register: igm

- name: create an autoscaler
  google.cloud.gcp_compute_autoscaler:
    name: test-object
    zone: us-central1-a
    target: "{{ igm }}"
    autoscaling_policy:
      max_num_replicas: 5
      min_num_replicas: 1
      cool_down_period_sec: 60
      cpu_utilization:
        utilization_target: 0.5
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
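Because state controls whether the object exists, the autoscaler created above can be removed by re-running the same task with state: absent (a sketch reusing the values from the last example; the other fields are repeated as the module expects them):

- name: delete the autoscaler
  google.cloud.gcp_compute_autoscaler:
    name: test-object
    zone: us-central1-a
    target: "{{ igm }}"
    autoscaling_policy:
      max_num_replicas: 5
      min_num_replicas: 1
      cool_down_period_sec: 60
      cpu_utilization:
        utilization_target: 0.5
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: absent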
Return Values
Common return values are documented here; the following are the fields unique to this module:
Key | Description
---|---
autoscalingPolicy | The configuration parameters for the autoscaling algorithm. You can define one or more of the policies for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%. Returned: success
autoscalingPolicy / coolDownPeriodSec | The number of seconds that the autoscaler should wait before it starts collecting information from a new instance. This prevents the autoscaler from collecting information when the instance is initializing, during which the collected usage would not be reliable. The default time the autoscaler waits is 60 seconds. Virtual machine initialization times might vary because of numerous factors. We recommend that you test how long an instance may take to initialize. To do this, create an instance and time the startup process. Returned: success
autoscalingPolicy / cpuUtilization | Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group. Returned: success
autoscalingPolicy / cpuUtilization / predictiveMethod | Indicates whether predictive autoscaling based on the CPU metric is enabled. Valid values are: NONE (default), where no predictive method is used and the autoscaler scales the group to meet current demand based on real-time metrics; and OPTIMIZE_AVAILABILITY, where predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand. Returned: success
autoscalingPolicy / cpuUtilization / utilizationTarget | The target CPU utilization that the autoscaler should maintain. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales down the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales up until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization. Returned: success
autoscalingPolicy / customMetricUtilizations | Configuration parameters of autoscaling based on a custom metric. Returned: success
autoscalingPolicy / customMetricUtilizations / metric | The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE. Returned: success
autoscalingPolicy / customMetricUtilizations / utilizationTarget | The target value of the metric that the autoscaler should maintain. This must be a positive value. A utilization metric scales the number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilizationTarget is www.googleapis.com/compute/instance/network/received_bytes_count. The autoscaler will work to keep this value constant for each of the instances. Returned: success
autoscalingPolicy / customMetricUtilizations / utilizationTargetType | Defines how the target utilization value is expressed for a Stackdriver Monitoring metric. Returned: success
autoscalingPolicy / loadBalancingUtilization | Configuration parameters of autoscaling based on a load balancer. Returned: success
autoscalingPolicy / loadBalancingUtilization / utilizationTarget | Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler should maintain. Must be a positive float value. If not defined, the default is 0.8. Returned: success
autoscalingPolicy / maxNumReplicas | The maximum number of instances that the autoscaler can scale up to. This is required when creating or updating an autoscaler. The maximum number of replicas should not be lower than the minimal number of replicas. Returned: success
autoscalingPolicy / minNumReplicas | The minimum number of replicas that the autoscaler can scale down to. This cannot be less than 0. If not provided, the autoscaler will choose a default value depending on the maximum number of instances allowed. Returned: success
autoscalingPolicy / mode | Defines the operating mode for this policy. Returned: success
autoscalingPolicy / scaleInControl | Defines scale-in controls to reduce the risk of response latency and outages due to abrupt scale-in events. Returned: success
autoscalingPolicy / scaleInControl / maxScaledInReplicas | A nested object resource. Returned: success
autoscalingPolicy / scaleInControl / maxScaledInReplicas / fixed | Specifies a fixed number of VM instances. This must be a positive integer. Returned: success
autoscalingPolicy / scaleInControl / maxScaledInReplicas / percent | Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%. Returned: success
autoscalingPolicy / scaleInControl / timeWindowSec | How far back autoscaling should look when computing recommendations to include directives regarding slower scale-in, as described above. Returned: success
creationTimestamp | Creation timestamp in RFC3339 text format. Returned: success
description | An optional description of this resource. Returned: success
id | Unique identifier for the resource. Returned: success
name | Name of the resource. The name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?`, which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash. Returned: success
target | URL of the managed instance group that this autoscaler will scale. Returned: success
zone | URL of the zone where the instance group resides. Returned: success
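Since these fields are returned on success, a registered result can be inspected in later tasks; the sketch below (with placeholder values, and assuming the documented keys appear directly on the registered variable) prints the returned name and id:

- name: create an autoscaler and capture the result
  google.cloud.gcp_compute_autoscaler:
    name: test-object
    zone: us-central1-a
    target: "{{ igm }}"
    autoscaling_policy:
      max_num_replicas: 5
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
  register: autoscaler

- name: show the identifier reported by GCP
  ansible.builtin.debug:
    msg: "Autoscaler {{ autoscaler.name }} has id {{ autoscaler.id }}"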