google.cloud.gcp_container_node_pool_info – Gather info for GCP NodePool
Note
This plugin is part of the google.cloud collection (version 1.0.2). You might already have this collection installed if you are using the ansible package. It is not included in ansible-core.
To check whether it is installed, run: ansible-galaxy collection list
To install it, use: ansible-galaxy collection install google.cloud
To use it in a playbook, specify: google.cloud.gcp_container_node_pool_info
Requirements
The below requirements are needed on the host that executes this module.
python >= 2.6
requests >= 2.18.4
google-auth >= 1.3.0
Parameters
auth_kind (string / required)
    The type of credential used.
    Choices: application, machineaccount, serviceaccount

cluster (dictionary / required)
    The cluster this node pool belongs to. This field represents a link to a Cluster resource in GCP. It can be specified in two ways. First, you can place a dictionary with key 'name' and the value of your resource's name. Alternatively, you can add register: name-of-resource to a gcp_container_cluster task and then set this cluster field to "{{ name-of-resource }}".

env_type (string)
    Specifies which Ansible environment you're running this module within. This should not be set unless you know what you're doing. This only alters the User Agent string for any API requests.

location (string / required)
    The location where the node pool is deployed.

project (string)
    The Google Cloud Platform project to use.

scopes (list)
    Array of scopes to be used.

service_account_contents (jsonarg)
    The contents of a Service Account JSON file, either in a dictionary or as a JSON string that represents it.

service_account_email (string)
    An optional service account email address if machineaccount is selected and the user does not wish to use the default email.

service_account_file (path)
    The path of a Service Account JSON file if serviceaccount is selected as type.
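
The cluster parameter described above can be supplied either as a dictionary with a 'name' key or from a registered gcp_container_cluster result. A minimal sketch of the register form (the cluster name my-cluster and the credentials path are illustrative):

```yaml
- name: create a cluster
  google.cloud.gcp_container_cluster:
    name: my-cluster
    initial_node_count: 2
    location: us-central1-a
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
  register: cluster

- name: get info on its node pools
  google.cloud.gcp_container_node_pool_info:
    cluster: "{{ cluster }}"
    location: us-central1-a
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
```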
Notes
Note
- For authentication, you can set service_account_file using the GCP_SERVICE_ACCOUNT_FILE env variable.
- For authentication, you can set service_account_contents using the GCP_SERVICE_ACCOUNT_CONTENTS env variable.
- For authentication, you can set service_account_email using the GCP_SERVICE_ACCOUNT_EMAIL env variable.
- For authentication, you can set auth_kind using the GCP_AUTH_KIND env variable.
- For authentication, you can set scopes using the GCP_SCOPES env variable.
- Environment variable values will only be used if the playbook values are not set.
- The service_account_email and service_account_file options are mutually exclusive.
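
The environment-variable fallbacks listed above can also be supplied per task with the environment keyword. A minimal sketch, assuming the credentials file path and project are illustrative:

```yaml
- name: get info on a node pool using env-var authentication
  google.cloud.gcp_container_node_pool_info:
    cluster: "{{ cluster }}"
    location: us-central1-a
    project: test_project
  environment:
    GCP_AUTH_KIND: serviceaccount
    GCP_SERVICE_ACCOUNT_FILE: /tmp/auth.pem
```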
Examples
- name: get info on a node pool
  gcp_container_node_pool_info:
    cluster: "{{ cluster }}"
    location: us-central1-a
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
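
A follow-up task can register the module output and iterate over the returned resources list (returned: always). This sketch assumes the same cluster variable as the example above:

```yaml
- name: get info on a node pool
  google.cloud.gcp_container_node_pool_info:
    cluster: "{{ cluster }}"
    location: us-central1-a
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
  register: node_pools

- name: show the name and status of each node pool
  ansible.builtin.debug:
    msg: "{{ item.name }} is {{ item.status }}"
  loop: "{{ node_pools.resources }}"
```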
Return Values
Common return values are documented here; the following are the fields unique to this module:

- resources: List of resources. Returned: always.
  - autoscaling: Autoscaler configuration for this NodePool. Autoscaler is enabled only if a valid configuration is present. Returned: success.
    - enabled: Is autoscaling enabled for this node pool. Returned: success.
    - maxNodeCount: Maximum number of nodes in the NodePool. Must be >= minNodeCount. There has to be enough quota to scale up the cluster. Returned: success.
    - minNodeCount: Minimum number of nodes in the NodePool. Must be >= 1 and <= maxNodeCount. Returned: success.
  - cluster: The cluster this node pool belongs to. Returned: success.
  - conditions: Which conditions caused the current node pool state. Returned: success.
    - code: Machine-friendly representation of the condition. Returned: success.
  - config: The node configuration of the pool. Returned: success.
    - accelerators: A list of hardware accelerators to be attached to each node. Returned: success.
      - acceleratorCount: The number of accelerator cards exposed to an instance. Returned: success.
      - acceleratorType: The accelerator type resource name. Returned: success.
    - diskSizeGb: Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. If unspecified, the default disk size is 100GB. Returned: success.
    - diskType: Type of the disk attached to each node (e.g. 'pd-standard' or 'pd-ssd'). If unspecified, the default disk type is 'pd-standard'. Returned: success.
    - imageType: The image type to use for this node. Note that for a given image type, the latest version of it will be used. Returned: success.
    - labels: The map of Kubernetes labels (key/value pairs) to be applied to each node. These will be added in addition to any default label(s) that Kubernetes may apply to the node. In case of conflict in label keys, the applied set may differ depending on the Kubernetes version; it's best to assume the behavior is undefined and conflicts should be avoided. For more information, including usage and the valid values, see: http://kubernetes.io/v1.1/docs/user-guide/labels.html . An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. Returned: success.
    - localSsdCount: The number of local SSD disks to be attached to the node. The limit for this value is dependent upon the maximum number of disks available on a machine per zone. See https://cloud.google.com/compute/docs/disks/local-ssd#local_ssd_limits for more information. Returned: success.
    - machineType: The name of a Google Compute Engine machine type (e.g. n1-standard-1). If unspecified, the default machine type is n1-standard-1. Returned: success.
    - metadata: The metadata key/value pairs assigned to instances in the cluster. Keys must conform to the regexp [a-zA-Z0-9-_]+ and be less than 128 bytes in length. These are reflected as part of a URL in the metadata server. Additionally, to avoid ambiguity, keys must not conflict with any other metadata keys for the project or be one of the four reserved keys: "instance-template", "kube-env", "startup-script", and "user-data". Values are free-form strings, and only have meaning as interpreted by the image running in the instance. The only restriction placed on them is that each value's size must be less than or equal to 32 KB. The total size of all keys and values must be less than 512 KB. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. Returned: success.
    - minCpuPlatform: Minimum CPU platform to be used by this instance. The instance may be scheduled on the specified or newer CPU platform. Returned: success.
    - oauthScopes: The set of Google API scopes to be made available on all of the node VMs under the "default" service account. The following scopes are recommended, but not required, and by default are not included: https://www.googleapis.com/auth/compute is required for mounting persistent storage on your nodes; https://www.googleapis.com/auth/devstorage.read_only is required for communicating with gcr.io (the Google Container Registry). If unspecified, no scopes are added, unless Cloud Logging or Cloud Monitoring are enabled, in which case their required scopes will be added. Returned: success.
    - preemptible: Whether the nodes are created as preemptible VM instances. See https://cloud.google.com/compute/docs/instances/preemptible for more information about preemptible VM instances. Returned: success.
    - serviceAccount: The Google Cloud Platform Service Account to be used by the node VMs. If no Service Account is specified, the "default" service account is used. Returned: success.
    - shieldedInstanceConfig: Shielded Instance options. Returned: success.
      - enableIntegrityMonitoring: Defines whether the instance has integrity monitoring enabled. Enables monitoring and attestation of the boot integrity of the instance. The attestation is performed against the integrity policy baseline. This baseline is initially derived from the implicitly trusted boot image when the instance is created. Returned: success.
      - enableSecureBoot: Defines whether the instance has Secure Boot enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails. Returned: success.
    - tags: The list of instance tags applied to all nodes. Tags are used to identify valid sources or targets for network firewalls and are specified by the client during cluster or node pool creation. Each tag within the list must comply with RFC1035. Returned: success.
    - taints: List of Kubernetes taints to be applied to each node. Returned: success.
      - effect: Effect for taint. Returned: success.
      - key: Key for taint. Returned: success.
      - value: Value for taint. Returned: success.
    - workloadMetadataConfig: WorkloadMetadataConfig defines the metadata configuration to expose to workloads on the node pool. Returned: success.
      - mode: Mode is the configuration for how to expose metadata to workloads running on the node pool. Returned: success.
  - initialNodeCount: The initial node count for the pool. You must ensure that your Compute Engine resource quota is sufficient for this number of instances. You must also have available firewall and routes quota. Returned: success.
  - location: The location where the node pool is deployed. Returned: success.
  - management: Management configuration for this NodePool. Returned: success.
    - autoRepair: A flag that specifies whether node auto-repair is enabled for the node pool. If enabled, the nodes in this node pool will be monitored and, if they fail health checks too many times, an automatic repair action will be triggered. Returned: success.
    - autoUpgrade: A flag that specifies whether node auto-upgrade is enabled for the node pool. If enabled, node auto-upgrade helps keep the nodes in your node pool up to date with the latest release version of Kubernetes. Returned: success.
    - upgradeOptions: Specifies the Auto Upgrade knobs for the node pool. Returned: success.
      - autoUpgradeStartTime: This field is set when upgrades are about to commence, with the approximate start time for the upgrades, in RFC3339 text format. Returned: success.
      - description: This field is set when upgrades are about to commence, with the description of the upgrade. Returned: success.
  - maxPodsConstraint: The constraint on the maximum number of pods that can be run simultaneously on a node in the node pool. Returned: success.
    - maxPodsPerNode: Constraint enforced on the max number of pods per node. Returned: success.
  - name: The name of the node pool. Returned: success.
  - podIpv4CidrSize: The pod CIDR block size per node in this node pool. Returned: success.
  - status: Status of nodes in this pool instance. Returned: success.
  - statusMessage: Additional information about the current status of this node pool instance. Returned: success.
  - version: The version of Kubernetes running on this node. Returned: success.
Authors
Google Inc. (@googlecloudplatform)