google.cloud.gcp_bigquery_dataset_info – Gather info for GCP Dataset¶
Note
This plugin is part of the google.cloud collection (version 1.0.2).
To install it, use: ansible-galaxy collection install google.cloud.
To use it in a playbook, specify: google.cloud.gcp_bigquery_dataset_info.
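Alternatively, the collection can be declared in a requirements.yml file and installed with ansible-galaxy collection install -r requirements.yml; the sketch below is illustrative, and pinning to version 1.0.2 is optional:

```yaml
# requirements.yml (illustrative): declare the google.cloud collection as a dependency.
collections:
  - name: google.cloud
    version: 1.0.2
```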
Requirements¶
The below requirements are needed on the host that executes this module.
python >= 2.6
requests >= 2.18.4
google-auth >= 1.3.0
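If these libraries are not already present, they can be installed ahead of time. The following is only a sketch using the ansible.builtin.pip module, with version specifiers taken from the list above; the task name and where it runs are assumptions:

```yaml
# Illustrative pre-task: install the Python libraries required by this module.
- name: Install Python requirements for gcp_bigquery_dataset_info
  ansible.builtin.pip:
    name:
      - requests>=2.18.4
      - google-auth>=1.3.0
```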
Parameters¶
Notes¶
Note
For authentication, you can set service_account_file using the GCP_SERVICE_ACCOUNT_FILE env variable.
For authentication, you can set service_account_contents using the GCP_SERVICE_ACCOUNT_CONTENTS env variable.
For authentication, you can set service_account_email using the GCP_SERVICE_ACCOUNT_EMAIL env variable.
For authentication, you can set auth_kind using the GCP_AUTH_KIND env variable.
For authentication, you can set scopes using the GCP_SCOPES env variable.
Environment variable values will only be used if the playbook values are not set.
The service_account_email and service_account_file options are mutually exclusive.
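For example, authentication values can be supplied through the task environment instead of as module options. This is only a sketch; the file path and the use of a task-level environment block are assumptions:

```yaml
# Illustrative only: authentication values supplied via environment variables.
- name: get info on a dataset using environment-based authentication
  google.cloud.gcp_bigquery_dataset_info:
    project: test_project
  environment:
    GCP_AUTH_KIND: serviceaccount
    GCP_SERVICE_ACCOUNT_FILE: /tmp/auth.pem
```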
Examples¶
- name: get info on a dataset
  gcp_bigquery_dataset_info:
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
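The task above can be extended to register its result and iterate over the returned resources list. The sketch below is illustrative: the dataset_info variable name and the ansible.builtin.debug task are assumptions, and friendlyName falls back to id when no friendly name is set.

```yaml
- name: get info on a dataset
  gcp_bigquery_dataset_info:
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
  register: dataset_info        # illustrative variable name

- name: show where each returned dataset lives
  ansible.builtin.debug:
    msg: "{{ item.friendlyName | default(item.id) }} is located in {{ item.location }}"
  loop: "{{ dataset_info.resources }}"
```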
Return Values¶
Common return values are documented here; the following are the fields unique to this module. Each key below is shown with its type, when it is returned, and its description:
- resources (complex, returned: always): List of resources.
  - access (complex, returned: success): An array of objects that define dataset access for one or more entities.
    - domain (string, returned: success): A domain to grant access to. Any users signed in with the domain specified will be granted the specified access.
    - groupByEmail (string, returned: success): An email address of a Google Group to grant access to.
    - role (string, returned: success): Describes the rights granted to the user specified by the other member of the access object. Basic, predefined, and custom roles are supported. Predefined roles that have equivalent basic roles are swapped by the API to their basic counterparts. See [official docs](https://cloud.google.com/bigquery/docs/access-control).
    - specialGroup (string, returned: success): A special group to grant access to. Possible values include: `projectOwners` (owners of the enclosing project), `projectReaders` (readers of the enclosing project), `projectWriters` (writers of the enclosing project), and `allAuthenticatedUsers` (all authenticated BigQuery users).
    - userByEmail (string, returned: success): An email address of a user to grant access to. For example: fred@example.com.
    - view (complex, returned: success): A view from a different dataset to grant access to. Queries executed against that view will have read access to tables in this dataset. The role field is not required when this field is set. If that view is updated by any user, access to the view needs to be granted again via an update operation.
      - datasetId (string, returned: success): The ID of the dataset containing this table.
      - projectId (string, returned: success): The ID of the project containing this table.
      - tableId (string, returned: success): The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores. The maximum length is 1,024 characters.
  - creationTime (integer, returned: success): The time when this dataset was created, in milliseconds since the epoch.
  - datasetReference (complex, returned: success): A reference that identifies the dataset.
    - datasetId (string, returned: success): A unique ID for this dataset, without the project name. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores. The maximum length is 1,024 characters.
    - projectId (string, returned: success): The ID of the project containing this dataset.
  - defaultEncryptionConfiguration (complex, returned: success): The default encryption key for all tables in the dataset. Once this property is set, all newly-created partitioned tables in the dataset will have their encryption key set to this value, unless the table creation request (or query) overrides the key.
    - kmsKeyName (string, returned: success): Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
  - defaultPartitionExpirationMs (integer, returned: success): The default partition expiration for all partitioned tables in the dataset, in milliseconds. Once this property is set, all newly-created partitioned tables in the dataset will have an `expirationMs` property in the `timePartitioning` settings set to this value, and changing the value will only affect new tables, not existing ones. The storage in a partition will have an expiration time of its partition time plus this value. Setting this property overrides the use of `defaultTableExpirationMs` for partitioned tables: only one of `defaultTableExpirationMs` and `defaultPartitionExpirationMs` will be used for any new partitioned table. If you provide an explicit `timePartitioning.expirationMs` when creating or updating a partitioned table, that value takes precedence over the default partition expiration time indicated by this property.
  - defaultTableExpirationMs (integer, returned: success): The default lifetime of all tables in the dataset, in milliseconds. The minimum value is 3600000 milliseconds (one hour). Once this property is set, all newly-created tables in the dataset will have an `expirationTime` property set to the creation time plus the value in this property, and changing the value will only affect new tables, not existing ones. When the `expirationTime` for a given table is reached, that table will be deleted automatically. If a table's `expirationTime` is modified or removed before the table expires, or if you provide an explicit `expirationTime` when creating a table, that value takes precedence over the default expiration time indicated by this property.
  - description (string, returned: success): A user-friendly description of the dataset.
  - etag (string, returned: success): A hash of the resource.
  - friendlyName (string, returned: success): A descriptive name for the dataset.
  - id (string, returned: success): The fully-qualified unique name of the dataset in the format projectId:datasetId. The dataset name without the project name is given in the datasetId field.
  - labels (dictionary, returned: success): The labels associated with this dataset. You can use these to organize and group your datasets.
  - lastModifiedTime (integer, returned: success): The date when this dataset or any of its tables was last modified, in milliseconds since the epoch.
  - location (string, returned: success): The geographic location where the dataset should reside. See [official docs](https://cloud.google.com/bigquery/docs/dataset-locations). There are two types of locations, regional or multi-regional. A regional location is a specific geographic place, such as Tokyo, and a multi-regional location is a large geographic area, such as the United States, that contains at least two geographic places. The default value is the multi-regional location `US`. Changing this forces a new resource to be created.
  - name (string, returned: success): Dataset name.
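To illustrate the nesting above, individual fields can be addressed from a registered result. In this sketch, dataset_info is assumed to hold the output of an earlier gcp_bigquery_dataset_info task (as in the Examples section), and the debug tasks are illustrative:

```yaml
# Illustrative: dataset_info is the registered result of a gcp_bigquery_dataset_info task.
- name: print the project and dataset IDs of the first returned dataset
  ansible.builtin.debug:
    msg: "{{ dataset_info.resources[0].datasetReference.projectId }}:{{ dataset_info.resources[0].datasetReference.datasetId }}"

- name: list the roles granted in that dataset's access entries
  ansible.builtin.debug:
    msg: "{{ dataset_info.resources[0].access | map(attribute='role') | select('defined') | list }}"
```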
Authors¶
Google Inc. (@googlecloudplatform)