community.grafana.grafana_datasource module – Manage Grafana datasources
Note
This module is part of the community.grafana collection (version 1.5.3).
You might already have this collection installed if you are using the ansible package. It is not included in ansible-core.
To check whether it is installed, run: ansible-galaxy collection list.
To install it, use: ansible-galaxy collection install community.grafana.
To use it in a playbook, specify: community.grafana.grafana_datasource.
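If you manage collections with a requirements file instead of installing them ad hoc, a minimal requirements.yml sketch could look like the following (the version pin is only an illustration, not a requirement of this module):

---
collections:
  - name: community.grafana
    version: ">=1.5.3"  # example pin; adjust to your needs

Install it with: ansible-galaxy collection install -r requirements.yml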
Synopsis
Create/update/delete Grafana datasources via API.
Parameters
Parameter | Comments
---|---
access | The access mode for this datasource.
additional_json_data | Defined data is used for the datasource jsonData. Data may be overridden by specifically defined parameters (like zabbix_user).
additional_secure_json_data | Defined data is used for the datasource secureJsonData. Data may be overridden by specifically defined parameters (like tls_client_cert). Stored as secure data, see enforce_secure_data and the notes.
aws_access_key | AWS access key for CloudWatch datasource type when aws_auth_type is keys. Stored as secure data, see enforce_secure_data and the notes.
aws_assume_role_arn | AWS IAM role ARN to assume for CloudWatch datasource type when aws_auth_type is arn.
aws_auth_type | Type of AWS authentication for CloudWatch datasource type (authType of the Grafana API).
aws_credentials_profile | Profile for AWS credentials for CloudWatch datasource type when aws_auth_type is credentials.
aws_custom_metrics_namespaces | Namespaces of Custom Metrics for CloudWatch datasource type.
aws_default_region | AWS default region for CloudWatch datasource type.
aws_secret_key | AWS secret key for CloudWatch datasource type when aws_auth_type is keys. Stored as secure data, see enforce_secure_data and the notes.
azure_client | The application/client ID for the Azure AD app registration to use for authentication.
azure_cloud | The national cloud for your Azure account.
azure_secret | The application client secret for the Azure AD app registration to use for authentication.
azure_tenant | The directory/tenant ID for the Azure AD app registration to use for authentication.
basic_auth_password | The datasource basic auth password, when basic auth is enabled. Stored as secure data, see enforce_secure_data and the notes.
basic_auth_user | The datasource basic auth user. Setting this option with basic_auth_password will enable basic auth.
client_cert | PEM formatted certificate chain file to be used for SSL client authentication. This file can also include the key, and if the key is included, client_key is not required.
client_key | PEM formatted file that contains your private key to be used for SSL client authentication. If client_cert contains both the certificate and key, this option is not required.
database | Name of the database for the datasource. This option is required when the ds_type is influxdb, elasticsearch (index name), mysql or postgres.
ds_type | The type of the datasource. Required when state is present.
ds_url | The URL of the datasource. Required when state is present.
enforce_secure_data | Secure data is not updated by default (see the notes!). To update secure data you have to enable this option. When enabled, the task will always report changed=True.
es_version | Elasticsearch version (for ds_type=elasticsearch only). Version 56 is for elasticsearch 5.6+, where you can specify the max_concurrent_shard_requests option.
grafana_api_key | The Grafana API key. If set, url_username and url_password will be ignored.
interval | For elasticsearch ds_type, this is the index pattern interval (for example Daily).
is_default | Make this datasource the default one.
max_concurrent_shard_requests | Starting with elasticsearch 5.6, you can specify the maximum number of concurrent shard requests.
name | The name of the datasource.
org_id | Grafana organisation ID in which the datasource should be created. Not used when grafana_api_key is set, because an API key only belongs to one organisation.
password | The datasource password. Stored as secure data, see enforce_secure_data and the notes.
sslmode | SSL mode for ds_type=postgres.
state | Status of the datasource.
time_field | Name of the time field in elasticsearch datasources, for example @timestamp.
time_interval | Minimum group by interval, for example >10s.
tls_ca_cert | The TLS CA certificate for self signed certificates. Only used when tls_client_cert and tls_client_key are set. Stored as secure data, see enforce_secure_data and the notes.
tls_client_cert | The client TLS certificate. If tls_client_cert and tls_client_key are set, this will enable TLS authentication. Starts with -----BEGIN CERTIFICATE-----. Stored as secure data, see enforce_secure_data and the notes.
tls_client_key | The client TLS private key. Starts with -----BEGIN RSA PRIVATE KEY-----. Stored as secure data, see enforce_secure_data and the notes.
tls_skip_verify | Skip the TLS datasource certificate verification.
trends | Use trends or not for the zabbix datasource type.
tsdb_resolution | The opentsdb time resolution.
tsdb_version | The opentsdb version.
url (aliases: grafana_url) | The Grafana URL.
url_password (aliases: grafana_password) | The Grafana password for API authentication.
url_username (aliases: grafana_user) | The Grafana user for API authentication.
use_proxy | If no, it will not use a proxy, even if one is defined in an environment variable on the target hosts.
user | The datasource login user for influxdb datasources.
validate_certs | If no, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates.
with_credentials | Whether credentials such as cookies or auth headers should be sent with cross-site requests.
zabbix_password | Password for the Zabbix API.
zabbix_user | User for the Zabbix API.
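None of the examples below use grafana_api_key, so here is a minimal sketch of API-key authentication; the datasource name, URLs and the variable holding the key are placeholders, and the task otherwise mirrors the examples in the Examples section:

- name: Create graphite datasource using an API key (sketch)
  community.grafana.grafana_datasource:
    name: "datasource-graphite"                  # placeholder name
    grafana_url: "https://grafana.company.com"   # placeholder URL
    grafana_api_key: "{{ grafana_api_key }}"     # assumed to hold a pre-created Grafana API key
    ds_type: "graphite"
    ds_url: "https://graphite.company.com:8080"  # placeholder URL
    access: "proxy"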
Notes
Note
Secure data will get encrypted by the Grafana API, thus it cannot be compared on subsequent runs. To work around this, secure data will not be updated after initial creation! To force a secure data update you have to set enforce_secure_data=True.
Hint: because enforce_secure_data always reports changed=True, you might have one task that updates the datasource without any secure data, and a separate playbook/task that also changes the secure data. This way it will not break any workflow.
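The Examples section below shows this two-step workflow. To verify that the first step stays idempotent, you can register the result and assert on changed; a minimal sketch (the registered variable name and task names are illustrative):

- name: Create prometheus datasource without secure data
  community.grafana.grafana_datasource:
    name: openshift_prometheus
    ds_type: prometheus
    ds_url: https://openshift-monitoring.company.com
    access: proxy
  register: ds_result

- name: Assert that the datasource task did not report a change
  ansible.builtin.assert:
    that:
      - ds_result is not changed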
Examples
---
- name: Create elasticsearch datasource
  community.grafana.grafana_datasource:
    name: "datasource-elastic"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    org_id: "1"
    ds_type: "elasticsearch"
    ds_url: "https://elastic.company.com:9200"
    database: "[logstash_]YYYY.MM.DD"
    basic_auth_user: "grafana"
    basic_auth_password: "******"
    time_field: "@timestamp"
    time_interval: "1m"
    interval: "Daily"
    es_version: 56
    max_concurrent_shard_requests: 42
    tls_ca_cert: "/etc/ssl/certs/ca.pem"

- name: Create influxdb datasource
  community.grafana.grafana_datasource:
    name: "datasource-influxdb"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    org_id: "1"
    ds_type: "influxdb"
    ds_url: "https://influx.company.com:8086"
    database: "telegraf"
    time_interval: ">10s"
    tls_ca_cert: "/etc/ssl/certs/ca.pem"

- name: Create postgres datasource
  community.grafana.grafana_datasource:
    name: "datasource-postgres"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    org_id: "1"
    ds_type: "postgres"
    ds_url: "postgres.company.com:5432"
    database: "db"
    user: "postgres"
    sslmode: "verify-full"
    additional_json_data:
      postgresVersion: 12
      timescaledb: false
    additional_secure_json_data:
      password: "iampgroot"

- name: Create cloudwatch datasource
  community.grafana.grafana_datasource:
    name: "datasource-cloudwatch"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    org_id: "1"
    ds_type: "cloudwatch"
    ds_url: "http://monitoring.us-west-1.amazonaws.com"
    aws_auth_type: "keys"
    aws_default_region: "us-west-1"
    aws_access_key: "speakFriendAndEnter"
    aws_secret_key: "mel10n"
    aws_custom_metrics_namespaces: "n1,n2"

- name: Add thruk datasource
  community.grafana.grafana_datasource:
    name: "datasource-thruk"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    org_id: "1"
    ds_type: "sni-thruk-datasource"
    ds_url: "https://thruk.company.com/sitename/thruk"
    basic_auth_user: "thruk-user"
    basic_auth_password: "******"

# Handle secure data - workflow example.
# This will create/update the datasource but will not update the secure data on later runs,
# so you can assert that all tasks report changed=False.
- name: Create prometheus datasource
  community.grafana.grafana_datasource:
    name: openshift_prometheus
    ds_type: prometheus
    ds_url: https://openshift-monitoring.company.com
    access: proxy
    tls_skip_verify: true
    additional_json_data:
      httpHeaderName1: "Authorization"
    additional_secure_json_data:
      httpHeaderValue1: "Bearer ihavenogroot"

# In a separate task or even play you can then force the update
# and assert that each datasource reports changed=True.
- name: Update prometheus datasource
  community.grafana.grafana_datasource:
    name: openshift_prometheus
    ds_type: prometheus
    ds_url: https://openshift-monitoring.company.com
    access: proxy
    tls_skip_verify: true
    additional_json_data:
      httpHeaderName1: "Authorization"
    additional_secure_json_data:
      httpHeaderValue1: "Bearer ihavenogroot"
    enforce_secure_data: true
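
# The synopsis also covers deletion. A minimal sketch, assuming state: absent performs the
# delete and reusing the placeholder Grafana connection values from the examples above;
# only the datasource name plus the Grafana connection options are passed.
- name: Delete elasticsearch datasource
  community.grafana.grafana_datasource:
    name: "datasource-elastic"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    org_id: "1"
    state: absent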
Return Values
Common return values are documented here; the following are the fields unique to this module:

Key | Description
---|---
datasource | Datasource created/updated by the module. Returned: changed
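Because the datasource field is returned when the task reports a change, a registered result can be inspected after the task runs. A minimal sketch reusing the influxdb example values (the registered variable name is illustrative):

- name: Create influxdb datasource
  community.grafana.grafana_datasource:
    name: "datasource-influxdb"
    grafana_url: "https://grafana.company.com"
    grafana_user: "admin"
    grafana_password: "xxxxxx"
    ds_type: "influxdb"
    ds_url: "https://influx.company.com:8086"
    database: "telegraf"
  register: result

- name: Show the datasource returned by the module
  ansible.builtin.debug:
    var: result.datasource
  when: result is changed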