azure.azcollection.azure_rm_monitordatacollectionrules module – Create, update and delete Data Collection Rules
Note
This module is part of the azure.azcollection collection (version 3.7.0).
You might already have this collection installed if you are using the ansible package. It is not included in ansible-core.
To check whether it is installed, run ansible-galaxy collection list.
To install it, use: ansible-galaxy collection install azure.azcollection.
You need further requirements to be able to use this module, see Requirements for details.
To use it in a playbook, specify: azure.azcollection.azure_rm_monitordatacollectionrules.
New in azure.azcollection 3.7.0
Synopsis
Create, update and delete Data Collection Rules
Requirements
The below requirements are needed on the host that executes this module.
python >= 2.7
The host that executes this module must have the azure.azcollection collection installed via galaxy
All python packages listed in collection’s requirements.txt must be installed via pip on the host that executes modules from azure.azcollection
Full installation instructions may be found at https://galaxy.ansible.com/azure/azcollection
Parameters
Parameter |
Comments |
---|---|
Active Directory username. Use when authenticating with an Active Directory user rather than service principal. |
|
Azure AD authority url. Use when authenticating with Username/password, and has your own ADFS authority. |
|
Selects an API profile to use when communicating with Azure services. Default value of latest is appropriate for public clouds; future values will allow use with Azure Stack. Default: "latest" |
|
Use to control if tags field is canonical or just appends to existing tags. When canonical, any tags not found in the tags parameter will be removed from the object’s metadata. Choices:
|
|
Controls the source of the credentials to use for authentication. Can also be set via the ANSIBLE_AZURE_AUTH_SOURCE environment variable. When set to auto (the default), the precedence is module parameters -> env -> credential_file -> cli. When set to env, the credentials will be read from the environment variables. When set to credential_file, the profile will be read from ~/.azure/credentials. When set to cli, the credentials will be sourced from the Azure CLI profile; subscription_id or the AZURE_SUBSCRIPTION_ID environment variable can be used to identify the subscription ID if more than one is present, otherwise the default az cli subscription is used. When set to msi, the host machine must be an Azure resource with an enabled MSI extension; subscription_id or the AZURE_SUBSCRIPTION_ID environment variable can be used to identify the subscription ID if more than one is present, otherwise the default subscription is used. The msi authentication source was added in Ansible 2.6. Choices:
|
|
Controls the certificate validation behavior for Azure endpoints. By default, all modules will validate the server certificate, but when an HTTPS proxy is in use, or against Azure Stack, it may be necessary to disable this behavior by passing ignore. Can also be set via credential file profile or the AZURE_CERT_VALIDATION environment variable. Choices:
|
|
Azure client ID. Use when authenticating with a Service Principal or Managed Identity (msi). Can also be set via the AZURE_CLIENT_ID environment variable. |
|
For cloud environments other than the US public cloud, the environment name (as defined by Azure Python SDK, eg, AzureChinaCloud, AzureUSGovernment), or a metadata discovery endpoint URL (required for Azure Stack). Can also be set via credential file profile or the AZURE_CLOUD_ENVIRONMENT environment variable. Default: "AzureCloud"
|
The resource ID of the data collection endpoint that this rule can be used with |
|
Definition of which streams are sent to which destinations. |
|
The builtIn transform to transform stream data. |
|
List of destinations for this data flow. |
|
The output stream of the transform. Only required if the transform changes data to a different stream. |
|
The KQL query to transform stream data. |
|
The specification of data sources. This property is optional and can be omitted if the rule is meant to be used via direct calls to the provisioned endpoint. |
|
Specifications of pull based data sources. |
|
Definition of Event Hub configuration. |
|
Event Hub consumer group name. |
|
A friendly name for the data source. This name should be unique across all data sources (regardless of type) within the data collection rule. |
|
The stream to collect from EventHub. |
|
Definition of which data will be collected from a separate VM extension that integrates with the Azure Monitor Agent. Collected from either Windows or Linux machines, depending on which extension is defined. |
|
The name of the VM extension. |
|
The extension settings. The format is specific to the particular extension. |
|
The list of data sources this extension needs data from. |
|
A friendly name for the data source. This name should be unique across all data sources (regardless of type) within the data collection rule. |
|
List of streams that this data source will be sent to. A stream indicates what schema will be used for this data and usually what table in Log Analytics the data will be sent to. |
|
Enables IIS logs to be collected by this data collection rule. |
|
Absolute paths file location. |
|
A friendly name for the data source. This name should be unique across all data sources (regardless of type) within the data collection rule. |
|
IIS streams. |
|
Definition of which custom log files will be collected by this data collection rule. |
|
File patterns where the log files are located. |
|
The data format of the log files. |
|
A friendly name for the data source. This name should be unique across all data sources (regardless of type) within the data collection rule. |
|
The log files specific settings. |
|
Text settings |
|
List of streams that this data source will be sent to. A stream indicates what schema will be used for this data and usually what table in Log Analytics the data will be sent to. |
|
Definition of which performance counters will be collected and how they will be collected by this data collection rule. Collected from both Windows and Linux machines where the counter is present. |
|
A list of specifier names of the performance counters you want to collect. Use a wildcard (*) to collect a counter for all instances. To get a list of performance counters on Windows, run the command ‘typeperf’ |
|
A friendly name for the data source. This name should be unique across all data sources (regardless of type) within the data collection rule. |
|
The number of seconds between consecutive counter measurements (samples). |
|
List of streams that this data source will be sent to. A stream indicates what schema will be used for this data and usually what table in Log Analytics the data will be sent to. |
|
Definition of platform telemetry data source configuration. |
|
A friendly name for the data source. This name should be unique across all data sources (regardless of type) within the data collection rule. |
|
List of platform telemetry streams to collect. |
|
Definition of Prometheus metrics forwarding configuration. |
|
The list of label inclusion filters in the form of label “name-value” pairs. Currently only one label is supported “microsoft_metrics_include_label”. Label values are matched case-insensitively. |
|
A friendly name for the data source. This name should be unique across all data sources (regardless of type) within the data collection rule. |
|
List of streams that this data source will be sent to. |
|
Definition of which syslog data will be collected and how it will be collected. Only collected from Linux machines. |
|
The list of facility names. |
|
The log levels to collect. |
|
A friendly name for the data source. This name should be unique across all data sources (regardless of type) within the data collection rule. |
|
List of streams that this data source will be sent to. A stream indicates what schema will be used for this data and usually what table in Log Analytics the data will be sent to. |
|
Definition of which Windows Event Log events will be collected and how they will be collected. Only collected from Windows machines. |
|
A friendly name for the data source. This name should be unique across all data sources (regardless of type) within the data collection rule. |
|
List of streams that this data source will be sent to. A stream indicates what schema will be used for this data and usually what table in Log Analytics the data will be sent to. |
|
A list of Windows Event Log queries in XPATH format. |
|
Enables Firewall logs to be collected by this data collection rule. |
|
A friendly name for the data source. This name should be unique across all data sources (regardless of type) within the data collection rule. |
|
Firewall logs streams. |
|
Description for the data collection rule |
|
The resource ID of the event hub. |
|
A friendly name for the destination. This name should be unique across all destinations (regardless of type) within the data collection rule. |
|
List of Event Hubs Direct destinations. |
|
The resource ID of the event hub. |
|
A friendly name for the destination. This name should be unique across all destinations (regardless of type) within the data collection rule. |
|
List of Log Analytics destinations. |
|
A friendly name for the destination. This name should be unique across all destinations (regardless of type) within the data collection rule. |
|
The resource ID of the Log Analytics workspace. |
|
List of monitoring account destinations. |
|
The resource ID of the monitoring account. |
|
A friendly name for the destination. This name should be unique across all destinations (regardless of type) within the data collection rule. |
|
List of storage accounts destinations. |
|
The container name of the Storage Blob. |
|
A friendly name for the destination. This name should be unique across all destinations (regardless of type) within the data collection rule. |
|
The resource ID of the storage account. |
|
List of Storage Blob Direct destinations. To be used only for sending data directly to store from the agent. |
|
The container name of the Storage Blob. |
|
A friendly name for the destination. This name should be unique across all destinations (regardless of type) within the data collection rule. |
|
The resource ID of the storage account. |
|
List of Storage Table Direct destinations. |
|
A friendly name for the destination. This name should be unique across all destinations (regardless of type) within the data collection rule. |
|
The resource ID of the storage account. |
|
The name of the Storage Table. |
|
Determines whether or not instance discovery is performed when attempting to authenticate. Setting this to true will completely disable both instance discovery and authority validation. This functionality is intended for use in scenarios where the metadata endpoint cannot be reached, such as in private clouds or Azure Stack. The process of instance discovery entails retrieving authority metadata from https://login.microsoft.com/ to validate the authority. By setting this to True, the validation of the authority is disabled. As a result, it is crucial to ensure that the configured authority host is valid and trustworthy. Set via credential file profile or the AZURE_DISABLE_INSTANCE_DISCOVERY environment variable. Choices:
|
|
Kind of the data collection rule. Choices:
|
|
Location of the data collection rule. Defaults to the location of the existing data collection rule, or the location of the resource group if unspecified. |
|
Parent argument. |
|
Parent argument. |
|
The name of the data collection rule you’re creating/changing |
|
Active Directory user password. Use when authenticating with an Active Directory user rather than service principal. |
|
Security profile found in ~/.azure/credentials file. |
|
The name of the resource group |
|
Azure client secret. Use when authenticating with a Service Principal. |
|
State of the data collection rule. Use present to create or update a data collection rule and absent to delete one. Choices:
|
|
Declaration of a custom stream. The top-level key is the name of the stream declaration, and the sub-dict is a list of columns used by data in this stream. |
|
Name of the stream |
|
Declaration of a custom stream. |
|
The name of the column. |
|
The type of the column data. Choices:
|
|
Your Azure subscription Id. |
|
Dictionary of string:string pairs to assign as metadata to the object. Metadata tags on the object will be updated with any provided values. To remove tags, set the append_tags option to false. Currently, Azure DNS zones and Traffic Manager services also don't allow the use of spaces in the tag. Azure Front Door doesn't support the use of #. Azure Automation and Azure CDN only support 15 tags on resources. |
|
Azure tenant ID. Use when authenticating with a Service Principal. |
|
The thumbprint of the private key specified in x509_certificate_path. Use when authenticating with a Service Principal. Required if x509_certificate_path is defined. |
|
Path to the X509 certificate used to create the service principal in PEM format. The certificate must be appended to the private key. Use when authenticating with a Service Principal. |
Notes
Note
For authentication with Azure you can pass parameters, set environment variables, use a profile stored in ~/.azure/credentials, or log in before you run your tasks or playbook with az login. Authentication is also possible using a service principal or Active Directory user.
To authenticate via service principal, pass subscription_id, client_id, secret and tenant or set environment variables AZURE_SUBSCRIPTION_ID, AZURE_CLIENT_ID, AZURE_SECRET and AZURE_TENANT.
To authenticate via Active Directory user, pass ad_user and password, or set AZURE_AD_USER and AZURE_PASSWORD in the environment.
Alternatively, credentials can be stored in ~/.azure/credentials. This is an ini file containing a [default] section and the following keys: subscription_id, client_id, secret and tenant or subscription_id, ad_user and password. It is also possible to add additional profiles. Specify the profile by passing profile or setting AZURE_PROFILE in the environment.
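For example, the service principal credentials described above can be passed directly as module parameters in a task. This is only an illustrative sketch with placeholder values (the secret variable name is hypothetical and should come from Ansible Vault); the data collection rule options themselves are shown in the Examples section below.
- name: Manage a data collection rule using explicit service principal credentials
  azure.azcollection.azure_rm_monitordatacollectionrules:
    subscription_id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    client_id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    secret: "{{ my_client_secret }}"  # placeholder variable; store real secrets in Ansible Vault
    tenant: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    state: present
    name: data_collection_rule_name
    resource_group: resource_group_name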
See Also
- Sign in with Azure CLI: How to authenticate using the az login command.
Examples
- name: Add a data collection rule
  azure.azcollection.azure_rm_monitordatacollectionrules:
    state: present
    name: data_collection_rule_name
    resource_group: resource_group_name
    location: westeurope
    kind: Linux
    description: This is an example description of a data collection rule
    data_sources:
      performance_counters:
        - name: perfCounterDataSource
          streams:
            - Microsoft-Perf
          sampling_frequency_in_seconds: 60
          counter_specifiers:
            - Processor(*)\% Processor Time
            - Processor(*)\% Idle Time
    destinations:
      log_analytics:
        - workspace_resource_id: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/resource_group_name_log_analytics_workspace/providers/Microsoft.OperationalInsights/workspaces/log_analytics_workspace_name
          name: log_analytics_workspace_name
    data_flows:
      - destinations:
          - log_analytics_workspace_name
        streams:
          - Microsoft-Perf
    append_tags: false
    tags:
      ThisIsAnExampleTag: ExampleValue
- name: Update tags on an existing data collection rule
  azure.azcollection.azure_rm_monitordatacollectionrules:
    state: present
    name: data_collection_rule_name
    resource_group: resource_group_name
    append_tags: true
    tags:
      ThisIsAnAddedExampleTag: ExampleValue
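# A hedged sketch of collecting Windows Event Log events with this module. It assumes
# the data_sources key is windows_event_logs and that the XPath option is named
# x_path_queries, matching the Windows Event Log descriptions in the parameter table;
# Microsoft-Event is the standard Azure Monitor event stream.
- name: Add a data collection rule for Windows Event Logs
  azure.azcollection.azure_rm_monitordatacollectionrules:
    state: present
    name: data_collection_rule_name
    resource_group: resource_group_name
    location: westeurope
    kind: Windows
    data_sources:
      windows_event_logs:
        - name: eventLogsDataSource
          x_path_queries:  # assumed option name, see the parameter table above
            - 'Application!*[System[(Level=1 or Level=2 or Level=3)]]'
            - 'System!*[System[(Level=1 or Level=2 or Level=3)]]'
          streams:
            - Microsoft-Event
    destinations:
      log_analytics:
        - workspace_resource_id: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/resource_group_name_log_analytics_workspace/providers/Microsoft.OperationalInsights/workspaces/log_analytics_workspace_name
          name: log_analytics_workspace_name
    data_flows:
      - destinations:
          - log_analytics_workspace_name
        streams:
          - Microsoft-Event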
# Note: this needs a data collection endpoint (not sure why, as creating one via the portal does not need it)
# The table in your Log Analytics workspace also has to already exist
- name: Add a data collection rule for collecting a custom log
  azure.azcollection.azure_rm_monitordatacollectionrules:
    name: data_collection_rule_name
    resource_group: resource_group_name
    location: westeurope
    kind: Linux
    data_sources:
      log_files:
        - file_patterns:
            - /var/log/dnf.rpm.log
          format: text
          name: Custom-Text-CustomLogs_CL
          streams:
            - Custom-Text-CustomLogs_CL
    destinations:
      log_analytics:
        - workspace_resource_id: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/resource_group_name_log_analytics_workspace/providers/Microsoft.OperationalInsights/workspaces/log_analytics_workspace_name
          name: log_analytics_workspace_name
    data_flows:
      - destinations:
          - log_analytics_workspace_name
        output_stream: Custom-CustomLogs_CL
        streams:
          - Custom-Text-CustomLogs_CL
        transform_kql: source
    data_collection_endpoint_id: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/resource_group_name_log_analytics_workspace/providers/Microsoft.Insights/dataCollectionEndpoints/dcr-endpoint
    stream_declarations:
      Custom-Text-CustomLogs_CL:
        columns:
          - name: TimeGenerated
            type: datetime
          - name: RawData
            type: string
          - name: FilePath
            type: string
          - name: Computer
            type: string
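# A hedged sketch of collecting Linux syslog with this module. It assumes the
# data_sources key is syslog and that its sub-options are named facility_names and
# log_levels, matching the syslog descriptions in the parameter table; Microsoft-Syslog
# is the standard Azure Monitor syslog stream.
- name: Add a data collection rule for Linux syslog collection
  azure.azcollection.azure_rm_monitordatacollectionrules:
    state: present
    name: data_collection_rule_name
    resource_group: resource_group_name
    location: westeurope
    kind: Linux
    data_sources:
      syslog:
        - name: syslogDataSource
          facility_names:  # assumed option name, see the parameter table above
            - auth
            - daemon
            - syslog
          log_levels:  # assumed option name, see the parameter table above
            - Warning
            - Error
            - Critical
          streams:
            - Microsoft-Syslog
    destinations:
      log_analytics:
        - workspace_resource_id: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/resource_group_name_log_analytics_workspace/providers/Microsoft.OperationalInsights/workspaces/log_analytics_workspace_name
          name: log_analytics_workspace_name
    data_flows:
      - destinations:
          - log_analytics_workspace_name
        streams:
          - Microsoft-Syslog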
- name: Delete a data collection rule
  azure.azcollection.azure_rm_monitordatacollectionrules:
    state: absent
    name: data_collection_rule_name
    resource_group: resource_group_name
Return Values
Common return values are documented here; the following are the fields unique to this module:
Key |
Description |
---|---|
Details of the data collection rule. Is null on state==absent (the data collection rule does not exist or will be deleted). In check mode, assumes the requested changes are legal. Returned: always Sample: |