community.aws.aws_ssm connection – connect to EC2 instances via AWS Systems Manager
Note
This connection plugin is part of the community.aws collection (version 9.3.0).
You might already have this collection installed if you are using the ansible package.
It is not included in ansible-core.
To check whether it is installed, run ansible-galaxy collection list.
To install it, use: ansible-galaxy collection install community.aws.
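For example, to check for and install the collection:

# Check whether the collection is already installed
ansible-galaxy collection list community.aws
# Install the collection from Ansible Galaxy
ansible-galaxy collection install community.aws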
You need further requirements to be able to use this connection plugin; see Requirements for details.
To use it in a playbook, specify: community.aws.aws_ssm.
Synopsis
- This connection plugin allows Ansible to execute tasks on an EC2 instance via an AWS SSM Session. 
Requirements
The below requirements are needed on the local controller node that executes this connection.
- The remote EC2 instance must be running the AWS Systems Manager Agent (SSM Agent). https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started.html 
- The control machine must have the AWS session manager plugin installed (see the verification command after this list). https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html
- The remote EC2 Linux instance must have curl installed. 
- The remote EC2 Linux instance and the controller both need network connectivity to S3. 
- The remote instance does not require IAM credentials for S3. This plugin generates a presigned URL for S3 on the controller and passes that URL to the target over SSM, telling the target to download/upload from S3 with curl.
- The controller requires IAM permissions to upload, download and delete files from the specified S3 bucket. This includes `s3:GetObject`, `s3:PutObject`, `s3:ListBucket`, `s3:DeleteObject` and `s3:GetBucketLocation`. 
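To verify the session manager plugin is installed on the control machine, it can be run with no arguments; a working installation prints a success message:

# Verify the session manager plugin installation on the controller
session-manager-plugin
# Expected output:
# The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.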
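As an illustration of the controller-side permissions, a minimal IAM policy sketch might look like the following; the bucket name my-ssm-transfer-bucket is a placeholder:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-ssm-transfer-bucket/*"
    },
    {
      "Sid": "BucketAccess",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-ssm-transfer-bucket"
    }
  ]
}

Note that the object-level actions apply to the objects in the bucket (bucket/*), while s3:ListBucket and s3:GetBucketLocation apply to the bucket itself.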
Parameters
| Parameter | Comments | 
|---|---|
| access_key_id | The STS access key to use when connecting via session-manager. Configuration: Variable: ansible_aws_ssm_access_key_id |
| bucket_endpoint_url | The S3 endpoint URL of the bucket used for file transfers. Configuration: Variable: ansible_aws_ssm_bucket_endpoint_url |
| bucket_name | The name of the S3 bucket used for file transfers. Configuration: Variable: ansible_aws_ssm_bucket_name |
| bucket_sse_kms_key_id | KMS key id to use when encrypting objects using bucket_sse_mode: aws:kms. Ignored otherwise. Configuration: Variable: ansible_aws_ssm_bucket_sse_kms_key_id |
| bucket_sse_mode | Server-side encryption mode to use for uploads on the S3 bucket used for file transfer. Choices: "AES256", "aws:kms" Configuration: Variable: ansible_aws_ssm_bucket_sse_mode |
| instance_id | The EC2 instance ID. Configuration: Variable: ansible_aws_ssm_instance_id |
| plugin | This defines the location of the session-manager-plugin binary. Support for an environment variable was added in version 9.1.0; the plugin will first check the AWS_SESSION_MANAGER_PLUGIN environment variable. Configuration: Environment variable: AWS_SESSION_MANAGER_PLUGIN, Variable: ansible_aws_ssm_plugin |
| profile | Sets AWS profile to use. Configuration: Variable: ansible_aws_ssm_profile |
| reconnection_retries | Number of attempts to connect. Default: 3 Configuration: Variable: ansible_aws_ssm_retries |
| region | The region the EC2 instance is located in. Default: "us-east-1" Configuration: Variable: ansible_aws_ssm_region |
| s3_addressing_style | The addressing style to use when using S3 URLs. When the S3 bucket isn't in the same region as the instance, explicitly setting the addressing style to "virtual" may be necessary, as this forces the use of a specific endpoint; see https://repost.aws/knowledge-center/s3-http-307-response. Choices: "path", "virtual", "auto" (default) Configuration: Variable: ansible_aws_ssm_s3_addressing_style |
| secret_access_key | The STS secret key to use when connecting via session-manager. Configuration: Variable: ansible_aws_ssm_secret_access_key |
| session_token | The STS session token to use when connecting via session-manager. Configuration: Variable: ansible_aws_ssm_session_token |
| ssm_document | SSM Session document to use when connecting. To configure the remote_user (when become=False), it is possible to use an SSM Session document that defines the runAsEnabled and runAsDefaultUser parameters (see the sketch after this table). Configuration: Variable: ansible_aws_ssm_document |
| ssm_timeout | Connection timeout seconds. Default: 60 Configuration: Variable: ansible_aws_ssm_timeout |
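For reference, a run-as-enabled SSM Session document might look like the following sketch (the schema follows the AWS Session documents format; the user name ec2-user is a placeholder):

{
  "schemaVersion": "1.0",
  "description": "Session document that runs commands as a specific user",
  "sessionType": "Standard_Stream",
  "inputs": {
    "runAsEnabled": true,
    "runAsDefaultUser": "ec2-user"
  }
}

Such a document can then be selected with ansible_aws_ssm_document, as in the example further below.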
Note
Configuration entries listed above for each entry type (Ansible variable, environment variable, and so on) are ordered from low to high priority: an entry lower in a list overrides an entry higher up. The entry types themselves are also ordered from low to high precedence; for example, an ansible.cfg entry (further up in the list) is overridden by an Ansible variable (further down in the list).
Notes
Note
- The community.aws.aws_ssm connection plugin does not support using the `remote_user` and `ansible_user` variables to configure the remote user. The `become_user` parameter should be used to configure which user to run commands as (see the sketch after these notes). Remote commands will often default to running as the `ssm-agent` user; however, this also depends on how SSM has been configured.
- This plugin requires an S3 bucket to send files to/from the remote instance. This is required even for modules which do not explicitly send files (such as the `shell` or `command` modules), because Ansible sends over the `.py` files of the module itself via S3.
- Files sent via S3 will be named in S3 with the EC2 host ID (e.g. `i-123abc/`) as the prefix.
- The files in S3 will be deleted by the end of the playbook run. If the play is terminated ungracefully, the files may remain in the bucket. If the bucket has versioning enabled, the files will remain in version history. If your tasks involve sending secrets to/from the remote instance (e.g. within a `shell` command, or a SQL password in the `community.postgresql.postgresql_query` module), those secrets will be included in plaintext in those files in S3 indefinitely, visible to anyone with access to that bucket. It is therefore recommended to use a bucket with versioning disabled or suspended.
- The files in S3 will be deleted even if the `keep_remote_files` setting is `true`.
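As a minimal sketch of the become_user guidance above (the bucket name and user are placeholders):

- name: Run a command as a specific user over SSM
  hosts: all
  gather_facts: false
  vars:
    ansible_connection: community.aws.aws_ssm
    ansible_aws_ssm_bucket_name: nameofthebucket # placeholder
    ansible_aws_ssm_region: us-east-1
  tasks:
    - name: Show the effective user
      ansible.builtin.command: whoami
      become: true
      become_user: ec2-user # placeholder user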
Examples
---
# Wait for SSM Agent to be available on the Instance
- name: Wait for connection to be available
  hosts: all
  gather_facts: false
  vars:
    ansible_connection: aws_ssm
    ansible_aws_ssm_bucket_name: nameofthebucket
    ansible_aws_ssm_region: us-west-2
    # When the S3 bucket isn't in the same region as the Instance
    # Explicitly setting the addressing style to 'virtual' may be necessary
    # https://repost.aws/knowledge-center/s3-http-307-response
    ansible_aws_ssm_s3_addressing_style: virtual
  tasks:
    - name: Wait for connection
      wait_for_connection:
# Stop Spooler Process on Windows Instances
- name: Stop Spooler Service on Windows Instances
  hosts: all
  gather_facts: false
  vars:
    ansible_connection: aws_ssm
    ansible_shell_type: powershell
    ansible_aws_ssm_bucket_name: nameofthebucket
    ansible_aws_ssm_region: us-east-1
  tasks:
    - name: Stop spooler service
      win_service:
        name: spooler
        state: stopped
# Install an Nginx package on a Linux instance
- name: Install an Nginx package
  hosts: all
  gather_facts: false
  vars:
    ansible_connection: aws_ssm
    ansible_aws_ssm_bucket_name: nameofthebucket
    ansible_aws_ssm_region: us-west-2
  tasks:
    - name: Install an Nginx package
      yum:
        name: nginx
        state: present
# Create a directory on Windows instances
- name: Create a directory on a Windows instance
  hosts: all
  gather_facts: false
  vars:
    ansible_connection: aws_ssm
    ansible_shell_type: powershell
    ansible_aws_ssm_bucket_name: nameofthebucket
    ansible_aws_ssm_region: us-east-1
  tasks:
    - name: Create a Directory
      win_file:
        path: C:\Windows\temp
        state: directory
---
# Making use of Dynamic Inventory Plugin
# =======================================
# # aws_ec2.yml (Dynamic Inventory - Linux)
# plugin: aws_ec2
# regions:
#   - us-east-1
# hostnames:
#   - instance-id
# # This will return the Instances with the tag "SSMTag" set to "ssmlinux"
# filters:
#   tag:SSMTag: ssmlinux
# -----------------------
- name: install aws-cli
  hosts: all
  gather_facts: false
  vars:
    ansible_connection: aws_ssm
    ansible_aws_ssm_bucket_name: nameofthebucket
    ansible_aws_ssm_region: us-east-1
  tasks:
    - name: aws-cli
      raw: yum install -y awscli
      tags: aws-cli
---
# Execution: ansible-playbook linux.yaml -i aws_ec2.yml
# =====================================================
# # aws_ec2.yml (Dynamic Inventory - Windows)
# plugin: aws_ec2
# regions:
#   - us-east-1
# hostnames:
#   - instance-id
# # This will return the Instances with the tag "SSMTag" set to "ssmwindows"
# filters:
#   tag:SSMTag: ssmwindows
# -----------------------
- name: Create a directory
  hosts: all
  gather_facts: false
  vars:
    ansible_connection: aws_ssm
    ansible_shell_type: powershell
    ansible_aws_ssm_bucket_name: nameofthebucket
    ansible_aws_ssm_region: us-east-1
  tasks:
    - name: Create the directory
      win_file:
        path: C:\Temp\SSM_Testing5
        state: directory
---
# Execution: ansible-playbook win_file.yaml -i aws_ec2.yml
# The playbook tasks will be executed on the instance IDs returned by the dynamic inventory plugin, using the SSM connection.
# Install an Nginx package on a Linux instance, with a specific SSE CMK used for the file transfer
- name: Install an Nginx package
  hosts: all
  gather_facts: false
  vars:
    ansible_connection: aws_ssm
    ansible_aws_ssm_bucket_name: nameofthebucket
    ansible_aws_ssm_region: us-west-2
    ansible_aws_ssm_bucket_sse_mode: 'aws:kms'
    ansible_aws_ssm_bucket_sse_kms_key_id: alias/kms-key-alias
  tasks:
    - name: Install an Nginx package
      yum:
        name: nginx
        state: present
# Install an Nginx package on a Linux instance, using the specified SSM document
- name: Install an Nginx package
  hosts: all
  gather_facts: false
  vars:
    ansible_connection: aws_ssm
    ansible_aws_ssm_bucket_name: nameofthebucket
    ansible_aws_ssm_region: us-west-2
    ansible_aws_ssm_document: nameofthecustomdocument
  tasks:
    - name: Install an Nginx package
      yum:
        name: nginx
        state: present
---
# Execution: ansible-playbook play.yaml -i ssm_inventory.ini
# =====================================================
# ssm_inventory.ini
# [all]
# linux ansible_aws_ssm_instance_id=i-01234567829abcdef ansible_aws_ssm_region=us-east-1
# [all:vars]
# ansible_connection=community.aws.aws_ssm
# ansible_python_interpreter=/usr/bin/python3
# local_tmp=/tmp/ansible-local-ssm-0123456
# ansible_aws_ssm_bucket_name=my-test-bucket
# ansible_aws_ssm_s3_addressing_style=virtual
# -----------------------
# Transfer a file and run a script on the remote host
- name: Transfer a file and run a script on an SSM managed node
  hosts: all
  gather_facts: false
  tasks:
    - name: Create shell script
      ansible.builtin.copy:
        mode: '0755'
        dest: '/tmp/date.sh'
        content: |
          #!/usr/bin/env bash
          date
    - name: Execute the script on the remote host
      ansible.builtin.shell:
        cmd: '/tmp/date.sh'
