amazon.aws.s3_object module – Manage objects in S3
Note
This module is part of the amazon.aws collection (version 9.1.0).
You might already have this collection installed if you are using the ansible package. It is not included in ansible-core.
To check whether it is installed, run ansible-galaxy collection list.
To install it, use: ansible-galaxy collection install amazon.aws.
You need further requirements to be able to use this module, see Requirements for details.
To use it in a playbook, specify: amazon.aws.s3_object.
New in amazon.aws 1.0.0
Synopsis
This module allows the user to manage the objects and directories within S3 buckets. It includes support for creating and deleting objects and directories, retrieving objects as files or strings, generating download links, and copying objects that are already stored in Amazon S3.
S3 buckets can be created or deleted using the amazon.aws.s3_bucket module.
Compatible with AWS, DigitalOcean, Ceph, Walrus, FakeS3 and StorageGRID.
When using non-AWS services, endpoint_url should be specified.
Note
This module has a corresponding action plugin.
Aliases: aws_s3
Requirements
The below requirements are needed on the host that executes this module.
python >= 3.6
boto3 >= 1.28.0
botocore >= 1.31.0
Parameters
Parameter | Comments
---|---
access_key | AWS access key ID. See the AWS documentation for more information about access tokens: https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys. The AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variables may also be used in decreasing order of preference. The aws_access_key and profile options are mutually exclusive. The aws_access_key_id alias was added in release 5.1.0 for consistency with the AWS botocore SDK. The ec2_access_key alias has been deprecated and will be removed in a release after 2024-12-01. Support for the EC2_ACCESS_KEY environment variable has been deprecated and will be removed in a release after 2024-12-01. |
aws_ca_bundle | The location of a CA Bundle to use when validating SSL certificates. The AWS_CA_BUNDLE environment variable may also be used. |
aws_config | A dictionary to modify the botocore configuration. Parameters can be found in the AWS documentation: https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html#botocore.config.Config. |
bucket | Bucket name. |
ceph | Enable API compatibility with Ceph RGW. It takes into account the S3 API subset working with Ceph in order to provide the same module behaviour where possible. Requires endpoint_url if ceph=true. Choices: false (default), true |
content | The content to PUT into the object. The parameter value will be treated as a string and converted to UTF-8 before sending it to S3. To send binary data, use the content_base64 parameter instead. One of content, content_base64 or src must be specified for mode=put, otherwise ignored. |
content_base64 | The base64-encoded binary data to PUT into the object. Use this if you need to put raw binary data, and don't forget to encode it in base64. One of content, content_base64 or src must be specified for mode=put, otherwise ignored. An illustrative task using this parameter appears after this table. |
copy_src | The source details of the object to copy. Required if mode=copy. |
copy_src.bucket | The name of the source bucket. |
copy_src.object | Key name of the source object. If not specified, all the objects of the copy_src.bucket will be copied into the specified bucket. |
copy_src.prefix | Copy all the keys that begin with the specified prefix. Ignored if copy_src.object is specified. Default: "" |
copy_src.version_id | Version ID of the source object. |
debug_botocore_endpoint_logs | Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used. Choices: false (default), true |
dualstack | Enables Amazon S3 Dual-Stack Endpoints, allowing S3 communications using both IPv4 and IPv6. Support for passing dualstack and endpoint_url at the same time has been deprecated; the dualstack parameter is either redundant or incompatible with endpoint_url. Choices: false (default), true |
encryption_kms_key_id | KMS key id to use when encrypting objects using aws:kms encryption. Ignored if encryption is not set to aws:kms. |
endpoint_url | URL to connect to instead of the default AWS endpoints. While this can be used to connect to other AWS-compatible services, the amazon.aws and community.aws collections are only tested against AWS. The AWS_URL or EC2_URL environment variables may also be used, in decreasing order of preference. The ec2_url and s3_url aliases have been deprecated and will be removed in a release after 2024-12-01. Support for the EC2_URL and S3_URL environment variables has been deprecated and will be removed in a release after 2024-12-01. |
expected_bucket_owner | The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied). |
expiry | Time limit (in seconds) for the URL generated and returned by S3/Walrus when performing a mode=put or mode=geturl operation. Ignored when mode is neither put nor geturl. Default: 600 |
ignore_nonexistent_bucket | Overrides initial bucket lookups in case bucket or IAM policies are restrictive. This can be useful when a user may have the GetObject permission but no other permissions; in that case a mode=get operation will fail unless this option is enabled. Choices: false (default), true |
marker | Specifies the key to start with when using list mode. Object keys are returned in alphabetical order, starting with the key after the marker. |
mode | Switches the module behaviour between put (upload), get (download), geturl (return a download URL), getstr (download object as string), list (list keys), create (create "virtual directories"), delobj (delete object), and copy (copy an object that is already stored in another bucket). Support for creating and deleting buckets was removed in release 6.0.0; to create and manage the bucket itself please use the amazon.aws.s3_bucket module. Illustrative tasks for the geturl and getstr modes appear at the end of the Examples section. |
object | Key name of the object inside the bucket. Can be used to create "virtual directories", see examples. Object key names should not include the leading /. Support for passing a leading / has been deprecated and will be removed in a later release. |
overwrite | Force overwrite either locally on the filesystem or remotely with the object/key. Used when mode=put or mode=get; ignored otherwise. Must be a Boolean, always, never, different or latest: true is the same as always, false is equal to never. When this is set to different, the MD5 sum of the local file is compared with the ETag of the object/key in S3 (the ETag may or may not be an MD5 digest of the object data). When mode=get and overwrite=latest, the last modified timestamp of the local file is compared with the LastModified of the object/key in S3. |
permission | This option lets the user set the canned permissions on the object/bucket that are created. The permissions that can be set are private, public-read, public-read-write, aws-exec-read, authenticated-read, bucket-owner-read and bucket-owner-full-control. For a full list of permissions see the AWS documentation: https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#canned-acl. |
prefix | Limits the response to keys that begin with the specified prefix for list mode. Default: "" |
profile | A named AWS profile to use for authentication. See the AWS documentation for more information about named profiles: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html. The AWS_PROFILE environment variable may also be used. The profile option is mutually exclusive with the aws_access_key, aws_secret_key and security_token options. |
purge_tags | If purge_tags=true and tags is set, existing tags will be purged from the resource to match exactly what is defined by the tags parameter. If the tags parameter is not set then tags will not be modified, even if purge_tags=true. Tag keys beginning with aws: are reserved by Amazon and can not be modified; as such they are ignored for the purposes of the purge_tags parameter. Choices: false, true (default) |
region | The AWS region to use. For global services such as IAM, Route53 and CloudFront, region is ignored. The AWS_REGION or EC2_REGION environment variables may also be used. See the Amazon AWS documentation for more information: http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region. The ec2_region alias has been deprecated and will be removed in a release after 2024-12-01. Support for the EC2_REGION environment variable has been deprecated and will be removed in a release after 2024-12-01. |
retries | On recoverable failure, how many times to retry before actually failing. Default: 0 |
secret_key | AWS secret access key. See the AWS documentation for more information about access tokens: https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys. The AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY or EC2_SECRET_KEY environment variables may also be used in decreasing order of preference. The secret_key and profile options are mutually exclusive. The aws_secret_access_key alias was added in release 5.1.0 for consistency with the AWS botocore SDK. The ec2_secret_key alias has been deprecated and will be removed in a release after 2024-12-01. Support for the EC2_SECRET_KEY environment variable has been deprecated and will be removed in a release after 2024-12-01. |
session_token | AWS STS session token for use with temporary credentials. See the AWS documentation for more information about access tokens: https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys. The AWS_SESSION_TOKEN, AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variables may also be used in decreasing order of preference. The security_token and profile options are mutually exclusive. Aliases aws_session_token and session_token were added in release 3.2.0, with the parameter being renamed from security_token to session_token in release 6.0.0. The security_token, aws_security_token, and access_token aliases have been deprecated and will be removed in a release after 2024-12-01. Support for the EC2_SECURITY_TOKEN and AWS_SECURITY_TOKEN environment variables has been deprecated and will be removed in a release after 2024-12-01. |
sig_v4 | Forces the Boto SDK to use Signature Version 4. Only applies to the get modes: get, geturl and getstr. Choices: false, true (default) |
src | The source file path when performing a PUT operation. One of content, content_base64 or src must be specified for mode=put, otherwise ignored. |
tags | A dictionary representing the tags to be applied to the resource. If the tags parameter is not set then tags will not be modified. |
validate_bucket_name | Whether the bucket name should be validated to conform to AWS S3 naming rules. On by default, this may be disabled for S3 backends that do not enforce these rules. See the Amazon documentation for more information about bucket naming rules: https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html. Choices: false, true (default) |
validate_certs | When set to false, SSL certificates will not be validated for communication with the AWS APIs. Setting validate_certs=false is strongly discouraged; as an alternative, consider setting aws_ca_bundle instead. Choices: false, true (default) |
version | Version ID of the object inside the bucket. Can be used to get a specific version of a file if versioning is enabled in the target bucket. |
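The content and content_base64 behaviour described above can be illustrated with a short task. The following sketch is not part of the module's documented examples; the bucket, object and variable names are placeholders, and b64encode is the standard Ansible filter.
- name: PUT raw binary data held in a variable (illustrative sketch)
  amazon.aws.s3_object:
    bucket: mybucket
    object: /my/binary.bin
    # raw_bytes is a placeholder variable; encode it before upload
    content_base64: "{{ raw_bytes | b64encode }}"
    mode: put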
Notes
Note
Support for tags and purge_tags was added in release 2.0.0.
In release 5.0.0 the s3_url parameter was merged into the endpoint_url parameter; s3_url remains as an alias for endpoint_url.
For Walrus, endpoint_url should be set to the FQDN of the endpoint with neither scheme nor path (an illustrative task follows these notes).
Support for the S3_URL environment variable has been deprecated and will be removed in a release after 2024-12-01; please use the endpoint_url parameter or the AWS_URL environment variable.
Support for creating and deleting buckets was removed in release 6.0.0.
Caution: For modules, environment variables and configuration files are read from the Ansible ‘host’ context and not the ‘controller’ context. As such, files may need to be explicitly copied to the ‘host’. For lookup and connection plugins, environment variables and configuration files are read from the Ansible ‘controller’ context and not the ‘host’ context.
The AWS SDK (boto3) that Ansible uses may also read defaults for credentials and other settings, such as the region, from its configuration files in the Ansible ‘host’ context (typically ~/.aws/credentials). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
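The Walrus note above can be illustrated with a minimal sketch. This task is not taken from the upstream documentation; s3.example.com is a hypothetical endpoint and the remaining values are placeholders.
- name: PUT operation against a Walrus-style endpoint (illustrative sketch)
  amazon.aws.s3_object:
    bucket: mybucket
    object: /my/desired/key.txt
    src: /usr/local/myfile.txt
    mode: put
    # FQDN only - no scheme, no path
    endpoint_url: s3.example.com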
Examples
- name: Simple PUT operation
  amazon.aws.s3_object:
    bucket: mybucket
    object: /my/desired/key.txt
    src: /usr/local/myfile.txt
    mode: put

- name: PUT operation from a rendered template
  amazon.aws.s3_object:
    bucket: mybucket
    object: /object.yaml
    content: "{{ lookup('template', 'templates/object.yaml.j2') }}"
    mode: put

- name: Simple PUT operation in Ceph RGW S3
  amazon.aws.s3_object:
    bucket: mybucket
    object: /my/desired/key.txt
    src: /usr/local/myfile.txt
    mode: put
    ceph: true
    endpoint_url: "http://localhost:8000"

- name: Simple GET operation
  amazon.aws.s3_object:
    bucket: mybucket
    object: /my/desired/key.txt
    dest: /usr/local/myfile.txt
    mode: get

- name: Get a specific version of an object
  amazon.aws.s3_object:
    bucket: mybucket
    object: /my/desired/key.txt
    version: 48c9ee5131af7a716edc22df9772aa6f
    dest: /usr/local/myfile.txt
    mode: get

- name: PUT/upload with metadata
  amazon.aws.s3_object:
    bucket: mybucket
    object: /my/desired/key.txt
    src: /usr/local/myfile.txt
    mode: put
    metadata:
      Content-Encoding: gzip
      Cache-Control: no-cache

- name: PUT/upload with custom headers
  amazon.aws.s3_object:
    bucket: mybucket
    object: /my/desired/key.txt
    src: /usr/local/myfile.txt
    mode: put
    headers: 'x-amz-grant-full-control=emailAddress=owner@example.com'
- name: List keys simple
  amazon.aws.s3_object:
    bucket: mybucket
    mode: list

- name: List keys all options
  amazon.aws.s3_object:
    bucket: mybucket
    mode: list
    prefix: /my/desired/
    marker: /my/desired/0023.txt
    max_keys: 472

- name: GET an object but don't download if the file checksums match. New in 2.0
  amazon.aws.s3_object:
    bucket: mybucket
    object: /my/desired/key.txt
    dest: /usr/local/myfile.txt
    mode: get
    overwrite: different

- name: Delete an object from a bucket
  amazon.aws.s3_object:
    bucket: mybucket
    object: /my/desired/key.txt
    mode: delobj

- name: Copy an object already stored in another bucket
  amazon.aws.s3_object:
    bucket: mybucket
    object: /my/desired/key.txt
    mode: copy
    copy_src:
      bucket: srcbucket
      object: /source/key.txt

- name: Copy all the objects with name starting with 'ansible_'
  amazon.aws.s3_object:
    bucket: mybucket
    mode: copy
    copy_src:
      bucket: srcbucket
      prefix: 'ansible_'
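The documented examples above do not cover the geturl and getstr modes described in the parameter table. The tasks below are illustrative sketches rather than upstream examples; bucket and object names are placeholders.
- name: Generate a presigned download URL that expires after 10 minutes (illustrative sketch)
  amazon.aws.s3_object:
    bucket: mybucket
    object: /my/desired/key.txt
    mode: geturl
    expiry: 600

- name: Fetch an object's contents as a string (illustrative sketch)
  amazon.aws.s3_object:
    bucket: mybucket
    object: /object.yaml
    mode: getstr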
Return Values
Common return values are documented here; the following are the fields unique to this module:
Key | Description
---|---
contents | Contents of the object as string. Returned: for getstr operation. |
expiry | Number of seconds the presigned URL is valid for. Returned: for geturl operation. |
msg | Message indicating the status of the operation. Returned: always. |
s3_keys | List of object keys. Returned: for list operation. |
tags | Tags of the S3 object. Returned: always. |
url | URL of the object. Returned: for put and geturl operations. |
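As a final illustrative sketch (not part of the upstream documentation), return values such as s3_keys can be registered and reused in later tasks; the register variable name is a placeholder.
- name: List keys and capture the result
  amazon.aws.s3_object:
    bucket: mybucket
    mode: list
  register: list_result

- name: Show the keys returned by the list operation
  ansible.builtin.debug:
    var: list_result.s3_keys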