Documentation

1. Preparing for the Ansible Automation Platform Installation

This guide helps you get your Ansible Automation Platform installation up and running as quickly as possible.

Once the installation is complete, you can access and fully use the Automation Platform through your web browser.

1.1. Installation and Reference Guide

While this guide covers the basics, you may find that you need the more detailed information available in the Installation and Reference Guide.

You should also review the General Installation Notes before starting the installation.

1.2. Prerequisites and Requirements

For platform information, refer to Platform-specific Installation Notes.

Note

Tower is a full application and the installation process installs several dependencies such as PostgreSQL, Django, NGINX, and others.

You must install Tower on a standalone VM or cloud instance and must not co-locate any other applications on that machine (beyond possible monitoring or logging software). Although Tower and Ansible are written in Python, they are not simple Python libraries; Tower cannot be installed in a Python virtualenv or any similar subsystem, and you must install it as described in the installation instructions in this guide. Also, DO NOT change the default alternative for Python 3.

For OpenShift-based deployments, refer to OpenShift Deployment and Configuration.

Ansible Tower has the following requirements:

Starting with Ansible Tower 3.8, you must have valid subscriptions attached before installing and running the Ansible Automation Platform. Even if you already have valid licenses from previous versions, you must still provide your credentials or a subscriptions manifest again upon upgrading to Tower 3.8. See Attaching Subscriptions for details.

  • Supported Operating Systems:

    • Red Hat Enterprise Linux 8.2 or later 64-bit (x86)

    • Red Hat Enterprise Linux 7.7 or later 64-bit (x86)

    • CentOS 7.7 or later 64-bit (x86)

Note

For Automation Hub, the selinux-policy package version must be greater than or equal to 3.13.1-268.el7_9.2. If your setup has the rhel-7-server-rpms repository enabled, the _9.2 version is pulled in automatically along with Automation Hub. Also, RHUI subscriptions cannot be used to install Automation Hub.
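
You can confirm the installed selinux-policy version on the node before starting (a quick check using standard RPM tooling):

    $ rpm -q selinux-policy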

Note

The next major release of Ansible Tower will not support Red Hat Enterprise Linux 7 or CentOS (any version) as an installation platform.

  • A currently supported version of Mozilla Firefox or Google Chrome

    • Other HTML5 compliant web browsers may work but are not fully tested or supported.

  • 2 CPUs minimum for Automation Platform installations. Refer to the capacity algorithm section of the Ansible Tower User Guide for determining the CPU capacity required for the number of forks in your particular configuration.

  • 4 GB RAM minimum for Automation Platform installations

    • 4 GB RAM (minimum and recommended for Vagrant trial installations)

    • 4 GB RAM (minimum for external standalone PostgreSQL databases)

    • For specific RAM needs, refer to the capacity algorithm section of the Ansible Tower User Guide for determining capacity required based on the number of forks in your particular configuration

  • 20 GB of dedicated hard disk space for Tower service nodes

    • 10 GB of the 20 GB requirement must be dedicated to /var/, where Tower stores its files and working directories

    • The storage volume should be rated for a minimum baseline of 750 IOPS.

  • 20 GB of dedicated hard disk space for nodes containing a database (150 GB+ recommended)

    • The storage volume should be rated for a high baseline IOPS (1,000 or more).

    • All Tower data is stored in the database. Database storage increases with the number of hosts managed, number of jobs run, number of facts stored in the fact cache, and number of tasks in any individual job. For example, a playbook run every hour (24 times a day) across 250 hosts with 20 tasks will store over 800,000 events in the database every week.

    • If not enough space is reserved in the database, old job runs and facts will need to be cleaned up on a regular basis. Refer to Management Jobs in the Ansible Tower Administration Guide for more information.

  • 64-bit support required (kernel and runtime)

  • PostgreSQL version 10.x required to run Ansible Tower 3.7 and later. Backup and restore will only work on PostgreSQL versions supported by your current Ansible Tower version.

  • Ansible version 2.9 required to run Ansible Tower versions 3.8 and later

Note

Ansible Tower 3.7 and later cannot run with versions of PostgreSQL or Ansible older than those stated above. Both are installed by the install script if they are not already present.

  • For Automation Hub: Starting with Ansible Tower 3.8, Automation Hub will act as a content provider for Ansible Tower, which requires both an Ansible Tower deployment and an Automation Hub deployment running alongside each other. Tower and Automation Hub can run on either RHEL 7 or 8, but only Tower (not Automation Hub) is supported on an OpenShift Container Platform (OCP).

  • For Amazon EC2:

    • Instance size of m4.large or larger

    • An instance size of m4.xlarge or larger if there are more than 100 hosts
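
Before installing, you can spot-check several of these requirements on a candidate host (a minimal sketch using standard RHEL tooling):

    # Confirm the OS release and 64-bit architecture
    $ cat /etc/redhat-release
    $ uname -m
    # Check CPU count and total memory
    $ nproc
    $ free -h
    # Check the Ansible version, if one is already installed
    $ ansible --version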

1.2.1. Additional Notes on Automation Platform Requirements

Actual RAM requirements vary based on how many hosts Tower will manage simultaneously (which is controlled by the forks parameter in the job template or the system ansible.cfg file). To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 forks plus a 2 GB reservation for Tower; see the capacity algorithm for further details. For example, if forks is set to 400, 42 GB of memory is recommended (400/10 = 40 GB, plus the 2 GB reservation).

On each host where Ansible Tower is installed, the installer checks whether umask is set to 0022; if not, the setup fails. Be sure to set umask to 0022 to avoid this error.
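
For example (a minimal sketch), you can verify and set the value in the shell used to run the installer:

    # Show the current umask; the installer expects 0022
    $ umask
    # Set it for the current session if it differs
    $ umask 0022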

A larger number of hosts can of course be addressed, although if the fork number is less than the total host count, more passes across the hosts are required. These RAM limitations are avoided when using rolling updates, when using the provisioning callback system built into Tower (where each system requesting configuration enters a queue and is processed as quickly as possible), or in cases where Tower is producing or deploying images such as AMIs. All of these are great approaches to managing larger environments. For further questions, contact Ansible via the Red Hat Customer Portal at https://access.redhat.com/.

The requirements for systems managed by Ansible Automation Platform are the same as for Ansible at: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#prerequisites

1.2.1.1. Notable PostgreSQL Changes

Automation Platform uses PostgreSQL 10.x, which is an SCL package on RHEL 7 and an app stream on RHEL 8. Some changes worth noting when upgrading to PostgreSQL 10.x are:

  • PostgreSQL user passwords are now hashed with the SCRAM-SHA-256 secure hashing algorithm before being stored in the database.

  • You no longer need to provide a pg_hashed_password in your inventory file at installation time, because PostgreSQL 10.x stores the user's password more securely. If you supply a password in the inventory file for the installer (pg_password), that password is SCRAM-SHA-256 hashed by PostgreSQL as part of the installation process. DO NOT use special characters in pg_password, as they may cause the setup to fail.

  • Since Ansible Tower and Automation Hub use a Software Collections version of PostgreSQL in 3.8, the rh-postgresql10 SCL must be enabled in order to access the database. Administrators can use the awx-manage dbshell command, which automatically enables the PostgreSQL SCL.

  • If you just need to determine whether your Tower instance has access to the database, you can do so with the awx-manage check_db command.
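
Both commands are run on the Tower node, for example:

    # Open a PostgreSQL shell against the Tower database
    # (this automatically enables the rh-postgresql10 SCL)
    $ awx-manage dbshell

    # Confirm that this Tower instance can reach its database
    $ awx-manage check_db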

1.2.1.2. PostgreSQL Configurations

Optionally, you can configure the PostgreSQL database as a separate node that is not managed by the Automation Platform installer. When the Automation Platform installer manages the database server, it configures the server with defaults that are generally recommended for most workloads. However, you can adjust the following PostgreSQL settings for a standalone database server node, where ansible_memtotal_mb is the total memory size of the database server:

max_connections == 1024
shared_buffers == ansible_memtotal_mb*0.3
work_mem == ansible_memtotal_mb*0.03
maintenance_work_mem == ansible_memtotal_mb*0.04
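
For example, on a hypothetical standalone database server with ansible_memtotal_mb = 16384 (16 GB of RAM), these formulas work out to roughly:

max_connections == 1024
shared_buffers == 4915          # 16384 * 0.3, in MB
work_mem == 491                 # 16384 * 0.03, in MB
maintenance_work_mem == 655     # 16384 * 0.04, in MB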

Refer to PostgreSQL documentation for more detail on tuning your PostgreSQL server.

1.2.2. Ansible Software Requirements

Automation Platform depends on Ansible Playbooks and requires the latest stable version of Ansible to be installed before installing Tower; however, you no longer need to install Ansible manually, because the setup program installs it for you.

Upon new installations, Tower installs the latest release package of Ansible 2.9.

If performing a bundled Automation Platform installation, the installation program attempts to install Ansible (and its dependencies) from the bundle for you (refer to Using the Bundled Ansible Automation Platform Installer for more information).

If you choose to install Ansible on your own, the Automation Platform installation program detects the existing installation and does not attempt to reinstall it. Note that you must install Ansible using a package manager such as yum, and that the latest stable version must be installed for Automation Platform to work properly. Ansible version 2.9 is required for Ansible Tower versions 3.8 and later.

For convenience, summaries of those instructions are in the following sections.

1.2.3. Platform-specific Installation Notes

1.2.3.1. Installing Automation Platform on Systems with FIPS Mode Enabled

Ansible Automation Platform can run on systems where FIPS mode is enabled, though there are a few limitations to keep in mind:

  • Only Enterprise Linux 7+ is supported. The standard Python that ships with RHEL must be used for Ansible Tower to work in FIPS mode; using any non-standard, non-system Python for Tower is therefore unsupported.

  • By default, Tower configures PostgreSQL using password-based authentication, and this process relies on the usage of md5 when CREATE USER is run at install time. To run the Tower installer from a FIPS-enabled system, specify pg_password in your inventory file. DO NOT use special characters in pg_password, as they may cause the setup to fail:

    pg_password='choose-a-password'
    

    For further detail, see Setting up the Inventory File.

    If you supply a password in the inventory file for the installer (pg_password), that password will be SCRAM-SHA-256 hashed by PostgreSQL as part of the installation process.

  • The ssh-keygen command generates keys in a format (RFC4716) which uses the md5 digest algorithm at some point in the process (as part of a transformation performed on the input passphrase). On a FIPS-enforcing system, md5 is completely disabled, so these types of encrypted SSH keys (RFC4716 private keys protected by a passphrase) will not be usable. When FIPS mode is enabled, any encrypted SSH key you import into Ansible Tower must be a PKCS8-formatted key. Existing AES128 keys can be converted to PKCS8 by running the following openssl command:

    $ openssl pkcs8 -topk8 -v2 aes128 -in <INPUT_KEY> -out <NEW_OUTPUT_KEY>
    
  • Use of Ansible features that use the paramiko library will not be FIPS compliant. This includes setting ansible_connection=paramiko as a transport and using network modules that utilize the ncclient NETCONF library.

  • The TACACS+ protocol uses md5 to obfuscate the content of authorization packets; TACACS+ Authentication is not supported for systems where FIPS mode is enabled.

  • The RADIUS protocol uses md5 to encrypt passwords in Access-Request queries; RADIUS Authentication is not supported for systems where FIPS mode is enabled.

1.2.3.2. Notes for Red Hat Enterprise Linux and CentOS setups

  • PackageKit can frequently interfere with the installation/update mechanism. If PackageKit is installed, consider disabling or removing it before running the setup process.

  • Only the “targeted” SELinux policy is supported. The targeted policy can be set to disabled, permissive, or enforcing.

  • When performing a bundled install, refer to Using the Bundled Ansible Automation Platform Installer for more information.

  • When installing Ansible Tower, you only need to run setup.sh; any repositories needed by Tower are installed automatically. A sketch of a typical run follows this list.

  • The latest version of Ansible is installed automatically during the setup process. No additional installation or configuration is required.
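
As a sketch of a typical bundled install (the bundle directory name is a placeholder and varies by release):

    # Extract the installer bundle and run setup from inside its directory
    $ tar xvzf ansible-automation-platform-setup-bundle-<version>.tar.gz
    $ cd ansible-automation-platform-setup-bundle-<version>
    # Edit the inventory file (admin_password, pg_password, etc.), then:
    $ ./setup.sh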

1.2.3.3. Notes for Ubuntu setups

Ansible Tower no longer supports Ubuntu. Refer to previous versions of the Ansible Automation Platform Installation and Reference Guide for details on Ubuntu.

1.2.3.4. Configuration and Installation on OpenShift

For OpenShift-based deployments, refer to OpenShift Deployment and Configuration.

1.2.3.5. Installing Satellite instances on Tower

Satellite users need to install the Katello RPM for their Satellite instance on the Tower node prior to installing Tower. This RPM automatically configures Subscription Manager to use Satellite as its content source, and the hostname value is updated in /etc/rhsm/rhsm.conf.
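
For example (a sketch; satellite.example.com is a placeholder for your Satellite server's FQDN), the Katello RPM is typically installed from the Satellite server's pub directory:

    $ rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm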

Note

If you were to install the Katello RPM after installing Tower, Tower would not have access to rhsm.conf, which it relies on for applying a subscription from Satellite. This is because the Tower installer sets an ACL rule on the rhsm.conf file; a subscription cannot be applied if that file does not exist, is later overwritten, or is not accessible to the user due to missing permissions.

For detail on how to register a host with a Satellite server, refer to the Registration section of the Satellite documentation.

1.3. Automation Platform Installation Scenarios

Ansible Automation Platform can be installed using one of the following scenarios:

Single Machine:

  • Standalone Tower with database on the same node as Tower or a non-installer managed database. This is a single-machine install of Tower: the web frontend, REST API backend, and database are all on a single machine. This is the standard installation of Tower. It also installs PostgreSQL from your OS vendor repository and configures the Tower service to use that as its database:

    [tower]
    host
    
  • Standalone Tower with an external managed database. This installs the Tower server on a single machine and configures it to talk to a remote instance of PostgreSQL 10 as its database. This remote PostgreSQL can be a server you manage, or can be provided by a cloud service such as Amazon RDS:

    [tower]
    host
    
    [database]
    host2
    
  • Standalone Automation Hub with a database on the same node as Automation Hub or non-installer managed database:

    [automationhub]
    host
    
  • Standalone Automation Hub with an external managed database. This installs the Automation Hub server on a single machine and installs a remote PostgreSQL database via the playbook installer (managed by the Automation Platform installer):

    [automationhub]
    host
    
    [database]
    host2
    

Platform Installation:

Platform installation involves Tower and Automation Hub. The Platform installer allows you to deploy one and only one Automation Hub per inventory. Because the installer can also be used as a standalone Automation Hub installer, you can run it any number of times with different inventories if you want to deploy multiple Automation Hub nodes. The two options supported for the Platform installation are:

  • Platform (Tower + Automation Hub) with a database on the same node as Tower or non-installer managed database:

    [tower]
    host1
    
    [automationhub]
    host2
    
  • Platform (Tower + Automation Hub) with an external managed database:

    [tower]
    host1
    
    [automationhub]
    host2
    
    [database]
    host3
    

Multi-Machine Cluster:

This scenario involves a Platform (Clustered Tower + Automation Hub) installation with an external managed database. In this mode, multiple Tower nodes are installed and active: any node can receive HTTP requests and all nodes can execute jobs. The installer configures each node to talk to a remote instance of PostgreSQL as its database. This remote PostgreSQL can be a server you manage, or can be provided by a cloud service such as Amazon RDS:

[tower]
host1
host11
host12

[automationhub]
host2

[database]
host3

Note

Running in a cluster setup requires any database that Tower uses to be external: PostgreSQL must be installed on a machine that is not one of the primary or secondary Tower nodes. When in a redundant setup, the remote PostgreSQL version requirement is PostgreSQL 10.

For more information on configuring a clustered setup, refer to Clustering.

Note

1. Tower will not configure replication or failover for the database that it uses, although Tower should work with any replication that you have.

2. The database server should be on the same network or in the same datacenter as the Tower server for performance reasons.

3. Tower and Automation Hub cannot run on the same node; this scenario is not supported. As a result, any deployment of the Platform is at least a two-node deployment topology.

Settings available for an Automation Platform install:

  • automationhub_importer_settings: Dictionary of settings/configuration to pass to galaxy-importer. These settings end up in /etc/galaxy-importer/galaxy-importer.cfg.

  • automationhub_require_content_approval: Whether or not Automation Hub enforces the approval mechanism before collections are made available

  • automationhub_disable_https: Whether or not to disable HTTPS for Automation Hub (by default, Automation Hub is deployed with TLS enabled)

  • automationhub_disable_hsts: Whether or not to disable the HTTP Strict Transport Security (HSTS) web-security policy mechanism for Automation Hub (enabled by default)

  • automationhub_ssl_validate_certs: Whether or not Automation Hub should validate certificates when requesting itself (default = False), because by default the Platform deploys with self-signed certificates

  • automationhub_ssl_cert: Same as web_server_ssl_cert but for Automation Hub UI and API

  • automationhub_ssl_key: Same as web_server_ssl_key but for Automation Hub UI and API

  • automationhub_backup_collections: Automation Hub provides artifacts in /var/lib/pulp. By default, this is set to true, so Tower automatically backs up the artifacts. If a partition (for example, LVM, NFS, or CephFS) is mounted there, an enterprise organization would ensure it is always backed up. In that case, you can set automationhub_backup_collections = false, and the backup/restore process will not need to back up or restore /var/lib/pulp. An illustrative inventory snippet follows this list.
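
As an illustration (a sketch only; the values shown are hypothetical, not required defaults), these variables are set in the installer inventory file, for example under [all:vars]:

    [all:vars]
    automationhub_require_content_approval = True
    automationhub_disable_https = False
    automationhub_backup_collections = False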

For OpenShift-based deployments, refer to OpenShift Deployment and Configuration.