This guide helps you get your Ansible Tower installation up and running as quickly as possible.
At the end of the installation, using your web browser, you can access and fully utilize Tower.
While this guide covers the basics, you may find that you need the more detailed information available in the Installation and Reference Guide.
You should also review the General Installation Notes before starting the installation.
For platform information, refer to Installation Notes in the Ansible Tower Installation and Reference Guide.
Note
Tower is a full application and the installation process installs several dependencies such as PostgreSQL, Django, NGINX, and others. It is required that you install Tower on a standalone VM or cloud instance and do not co-locate any other applications on that machine (beyond possible monitoring or logging software). Although Tower and Ansible are written in Python, they are not just simple Python libraries. Therefore, Tower cannot be installed in a Python virtualenv or any similar subsystem; you must install it as described in the installation instructions in this guide. For OpenShift-based deployments, refer to OpenShift Deployment and Configuration.
Ansible Tower has the following requirements:
Supported Operating Systems:
Red Hat Enterprise Linux 7.4 or later 64-bit
CentOS 7.4 or later 64-bit
Ubuntu 14.04 LTS 64-bit
Ubuntu 16.04 LTS 64-bit
Note
Support for Ubuntu 14.04 as a Tower platform is deprecated and will be removed in a future release.
A currently supported version of Mozilla Firefox or Google Chrome
Other HTML5 compliant web browsers may work but are not fully tested or supported.
2 CPUs minimum for Tower installations. Refer to the capacity algorithm section of the Ansible Tower User Guide for determining the CPU capacity required for the number of forks in your particular configuration.
4 GB RAM minimum for Tower installations
4 GB RAM (minimum and recommended for Vagrant trial installations)
4 GB RAM (minimum for external standalone Postgres databases)
For specific RAM needs, refer to the capacity algorithm section of the Ansible Tower User Guide for determining capacity required based on the number of forks in your particular configuration
20 GB of dedicated hard disk space for Tower service nodes
10 GB of the 20 GB requirement must be dedicated to /var/, where Tower stores its files and working directories.
The storage volume should be rated for a minimum baseline of 750 IOPS.
20 GB of dedicated hard disk space for nodes containing a database (150 GB+ recommended)
The storage volume should be rated for a high baseline IOPS (1000 or more).
All Tower data is stored in the database. Database storage increases with the number of hosts managed, the number of jobs run, the number of facts stored in the fact cache, and the number of tasks in any individual job. For example, a playbook run every hour (24 times a day) across 250 hosts, with 20 tasks per run, will store over 800,000 events in the database every week.
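As a rough illustration of that example (an estimate only; actual event counts vary with playbook output and verbosity):
24 runs/day × 250 hosts × 20 tasks per run × 7 days = 840,000 events per week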
If not enough space is reserved for the database, old job runs and facts will need to be cleaned up on a regular basis. Refer to Management Jobs in the Ansible Tower Administration Guide for more information.
64-bit support required (kernel and runtime)
PostgreSQL version 9.6.X required to run Ansible Tower 3.2 and later
Ansible version 2.2 (at minimum) required to run Ansible Tower versions 3.2 and later
Note
Ansible Tower 3.2 and later cannot run with versions of PostgreSQL or Ansible older than those stated above. Both are installed by the install script if they are not already present.
For Amazon EC2:
Instance size of m4.large or larger
An instance size of m4.xlarge or larger if there are more than 100 hosts
While other operating systems may technically function, currently only the above list is supported to host an Ansible Tower installation. If you have a firm requirement to run Tower on an unsupported operating system, please contact Ansible via the Red Hat Customer portal at https://access.redhat.com/. Management of other operating systems (nodes) is documented by the Ansible project itself and allows for a wider list.
Actual RAM requirements vary based on how many hosts Tower will manage simultaneously (which is controlled by the forks parameter in the job template or the system ansible.cfg file). To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 forks plus a 2 GB reservation for Tower; see the capacity algorithm for further details. If forks is set to 400, 42 GB of memory is recommended.
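The arithmetic behind that guideline works out as follows (an estimate only; consult the capacity algorithm for authoritative sizing):
100 forks: (100 ÷ 10) × 1 GB + 2 GB = 12 GB
400 forks: (400 ÷ 10) × 1 GB + 2 GB = 42 GB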
For the hosts on which you install Ansible Tower, Tower checks whether umask is set to 0022. If not, the setup fails. Be sure to set umask=0022 to avoid encountering this error.
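One way to check the current value and set it for the installing shell before running setup (a minimal sketch using standard shell built-ins; persistent configuration is usually handled in /etc/profile or a file under /etc/profile.d/):
root@localhost:~$ umask
root@localhost:~$ umask 0022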
A larger number of hosts can of course be addressed, though if the fork number is less than the total host count, more passes across the hosts are required. These RAM limitations are avoided when using rolling updates or when using the provisioning callback system built into Tower, where each system requesting configuration enters a queue and is processed as quickly as possible; or in cases where Tower is producing or deploying images such as AMIs. All of these are great approaches to managing larger environments. For further questions, please contact Ansible via the Red Hat Customer portal at https://access.redhat.com/.
The requirements for systems managed by Tower are the same as for Ansible at: http://docs.ansible.com/intro_getting_started.html
Optionally, you can configure the PostgreSQL database as a separate node that is not managed by the Tower installer. When the Tower installer manages the database server, it configures the server with defaults that are generally recommended for most workloads. However, you can adjust these PostgreSQL settings for a standalone database server node, where ansible_memtotal_mb is the total memory size of the database server in MB:
max_connections == 1000
shared_buffers == ansible_memtotal_mb*0.3
work_mem == ansible_memtotal_mb*0.03
maintenance_work_mem == ansible_memtotal_mb*0.04
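As an illustration, on a hypothetical standalone database server with 16384 MB of memory (ansible_memtotal_mb = 16384), those guidelines translate roughly into the following postgresql.conf values (a sketch only; round and tune for your workload):
max_connections = 1000
shared_buffers = 4915MB          # 16384 * 0.3
work_mem = 491MB                 # 16384 * 0.03
maintenance_work_mem = 655MB     # 16384 * 0.04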
Refer to the PostgreSQL documentation for more detail on tuning your PostgreSQL server.
While Ansible Tower depends on Ansible Playbooks and requires that the latest stable version of Ansible be present before Tower is installed, you no longer need to install Ansible manually.
Beginning with Ansible Tower version 2.3, the Tower installation program attempts to install Ansible as part of the installation process. Previously, Tower required manual installations of the Ansible software release package before running the Tower installation program. Now, Tower attempts to install the latest stable Ansible release package.
If performing a bundled Tower installation, the installation program attempts to install Ansible (and its dependencies) from the bundle for you (refer to Using the Bundled Tower Installation Program for more information).
If you choose to install Ansible on your own, the Tower installation program will detect that Ansible has been installed and will not attempt to reinstall it. Note that you must install Ansible using a package manager like yum
and that the latest stable version must be installed for Ansible Tower to work properly. At minimum, Ansible version 2.2 is required for Ansible Tower versions 3.2 and later.
For convenience, summaries of those instructions are in the following sections.
PackageKit can frequently interfere with the installation/update mechanism. Consider disabling or removing PackageKit if installed prior to running the setup process.
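If PackageKit is installed and you prefer to remove it before running setup (a hedged example for Red Hat Enterprise Linux and CentOS; package names can vary by distribution):
root@localhost:~$ yum list installed PackageKit
root@localhost:~$ yum remove PackageKit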
Only the “targeted” SELinux policy is supported. The targeted policy can be set to disabled, permissive, or enforcing.
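One way to confirm the active SELinux policy type and mode before installing (assuming the standard SELinux utilities are present on the host):
root@localhost:~$ sestatus
root@localhost:~$ getenforce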
When performing a bundled install (refer to Using the Bundled Tower Installation Program for more information), Red Hat Enterprise Linux customers must enable the following repositories which are disabled by default:
Red Hat Enterprise Linux 7 users must enable the extras repositories.
The following steps help you configure access to the repository as well as install Ansible on older versions of Tower.
Configure the EPEL repository and any others needed.
As the root user, for Red Hat Enterprise Linux 7 and CentOS 7:
root@localhost:~$ yum install http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Note
You may also need to enable the extras repository specific for your environment (see the example command below):
extras on CentOS 7
rhel-7-server-extras-rpms on Red Hat Enterprise Linux 7
rhui-REGION-rhel-server-extras when running in EC2
When using the official Red Hat Enterprise Linux 7 marketplace AMI, ensure that the latest rh-amazon-rhui-client package, which allows enabling the optional repository (named rhui-REGION-rhel-server-optional in EC2), is installed.
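For example, on a registered Red Hat Enterprise Linux 7 system you can enable the extras repository with subscription-manager and then confirm which repositories are active (a sketch; repository names differ on CentOS and EC2 as noted above):
root@localhost:~$ subscription-manager repos --enable rhel-7-server-extras-rpms
root@localhost:~$ yum repolist enabled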
Note
Tower is installed using Ansible playbooks; therefore, Ansible is required to complete the installation of Tower.
Beginning with Ansible Tower version 2.3.0, Ansible is installed automatically during the setup process.
If you are using an older version of Tower, prior to version 2.3.0, Ansible can be installed as detailed in the Ansible documentation at: http://docs.ansible.com/intro_installation.html
For convenience, those installation instructions are summarized below.
root@localhost:~$ yum install ansible
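If you installed Ansible yourself, one way to confirm that a suitable version (2.2 or later) is present before running the Tower setup program:
root@localhost:~$ ansible --version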
The following steps help you configure access to the repository as well as install Ansible on older versions of Tower.
As the root user, configure the Ansible PPA:
root@localhost:~$ apt-get install software-properties-common
root@localhost:~$ apt-add-repository ppa:ansible/ansible
Note
Tower is installed using Ansible playbooks; therefore, Ansible is required to complete the installation of Tower.
Beginning with Ansible Tower version 2.3.0, Ansible is installed automatically during the setup process.
If you are using an older version of Tower, prior to version 2.3.0, Ansible can be installed as detailed in the Ansible documentation at: http://docs.ansible.com/intro_installation.html
For convenience, those installation instructions are summarized below.
root@localhost:~$ apt-get update
root@localhost:~$ apt-get install ansible
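As on Red Hat-based systems, you can confirm the installed Ansible version before running the Tower setup program:
root@localhost:~$ ansible --version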
For OpenShift-based deployments, refer to OpenShift Deployment and Configuration.
Tower can be installed using one of the following scenarios:
Single Machine:
This is a single machine install of Tower - the web frontend, REST API backend, and database are all on a single machine. This is the standard installation of Tower. It also installs PostgreSQL from your OS vendor repository, and configures the Tower service to use that as its database.
Tower with remote DB configuration: This installs the Tower server on a single machine and configures it to talk to a remote instance of PostgreSQL 9.6 as its database. This remote PostgreSQL can be a server you manage, or can be provided by a cloud service such as Amazon RDS.
Tower with a playbook install of a remote Postgres system: This installs the Tower server on a single machine and installs a remote Postgres database via the playbook installer (managed by Tower).
Note
1. Tower will not configure replication or failover for the database that it uses, although Tower should work with any replication that you have.
2. The database server should be on the same network or in the same datacenter as the Tower server for performance reasons.
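For either remote database scenario above, the Tower setup inventory file refers to the external PostgreSQL server through the pg_* variables; when the installer should also set up that database host, the host is additionally listed in the [database] group. A minimal sketch with hypothetical host names and credentials (adjust for your environment and refer to the Installation and Reference Guide for the authoritative inventory layout):
[database]
database.example.com

[all:vars]
pg_host='database.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<set-a-password>'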
High Availability Multi-Machine Cluster:
Tower can be installed in a high availability cluster mode. In this mode, multiple Tower nodes are installed and active. Any node can receive HTTP requests and all nodes can execute jobs.
Tower with remote DB configuration: This installs the Tower server on a single machine and configures it to talk to a remote instance of PostgreSQL as its database. This remote PostgreSQL can be a server you manage, or can be provided by a cloud service such as Amazon RDS.
Tower with a playbook install of a remote Postgres system: This installs the Tower server on a single machine and installs a remote Postgres database via the playbook installer (managed by Tower).
For more information on configuring a clustered setup, refer to Clustering.
Note
Running in a cluster setup requires any database that Tower uses to be external: PostgreSQL must be installed on a machine that is not one of the primary or secondary Tower nodes. When in a redundant setup, the remote PostgreSQL version requirement is PostgreSQL 9.6.
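A minimal sketch of a clustered inventory layout, assuming three hypothetical Tower nodes and the external database host from the earlier example (the pg_* variables shown above apply here as well):
[tower]
tower1.example.com
tower2.example.com
tower3.example.com

[database]
database.example.com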