You must upgrade your Ansible Tower 2.4.4 (or later) system to Ansible Tower 3.0 before you can upgrade beyond Ansible Tower 3.0.
This section covers changes that you should keep in mind as you upgrade your Ansible Tower instance.
Prior to Ansible Tower 3.2.3, variables from inventory source updates were stored in the hostvars of member hosts. After upgrading to Ansible Tower 3.2.3, the variables are returned inside of their respective groups or inside of the inventory variables. Since hostvars take precedence, variables that come from pre-migration updates may take precedence over their newer values, resulting in a variable precedence conflict. Refer to Remediation Steps for Upgrade Variable Precedence Conflict for a workaround.
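To illustrate the conflict, suppose a group-level value was refreshed by a post-upgrade sync while a stale copy remains in a host's hostvars. The group, host, and variable names below are hypothetical:

```yaml
all:
  children:
    webservers:
      vars:
        app_env: production    # refreshed value, now returned at the group level
      hosts:
        web1.example.com:
          app_env: staging     # stale pre-migration value stored in hostvars
# Because hostvars take precedence, web1.example.com still resolves app_env
# to the stale value "staging" until the remediation steps are applied.
```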
Ansible Tower no longer requires you to run ./configure/ as part of the initial setup, and the tower_setup_conf.yml file is no longer used. Instead, you should now edit the inventory file in the ansible-tower-setup-<tower_version> directory.
If you encounter a variable precedence conflict after upgrading to Ansible Tower 3.2.3, you can resolve it by temporarily setting the inventory source's overwrite_vars field to True and running an update. Apply this setting to all inventory sources that show the issue. Inventories sourced from a project already require overwrite_vars to be set to True, so they are not affected.
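If many inventory sources are affected, the workaround can be applied through the Tower REST API: PATCH the source's overwrite_vars field, then POST to its update endpoint. The sketch below only prints the two curl calls rather than executing them; the Tower host, the admin user, and the source ID 42 are placeholders, not values from this document.

```shell
# Print (not execute) the two REST calls for the overwrite_vars workaround.
# TOWER_HOST, the admin user, and SOURCE_ID are placeholders -- substitute your own.
TOWER_HOST="https://tower.example.com"
SOURCE_ID=42
CMDS=$(cat <<EOF
curl -u admin -X PATCH ${TOWER_HOST}/api/v2/inventory_sources/${SOURCE_ID}/ \\
     -H 'Content-Type: application/json' -d '{"overwrite_vars": true}'
curl -u admin -X POST ${TOWER_HOST}/api/v2/inventory_sources/${SOURCE_ID}/update/
EOF
)
echo "$CMDS"
```

After the update completes, overwrite_vars can be set back to its previous value the same way.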
Download and then extract the Ansible Tower installation/upgrade tool: http://releases.ansible.com/ansible-tower/setup/
To install or upgrade, start by editing the inventory file in the ansible-tower-setup-<tower_version> directory, replacing <tower_version> with the version number, such as 3.2.3.
As you edit your inventory file, there are a few things you must keep in mind:
The contents of the inventory file should be defined in ./inventory, next to the ./setup.sh installer playbook.
For installations and upgrades: If you need to make use of external databases, you must ensure the database sections of your inventory file are properly set up. Edit this file and add your external database information before running the setup script.
For clustered installations: If you are creating a clustered setup, you must replace localhost with the hostname or IP address of all instances. All nodes/instances must be able to reach each other using this hostname or address. In other words, you cannot use localhost ansible_connection=local on one of the nodes, and all of the nodes must use the same format for their host names.
Therefore, this will not work:
[tower]
localhost ansible_connection=local
hostA
hostB.example.com
172.27.0.4
Instead, use these formats:
[tower]
hostA
hostB
hostC

[tower]
hostA.example.com
hostB.example.com
hostC.example.com

[tower]
172.27.0.2
172.27.0.3
172.27.0.4
For all standard installations: When performing an installation, you must supply any necessary passwords in the inventory file.
Changes made to the installation process now require that you fill out all of the password fields in the inventory file. The values for these fields can be found as follows:
admin_password='' <-- Tower local admin password
pg_password='' <-- Found in /etc/tower/conf.d/postgres.py
rabbitmq_password='' <-- Create a new password here (alphanumeric, with no special characters)
Example Inventory file
Example Single Node Inventory File
[tower]
localhost ansible_connection=local

[database]

[all:vars]
admin_password='password'

pg_host=''
pg_port=''
pg_database='awx'
pg_username='awx'
pg_password='password'

rabbitmq_port=5672
rabbitmq_vhost=tower
rabbitmq_username=tower
rabbitmq_password='password'
rabbitmq_cookie=rabbitmqcookie

# Needs to be true for fqdns and ip addresses;
# needs to remain false if you are using localhost
rabbitmq_use_long_name=false
Example Multi Node Cluster Inventory File
[tower]
clusternode1.example.com
clusternode2.example.com
clusternode3.example.com

[database]
dbnode.example.com

[all:vars]
ansible_become=true
admin_password='password'

pg_host='dbnode.example.com'
pg_port='5432'
pg_database='tower'
pg_username='tower'
pg_password='password'

rabbitmq_port=5672
rabbitmq_vhost=tower
rabbitmq_username=tower
rabbitmq_password=tower
rabbitmq_cookie=rabbitmqcookie

# Needs to be true for fqdns and ip addresses
rabbitmq_use_long_name=true
Example Inventory file for an external existing database
[tower]
node.example.com ansible_connection=local

[database]

[all:vars]
admin_password='password'
rabbitmq_password='password'

pg_host='database.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='password'
Example Inventory file for external database which needs installation
[tower]
node.example.com ansible_connection=local

[database]
database.example.com

[all:vars]
admin_password='password'
rabbitmq_password='password'

pg_host='database.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='password'
Once any necessary changes have been made, you are ready to run ./setup.sh.
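As a quick pre-flight check before invoking the installer, you can verify that the three required password fields are non-empty. This helper is not part of the Tower installer; it is a sketch that writes an illustrative sample inventory to a temporary file and greps it — point it at your real inventory file instead.

```shell
# Hedged pre-flight check (not part of the Tower installer): verify that the
# required password fields in an inventory file are non-empty.
# The sample inventory written below is illustrative only.
INVENTORY=$(mktemp)
cat > "$INVENTORY" <<'EOF'
[all:vars]
admin_password='password'
pg_password='password'
rabbitmq_password='password'
EOF
RESULT="all required passwords are set"
for key in admin_password pg_password rabbitmq_password; do
  # require at least one character between the quotes
  grep -Eq "^${key}='..*'" "$INVENTORY" || RESULT="missing value for ${key}"
done
echo "$RESULT"
rm -f "$INVENTORY"
```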
Root access to the remote machines is required. With Ansible, this can be achieved in different ways:
- ansible_user=root ansible_ssh_password="your_password_here" inventory host or group variables
- ansible_user=root ansible_ssh_private_key_file="path_to_your_keyfile.pem" inventory host or group variables
- ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ./setup.sh
- ANSIBLE_SUDO=True ./setup.sh
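As an illustration, the first two methods are ordinary inventory host variables. A line such as the following (the hostname and key path are placeholders) grants the installer root access over SSH:

```ini
[tower]
hostA.example.com ansible_user=root ansible_ssh_private_key_file="path_to_your_keyfile.pem"
```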
Ansible Tower 3.0 simplifies installation and removes the need to run ./configure/ as part of the installation setup. Users of older versions should follow the instructions in the v2.4.5 (or earlier) releases of the Tower documentation, available at:
The Tower setup playbook script uses the inventory file and is invoked as ./setup.sh from the path where you unpacked the Tower installer tarball.
root@localhost:~$ ./setup.sh
The setup script takes the following arguments:
-h – Show this help message and exit
-i INVENTORY_FILE – Path to Ansible inventory file (default: inventory)
-e EXTRA_VARS – Set additional Ansible variables as key=value or YAML/JSON (e.g., -e bundle_install=false forces an online installation)
-b – Perform a database backup in lieu of installing
-r – Perform a database restore in lieu of installing (a default restore path is used unless EXTRA_VARS are provided with a non-default path, as shown in the code example below)
./setup.sh -e 'restore_backup_file=/path/to/nondefault/location' -r
Please note that an issue was discovered in Tower 3.0.0 and 3.0.1 that prevented proper system backups and restorations.
If you need to back up or restore your Tower v3.0.0 or v3.0.1 installation, use the v3.0.2 installer to do so.