This section covers each component of the upgrading process:
Note
Upgrades can be no more than two major versions behind the version you are upgrading to. For example, in order to upgrade to Ansible Tower 3.5.x, you must first be on version 3.3.x; i.e., there is no direct upgrade path from version 3.2.x. Refer to the recommended upgrade path article on the Red Hat Customer Portal.
In order to run Ansible Tower 3.5 on RHEL 8, you must also have Ansible 2.8 or later installed.
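You can confirm the Ansible version installed on the target host with:
ansible --version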
This section covers changes that you should keep in mind as you upgrade your Ansible Tower instance.
Because Ansible Tower 3.5 runs on Python 3, custom settings files in /etc/tower/conf.d must be valid Python 3 prior to upgrading to Ansible Tower 3.5.
Any custom settings added in /etc/tower/settings.py must either be set in the Configure Tower user interface or moved to a file in /etc/tower/conf.d before upgrading to Ansible Tower 3.5.
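Since these files must be valid Python 3, it can be worth checking them before the upgrade; a minimal check, assuming a python3 interpreter is available on the host and the default /etc/tower/conf.d location, is to byte-compile them:
python3 -m py_compile /etc/tower/conf.d/*.py
Any file that is not valid Python 3 syntax will produce a SyntaxError.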
Ansible Tower 3.0 simplified installation and removed the need to run ./configure/ as part of the initial setup. The file tower_setup_conf.yml is no longer used. Instead, you should now edit the inventory file in the ansible-tower-setup-<tower_version>/ directory.
Earlier versions of Tower used MongoDB when setting up an initial database; please note that Ansible Tower 3.0 replaced the use of MongoDB with PostgreSQL.
Clustered upgrades require special attention to instance and instance groups prior to starting the upgrade. Refer to the Setting up the Inventory File section.
Changes in API authentication were made in Ansible Tower 3.3 to accommodate additional OAuth2 functionality. For more information, refer to Token-Based Authentication in the Ansible Tower Administration Guide.
You may install standalone Tower or use the bundled installer:
if you are setting up Tower in an environment with direct Internet access, you can download the standalone Tower installer
if you are setting up Tower in an environment without direct access to online repositories, or if your environment enforces a proxy, you must use the bundled installer
Download and then extract the Ansible Tower installation/upgrade tool: http://releases.ansible.com/ansible-tower/setup/
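For example, in an environment with direct Internet access, you might fetch the tarball with curl before extracting it (the exact filename may differ if you are using the bundled installer):
root@localhost:~$ curl -O http://releases.ansible.com/ansible-tower/setup/ansible-tower-setup-latest.tar.gz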
root@localhost:~$ tar xvzf ansible-tower-setup-latest.tar.gz
root@localhost:~$ cd ansible-tower-setup-<tower_version>
To install or upgrade, start by editing the inventory file in the ansible-tower-setup-<tower_version> directory, replacing <tower_version> with the version number, such as 3.4.5 or 3.5.0.
As you edit your inventory file, there are a few things you must keep in mind:
The contents of the inventory file should be defined in ./inventory, next to the ./setup.sh installer playbook.
For installations and upgrades: If you need to make use of external databases, you must ensure the database sections of your inventory file are properly set up. Edit this file and add your external database information before running the setup script.
For upgrading an existing cluster: When upgrading a cluster, you may decide that you also want to reconfigure your cluster to omit existing instances or instance groups. Omitting an instance or instance group from the inventory file is not enough to remove it from the cluster; you must also deprovision instances and instance groups before starting the upgrade. Otherwise, omitted instances or instance groups will continue to communicate with the cluster, which can cause issues with Tower services during the upgrade.
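For example, on Tower 3.3 and later, deprovisioning is done with the awx-manage utility on a Tower node; a minimal sketch, in which the angle-bracket values are placeholders for the instance and instance group being removed (verify the exact commands for your release in the Ansible Tower Administration Guide):
awx-manage deprovision_instance --hostname=<hostname of instance to remove>
awx-manage unregister_queue --queuename=<name of instance group to remove>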
For clustered installations: If you are creating a clustered setup, you must replace localhost with the hostname or IP address of all instances. All nodes/instances must be able to reach each other using this hostname or address. In other words, you cannot use localhost ansible_connection=local on one of the nodes, and all of the nodes should use the same format for their hostnames.
Therefore, this will not work:
[tower]
localhost ansible_connection=local
hostA
hostB.example.com
172.27.0.4
Instead, use these formats:
[tower]
hostA
hostB
hostC
OR
[tower]
hostA.example.com
hostB.example.com
hostC.example.com
OR
[tower]
172.27.0.2
172.27.0.3
172.27.0.4
For all standard installations: When performing an installation, you must supply any necessary passwords in the inventory file.
Note
Changes made to the installation process now require that you fill out all of the password fields in the inventory file. If you need to know where to find the values for these fields, they are as follows:
admin_password='' <--- the Tower local admin password
pg_password='' <--- found in /etc/tower/conf.d/postgres.py
rabbitmq_password='' <--- create a new password here (alpha-numeric with no special characters)
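If you do not remember the database password on an existing installation, one way to look it up (assuming the default configuration location) is to inspect the generated settings file as root:
grep -i password /etc/tower/conf.d/postgres.py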
Example Inventory file
For provisioning new nodes: When provisioning new nodes, add the new nodes to the inventory file alongside all current nodes, and make sure all passwords are included in the inventory file.
For upgrading a single node: When upgrading, be sure to compare your inventory file to the current release version. It is recommended that you keep the passwords in the inventory file even when performing an upgrade.
Example Single Node Inventory File
[tower]
localhost ansible_connection=local
[database]
[all:vars]
admin_password='password'
pg_host=''
pg_port=''
pg_database='awx'
pg_username='awx'
pg_password='password'
rabbitmq_port=5672
rabbitmq_vhost=tower
rabbitmq_username=tower
rabbitmq_password='password'
rabbitmq_cookie=rabbitmqcookie
# Needs to be true for fqdns and ip addresses;
# needs to remain false if you are using localhost
rabbitmq_use_long_name=false
Example Multi Node Cluster Inventory File
[tower]
clusternode1.example.com
clusternode2.example.com
clusternode3.example.com
[database]
dbnode.example.com
[all:vars]
ansible_become=true
admin_password='password'
pg_host='dbnode.example.com'
pg_port='5432'
pg_database='tower'
pg_username='tower'
pg_password='password'
rabbitmq_port=5672
rabbitmq_vhost=tower
rabbitmq_username=tower
rabbitmq_password=tower
rabbitmq_cookie=rabbitmqcookie
# Needs to be true for fqdns and ip addresses
rabbitmq_use_long_name=true
Example Inventory file for an external existing database
[tower]
node.example.com ansible_connection=local
[database]
[all:vars]
admin_password='password'
pg_password='password'
rabbitmq_password='password'
pg_host='database.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
Example Inventory file for an external database that needs installation
[tower]
node.example.com ansible_connection=local
[database]
database.example.com
[all:vars]
admin_password='password'
pg_password='password'
rabbitmq_password='password'
pg_host='database.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
Once any necessary changes have been made, you are ready to run ./setup.sh.
Note
Root access to the remote machines is required. With Ansible, this can be achieved in different ways:
ansible_user=root ansible_ssh_pass="your_password_here" inventory host or group variables
ansible_user=root ansible_ssh_private_key_file="path_to_your_keyfile.pem" inventory host or group variables
ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ./setup.sh
ANSIBLE_SUDO=True ./setup.sh (Only applies to Ansible 2.7)
The DEFAULT_SUDO Ansible configuration parameter was removed in Ansible 2.8, which causes the ANSIBLE_SUDO=True ./setup.sh method of privilege escalation to no longer work. For more information on become plugins, refer to Understanding Privilege Escalation and the list of become plugins.
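As an illustration of the first two methods, root credentials can be supplied as host variables directly in the inventory file; a minimal sketch, in which the hostname and key path are placeholders:
[tower]
clusternode1.example.com ansible_user=root ansible_ssh_private_key_file="/path/to/your_keyfile.pem"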
Note
Ansible Tower 3.0 simplifies installation and removes the need to run ./configure/ as part of the installation setup. Users of older versions should follow the instructions available in the v.2.4.5 (or earlier) releases of the Tower Documentation available at: http://docs.ansible.com/
The Tower setup playbook script uses the inventory file and is invoked as ./setup.sh from the path where you unpacked the Tower installer tarball.
root@localhost:~$ ./setup.sh
The setup script takes the following arguments:
-h – Show this help message and exit
-i INVENTORY_FILE – Path to Ansible inventory file (default: inventory)
-e EXTRA_VARS – Set additional Ansible variables as key=value or YAML/JSON (i.e. -e bundle_install=false forces an online installation)
-b – Perform a database backup in lieu of installing
-r – Perform a database restore in lieu of installing (a default restore path is used unless EXTRA_VARS are provided with a non-default path, as shown in the code example below)
./setup.sh -e 'restore_backup_file=/path/to/nondefault/location' -r
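The other flags are combined in the same way; for instance, a sketch of a backup run and of an installation driven by an alternate inventory file (the paths here are placeholders):
./setup.sh -b
./setup.sh -i /path/to/alternate_inventory -e 'bundle_install=false'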
Note
Please note that an issue was discovered in Tower 3.0.0 and 3.0.1 that prevented proper system backups and restores.
If you need to back up or restore your Tower v3.0.0 or v3.0.1 installation, use the v3.0.2 installer to do so.