Ansible Automation Platform can be installed in several ways; choose the mode that best fits your environment and make any necessary modifications to the inventory file. For OpenShift-based deployments, only Tower can be deployed on OpenShift; deployment of Automation Hub on OpenShift is not supported. However, you can deploy Automation Hub in a virtual environment that points to OpenShift. For more detail, refer to OpenShift Deployment and Configuration.
Note
A database server is still installed even on nodes that do not use a local database. This is a known issue that will be addressed in a future release.
Ansible Automation Platform can be installed using one of the following scenarios:
Single Machine:
Standalone Tower with database on the same node as Tower or non-installer managed database. This is a single-machine install of Tower: the web frontend, REST API backend, and database all run on one machine. This is the standard installation of Tower. The installer also installs PostgreSQL from your OS vendor repository and configures the Tower service to use it as its database.
[tower]
host
Standalone Tower with an external managed database. This installs the Tower server on a single machine and configures it to talk to a remote instance of PostgreSQL 10 as its database. This remote PostgreSQL can be a server you manage, or can be provided by a cloud service such as Amazon RDS:
[tower]
host
[database]
host2
Standalone Automation Hub with a database on the same node as Automation Hub or non-installer managed database:
[automationhub]
host
Standalone Automation Hub with an external managed database. This installs the Automation Hub server on a single machine and installs a remote PostgreSQL database via the playbook installer (managed by Automation Platform Installer).
[automationhub]
host
[database]
host2
Platform Installation:
Platform installation involves Tower and Automation Hub. The Platform installer allows you to deploy one, and only one, Automation Hub per inventory. Because the installer can also be used as a standalone Automation Hub installer, you can run it any number of times with different inventories if you want to deploy multiple Automation Hub nodes. The two options supported for Platform installation are:
Platform (Tower + Automation Hub) with a database on the same node as Tower or non-installer managed database:
[tower]
host1
[automationhub]
host2
Platform (Tower + Automation Hub) with an external managed database:
[tower]
host1
[automationhub]
host2
[database]
host3
Multi-Machine Cluster:
This scenario is a Platform (clustered Tower + Automation Hub) installation with an external managed database. In this mode, multiple Tower nodes are installed and active; any node can receive HTTP requests and all nodes can execute jobs. The nodes are configured to talk to a remote instance of PostgreSQL as their database. This remote PostgreSQL can be a server you manage, or one provided by a cloud service such as Amazon RDS:
[tower]
host1
host11
host12
[automationhub]
host2
[database]
host3
Note
Running in a cluster setup requires any database that Tower uses to be external: PostgreSQL must be installed on a machine that is not one of the primary or secondary Tower nodes. In a redundant setup, the remote PostgreSQL version requirement is PostgreSQL 10.
For more information on configuring a clustered setup, refer to Clustering.
Note
1. Tower will not configure replication or failover for the database that it uses, although Tower should work with any replication that you have.
2. The database server should be on the same network or in the same datacenter as the Tower server for performance reasons.
3. Tower and Automation Hub cannot run on the same node; this scenario is not supported, which means that any deployment of the Platform is at least a two-node deployment topology.
The following settings are available for an Automation Platform install (a combined example appears after this list):
automationhub_importer_settings
: Dictionary of settings/configuration to pass to galaxy-importer. It will end up in /etc/galaxy-importer/galaxy-importer.cfg
automationhub_require_content_approval
: Whether or not Automation Hub enforces the approval mechanism before collections are made available
automationhub_disable_https
: Whether or not Automation Hub should be deployed with TLS enabled
automationhub_disable_hsts
: Whether or not Automation Hub should be deployed with the HTTP Strict Transport Security (HSTS) web-security policy mechanism enabled
automationhub_ssl_validate_certs
: Whether or not Automation Hub should validate certificates when requesting itself (default = False), because by default the Platform deploys with self-signed certificates
automationhub_ssl_cert
: Same as web_server_ssl_cert
but for Automation Hub UI and API
automationhub_ssl_key
: Same as web_server_ssl_key
but for Automation Hub UI and API
automationhub_backup_collections
: Automation Hub stores its artifacts in /var/lib/pulp. By default, this is set to true, so Tower automatically backs up those artifacts. If a dedicated partition (e.g., LVM, NFS, CephFS) is mounted there and your organization already ensures it is backed up, you can set automationhub_backup_collections = false so that the backup/restore process does not have to back up or restore /var/lib/pulp.
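As a minimal sketch, several of the variables above might be set together in the [all:vars] section of the inventory; the values below are illustrative, not recommendations for your environment:
[all:vars]
# Enforce the approval workflow before collections are published
automationhub_require_content_approval=True
# Keep HTTPS and HSTS enabled for the Automation Hub deployment
automationhub_disable_https=False
automationhub_disable_hsts=False
# Self-signed certificates are generated by default, so do not validate them
automationhub_ssl_validate_certs=False
# Include the /var/lib/pulp artifacts in the Tower backup/restore process
automationhub_backup_collections=True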
For OpenShift-based deployments, refer to OpenShift Deployment and Configuration.
As you edit your inventory file, there are a few things you must keep in mind:
The contents of the inventory file should be defined in ./inventory
, next to the ./setup.sh
installer playbook.
For installations and upgrades: If you need to make use of external databases, you must ensure the database sections of your inventory file are properly set up. Edit this file and add your external database information before running the setup script.
For Ansible Automation Platform or Automation Hub: Be sure to add an automation hub host in the [automationhub] group (Tower and Automation Hub cannot be installed on the same node).
Tower will not configure replication or failover for the database that it uses, although Tower should work with any replication that you have.
The database server should be on the same network or in the same data center as the Tower server for performance reasons.
For upgrading an existing cluster: When upgrading a cluster, you may decide to also reconfigure your cluster to omit existing instances or instance groups. Omitting the instance or the instance group from the inventory file is not enough to remove them from the cluster: you must also deprovision instances or instance groups before starting the upgrade. Otherwise, omitted instances or instance groups will continue to communicate with the cluster, which can cause issues with Tower services during the upgrade.
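For example, existing instances and instance groups can be deprovisioned with awx-manage before the upgrade; refer to Clustering for the exact procedure in your release, and note that the hostname and queue name below are placeholders:
awx-manage deprovision_instance --hostname=<hostname>
awx-manage unregister_queue --queuename=<queue name>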
For clustered installations: If you are creating a clustered setup, you must replace localhost with the hostname or IP address of all instances. All nodes/instances must be able to reach each other using this hostname or address. In other words, you cannot use localhost ansible_connection=local on any of the nodes, and all of the nodes should use the same format for their host names.
Therefore, this will not work:
[tower]
localhost ansible_connection=local
hostA
hostB.example.com
172.27.0.4
Instead, use these formats:
[tower]
hostA
hostB
hostC
OR
[tower]
hostA.example.com
hostB.example.com
hostC.example.com
OR
[tower]
172.27.0.2
172.27.0.3
172.27.0.4
For all standard installations: When performing an installation, you must supply any necessary passwords in the inventory file.
Note
Changes made to the installation process now require that you fill out all of the password fields in the inventory file. If you need to know where to find the values for these, they are as follows:
admin_password='' <-- Tower local admin password
pg_password='' <-- Found in /etc/tower/conf.d/postgres.py
Warning
Do not use special characters in pg_password
as it may cause the setup to fail.
For provisioning new nodes: When provisioning new nodes, add the new nodes to the inventory file alongside all current nodes, and make sure all passwords are included in the inventory file.
For upgrading a single node: When upgrading, be sure to compare your inventory file against the inventory file for the current release version. It is recommended that you keep the passwords in the inventory file even when performing an upgrade.
The following is an example standalone Automation Hub inventory file:
[automationhub]
automationhub.acme.org
[all:vars]
automationhub_admin_password='<password>'
automationhub_pg_host=''
automationhub_pg_port=''
automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password='<password>'
automationhub_pg_sslmode='prefer'
# The default install will deploy a TLS enabled Automation Hub.
# If for some reason this is not the behavior wanted one can
# disable TLS enabled deployment.
#
# automationhub_disable_https = False
# The default install will generate self-signed certificates for the Automation
# Hub service. If you are providing valid certificate via automationhub_ssl_cert
# and automationhub_ssl_key, one should toggle that value to True.
#
# automationhub_ssl_validate_certs = False
# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key
The following is an example Platform inventory file (Tower plus Automation Hub with an external managed database):
[tower]
tower.acme.org
[automationhub]
automationhub.acme.org
[database]
database-01.acme.org
[all:vars]
admin_password='<password>'
pg_host='database-01.acme.org'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL
# Automation Hub Configuration
#
automationhub_admin_password='<password>'
automationhub_pg_host='database-01.acme.org'
automationhub_pg_port='5432'
automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password='<password>'
automationhub_pg_sslmode='prefer'
# The default install will deploy a TLS enabled Automation Hub.
# If for some reason this is not the behavior wanted one can
# disable TLS enabled deployment.
#
# automationhub_disable_https = False
# The default install will generate self-signed certificates for the Automation
# Hub service. If you are providing valid certificate via automationhub_ssl_cert
# and automationhub_ssl_key, one should toggle that value to True.
#
# automationhub_ssl_validate_certs = False
# Isolated Tower nodes automatically generate an RSA key for authentication;
# To disable this behavior, set this value to false
# isolated_key_generation=true
# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key
# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key
# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key
The following is an example single-node Tower inventory file with the database installed on the same node:
[tower]
localhost ansible_connection=local
[database]
[all:vars]
admin_password='password'
pg_host=''
pg_port=''
pg_database='awx'
pg_username='awx'
pg_password='password'
Warning
Do not use special characters in pg_password
as it may cause the setup to fail.
The following is an example multi-node Tower cluster inventory file with an external installer-managed database:
[tower]
clusternode1.example.com
clusternode2.example.com
clusternode3.example.com
[database]
dbnode.example.com
[all:vars]
ansible_become=true
admin_password='password'
pg_host='dbnode.example.com'
pg_port='5432'
pg_database='tower'
pg_username='tower'
pg_password='password'
Warning
Do not use special characters in pg_password
as it may cause the setup to fail.
The following is an example inventory file for an external existing database that the installer does not manage (note the empty [database] group):
[tower]
node.example.com ansible_connection=local
[database]
[all:vars]
admin_password='password'
pg_password='password'
pg_host='database.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
Warning
Do not use special characters in pg_password
as it may cause the setup to fail.
The following is an example inventory file for an external database that the installer should install and configure:
[tower]
node.example.com ansible_connection=local
[database]
database.example.com
[all:vars]
admin_password='password'
pg_password='password'
pg_host='database.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
Warning
Do not use special characters in pg_password
as it may cause the setup to fail.
Once any necessary changes have been made, you are ready to run ./setup.sh
.
Note
Root access to the remote machines is required. With Ansible, this can be achieved in different ways:
ansible_user=root ansible_ssh_pass="your_password_here" inventory host or group variables
ansible_user=root ansible_ssh_private_key_file="path_to_your_keyfile.pem" inventory host or group variables (see the example after this note)
ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ./setup.sh
ANSIBLE_SUDO=True ./setup.sh (Only applies to Ansible 2.7)
The DEFAULT_SUDO
Ansible configuration parameter was removed in Ansible 2.8, which causes the ANSIBLE_SUDO=True ./setup.sh
method of privilege escalation to no longer work. For more information on become
plugins, refer to Understanding Privilege Escalation and the list of become plugins.
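As a minimal sketch of the inventory host/group variable approach listed above (the host name and key path are placeholders):
[tower]
tower.example.com
[tower:vars]
ansible_user=root
ansible_ssh_private_key_file=/path/to/your_keyfile.pem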
The Tower setup playbook script uses the inventory
file and is invoked as ./setup.sh
from the path where you unpacked the Tower installer tarball.
root@localhost:~$ ./setup.sh
The setup script takes the following arguments:
-h
– Show this help message and exit
-i INVENTORY_FILE
– Path to Ansible inventory file (default: inventory)
-e EXTRA_VARS
– Set additional Ansible variables as key=value or YAML/JSON (e.g., -e bundle_install=false forces an online installation)
-b
– Perform a database backup in lieu of installing
-r
– Perform a database restore in lieu of installing (a default restore path is used unless EXTRA_VARS are provided with a non-default path, as shown in the code example below)
./setup.sh -e 'restore_backup_file=/path/to/nondefault/location' -r
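For example, a database backup, or an installation run against a non-default inventory file with an extra variable, might be invoked as follows (the inventory path is a placeholder):
./setup.sh -b
./setup.sh -i /path/to/other_inventory -e 'bundle_install=false'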
After calling ./setup.sh
with the appropriate parameters, Tower is installed on the machines defined in the inventory file. Setup installs Tower from RPM packages using repositories hosted on ansible.com.
Once setup is complete, use your web browser to access the Tower server and view the Tower login screen. Your Tower server is accessible from port 80 (http://<TOWER_SERVER_NAME>/) but will redirect to port 443, so port 443 needs to be available also.
If the installation of Tower fails and you are a customer who has purchased a valid license for Ansible Tower, please contact Ansible via the Red Hat Customer portal at https://access.redhat.com/.
Once installed, if you log into the Tower instance via SSH, the default admin password is provided in the prompt. You can then change it with the following command (as root or as the AWX user):
awx-manage changepassword admin
After that, the password you have entered will work as the admin password in the web UI.