Certain Ansible features do not work in Ansible Tower due to the mechanism by which Tower runs playbooks. For more information, see Best Practices in the Ansible Tower User Guide.
To access Tower nodes behind your load balancer (in traditional cluster Tower installs) via HTTP, refer to the procedure described in the Troubleshooting section of the Ansible Tower Administration Guide.
Tower 3.3 introduces the ability to configure multiple LDAP directories for authentication, up to six in total. On the LDAP settings page, there is a “Default” LDAP configuration followed by five numbered configuration slots. If the “Default” configuration is not populated, Tower will not attempt to authenticate using the other directory configurations.
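The numbered slots correspond to numbered settings names alongside the default ones. As a rough sketch (hostnames, credentials, and values below are placeholders, and your environment will differ), the “Default” directory and the first numbered slot could be configured through the settings API:

```shell
# Illustrative only: set the "Default" LDAP server plus the first numbered
# slot via the Tower settings API. All values here are placeholders.
curl -k -u admin:password -X PATCH \
  -H 'Content-Type: application/json' \
  -d '{"AUTH_LDAP_SERVER_URI": "ldap://ldap-primary.example.com",
       "AUTH_LDAP_1_SERVER_URI": "ldap://ldap-secondary.example.com"}' \
  https://tower.example.com/api/v2/settings/ldap/
```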
If you start an isolated job, then kill the instance managing it, Tower will be unable to determine the status of the job, and manual intervention will be necessary. Contact Ansible via the Red Hat Customer portal at https://access.redhat.com/ for instructions specific to your scenario.
Placing Tower nodes behind some sort of proxy may pose a security issue. This approach assumes that traffic always flows exclusively through your load balancer, and that any traffic that circumvents the load balancer is susceptible to X-Forwarded-For header spoofing. See Proxy Support.
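One hedged mitigation sketch, in line with the Proxy Support documentation, is to control which headers Tower consults for the client address via the REMOTE_HOST_HEADERS setting (the hostname, credentials, and header list below are placeholders to adapt to your load balancer):

```shell
# Illustrative only: tell Tower which headers to trust for the remote address.
curl -k -u admin:password -X PATCH \
  -H 'Content-Type: application/json' \
  -d '{"REMOTE_HOST_HEADERS": ["HTTP_X_FORWARDED_FOR", "REMOTE_ADDR", "REMOTE_HOST"]}' \
  https://tower.example.com/api/v2/settings/system/
```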
When Tower is accessed via hostname only (e.g. https://my-little-tower), trying to read the SAML metadata from /sso/metadata/saml/ generates an sp_acs_url_invalid server error. A configuration that uses SAML while accessing Tower via hostname only, instead of an FQDN, is not supported. Doing so generates an error that is captured in the tower.log file and in the browser with full traceback information.
At this time, Tower does not support running when the operating system is configured to operate in FIPS mode.
A live events status dot appears in red or orange at the top of the Tower Dashboard when something goes wrong; no dot is shown when the system is in a healthy state. If you encounter a red or orange live events status indicator, even when your system seems fine, the following suggestions may offer a solution:
Live event status dots are used for troubleshooting problems with your Tower instance, and the socketio logs can point out anything problematic. You can gather troubleshooting information by running a sosreport: as root, run the sosreport command on your system to automatically generate a diagnostic tar file, then contact Ansible’s Support team with the collected information for further assistance.
Starting with Ansible Tower 2.2.0, live events status indicators only appear if Tower detects a problem. In earlier releases, a green status dot was shown to indicate a healthy system.
If you have a VMware instance that uses a self-signed certificate, then you will need to add the following to the Source Vars configuration of the Cloud Group:
"source_vars": "---\nvalidate_certs: False",
If celery is not cleanly shut down, it leaves a /var/lib/awx/beat.db file on disk. If you observe the traceback in the initial comment, you must manually delete the /var/lib/awx/beat.db file and restart Tower.
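As a sketch of that manual intervention (run as root on the affected Tower node, assuming the default Tower layout):

```shell
# Delete the stale celery beat database, then restart Tower services.
rm -f /var/lib/awx/beat.db
ansible-tower-service restart
```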
If you encounter a working... message with a spinner that spins indefinitely while attempting to view the details of a job run from the Jobs page in Firefox, upgrade to Firefox version 62.0 or later.
The following connection error displays in Tower:
This error is the result of Safari silently refusing to establish a connection to a web socket that is using a self-signed certificate. To resolve this issue, you must set Safari to always trust the website upon first visiting it:
If you click Continue without checking the checkbox, this error will persist.
Ansible Tower 3.3 contains bindings for Microsoft Azure compatible with Ansible 2.6. However, these bindings will not work with earlier versions of Ansible. If you are using earlier versions of Ansible in a custom virtual environment, you may need to install different versions of Azure dependencies to use Microsoft Azure modules.
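A hedged sketch of that workaround, assuming a custom virtual environment at a placeholder path (the exact package pins depend on the requirements of your Ansible version):

```shell
# Illustrative only: install older Azure SDK packages into a custom virtualenv.
source /var/lib/awx/venv/my_custom_venv/bin/activate
pip install 'azure<3.0'   # choose pins matching your Ansible release's requirements
deactivate
```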
Instances have been reported in which su commands do not work when using an entirely local playbook or a playbook with some local_actions cases. This is likely due to job isolation being enabled. To use su commands with local playbooks or those with local_actions, job isolation must be disabled. You can disable job isolation through the Jobs tab of the Configure Tower screen by setting the Enable Job Isolation toggle to OFF:
Click Save to save your changes and restart services.
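As a hypothetical alternative to the UI toggle, the same setting could be changed through the Tower settings API (credentials and hostname below are placeholders):

```shell
# Illustrative only: disable job isolation via the Jobs settings endpoint.
curl -k -u admin:password -X PATCH \
  -H 'Content-Type: application/json' \
  -d '{"AWX_PROOT_ENABLED": false}' \
  https://tower.example.com/api/v2/settings/jobs/
```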
The Job Isolation functionality in Ansible Tower limits which directories on the Tower file system are available for playbooks to see and use during playbook runs. If you are attempting to customize SSH behavior by using a custom SSH configuration in the Tower user’s home directory, this directory must be added to the list of directories exposed by bubblewrap.
For example, to add a custom SSH config in /var/lib/awx/.ssh/config and make it available for Tower jobs, you can specify the path in the Job Execution Isolation Path field accessed from the Jobs tab of the Configure Tower screen:
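The same list of exposed paths can be set through the settings API via the AWX_PROOT_SHOW_PATHS setting; a hedged example (credentials and hostname are placeholders):

```shell
# Illustrative only: expose the Tower user's .ssh directory to isolated jobs.
curl -k -u admin:password -X PATCH \
  -H 'Content-Type: application/json' \
  -d '{"AWX_PROOT_SHOW_PATHS": ["/var/lib/awx/.ssh"]}' \
  https://tower.example.com/api/v2/settings/jobs/
```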
If you are using the bundled installer for Ansible Tower, note that only Red Hat Enterprise Linux and CentOS are supported at this time. Ubuntu support has not yet been added to the bundled installer. Ubuntu users can continue to use the unbundled installer.
Once a user who logs in using social authentication has been deleted, the user will not be able to log in again or be recreated until the system administrator runs a cleanup_deleted action with days=0 to allow users to log in again. Once cleanup_deleted has been run, Tower must be restarted using the ansible-tower-service restart command. Accounts which have been deleted prior to the cleanup_deleted action being run will receive a “Your account is inactive” message upon trying to log in.
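The cleanup and restart described above can be sketched as (run on a Tower node as root):

```shell
# Purge soft-deleted objects so the account can log in or be recreated,
# then restart Tower services.
awx-manage cleanup_deleted --days=0
ansible-tower-service restart
```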
When using inventory from a source control project, individual vaulted variable values are supported. Vaulted files are not currently supported.
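An individually vaulted variable value of the supported kind can be produced with ansible-vault; the variable name and secret here are illustrative:

```shell
# Emits a YAML snippet with a !vault-encrypted value that can be pasted into
# an inventory or vars file in the source control project.
ansible-vault encrypt_string --ask-vault-pass 's3cr3t' --name 'db_password'
```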
The Ansible Tower credential type in Ansible Tower 3.3 does not support use with an OAuth2 Token. Only username + password is currently supported.
If a job template configuration has been scheduled or added to a workflow with answers from a prompted survey, changing the job template’s survey to supply different variable names may cause the saved configuration to stop functioning. The workaround is to delete the saved schedule configuration or workflow node, and recreate it with answers from the updated survey.