Documentation

10. Troubleshooting, Tips, and Tricks

10.1. Error Logs

Tower server errors are logged in /var/log/tower. Supervisor logs can be found in /var/log/supervisor/. Apache web server errors are logged in the httpd error log. Configure other Tower logging needs in /etc/tower/conf.d/.

Explore client-side issues using the JavaScript console built into most browsers and report any errors to https://access.redhat.com/.

10.2. Problems connecting to your host

If you are unable to run the helloworld.yml example playbook from the Quick Start Guide or other playbooks due to host connection errors, try the following:

  • Can you ssh to your host? Ansible depends on SSH access to the servers you are managing.
  • Are your hostnames and IPs correctly added in your inventory file? (Check for typos.)
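
As a quick pre-check before debugging Ansible itself, you can verify that the SSH port on a managed host is even reachable. The following is an illustrative sketch using only Python's standard library; the host address shown is a placeholder, not from this guide:

```python
import socket

def ssh_port_reachable(host, port=22, timeout=3):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 192.0.2.1 is a reserved TEST-NET documentation address; expect False here.
print(ssh_port_reachable("192.0.2.1", timeout=1))
```

If the port is reachable but playbooks still fail, testing end-to-end with the ping module (for example, ansible all -m ping -i your_inventory) helps separate network problems from authentication problems.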

10.3. Problems running a playbook

If you are unable to run the helloworld.yml example playbook from the Quick Start Guide or other playbooks due to playbook errors, try the following:

  • Are you authenticating as the user currently running the commands? If not, check how the username has been set up, or pass the --user=username or -u username option to specify a user.
  • Is your YAML file correctly indented? Indentation level is significant in YAML, so you may need to line up your whitespace correctly. You can use yamllint to check your playbook. For more information, refer to the YAML primer at: http://docs.ansible.com/YAMLSyntax.html
  • Items beginning with a - are considered list items or plays. Items in key: value format operate as hashes or dictionaries. Ensure you don’t have extra or missing - markers.
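
One common indentation mistake is using tab characters, which YAML forbids for indentation. As a much simpler stand-in for a full linter such as yamllint, a check like the following sketch can catch that one error class:

```python
def find_tab_indentation(yaml_text):
    """Return 1-based line numbers whose leading whitespace contains a tab
    (YAML forbids tabs for indentation)."""
    flagged = []
    for lineno, line in enumerate(yaml_text.splitlines(), start=1):
        indent = line[: len(line) - len(line.lstrip())]
        if "\t" in indent:
            flagged.append(lineno)
    return flagged

playbook = "- hosts: all\n  tasks:\n\t- ping:\n"  # a tab sneaks in on line 3
print(find_tab_indentation(playbook))  # [3]
```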

10.4. Problems when running a job

If you are having trouble running a job from a playbook, you should review the playbook YAML file. When importing a playbook, either manually or via a source control mechanism, keep in mind that the host definition is controlled by Tower and should be set to hosts: all.

10.5. View a listing of all ansible_ variables

Ansible by default gathers “facts” about the machines under its management, accessible in Playbooks and in templates. To view all facts available about a machine, run the setup module as an ad-hoc action:

ansible -m setup hostname

This prints out a dictionary of all facts available for that particular host.
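
The output is a large dictionary keyed by fact name, each name beginning with ansible_. As an illustration of the shape of that data, here is a sketch that filters a small, made-up facts dictionary down to a prefix of interest:

```python
# A tiny, made-up subset of the dictionary the setup module returns for a host.
facts = {
    "ansible_hostname": "web01",
    "ansible_distribution": "CentOS",
    "ansible_distribution_version": "6.5",
    "ansible_processor_count": 2,
}

def facts_matching(facts, prefix):
    """Return only the facts whose names start with the given prefix."""
    return {k: v for k, v in facts.items() if k.startswith(prefix)}

print(facts_matching(facts, "ansible_distribution"))  # both distribution facts
```

The setup module itself supports similar narrowing on the command line via its filter argument, for example: ansible hostname -m setup -a 'filter=ansible_distribution*'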

10.6. Locating and configuring the configuration file

While Ansible does not require a configuration file, OS packages often include a default one in /etc/ansible/ansible.cfg for possible customization. You can also install your own copy in ~/.ansible.cfg or keep a copy in a directory relative to your playbook named as ansible.cfg.

To learn which values you can use in this file, refer to the example configuration file on GitHub.

Using the defaults is acceptable for starting out, but know that you can configure the default module path, connection type, and other settings here.

10.7. Playbook Stays in Pending

If you are attempting to run a playbook Job and it stays in the “Pending” state indefinitely, try the following:

  • Run ansible-tower-service restart on the Tower server.
  • Check to ensure that the /var/ partition has more than 1GB of space available. Jobs will not complete with insufficient space on the /var/ partition.
  • Ensure all supervisor services are running via supervisorctl status.
  • Check to see if disabling PRoot solves your issues (and, if so, please report back to Ansible’s Support team to let them know it helped). Configuration for PRoot can be found in the settings.py file.
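
The /var/ free-space check above can be scripted. A minimal sketch using Python's standard library, with the 1 GB threshold mirroring the guidance above:

```python
import shutil

GIGABYTE = 1024 ** 3

def has_headroom(path, minimum_bytes=GIGABYTE):
    """Return True if the filesystem holding `path` has at least
    minimum_bytes free (default 1 GB, per the guidance above)."""
    return shutil.disk_usage(path).free >= minimum_bytes

# On a Tower server you would check "/var"; "/" is used here as a stand-in.
print(has_headroom("/"))
```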

If you continue to have problems, run sosreport as root on the Tower server, then file a support request with the result.

10.8. Canceling a Tower Job

When issuing a cancel request on a currently running Tower job, Tower issues a SIGINT to the ansible-playbook process. While this does cause Ansible to exit, Ansible is designed to finish its in-progress work first, and so it only exits after the currently running play has completed.

With respect to software dependencies, if a running job is canceled, the job is essentially removed but the dependencies will remain.

10.9. Changing the default timeout for authentication

Create an API settings file (/etc/tower/conf.d/expire.py) with the appropriately defined time variable:

AUTH_TOKEN_EXPIRATION = <seconds>  # default 1800
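
The value is expressed in seconds, so a quick arithmetic check helps when choosing a new expiration; for instance, the default of 1800 seconds is 30 minutes, and a four-hour expiration would be 14400:

```python
DEFAULT_EXPIRATION = 1800        # seconds, i.e. 30 minutes
FOUR_HOURS = 4 * 60 * 60         # 14400 seconds

print(DEFAULT_EXPIRATION // 60)  # 30
print(FOUR_HOURS)                # 14400
```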

Create a local_config.js file in /var/lib/awx/public/static/js/local_config.js with any necessary settings.

Warning

When you include a local_config.js file with specifically configured variables, it overrides Tower’s default config.js file. You should copy the default config.js file over to local_config.js before making changes for specific variables, ensuring that everything Tower needs to call from that file remains available.

Tower looks for the local_config.js file first and, if it is found, uses it instead of config.js. If local_config.js is not found, Tower falls back to the default config.js file it ships with.

The variables you can edit within the local_config.js file are as follows:

  • tooltip_delay: {show: 500, hide: 100} – Default number of milliseconds to delay displaying/hiding tooltips
  • debug_mode: false – Enable console logging messages
  • password_length: 8 – Minimum user password length. Set to 0 for no minimum length
  • password_hasLowercase: true – Requires a lowercase letter in the password
  • password_hasUppercase: false – Requires an uppercase letter in the password
  • password_hasNumber: true – Requires a number in the password
  • password_hasSymbol: false – Requires one of these symbols to be in the password: -!$%^&*()_+|~=`{}[]:";'<>?,./
  • session_timeout: 1800 – Number of seconds before an inactive session is automatically timed out and forced to log in again. This is separate from the timeout value set in the API.
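
Taken together, the password_* defaults above describe a single policy: length of at least 8, at least one lowercase letter, and at least one number. As an illustration only, and not Tower's actual validation code, that policy could be expressed as:

```python
def meets_default_policy(password):
    """Check a password against the default settings listed above:
    length >= 8, at least one lowercase letter, at least one digit
    (the uppercase and symbol requirements are off by default)."""
    return (
        len(password) >= 8
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
    )

print(meets_default_policy("tower123"))  # True: long enough, lowercase, digit
print(meets_default_policy("short1"))    # False: fewer than 8 characters
```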

10.10. View Ansible outputs for JSON commands when using Tower

When working with Ansible Tower, you can use the API to obtain the Ansible outputs for commands in JSON format.

To view the Ansible outputs, browse to:

https://<tower server name>/api/v1/jobs/<job_id>/job_events/
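
Because this endpoint is served by a browsable REST API, appending ?format=json should return raw JSON rather than the HTML view. A small sketch that builds the URL; the server name and job ID here are placeholders:

```python
def job_events_url(server, job_id, as_json=False):
    """Build the job_events URL for a given Tower job ID."""
    url = f"https://{server}/api/v1/jobs/{job_id}/job_events/"
    return url + "?format=json" if as_json else url

print(job_events_url("tower.example.com", 42, as_json=True))
```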

10.11. Reusing an external HA database causes installations to fail

Instances have been reported where reusing the external DB during subsequent HA installations causes installation failures.

For example, suppose you performed an HA installation and later needed to do so again, reusing the same external database; that subsequent installation would fail.

When setting up an external HA database which has been used in a prior installation, the HA database must be manually cleared before any additional installations can succeed.