Ansible Tower 3.3 introduces support for container-based clusters running on OpenShift. This section provides a high-level overview of OpenShift and Tower Pod configuration, notably the following:
The Tower OpenShift documentation assumes an understanding of how to use OpenShift at an administrative level, as well as some experience maintaining container-based infrastructure. The differences are:
(the cluster-admin role is required)
An OpenShift install requires the following parameters to be set:
For the OpenShift install method, the settings are the same as for the traditional Tower install method, except:
The Project will be created if it doesn't exist, but the user given there should have either:
The password should be given on the command line when executing the installer, as shown below.
The oc command line client should be installed and available, and the client version should match the server version.
The secret key, admin password, and PostgreSQL username and password should be populated in the inventory file prior to running the installer.
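As a sketch, the relevant inventory entries might look like the following (all values here are placeholders, and openshift_host, openshift_project, and openshift_user are assumed names for the installer's connection settings; check the inventory file that ships with the installer for the exact variable names):

secret_key=averylongsecretkey
admin_password=password
pg_username=tower
pg_password=password
openshift_host=https://openshift.example.com:8443
openshift_project=tower
openshift_user=developer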
./setup_openshift.sh -e openshift_password=$OPENSHIFT_PASSWORD -- -v
Tower uses Bubblewrap (from Project Atomic) as a mechanism to give the (relatively) unprivileged awx user the ability to isolate Ansible processes from each other. Certain privileges need to be granted to the container for this to work, which necessitates running the Tower web and task containers in privileged mode.
Normally, Tower examines the system it runs on to determine what its own capacity is for running jobs and performing background requests. On OpenShift this works differently, since pods and containers tend to coexist on the same systems. Pods can also migrate between hosts depending on current conditions (for instance, if the OpenShift cluster is being upgraded or is experiencing an outage).
It's common for pods and containers to request the resources that they need. OpenShift then uses this information to decide where things run (or even whether they can run at all).
Tower also uses this information to configure its own capacity: how many individual jobs can run, and at what size.
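For context, OpenShift expresses these requests per container. A generic container spec fragment (not Tower's actual pod definition) looks like this:

resources:
  requests:
    cpu: 1500m
    memory: 2Gi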
Each Tower pod is made up of four containers (see diagram). Each container is configured with a conservative default, but taken together the requests can be somewhat substantial. These defaults are configurable, but it's helpful to know what effect changing them has on the Tower cluster.
The two most important values control the CPU and memory allocation for the task execution container. This container is the one actually responsible for launching jobs; as such, these values directly control how many jobs can run, and at what size. The settings can be changed in the inventory; the default values are:
task_cpu_request=1500 - This is the amount of CPU to dedicate. The value of 1500 refers to how OpenShift itself views CPU requests (see https://docs.openshift.com/container-platform/3.9/dev_guide/compute_resources.html#dev-cpu-requests; for value meanings see https://docs.openshift.com/container-platform/3.9/dev_guide/compute_resources.html#dev-compute-resources).
1500 is 1500 millicores, which translates to roughly 1.5 CPU cores.
This value is used to configure the Tower capacity in the following way:
((task_cpu_request / 1000) * 4)
Which is to say that, by default, Tower in OpenShift (when configured entirely for the CPU-based algorithm) can run at most 6 simultaneous forks.
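For example, if you raised the value to task_cpu_request=2000 in the inventory, the capacity would become ((2000 / 1000) * 4) = 8 simultaneous forks.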
The other value that can be tuned:
task_mem_request=2 - This is the amount of memory to dedicate (in gigabytes).
This value is used to configure the Tower capacity in the following way:
((task_mem_request * 1024) / 100)
Which is to say that, by default, Tower can run at most 20 simultaneous forks ((2 * 1024) / 100, rounded down) when configured for the memory-based algorithm.
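For example, raising the value to task_mem_request=4 in the inventory would yield ((4 * 1024) / 100), or roughly 40 simultaneous forks.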
For the default resource requests of each container, see the defaults shipped with the OpenShift installer. All together, the default requested resources for a single Tower pod total to 3 CPU cores and 6 GB of memory. The OpenShift instances that you want to run Tower on should at least match that; if the defaults are changed, then the system will need to be updated to match the new requirements. If other pods are running on the OpenShift instance, or the systems are too small to meet these requirements, then Tower may not be able to run anywhere. Refer to Capacity Algorithm for more detail.
There are two methods for configuring the Tower PostgreSQL database for Tower running in OpenShift: pointing Tower at an externally managed PostgreSQL server through the standard pg_hostname inventory settings, or having the installer run PostgreSQL as a pod in the cluster, with persistent storage provisioned beforehand by creating a PersistentVolumeClaim and providing it to the Tower install playbook inventory file as openshift_pg_pvc_name.
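As a sketch of the second method (the claim name, storage size, project name, and file name below are placeholders), the claim could be created before running the installer:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Save this as, say, pg-pvc.yml, create it with oc create -f pg-pvc.yml -n tower, then set openshift_pg_pvc_name=postgresql in the inventory file before running the installer.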
If you are installing Tower for demo/evaluation purposes, you may instead set openshift_pg_emptydir=true and OpenShift will create a temporary volume for use by the pod. This volume is suitable for demo/evaluation purposes only and will be deleted when the pod is stopped.
You must back up and restore into the same version before upgrading. The backup and restore process resembles that of traditional Tower. From the root of the installer directory, run:
./setup_openshift.sh -b    # Backup
./setup_openshift.sh -r    # Restore
During a restore, the configmap will be recreated from values in the inventory file; the inventory file is included in the backup tarball.
Prior to performing an upgrade, take a backup (remember that backups must be restored into the same version they were taken from). To upgrade a Tower deployment in OpenShift, download the most recent installer from http://releases.ansible.com/ansible-tower/setup_openshift. Expect some downtime, just as with traditional Tower installations.
Tower supports migration from a traditional setup to a setup in OpenShift, as outlined below:
Edit the inventory file and change pg_port to point to the upgraded Tower database from your traditional Tower setup.
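As a sketch, the resulting database settings in the inventory might look like this (the hostname and credentials are placeholders standing in for the values from your traditional setup):

pg_hostname=towerdb.example.com
pg_username=tower
pg_password=password
pg_database=tower
pg_port=5432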
It is possible to override the base container image, which is useful for building custom virtual environments (virtualenvs) and for local mirroring. If you want to use custom virtualenvs with Tower deployed in OpenShift, you will need to customize the container image used by Tower.
Here is a Dockerfile that can be used as an example; it installs Ansible 2.3 into a custom virtual environment:
FROM registry.access.redhat.com/ansible-tower/ansible-tower:3.3.0
USER root
RUN mkdir -p /var/lib/awx/venv/ansible2.3
RUN virtualenv --system-site-packages /var/lib/awx/venv/ansible2.3
RUN cp -a /var/lib/awx/venv/ansible/lib64/python2.7/site-packages/* /var/lib/awx/venv/ansible2.3/lib64/python2.7/site-packages/
RUN sh -c ". /var/lib/awx/venv/ansible2.3/bin/activate ; pip install ansible==2.3.3.0"
If you need to install other Python dependencies (such as those for custom modules), you can add additional RUN commands to the Dockerfile that activate the virtual environment and call pip install.
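For instance, the following line would add the pywinrm library (chosen here purely as an illustrative dependency) to the virtualenv created above:

RUN sh -c ". /var/lib/awx/venv/ansible2.3/bin/activate ; pip install pywinrm"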
Once the image is built, make sure that it is in your registry and that the OpenShift cluster and installer have access to it.
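For example, using placeholder image and registry names that match the variables below:

docker build -t registry.example.com/my-custom-tower:3.3.0 .
docker push registry.example.com/my-custom-tower:3.3.0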
Override the following variables in group_vars/all in the OpenShift installer to point to the image you have pushed to your registry:
kubernetes_web_image: registry.example.com/my-custom-tower
kubernetes_task_image: registry.example.com/my-custom-tower
The image must be tagged with 3.3.0, or you must pass a version variable to the installer to override it.
When hosting all images in a local registry, such as for offline installs, you will also need to include these other images:
kubernetes_rabbitmq_image: registry.example.com/ansible-tower-messaging
kubernetes_memcached_image: registry.example.com/ansible-tower-memcached
If mirroring the vanilla Red Hat images:
kubernetes_web_image: registry.example.com/ansible-tower
kubernetes_task_image: registry.example.com/ansible-tower
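Putting it together, a fully mirrored (offline) configuration in group_vars/all might look like this, with the registry hostname as a placeholder:

kubernetes_web_image: registry.example.com/ansible-tower
kubernetes_task_image: registry.example.com/ansible-tower
kubernetes_rabbitmq_image: registry.example.com/ansible-tower-messaging
kubernetes_memcached_image: registry.example.com/ansible-tower-memcached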