Getting Started

A project in Ansible Container is called an app. It contains all of the source and files used inside of your project, as well as the Ansible Container build and orchestration instructions. The Ansible Container files live in the ansible/ directory.

Dipping a Toe In - Starting from Scratch

Ansible Container provides a convenient way to start your app by simply running ansible-container init from within your project directory, which creates:

ansible/
    container.yml
    main.yml
    meta.yml
    requirements.txt
    requirements.yml
    ansible.cfg

Other ansible-container subcommands enable the container development workflow:

  • ansible-container build initiates the build process. It uses an Ansible Container Builder container and also runs instances of your base container images as specified in container.yml. The Builder container runs the playbook main.yml against them, committing the results as new images. Ansible communicates with the other containers through the container engine, not through SSH.
  • ansible-container run orchestrates containers from your built images together as described by the container.yml file. In your development environment, the container.yml can specify overrides to make development faster without having to rebuild images for every code change.
  • ansible-container push uploads your built images to a container registry of your choice.
  • ansible-container shipit generates an Ansible Role to orchestrate containers from your built images in production container platforms, like Kubernetes or Red Hat OpenShift.
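
Put together, a typical pass through that workflow looks something like this (the kube argument to shipit, targeting Kubernetes, is shown as an assumption; check ansible-container shipit --help for the engines your version supports):

ansible-container build        # build images by running main.yml against the base images
ansible-container run          # orchestrate the containers locally per container.yml
ansible-container push         # upload the built images to a registry
ansible-container shipit kube  # generate an Ansible role for deploying to Kubernetes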

So what goes into the files that make this work?

container.yml

The container.yml file is very similar to the Docker Compose version 1 schema. Much like Docker Compose, this file describes the orchestration of your app. Ansible Container uses this file to determine what images to build, what containers to run and connect, and what images to push to your repository. Additionally, when Ansible Container generates an Ansible role to ship and orchestrate your images in the cloud, this file describes the configuration that role ensures is met.

By way of an example, consider the container.yml file below:

version: "2"
services:
  web:
    image: "ubuntu:trusty"
    ports:
      - "80:80"
    command: ["/usr/bin/dumb-init", "/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
    dev_overrides:
      environment:
        - "DEBUG=1"

Things to note:

  1. In this example the schema is set to version 2. (Version 2 support was added in version 0.3.0.)
  2. Each of the containers you wish to orchestrate should be under the services key.
  3. For supported service keys, see Ansible Container Compose Specification.
  4. The image you specify should be the base image that your containers will start from. Ansible Container will use your playbook to build upon this base image.
  5. You may optionally specify a dev_overrides section. During build and when generating the Ansible roles to deploy your app to the cloud, this section is ignored. However, when running your containers locally in your development environment, you may use this section to override settings from your production configuration. For instance, a JavaScript developer may wish to use Gulp and BrowserSync to rebuild assets dynamically as development happens, rather than rebuilding the entire container for every code change. That developer could include dev_overrides that run a BrowserSync server for those assets, whereas in production Gulp would build those assets and exit, as sketched below.
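
By way of illustration, a service might override its production command and ports during development (the assets service name, gulp tasks, and port below are hypothetical, not taken from the example above):

assets:
  image: "ubuntu:trusty"
  # Production: compile the static assets once and exit.
  command: ["gulp", "build"]
  dev_overrides:
    # Development: serve and live-reload the assets with BrowserSync instead.
    command: ["gulp", "serve"]
    ports:
      - "3000:3000"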

main.yml

The main.yml file contains the playbook to use in building your containers. Ansible Container automatically provides an inventory with the hosts defined in your container.yml file. All Ansible features are available, and the ansible/ directory is the proper place to put any roles or modules that your playbook requires.
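
For instance, a minimal main.yml for the web service defined in the earlier container.yml example might look like the following (the task shown is purely illustrative):

- hosts: web
  tasks:
    - name: Install Apache
      apt:
        name: apache2
        state: present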

For convenience the environment variable ANSIBLE_CONTAINER=1 is set in any containers where main.yml executes. This may be useful in roles or includes where task execution needs to be conditional. For example:

- name: Only say hello when running via Ansible Container
  command: echo "Hello!"
  when: ansible_env.ANSIBLE_CONTAINER is defined

Visit Ansible Roles for best practices around writing and using roles within Ansible Container.

meta.yml

Share your app on Ansible Galaxy. Provide the requested information in meta.yml, and then log into Galaxy and use the import feature to let the world know about your project.

requirements.txt

Running Ansible inside of your build container may require Python libraries that your modules depend on. Use the requirements.txt file to specify those dependencies. This file follows the standard pip format for Python dependencies. When your Ansible build container is created, these dependencies are installed prior to executing the playbook.
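
For example, if your playbook uses Ansible's PostgreSQL modules, which depend on the psycopg2 library, your requirements.txt might contain something like this (the version pin is only illustrative):

psycopg2==2.6.2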

requirements.yml

If your playbook has role dependencies, and you want the roles automatically installed from Galaxy or directly from version control, add them to requirements.yml. For more information about requirements.yml see Installing Roles From a File.
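
For example, to install the ansible.nginx-container role used later in this document, plus a role pulled straight from a Git repository (the repository URL is a placeholder):

- src: ansible.nginx-container

- src: https://github.com/yourname/yourrole
  version: master
  name: yourrole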

ansible.cfg

Set Ansible configuration settings within the build container. For more information see Configuration File.
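
For example, a minimal ansible.cfg that disables retry files inside the build container might look like this:

[defaults]
retry_files_enabled = False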

Real World Usage - Starting from a Working Base Setup

Most of the time, when you’re starting a new app, you’re probably using a fairly standard set of components that all link together to form a working system. For example, if you’re starting a new WordPress app, you will likely want a container for Apache, one for MySQL, one for Memcached, and one for WordPress itself. Ansible Container enables you to bootstrap a new app based on such skeletons, hosted on Ansible Galaxy.

Let’s look at a working example. A basic Django application would have the Django application server, a static files server, a PostgreSQL database, and static assets compiled from sources using Gulp and Node.js. To pull the skeleton from Ansible Galaxy and bootstrap a new app based on it, run:

ansible-container init ansible.django-gulp-nginx

From here, you can build and run this app, even though it doesn’t do a whole lot yet.

ansible-container build
ansible-container run

To take a deeper dive into what the skeleton app offers, look inside the ansible/ directory, where you’ll find the application orchestration and build instructions.


Note: The most recent version of ansible.django-gulp-nginx requires Ansible Container version 0.3.0 or later. Depending on the version of Ansible Container available from PyPI, you may need to clone the Ansible Container repo and run from source. For help, see the Installation Guide.

container.yml

As explained above, the container.yml file, like a Docker Compose file, describes the orchestration of the containers in your app for both development and production environments. In this app, we have a Django application server, a PostgreSQL database server, an Nginx web server, and a Gulp-based static asset compiler.

This container.yml file has an additional top-level key called defaults, mapping variables to some sane default values:

defaults:
  POSTGRES_DB: django
  DJANGO_ROOT: /django
  DJANGO_USER: django
  DJANGO_VENV: /venv
  NODE_USER: node
  NODE_HOME: /node

These variables can be substituted into the services and registries sections of the file using Jinja2 syntax, just like Ansible Core, abstracting out runtime constants for easy tweaking.

The Django service runs with the self-reloading development server for the development environment while running with the Gunicorn WSGI server for production:

django:
  image: centos:7
  environment:
    DATABASE_URL: "pgsql://{{ POSTGRES_USER }}:{{ POSTGRES_PASSWORD }}@postgresql:5432/{{ POSTGRES_DB }}"
  expose:
    - "{{ DJANGO_PORT }}"
  working_dir: "{{ DJANGO_ROOT }}"
  links:
    - postgresql
  user: "{{ DJANGO_USER }}"
  command: ['/usr/bin/dumb-init', '{{ DJANGO_VENV }}/bin/gunicorn', '-w', '2', '-b', '0.0.0.0:{{ DJANGO_PORT }}', 'example.wsgi:application']
  dev_overrides:
    command: ['/usr/bin/dumb-init', '{{ DJANGO_VENV }}/bin/python', 'manage.py', 'runserver', '0.0.0.0:{{ DJANGO_PORT }}']
    volumes:
      - "$PWD:{{ DJANGO_ROOT }}"
  options:
    kube:
      runAsUser: 1000

This container image uses CentOS 7 as its base. For 12-factor compliance, the Django container receives the database server DSN in an environment variable. In development, the app’s source is mounted into the container as a volume so that code changes are detected and instantly picked up by the development container; in production, however, the full Django project’s code is part of the container’s filesystem. Note that in both development and production, Yelp’s dumb-init is used for PID 1 management, which is an excellent practice.

The Gulp service exists to compile our static asset sources into minified and unified distributable assets. In development, though, as with Django, we want Gulp to run a self-reloading web server, recompiling whenever the developer changes any of the source files:

gulp:
  image: centos:7
  user: "{{ NODE_USER }}"
  command: /bin/false
  dev_overrides:
    working_dir: "{{ NODE_HOME }}"
    command: ['/usr/bin/dumb-init', '{{ NODE_ROOT }}/node_modules/.bin/gulp']
    ports:
      - "80:{{ GULP_DEV_PORT }}"
    volumes:
      - "$PWD:{{ NODE_HOME }}"
    links:
      - django
  options:
    kube:
      state: absent

In production, this container doesn’t run, so we use /bin/false as its production command and specify in its options that it shouldn’t even be included when using shipit to Kubernetes. During development, however, we expect Gulp to use BrowserSync to serve and recompile the static assets. That server also needs to proxy web requests to the Django application server, so we link the two containers to make that possible.

Conversely, the Nginx service runs in production but not in the development orchestration:

nginx:
  image: centos:7
  ports:
    - "80:{{ DJANGO_PORT }}"
  user: 'nginx'
  links:
    - django
  command: ['/usr/bin/dumb-init', 'nginx', '-c', '/etc/nginx/nginx.conf']
  dev_overrides:
    ports: []
    command: '/bin/false'
  options:
    kube:
      runAsUser: 997

In development, Gulp’s webserver listens on port 80 and proxies requests to Django, whereas in production we want Nginx to have that functionality.

Finally, we set up a PostgreSQL database server using a stock image from Docker Hub:

postgresql:
  image: postgres:9.4
  expose:
    - "5432"
  volumes:
    - '/var/lib/postgresql/data'

You can use distribution base images like CentOS, Ubuntu, or Fedora for the build process to customize, or you can use pre-built base images from a container registry like Docker Hub without modification.

main.yml

The PostgreSQL container came from a pre-built image, but Ansible Container needs to build the other services for use. The main.yml playbook applies a different Ansible role to each container:

- hosts: django
  roles:
    - django-gunicorn
- hosts: gulp
  roles:
    - gulp-static
- hosts: nginx
  roles:
    - role: ansible.nginx-container
      ASSET_PATHS:
        - /tmp/django/static/
        - /tmp/gulp/node/dist/

The first two of these roles come bundled with the app and can be found in the ansible/roles/ directory. The third one, ansible.nginx-container, is a reference to a role hosted on Ansible Galaxy, and we make that role a build-time dependency in requirements.yml. Because the containers described by the included roles are so closely tied to the source code in the project, it’s appropriate that they’re bundled with this app skeleton, whereas the ansible.nginx-container role is independent of the project’s source code, making it a reusable piece for any number of apps.

Visit Ansible Roles for best practices around writing and using roles within Ansible Container.

ansible-container install

As your project evolves and grows, you will likely find the need to bolt on additional services. Fortunately, Ansible Container comes ready to help.

Let’s say that your Django app now needs a Redis service. You can add on additional role-derived services to your app from Ansible Galaxy using the install subcommand.

$ ansible-container install j00bar.redis-container

Ansible Container spins up its builder container and goes out to Ansible Galaxy to grab this container-enabled role. It then makes changes to the three key files in your project’s ansible/ directory:

  1. The role j00bar.redis-container is added to your ansible/requirements.yml for Ansible Container to grab at build-time. The role’s content does not get added to your project.
  2. A new service for redis is automatically added to your ansible/container.yml, complete with any knobs and dials that can be adjusted at container run-time using environment variables. Since this particular container does not have any runtime-adjustable configuration, there isn’t an environment key in its service description.
  3. A new play for the container is automatically added to your ansible/main.yml, invoking the role. The play includes all of the build-time variables for the role and their default values, for convenient tweaking.


You’ll have to manually add the new redis service to the links key in your django service to allow the Django container to talk to the Redis container, as well as define an additional environment variable if you wish to access the Redis container in a 12-factor compliant way.
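
A sketch of those manual edits to the django service in ansible/container.yml (the redis service name matches the newly added service; the REDIS_URL variable name and URL format are conventional choices, not something the install command generates for you):

django:
  # ... existing keys ...
  links:
    - postgresql
    - redis
  environment:
    DATABASE_URL: "pgsql://{{ POSTGRES_USER }}:{{ POSTGRES_PASSWORD }}@postgresql:5432/{{ POSTGRES_DB }}"
    REDIS_URL: "redis://redis:6379/0"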

Now, you can run:

$ ansible-container build

... to recreate your app, and this time, you’ll find a newly built Redis container image all ready to go.

Managing the Application Lifecycle

Ansible Container can manage the lifecycle of an application from development through cloud deployment. For a hands-on walk through of creating, testing, and deploying a sample application, visit our demo site.