Documentation

8. Using Operator

Ansible Automation Platform Operator provides a more Kubernetes-native installation method for the automation controller via a controller Custom Resource Definition (CRD). It is intended to be deployed in your Kubernetes cluster(s) and can manage one or more controller instances in any namespace. The AWX role builds and maintains an automation controller instance inside of Kubernetes. See defaults/main.yml for all the role variables that you can override.

8.1. Install a controller instance

For illustration purposes, this section deploys the awx-operator on a minikube cluster. Because OS and hardware environments differ, refer to the official minikube documentation for installation details.

  1. Spin up a minikube:

$ minikube start --addons=ingress --cpus=4 --cni=flannel --install-addons=true \
--kubernetes-version=stable --memory=6g
😄  minikube v1.20.0 on Fedora 34
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating kvm2 VM (CPUs=4, Memory=6144MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring Flannel (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
🔎  Verifying ingress addon...
🌟  Enabled addons: storage-provisioner, default-storageclass, ingress
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Once minikube is deployed, verify that the node(s) and the kube-apiserver are communicating as expected.

$ kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
minikube   Ready    control-plane,master   6m28s   v1.20.2

$ kubectl get pods -A
NAMESPACE       NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx   ingress-nginx-admission-create-tjk94        0/1     Completed   0          6m4s
ingress-nginx   ingress-nginx-admission-patch-r4pl6         0/1     Completed   0          6m4s
ingress-nginx   ingress-nginx-controller-5d88495688-sbtp9   1/1     Running     0          6m4s
kube-system     coredns-74ff55c5b-2wz6n                     1/1     Running     0          6m4s
kube-system     etcd-minikube                               1/1     Running     0          6m13s
kube-system     kube-apiserver-minikube                     1/1     Running     0          6m13s
kube-system     kube-controller-manager-minikube            1/1     Running     0          6m13s
kube-system     kube-flannel-ds-amd64-lw7lv                 1/1     Running     0          6m3s
kube-system     kube-proxy-lcxx7                            1/1     Running     0          6m3s
kube-system     kube-scheduler-minikube                     1/1     Running     0          6m13s
kube-system     storage-provisioner                         1/1     Running     1          6m17s
  2. Deploy an AWX Operator into your cluster. Go to https://github.com/ansible/awx-operator/releases and make note of the latest release. Replace <TAG> in the URL https://raw.githubusercontent.com/ansible/awx-operator/<TAG>/deploy/awx-operator.yaml in the command below with the version you are deploying.

$ kubectl apply -f https://raw.githubusercontent.com/ansible/awx-operator/<TAG>/deploy/awx-operator.yaml
customresourcedefinition.apiextensions.k8s.io/awxs.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxbackups.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxrestores.awx.ansible.com created
clusterrole.rbac.authorization.k8s.io/awx-operator created
clusterrolebinding.rbac.authorization.k8s.io/awx-operator created
serviceaccount/awx-operator created
deployment.apps/awx-operator created

Allow a few minutes for the operator to run.

$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
awx-operator-7dbf9db9d7-z9hqx   1/1     Running   0          50s
  3. Create a file named awx-demo.yml with the suggested content below. The metadata.name you provide will be the name of the resulting controller deployment. If you deploy more than one controller instance to the same namespace, be sure to use unique names.

---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  service_type: nodeport
  ingress_type: none
  hostname: awx-demo.example.com
  4. Use kubectl to create the controller instance in your cluster:

$ kubectl apply -f awx-demo.yml
awx.awx.ansible.com/awx-demo created

After a few minutes, the new controller instance will be deployed. To monitor the progress of the installation, view the operator pod logs by running kubectl logs -f deployments/awx-operator.

$ kubectl get pods -l "app.kubernetes.io/managed-by=awx-operator"
NAME                        READY   STATUS    RESTARTS   AGE
awx-demo-77d96f88d5-pnhr8   4/4     Running   0          3m24s
awx-demo-postgres-0         1/1     Running   0          3m34s

$ kubectl get svc -l "app.kubernetes.io/managed-by=awx-operator"
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
awx-demo-postgres   ClusterIP   None           <none>        5432/TCP       4m4s
awx-demo-service    NodePort    10.109.40.38   <none>        80:31006/TCP   3m56s
  5. Once deployed, the controller instance is accessible by running minikube service awx-demo-service --url, as shown in the example below.
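For example (the IP address shown is illustrative and will differ in your environment; the port matches the NodePort assigned above):

$ minikube service awx-demo-service --url
http://192.168.39.68:31006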

You have now completed the most basic installation of a controller instance via this operator. For an example that uses the nginx ingress controller in minikube, see the demo in the awx-operator repository.

8.2. Configure admin user account

By default, the admin user is admin and the password is available in the <resourcename>-admin-password secret. To retrieve the admin password, run kubectl get secret <resourcename>-admin-password -o jsonpath="{.data.password}" | base64 --decode.
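For example, for a controller instance named awx-demo, the command looks like the following (the decoded password shown is illustrative):

$ kubectl get secret awx-demo-admin-password -o jsonpath="{.data.password}" | base64 --decode
LQwkz9nGyTMpsqxrwglkoHK0sKJJa0bq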

You can configure the following three variables associated with the admin user account creation:

Name                    Description                                    Default
admin_user              Name of the admin user                         admin
admin_email             Email of the admin user                        test@example.com
admin_password_secret   Secret that contains the admin user password   Empty string

Note

The admin_password_secret must be a Kubernetes secret and not your clear text password.

If admin_password_secret is not provided, the operator will look for a secret named <resourcename>-admin-password for the admin password. If it is not present, the operator will generate a password and create a secret from it named <resourcename>-admin-password.

The secret that is expected to be passed is formatted as follows:

---
apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-admin-password
  namespace: <target namespace>
stringData:
  password: mysuperlongpassword
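As a sketch of an alternative, the same secret can be created imperatively with kubectl; the name must still follow the <resourcename>-admin-password convention (awx-demo below is a hypothetical resource name):

$ kubectl create secret generic awx-demo-admin-password \
    --from-literal=password=mysuperlongpassword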

8.3. Configure network and TLS

Service Type. Specify the type of service to be used for your controller service. The supported service_type options are:

  • ClusterIP (default if none specified)

  • LoadBalancer

  • NodePort

By default, the service_labels variable is an empty string, but you can configure it to add custom labels for any service_type:

---
spec:
  ...
  service_type: ClusterIP
  service_labels: |
    environment: testing

Load Balancer. When service_type=LoadBalancer, you can customize the following variables:

Name                       Description                                Default
loadbalancer_annotations   LoadBalancer annotations                   Empty string
loadbalancer_protocol      Protocol to use for LoadBalancer ingress   http
loadbalancer_port          Port used for LoadBalancer ingress         80

---
spec:
  ...
  service_type: LoadBalancer
  loadbalancer_protocol: https
  loadbalancer_port: 443
  loadbalancer_annotations: |
    environment: testing
  service_labels: |
    environment: testing

When setting up a load balancer for HTTPS, you must set loadbalancer_port to move the port away from the default of 80.

The HTTPS Load Balancer also uses SSL termination at the load balancer level and will offload traffic to the controller over HTTP.
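As an illustrative sketch only: on an AWS cluster, TLS termination at the load balancer might be configured with cloud-provider annotations such as the following. The annotation keys belong to the AWS cloud provider, not to the operator, and the certificate ARN is a placeholder:

---
spec:
  ...
  service_type: LoadBalancer
  loadbalancer_protocol: https
  loadbalancer_port: 443
  loadbalancer_annotations: |
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <certificate-arn>
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http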

Ingress Type. If ingress_type is not specified, it defaults to none and no ingress resources are created. The supported ingress_type options are:

  • none

  • ingress

  • route

To toggle between these options, you can add the following to your controller CRD:

For none:

---
spec:
  ...
  ingress_type: none

For ingress:

You can configure the following variables when ingress_type=ingress, as shown in the example that follows. The ingress type creates an ingress resource as described in the Kubernetes documentation, which can be used with any of the ingress controllers listed there.

Name                  Description                                Default
ingress_annotations   Ingress annotations                        Empty string
ingress_tls_secret    Secret that contains the TLS information   Empty string
hostname              Define the FQDN                            {{ meta.name }}.example.com

---
spec:
  ...
  ingress_type: ingress
  hostname: awx-demo.example.com
  ingress_annotations: |
    environment: testing
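To terminate TLS at the ingress instead, you can reference a pre-created TLS secret through ingress_tls_secret. A minimal sketch, assuming a certificate and key on disk and a hypothetical secret name awx-demo-tls:

$ kubectl create secret tls awx-demo-tls --cert=/path/to/tls.crt --key=/path/to/tls.key

---
spec:
  ...
  ingress_type: ingress
  hostname: awx-demo.example.com
  ingress_tls_secret: awx-demo-tls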

For route:

You can configure the following variables when ingress_type=route, as shown in the example that follows.

Name                              Description                                     Default
route_host                        Common name the route answers for               <instance-name>-<namespace>-<routerCanonicalHostname>
route_tls_termination_mechanism   TLS termination mechanism (Edge, Passthrough)   Edge
route_tls_secret                  Secret that contains the TLS information        Empty string

---
spec:
  ...
  ingress_type: route
  route_host: awx-demo.example.com
  route_tls_termination_mechanism: Passthrough
  route_tls_secret: custom-route-tls-secret-name

8.4. Configure database

Ansible Automation Platform supports an external PostgreSQL service and a managed PostgreSQL service provided by the AWX operator.

8.4.1. External PostgreSQL service

In order for the controller instance to rely on an external database, the custom resource needs to know the connection details. Those connection details should be stored as a secret and either specified as postgres_configuration_secret at the CR spec level, or simply exist in the namespace under the name <resourcename>-postgres-configuration.

The secret is formatted as follows:

---
apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-postgres-configuration
  namespace: <target namespace>
stringData:
  host: <external ip or url resolvable by the cluster>
  port: <external port, this usually defaults to 5432>
  database: <desired database name>
  username: <username to connect as>
  password: <password to connect with>
  sslmode: prefer
  type: unmanaged
type: Opaque
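The controller CR can then reference this secret explicitly through postgres_configuration_secret, as mentioned above:

---
spec:
  ...
  postgres_configuration_secret: <resourcename>-postgres-configuration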

You can set a specific username, password, port, or database, but still have the database managed by the operator. In this case, when creating the postgres-configuration secret, the type: managed field must be added.

Note

The sslmode variable is valid for external databases only. The supported values are: prefer, disable, allow, require, verify-ca, and verify-full.

8.4.2. Migrate data from older controller instance

If you have an installation of Ansible Tower or an older installation of automation controller, you can migrate that data so it works with the AWX operator, but you must provide some information via secrets.

It is strongly recommended to back up your database prior to performing this procedure. See Backup and restore operator.

8.4.2.1. Create secrets

To create secrets used for migrating old data to the new platform:

  1. Find your old secret key in the inventory file you used to deploy the controller in releases prior to Ansible Automation Platform version 2.0, and store it in a secret formatted as follows:

apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-secret-key
  namespace: <target-namespace>
stringData:
  secret_key: <old-secret-key>
type: Opaque

The value for <resourcename> must match the name of the controller object you are creating. In the example below, it is awx. Alternatively, the name of your secret_key secret and old PostgreSQL configuration secret can be specified in the spec of the AWX object as follows:

secret_key_secret: my-secret-key-secret-name
old_postgres_configuration_secret: my-custom-old-pg-secret-name

The secret only needs to follow the <resourcename>-secret-key naming format to be picked up automatically when it is not supplied in the spec.

  2. Reformat the old database credentials to reflect the following format:

---
apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-old-postgres-configuration
  namespace: <target namespace>
stringData:
  host: <external ip or url resolvable by the cluster>
  port: <external port, this usually defaults to 5432>
  database: <desired database name>
  username: <username to connect as>
  password: <password to connect with>
type: Opaque

For host, a URL resolvable by the cluster could look something like postgresql.<namespace>.svc.cluster.local, where <namespace> is filled in with the namespace of the controller deployment you are migrating data from.

If your controller deployment is already using an external database server, or its database is otherwise not managed by the controller deployment, you can instead create the same secret as above but omit the old- from the name. In the next section, pass it in through postgres_configuration_secret instead, omitting the old_ prefix from the key and ensuring the value matches the name of the secret. This makes the controller pick up the existing database and apply any pending migrations. It is strongly recommended to back up your database beforehand.

The PostgreSQL pod for the old deployment is used when streaming data to the new PostgreSQL pod. If your PostgreSQL pod has a custom label, you can pass that via the postgres_label_selector variable to make sure the PostgreSQL pod can be found.
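A minimal sketch of passing the selector in the AWX spec (the label shown is hypothetical and must match the labels actually set on your PostgreSQL pod):

---
spec:
  ...
  postgres_label_selector: "app.kubernetes.io/name=my-postgres"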

8.4.2.2. Deploy the controller

When you apply your controller object, you must specify the name of the database secret you created above in order to deploy the controller:

apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  old_postgres_configuration_secret: <resourcename>-old-postgres-configuration
  ...

8.4.3. Managed PostgreSQL service

If you do not have access to an external PostgreSQL service, the AWX operator can deploy one for you alongside the controller instance itself.

You can configure the following variables for the managed PostgreSQL service, as shown in the example that follows:

Name                             Description                                  Default
postgres_image                   Path of the image to pull                    postgres:12
postgres_resource_requirements   PostgreSQL container resource requirements   Empty object
postgres_storage_requirements    PostgreSQL container storage requirements    requests: {storage: 8Gi}
postgres_storage_class           PostgreSQL PV storage class                  Empty string
postgres_data_path               PostgreSQL data path                         /var/lib/postgresql/data/pgdata

---
spec:
  ...
  postgres_resource_requirements:
    requests:
      cpu: 500m
      memory: 2Gi
    limits:
      cpu: 1
      memory: 4Gi
  postgres_storage_requirements:
    requests:
      storage: 8Gi
    limits:
      storage: 50Gi
  postgres_storage_class: fast-ssd

Note: If postgres_storage_class is not defined, PostgreSQL stores its data on a volume using the default storage class for your cluster. (PV storage class refers to the storage class of the PersistentVolume backing the database.)
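To check which storage classes your cluster offers, and which one is marked as the default, you can run:

$ kubectl get storageclass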

Advanced options are available to further configure your controller image. Refer to the Advanced Configurations for AWX Operator section of this guide.

8.5. Upgrade the operator

To upgrade the controller, it is recommended to upgrade the awx-operator to the version that maps to the desired version of the controller. To find the version of the controller that will be installed by the awx-operator by default, check the version specified in the image_version variable in the roles/installer/defaults/main.yml file for that particular release.

Apply the awx-operator.yml for that release to upgrade the operator; the operator will in turn upgrade your controller deployment.
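For example, re-applying the release manifest (the same URL pattern used in the installation step above, with <TAG> replaced by the new release) upgrades the operator in place:

$ kubectl apply -f https://raw.githubusercontent.com/ansible/awx-operator/<TAG>/deploy/awx-operator.yaml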

8.6. Backup and restore operator

The operator uses custom resources to handle backups and restores for awx-operator deployments.

8.6.1. Backup role

The purpose of this role is to create a backup of your controller deployment, which includes:

  • custom deployment specific values in the spec section of the custom resource object

  • backup of the PostgreSQL database

  • secret_key, admin_password, and broadcast_websocket secrets

  • database configuration

This role assumes you are authenticated with an OpenShift or Kubernetes cluster:

  • The awx-operator has been deployed to the cluster

  • The controller is deployed via the operator

To backup the operator:

  1. Deploy the controller with the awx-operator (refer to the previous sections in this chapter for details).

  2. Create a file named backup-awx.yml with the following contents:

---
apiVersion: awx.ansible.com/v1beta1
kind: AWXBackup
metadata:
  name: awxbackup-2021-04-22
  namespace: my-namespace
spec:
  deployment_name: mytower

The deployment_name above is the name of the controller deployment you intend to back up. The namespace is the one containing the controller deployment that will be backed up.

  3. Use kubectl to create the backup object in your cluster:

$ kubectl apply -f backup-awx.yml

The resulting pvc will contain a backup tar that can be used to restore to a new deployment. Future backups will also be stored in separate tars on the same pvc.

8.6.1.1. Backup role variables

A custom, pre-created pvc can be used by setting the following variable.

backup_pvc: 'awx-backup-volume-claim'

If no pvc or storage class is provided, the cluster's default storage class will be used to create the pvc.

This role will automatically create a pvc using a storage class if provided:

backup_storage_class: 'standard'
backup_storage_requirements: '20Gi'

By default, the backup pvc will be created in the same namespace the awxbackup object is created in. If you want your backup to be stored in a specific namespace, you can do so by specifying backup_pvc_namespace. Keep in mind that you will need to provide the same namespace when restoring.

backup_pvc_namespace: 'custom-namespace'
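Putting these variables together, a backup object that uses a pre-created pvc in a custom namespace might look like the following sketch (all names are illustrative):

---
apiVersion: awx.ansible.com/v1beta1
kind: AWXBackup
metadata:
  name: awxbackup-custom
  namespace: my-namespace
spec:
  deployment_name: mytower
  backup_pvc: awx-backup-volume-claim
  backup_pvc_namespace: custom-namespace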

If a custom PostgreSQL configuration secret was used when deploying the controller, it will automatically be used by the backup role. To check the name of this secret, look at the postgresConfigurationSecret status on your controller object.

The deployment's PostgreSQL pod is used when backing up the data. If your PostgreSQL pod has a custom label, you can pass that via the postgres_label_selector variable to make sure the PostgreSQL pod can be found.

You can test this role directly by creating and running the following playbook with the appropriate variables:

---
- name: Backup AWX
  hosts: localhost
  gather_facts: false
  roles:
    - backup

8.6.2. Restore role

The purpose of this role is to restore your controller deployment from an existing PVC backup. The backup includes:

  • custom deployment specific values in the spec section of the custom resource object

  • backup of the PostgreSQL database

  • secret_key, admin_password, and broadcast_websocket secrets

  • database configuration

This role assumes you are authenticated with an OpenShift or Kubernetes cluster:

  • The awx-operator has been deployed to the cluster

  • The controller is deployed via the operator

  • An AWX backup is available on a PVC in your cluster (see Backup role)

To restore the operator:

  1. Create a file named restore-awx.yml with the following contents:

---
apiVersion: awx.ansible.com/v1beta1
kind: AWXRestore
metadata:
  name: restore1
  namespace: my-namespace
spec:
  deployment_name: mytower
  backup_name: awxbackup-2021-04-22
  backup_pvc_namespace: 'old-awx-namespace'

The deployment_name above is the name of the controller deployment you intend to create and restore to. The namespace specified is the namespace the resulting controller deployment will be in.

  2. The namespace you specified must be pre-created:

kubectl create ns my-namespace
  3. Use kubectl to create the restore object in your cluster:

$ kubectl apply -f restore-awx.yml

This will create a new deployment and restore your backup to it.

Warning

The value in the admin_password_secret will replace the password for the user specified by admin_user (by default, admin).

8.6.2.1. Restore role variables

The name of the backup directory can be found as a status on your AWXBackup object. This can be found in your cluster's console, or with the client as shown below:

$ kubectl get awxbackup awxbackup1 -o jsonpath="{.status.backupDirectory}"
/backups/tower-openshift-backup-2021-04-02-03:25:08

backup_dir: '/backups/tower-openshift-backup-2021-04-02-03:25:08'

The name of the PVC can also be found by looking at the backup object:

$ kubectl get awxbackup awxbackup1 -o jsonpath="{.status.backupClaim}"
awx-backup-volume-claim

backup_pvc: 'awx-backup-volume-claim'

By default, the backup pvc is created in the same namespace as the awxbackup object. If the backup was stored in a different namespace, that namespace must be specified using the backup_pvc_namespace variable:

backup_pvc_namespace: 'custom-namespace'

If a custom PostgreSQL configuration secret was used when deploying the controller, it must be set:

postgres_configuration_secret: 'awx-postgres-configuration'

If the awxbackup object no longer exists, it is still possible to restore from the backup it created by specifying the pvc name and the backup directory:

backup_pvc: myoldtower-backup-claim
backup_dir: /backups/tower-openshift-backup-2021-04-02-03:25:08
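Put together, a restore object for this scenario might look like the following sketch:

---
apiVersion: awx.ansible.com/v1beta1
kind: AWXRestore
metadata:
  name: restore-from-pvc
  namespace: my-namespace
spec:
  deployment_name: mytower
  backup_pvc: myoldtower-backup-claim
  backup_dir: /backups/tower-openshift-backup-2021-04-02-03:25:08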

You can test this role directly by creating and running the following playbook with the appropriate variables:

---
- name: Restore AWX
  hosts: localhost
  gather_facts: false
  roles:
    - restore