How to validate OpenShift using Ansible

You might have seen my previous article about the OpenShift troubleshooting guide. With this blog post I want to show how to validate that an OpenShift Container Platform is fully functional. This is less of an issue when you do a fresh install, but it becomes more important when you apply changes or do in-place upgrades of your cluster. And because we all like automation, I will show how to do this with Ansible in an automated way.

Let’s jump right into it and look at the different steps. Here are the links to the Ansible role and playbook for more details:

  • Prepare workspace – create temp directories and copy the admin.kubeconfig there.
  • Check node state – run a command to check for NotReady nodes:
  • oc get nodes --no-headers=true | grep -v ' Ready' || true
    
  • Check node scheduling – run a command to check for nodes where scheduling is disabled:
  • oc get nodes --no-headers=true | grep 'SchedulingDisabled' | awk '{ print $1 }'
    
  • Check master certificates – check the validity of the master API, controller and etcd certificates:
  • cat /etc/origin/master/ca.crt | openssl x509 -text | grep -i Validity -A2
    cat /etc/origin/master/master.server.crt | openssl x509 -text | grep -i Validity -A2
    cat /etc/origin/master/admin.crt | openssl x509 -text | grep -i Validity -A2
    cat /etc/etcd/ca.crt | openssl x509 -text | grep -i Validity -A2
    
  • Check node certificates – check the validity of the worker node certificate. This needs to run on all compute and infra nodes, not the masters:
  • cat /etc/origin/node/server.crt | openssl x509 -text | grep -i Validity -A2
    
  • Check etcd health – run the etcd cluster health check:
  • /usr/bin/etcdctl --cert-file /etc/etcd/peer.crt --key-file /etc/etcd/peer.key --ca-file /etc/etcd/ca.crt -C https://{{ hostname.stdout }}:2379 cluster-health
    
  • Check important default projects for failed pods – look for pods that are not Running or Completed:
  • oc get pods -o wide --no-headers=true -n {{ item }} | grep -v " Running\|Completed" || true
    
  • Check registry health – send a GET request to the docker-registry healthz path and expect HTTP 200 (see the shell sketch after this list):
  • curl -kv https://{{ registry_ip.stdout }}/healthz
    
  • Check SkyDNS resolution – try to resolve internal service hostnames:
  • nslookup docker-registry.default.svc.cluster.local
    nslookup docker-registry.default.svc
    
  • Check upstream DNS resolution – try to resolve cluster-external DNS names.
  • Create test project
  • Run persistent volume test – create a busybox container and claim a persistent volume:
    1. apply imagestream, deploymentconfig and persistent volume claim configuration
    2. synchronise testfile to container pv
    3. check content of testfile
    4. delete testfile
  • Run test build – run the following steps to create multiple application pods on the OpenShift cluster:
    1. apply buildconfig, imagestream, deploymentconfig, service and route configuration
    2. check pods are running
    3. get route hostnames
    4. connect to routes and show output
    5. trigger new-build and check new build is created
    6. check if pods are running
    7. connect to routes and show output
  • Delete test project
  • Delete workspace folder – at the end of the validation the role deletes the temporary folder and all its contents.
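
The registry health step is worth a closer look because it asserts an HTTP 200 response. Here is a minimal shell sketch of the same assertion outside of Ansible (the jsonpath lookup and the port are assumptions; the role derives registry_ip in its own way):

REGISTRY_IP=$(oc get svc docker-registry -n default -o jsonpath='{.spec.clusterIP}')
# 5000 is the default docker-registry service port in OpenShift 3.x; adjust if yours differs.
HTTP_CODE=$(curl -k -s -o /dev/null -w '%{http_code}' "https://${REGISTRY_IP}:5000/healthz")
# Fail loudly when the registry does not answer with HTTP 200.
if [ "${HTTP_CODE}" != "200" ]; then
  echo "registry health check failed: got HTTP ${HTTP_CODE}" >&2
  exit 1
fi
echo "registry is healthy"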

Next we run the playbook; the output is shown below:
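
A minimal invocation could look like this (the playbook and inventory names are assumptions; adjust them to your repository layout):

ansible-playbook -i ansible-hosts check.yml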

PLAY [Check OpenShift cluster installation] ***********************************************************************************************************************

TASK [check : make temp directory] ********************************************************************************************************************************
changed: [master1]

TASK [check : create temp directory] ******************************************************************************************************************************
ok: [master1]

TASK [check : create template folder] *****************************************************************************************************************************
changed: [master1]

TASK [check : create test file folder] ****************************************************************************************************************************
changed: [master1]

TASK [check : get hostname] ***************************************************************************************************************************************
changed: [master1]

TASK [check : copy admin config] **********************************************************************************************************************************
ok: [master1]

TASK [check : check for not ready nodes] **************************************************************************************************************************
changed: [master1]

TASK [check : not ready nodes] ************************************************************************************************************************************
ok: [master1] => {
    "notready.stdout_lines": []
}

TASK [check : check for scheduling disabled nodes] ****************************************************************************************************************
changed: [master1]

TASK [check : scheduling disabled nodes] **************************************************************************************************************************
ok: [master1] => {
    "schedulingdisabled.stdout_lines": []
}

TASK [check : validate master certificates] ***********************************************************************************************************************
changed: [master1] => (item=cat /etc/origin/master/ca.crt | openssl x509 -text | grep -i Validity -A2)
changed: [master1] => (item=cat /etc/origin/master/master.server.crt | openssl x509 -text | grep -i Validity -A2)
changed: [master1] => (item=cat /etc/origin/master/admin.crt | openssl x509 -text | grep -i Validity -A2)
changed: [master1] => (item=cat /etc/etcd/ca.crt | openssl x509 -text | grep -i Validity -A2)

TASK [check : ca certificate] *************************************************************************************************************************************
ok: [master1] => {
    "msg": [
        [
            "        Validity",
            "            Not Before: Jan 31 17:13:52 2019 GMT",
            "            Not After : Jan 30 17:13:53 2024 GMT"
        ],
        [
            "        Validity",
            "            Not Before: Jan 31 17:13:53 2019 GMT",
            "            Not After : Jan 30 17:13:54 2021 GMT"
        ],
        [
            "        Validity",
            "            Not Before: Jan 31 17:13:53 2019 GMT",
            "            Not After : Jan 30 17:13:54 2021 GMT"
        ],
        [
            "        Validity",
            "            Not Before: Jan 31 17:11:09 2019 GMT",
            "            Not After : Jan 30 17:11:09 2024 GMT"
        ]
    ]
}

TASK [check : check etcd state] ***********************************************************************************************************************************
changed: [master1]

TASK [check : show etcd state] ************************************************************************************************************************************
ok: [master1] => {
    "etcdstate.stdout_lines": [
        "member 335450512aab5650 is healthy: got healthy result from https://172.26.7.132:2379",
        "cluster is healthy"
    ]
}

TASK [check : check default openshift-infra and logging projects for failed pods] *********************************************************************************
ok: [master1] => (item=default)
ok: [master1] => (item=kube-system)
ok: [master1] => (item=kube-service-catalog)
ok: [master1] => (item=openshift-logging)
ok: [master1] => (item=openshift-infra)
ok: [master1] => (item=openshift-console)
ok: [master1] => (item=openshift-web-console)
ok: [master1] => (item=openshift-monitoring)
ok: [master1] => (item=openshift-node)
ok: [master1] => (item=openshift-sdn)

TASK [check : failed failedpods] **********************************************************************************************************************************
ok: [master1] => {
    "msg": [
        [],
        [],
        [],
        [],
        [],
        [],
        [],
        [],
        [],
        []
    ]
}

TASK [check : get container registry ip] **************************************************************************************************************************
changed: [master1]

TASK [check : check container registry health] ********************************************************************************************************************
ok: [master1]

TASK [check : check internal SkyDNS resolution for cluster.local] *************************************************************************************************
changed: [master1] => (item=docker-registry.default.svc.cluster.local)
changed: [master1] => (item=docker-registry.default.svc)

TASK [check : check external DNS upstream resolution] *************************************************************************************************************
changed: [master1] => (item=www.google.com)
changed: [master1] => (item=www.google.co.uk)
changed: [master1] => (item=www.google.de)

TASK [check : create test project] ********************************************************************************************************************************
changed: [master1]

TASK [check : run test persistent volume] *************************************************************************************************************************
included: /var/jenkins_home/workspace/openshift/ansible/roles/check/tasks/pv.yml for master1

TASK [check : create sequence number list for pv] *****************************************************************************************************************
ok: [master1] => (item=1)
ok: [master1] => (item=2)
ok: [master1] => (item=3)
ok: [master1] => (item=4)
ok: [master1] => (item=5)

TASK [check : copy build templates] *******************************************************************************************************************************
changed: [master1] => (item={u'dest': u'busybox.yml', u'src': u'busybox.j2'})
changed: [master1] => (item={u'dest': u'pv.yml', u'src': u'pv.j2'})

TASK [check : copy testfile] **************************************************************************************************************************************
changed: [master1]

TASK [check : create pvs] *****************************************************************************************************************************************
ok: [master1]

TASK [check : deploy busybox pod] *********************************************************************************************************************************
changed: [master1]

TASK [check : check if all pods are running] **********************************************************************************************************************
FAILED - RETRYING: check if all pods are running (10 retries left).
changed: [master1] => (item=busybox)

TASK [check : get busybox pod name] *******************************************************************************************************************************
changed: [master1]

TASK [check : sync testfile to pod] *******************************************************************************************************************************
changed: [master1]

TASK [check : check testfile in pv] *******************************************************************************************************************************
changed: [master1]

TASK [check : delete testfile in pv] ******************************************************************************************************************************
changed: [master1]

TASK [check : run test build] *************************************************************************************************************************************
included: /var/jenkins_home/workspace/openshift/ansible/roles/check/tasks/build.yml for master1

TASK [check : create sequence number list for hello openshift] ****************************************************************************************************
ok: [master1] => (item=0)
ok: [master1] => (item=1)
ok: [master1] => (item=2)
ok: [master1] => (item=3)
ok: [master1] => (item=4)
ok: [master1] => (item=5)
ok: [master1] => (item=6)
ok: [master1] => (item=7)
ok: [master1] => (item=8)
ok: [master1] => (item=9)

TASK [check : create pod list] ************************************************************************************************************************************
ok: [master1] => (item=[u'0', {u'svc': u'http'}])
ok: [master1] => (item=[u'1', {u'svc': u'http'}])
ok: [master1] => (item=[u'2', {u'svc': u'http'}])
ok: [master1] => (item=[u'3', {u'svc': u'http'}])
ok: [master1] => (item=[u'4', {u'svc': u'http'}])
ok: [master1] => (item=[u'5', {u'svc': u'http'}])
ok: [master1] => (item=[u'6', {u'svc': u'http'}])
ok: [master1] => (item=[u'7', {u'svc': u'http'}])
ok: [master1] => (item=[u'8', {u'svc': u'http'}])
ok: [master1] => (item=[u'9', {u'svc': u'http'}])

TASK [check : copy build templates] *******************************************************************************************************************************
changed: [master1] => (item={u'dest': u'hello-openshift.yml', u'src': u'hello-openshift.j2'})

TASK [check : deploy hello openshift pods] ************************************************************************************************************************
changed: [master1]

TASK [check : check if all pods are running] **********************************************************************************************************************
FAILED - RETRYING: check if all pods are running (10 retries left).
changed: [master1] => (item={u'name': u'hello-http-0'})
FAILED - RETRYING: check if all pods are running (10 retries left).
changed: [master1] => (item={u'name': u'hello-http-1'})
changed: [master1] => (item={u'name': u'hello-http-2'})
changed: [master1] => (item={u'name': u'hello-http-3'})
changed: [master1] => (item={u'name': u'hello-http-4'})
changed: [master1] => (item={u'name': u'hello-http-5'})
changed: [master1] => (item={u'name': u'hello-http-6'})
changed: [master1] => (item={u'name': u'hello-http-7'})
changed: [master1] => (item={u'name': u'hello-http-8'})
changed: [master1] => (item={u'name': u'hello-http-9'})

TASK [check : get hello openshift pod hostnames] ******************************************************************************************************************
changed: [master1]

TASK [check : convert check_route string to json] *****************************************************************************************************************
ok: [master1]

TASK [check : set query to get pod hostname] **********************************************************************************************************************
ok: [master1]

TASK [check : get hostname list] **********************************************************************************************************************************
ok: [master1]

TASK [check : connect to route via curl] **************************************************************************************************************************
changed: [master1] => (item=hello-http-0-test.paas.domain.com)
changed: [master1] => (item=hello-http-1-test.paas.domain.com)
changed: [master1] => (item=hello-http-2-test.paas.domain.com)
changed: [master1] => (item=hello-http-3-test.paas.domain.com)
changed: [master1] => (item=hello-http-4-test.paas.domain.com)
changed: [master1] => (item=hello-http-5-test.paas.domain.com)
changed: [master1] => (item=hello-http-6-test.paas.domain.com)
changed: [master1] => (item=hello-http-7-test.paas.domain.com)
changed: [master1] => (item=hello-http-8-test.paas.domain.com)
changed: [master1] => (item=hello-http-9-test.paas.domain.com)

TASK [check : set json query] *************************************************************************************************************************************
ok: [master1]

TASK [check : show route http response] ***************************************************************************************************************************
ok: [master1] => {
    "msg": [
        "hello-http-0",
        "hello-http-1",
        "hello-http-2",
        "hello-http-3",
        "hello-http-4",
        "hello-http-5",
        "hello-http-6",
        "hello-http-7",
        "hello-http-8",
        "hello-http-9"
    ]
}

TASK [check : trigger new build] **********************************************************************************************************************************
changed: [master1]

TASK [check : check new build is created] *************************************************************************************************************************
changed: [master1]

TASK [check : check if all pods with new build are running] *******************************************************************************************************
FAILED - RETRYING: check if all pods with new build are running (10 retries left).
FAILED - RETRYING: check if all pods with new build are running (9 retries left).
changed: [master1] => (item={u'name': u'hello-http-0'})
changed: [master1] => (item={u'name': u'hello-http-1'})
changed: [master1] => (item={u'name': u'hello-http-2'})
changed: [master1] => (item={u'name': u'hello-http-3'})
changed: [master1] => (item={u'name': u'hello-http-4'})
changed: [master1] => (item={u'name': u'hello-http-5'})
changed: [master1] => (item={u'name': u'hello-http-6'})
changed: [master1] => (item={u'name': u'hello-http-7'})
changed: [master1] => (item={u'name': u'hello-http-8'})
changed: [master1] => (item={u'name': u'hello-http-9'})

TASK [check : connect to route via curl] **************************************************************************************************************************
changed: [master1] => (item=hello-http-0-test.paas.domain.com)
changed: [master1] => (item=hello-http-1-test.paas.domain.com)
changed: [master1] => (item=hello-http-2-test.paas.domain.com)
changed: [master1] => (item=hello-http-3-test.paas.domain.com)
changed: [master1] => (item=hello-http-4-test.paas.domain.com)
changed: [master1] => (item=hello-http-5-test.paas.domain.com)
changed: [master1] => (item=hello-http-6-test.paas.domain.com)
changed: [master1] => (item=hello-http-7-test.paas.domain.com)
changed: [master1] => (item=hello-http-8-test.paas.domain.com)
changed: [master1] => (item=hello-http-9-test.paas.domain.com)

TASK [check : show route http response] ***************************************************************************************************************************
ok: [master1] => {
    "msg": [
        "hello-http-0",
        "hello-http-1",
        "hello-http-2",
        "hello-http-3",
        "hello-http-4",
        "hello-http-5",
        "hello-http-6",
        "hello-http-7",
        "hello-http-8",
        "hello-http-9"
    ]
}

TASK [check : delete test project] ********************************************************************************************************************************
changed: [master1]

TASK [check : delete temp directory] ******************************************************************************************************************************
ok: [master1]

PLAY RECAP ********************************************************************************************************************************************************
master1                : ok=52   changed=30   unreachable=0    failed=0

The cluster validation playbook finished without errors; this is just a simple way to do a basic check of your OpenShift platform. Check out the OpenShift documentation about environment health checks.

Deploy OpenShift 3.11 Container Platform on Google Cloud Platform using Terraform

Over the past few days I have converted the OpenShift 3.11 infrastructure on Amazon AWS to run on Google Cloud Platform. I have kept a similar VPC network layout and instances to run OpenShift.

Before you start you need to create a project on Google Cloud Platform, then create the service account, generate the private key and download the credentials as a JSON file.

In the GCP console, create the new project and the service account, give the service account the compute admin and storage object creator permissions, then create a storage bucket for the Terraform backend state and assign the correct bucket permission to the terraform service account.
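
If you prefer the command line over the console, the same preparation can be sketched with the gcloud and gsutil CLIs (a hedged sketch; the project, bucket and service account names are placeholders, and the roles mirror the permissions mentioned above):

gcloud projects create <--your-project-->
gcloud iam service-accounts create terraform --display-name "terraform" --project <--your-project-->
# Grant compute admin and storage object creator on the project.
gcloud projects add-iam-policy-binding <--your-project--> --member serviceAccount:terraform@<--your-project-->.iam.gserviceaccount.com --role roles/compute.admin
gcloud projects add-iam-policy-binding <--your-project--> --member serviceAccount:terraform@<--your-project-->.iam.gserviceaccount.com --role roles/storage.objectCreator
# Download the private key as credentials.json for Terraform.
gcloud iam service-accounts keys create credentials.json --iam-account terraform@<--your-project-->.iam.gserviceaccount.com
# Create the backend state bucket and give the service account access to it.
gsutil mb -p <--your-project--> -l europe-west3 gs://<--your-bucket-name-->/
gsutil iam ch serviceAccount:terraform@<--your-project-->.iam.gserviceaccount.com:objectAdmin gs://<--your-bucket-name-->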

To start, clone my openshift-terraform GitHub repository and check out the google-dev branch:

git clone https://github.com/berndonline/openshift-terraform.git
cd ./openshift-terraform/ && git checkout google-dev

Add your previously downloaded credentials JSON file (keep the private_key value on a single line with \n escapes, as in the downloaded file):

cat << EOF > ./credentials.json
{
  "type": "service_account",
  "project_id": "<--your-project-->",
  "private_key_id": "<--your-key-id-->",
  "private_key": "-----BEGIN PRIVATE KEY-----

...

}
EOF

There are a few things you need to modify in the main.tf and variables.tf before you can start:

...
terraform {
  backend "gcs" {
    bucket    = "<--your-bucket-name-->"
    prefix    = "openshift-311"
    credentials = "credentials.json"
  }
}
...
...
variable "gcp_region" {
  description = "Google Compute Platform region to launch servers."
  default     = "europe-west3"
}
variable "gcp_project" {
  description = "Google Compute Platform project name."
  default     = "<--your-project-name-->"
}
variable "gcp_zone" {
  type = "string"
  default = "europe-west3-a"
  description = "The zone to provision into"
}
...

Add the needed environment variables to apply changes to CloudFlare DNS:

export TF_VAR_email='<-YOUR-CLOUDFLARE-EMAIL-ADDRESS->'
export TF_VAR_token='<-YOUR-CLOUDFLARE-TOKEN->'
export TF_VAR_domain='<-YOUR-CLOUDFLARE-DOMAIN->'
export TF_VAR_htpasswd='<-YOUR-OPENSHIFT-DEMO-USER-HTPASSWD->'

Let’s create the infrastructure and afterwards verify the created resources on GCP.

terraform init && terraform apply -auto-approve

Verify the created resources: the VPC with public and private subnets in region europe-west3, the instances, and the load balancers for the master and infra nodes.

Copy the ssh key and the ansible-hosts file to the bastion host, from where you need to run the Ansible OpenShift playbooks.

scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -r ./helper_scripts/id_rsa centos@$(terraform output bastion):/home/centos/.ssh/
scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -r ./inventory/ansible-hosts centos@$(terraform output bastion):/home/centos/ansible-hosts

I recommend waiting a few minutes as the cloud-init script prepares the bastion host. Afterwards continue with the pre and install playbooks. You can connect to the bastion host and run the playbooks directly.

ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -A -l centos $(terraform output bastion) "cd /openshift-ansible/ && ansible-playbook ./playbooks/openshift-pre.yml -i ~/ansible-hosts"
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -A -l centos $(terraform output bastion) "cd /openshift-ansible/ && ansible-playbook ./playbooks/openshift-install.yml -i ~/ansible-hosts"

After the installation is completed, continue to create your project and applications.
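
For example, a hedged sketch (the login URL, user and application names are placeholders; hello-openshift mirrors the test image used in the validation article):

oc login https://<--your-master-lb-->:8443 -u <--your-demo-user-->
oc new-project test-project
# Deploy the hello-openshift example image and expose it via a route.
oc new-app --name hello-openshift openshift/hello-openshift
oc expose svc/hello-openshift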

When you are finished with the testing, run terraform destroy.

terraform destroy -force 

Please share your feedback and leave a comment.

Part three: Ansible URI module and PUT or POST

This will be the last part of my short series on the Ansible URI module, and this time I will explain and show examples of when to use PUT or POST when interacting with REST APIs. I make use of the json_query filter, which I explained in my previous article.

What is the difference between POST and PUT?

  • PUT – The PUT method is idempotent and needs the universally unique identifier (UUID) to update an API object. Example: PUT /api/service/{{ object-uuid }}. The HTTP return code is 200.

  • POST – The POST method is not idempotent and is used to create an API object; a unique identifier is not needed because the UUID is generated server-side. Example: POST /api/service/. The HTTP return code is 201.
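
To make the difference concrete, here is a hedged curl sketch against the Avi controller API used below (the controller hostname, credentials and payload file are placeholders; the X-Avi-Version header matches the api_version variable from the vars file):

# POST creates the object; the controller generates the UUID and answers with HTTP 201.
curl -k -u <--user-->:<--password--> -H "X-Avi-Version: 17.2.13" -H "Content-Type: application/json" -X POST -d @cloud.json https://<--controller-->/api/cloud/
# PUT updates the existing object addressed by its UUID and answers with HTTP 200.
curl -k -u <--user-->:<--password--> -H "X-Avi-Version: 17.2.13" -H "Content-Type: application/json" -X PUT -d @cloud.json https://<--controller-->/api/cloud/<--object-uuid-->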

I am again using the example of AVI Networks software load balancers and their REST API.

---
password: 123
api_version: 17.2.13
openshift:
  name: openshift-cloud-provider
openshift_cloud_json: "{{ lookup('template','openshift_cloud_json.j2') }}"

(Optional) Set the ansible_host variable to an IP address. I have had issues in the past using the DNS name, so the task below overrides the variable with the IP address of the host:

- block:
  - name: Resolve hostname
    shell: dig +short "{{ ansible_host }}"
    changed_when: false
    register: dig_output
  
  - name: Set ansible_host to IP address
    set_fact:
      ansible_host: "{{ dig_output.stdout }}"
  when: ( inventory_hostname == groups["controller"][0] )

Let’s start by creating an object using POST and afterwards updating the existing object using PUT. The problem with POST is that it is not idempotent, so we need to check whether the object exists before creating it; creating the same object twice could be an issue:

- block: 
  - name: Avi | OpenShift | Check cloud config
    uri:
      url: "https://{{ ansible_host }}/api/cloud/?name={{ openshift.name }}" 
      method: GET 
      user: "{{ username }}" 
      password: "{{ password }}" 
      return_content: yes 
      body_format: json 
      force_basic_auth: yes 
      validate_certs: false 
      status_code: 200 
      timeout: 180 
      headers:
        X-Avi-Version: "{{ api_version }}" 
    register: check

  - name: Avi | OpenShift | Create cloud config
    uri:
      url: "https://{{ ansible_host }}/api/cloud/" 
      method: POST 
      user: "{{ username }}" 
      password: "{{ password }}" 
      return_content: yes 
      body: "{{ openshift_cloud_json }}" 
      body_format: json 
      force_basic_auth: yes 
      validate_certs: false
      status_code: 201 
      timeout: 180 
      headers:
        X-Avi-Version: "{{ api_version }}"
    when: check.json.count == 0 
  when: ( inventory_hostname == groups["controller"][0] ) and update_config is undefined

Let’s continue with the example and use PUT to update the configuration of an existing object. To do this you need to define an extra variable update_config=true for the tasks below to be executed:

- block: 
  - name: Avi | OpenShift | Check cloud config
    uri:
      url: "https://{{ ansible_host }}/api/cloud/" 
      method: GET 
      user: "{{ username }}" 
      password: "{{ password }}" 
      return_content: yes 
      body_format: json 
      force_basic_auth: yes 
      validate_certs: false 
      status_code: 200 
      timeout: 180 
      headers:
        X-Avi-Version: "{{ api_version }}" 
    register: check

  - name: Avi | Set_fact for OpenShift name 
    set_fact:
      openshift_cloud_name: "[?name=='{{ openshift.name }}'].uuid"
      
  - name: Avi | Set_fact for OpenShift uuid
    set_fact:
      openshift_cloud_uuid: "{{ check.json.results | json_query(openshift_cloud_name) }}"
      
  - name: Avi | OpenShift | Update cloud config
    uri:
      url: "https://{{ ansible_host }}/api/cloud/{{ openshift_cloud_uuid [0] }}" 
      method: PUT 
      user: "{{ username }}" 
      password: "{{ password }}" 
      return_content: yes 
      body: "{{ openshift_cloud_json }}" 
      body_format: json 
      force_basic_auth: yes 
      validate_certs: false 
      status_code: 200 
      timeout: 180 
      headers:
        X-Avi-Version: "{{ api_version }}" 
    when: ( inventory_hostname == groups["controller"][0] ) and update_config is defined

Here you can find the links to the other articles about the Ansible URI module:

Please share your feedback and leave a comment.

Part two: Ansible URI module and json_query filter

In my previous article I explained how to use the Ansible URI module and the Jinja2 template engine to generate JSON content. In part two I want to explain how to use the json_query filter. I will again use the example of AVI Networks load balancers, but this works with any device that has a REST API.

First we need to get the output of two objects; for both we don’t know the UUIDs, so the first two tasks collect the configuration from the API using GET and register the output:

- block:
  - name: Avi | Get OpenShift cloud configuration
    uri:
      url: "https://{{ ansible_host }}/api/cloud/"
      method: GET
      user: "{{ avi_username }}"
      password: "{{ avi_password }}"
      return_content: yes
      force_basic_auth: yes
      validate_certs: false
      status_code: 200
      timeout: 180
      headers:
        X-Avi-Version: "{{ api_version }}"
    register: openshift_cloud 
   
  - name: Avi | Get OpenShift Service Engine group
    uri:
      url: "https://{{ ansible_host }}/api/serviceenginegroup/"
      method: GET
      user: "{{ avi_username }}"
      password: "{{ avi_password }}"
      return_content: yes
      force_basic_auth: yes
      validate_certs: false
      status_code: 200
      timeout: 180
      headers:
        X-Avi-Version: "{{ api_version }}"
    register: openshift_segroup
  when: '( inventory_hostname == groups["controller"][0] )'

The two variables openshift_cloud and openshift_segroup contain JSON content with all configuration details. For the OpenShift cloud object I don’t know the UUID; the only reference is the object name “OpenShift Cloud”, which I know because I had previously created the object. I am using the Ansible set_fact module to specify the query and write the output into a new variable openshift_cloud_uuid:

- block:
  - name: Avi | set_fact for OpenShift cloud query
    set_fact:
      openshift_cloud_query: "[?name=='OpenShift Cloud'].uuid"
  
  - name: Avi | set_fact for OpenShift UUID
    set_fact:
      openshift_cloud_uuid: "{{ openshift_cloud.json.results | json_query(openshift_cloud_query) }}"
  when: '( inventory_hostname == groups["controller"][0] )' 

We now have the openshift_cloud_uuid of the OpenShift cloud configuration, so let’s continue with the second object, the Service Engine group, which is trickier because I know neither the UUID nor the object name. The Service Engine group was automatically set up in the background when the OpenShift cloud object was created, but I know the reference to the OpenShift cloud object, so I use the json_query filter and set_fact again:

- block:
  - name: Avi | set_fact for Service Engine group query
    set_fact:
      openshift_segroup_query: "[?cloud_ref=='https://{{ ansible_host }}/api/cloud/{{ openshift_cloud_uuid[0] }}'].uuid"
  
  - name: Avi | set_fact for Service Engine group UUID
    set_fact:
      openshift_segroup_uuid: "{{ openshift_segroup.json.results | json_query(openshift_segroup_query) }}"
  when: '( inventory_hostname == groups["controller"][0] )'

Right now we know the openshift_cloud_uuid and the openshift_segroup_uuid, and we use them to load a new Jinja2 template to update the Service Engine group object. Below is the Jinja2 template openshift_segroup_json.j2:

{
  ...
  "name": "Default-Group",
  "tenant_ref": "https://{{ ansible_host }}/api/tenant/admin",
  "cloud_ref": "https://{{ ansible_host }}/api/cloud/{{ openshift_cloud_uuid[0] }}",
  ...
  YOUR CHANGES
  ...
}

The last part of this exercise is to load the j2 template and push the JSON content to the API to update the object using PUT:

- block:
  - name: Avi | set_fact to load Service Engine group json template
    set_fact:
      openshift_segroup_json: "{{ lookup('template', 'openshift_segroup_json.j2') }}"
  
  - name: Avi | Update OpenShift Service Engine group configuration
    uri:
      url: "https://{{ ansible_host }}/api/serviceenginegroup/{{ openshift_segroup_uuid[0] }}"
      method: PUT
      user: "{{ avi_username }}"
      password: "{{ avi_password }}"
      return_content: yes
      force_basic_auth: yes
      validate_certs: false
      body: "{{ openshift_segroup_json }}"
      body_format: json
      status_code: 200
      timeout: 180
      headers:
        X-Avi-Version: "{{ api_version }}"
  when: '( inventory_hostname == groups["controller"][0] )'

I hope this article is helpful on how to use the Ansible URI module and the json_query filter to extract information and update an API object. Please share your feedback and leave a comment.

Here you can find the links to the other articles about the Ansible URI module:

Build Ansible Tower Container

After creating my Jenkins container I thought it would be fun to run Ansible Tower in a container, so I created a simple Dockerfile. First you need to find out the latest Ansible Tower version from https://releases.ansible.com/ansible-tower/setup/ and update the version variable in the Dockerfile.

Here is my Dockerfile:

...
ARG ANSIBLE_TOWER_VER=3.3.1-1
...
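
Since the version is a build argument, you can also override it at build time without editing the Dockerfile:

docker build --build-arg ANSIBLE_TOWER_VER=3.3.1-1 -t berndonline/ansible-tower .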

The passwords can be changed in the inventory file:

...
[all:vars]
admin_password='<-your-password->'
...
pg_password='<-your-password->'
...
rabbitmq_password='<-your-password->'
...

Let’s start by building the container:

git clone https://github.com/berndonline/ansible-tower-docker.git && cd ansible-tower-docker/
docker build -t berndonline/ansible-tower .

The docker build will take a few minutes; just wait and watch for errors in the build output:

$ git clone https://github.com/berndonline/ansible-tower-docker.git
Cloning into 'ansible-tower-docker'...
remote: Enumerating objects: 17, done.
remote: Counting objects: 100% (17/17), done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 17 (delta 4), reused 14 (delta 4), pack-reused 0
Unpacking objects: 100% (17/17), done.
$ cd ansible-tower-docker/
$ docker build -t berndonline/ansible-tower .
Sending build context to Docker daemon  87.04kB
Step 1/31 : FROM ubuntu:16.04
16.04: Pulling from library/ubuntu
7b8b6451c85f: Pull complete
ab4d1096d9ba: Pull complete
e6797d1788ac: Pull complete
e25c5c290bde: Pull complete
Digest: sha256:e547ecaba7d078800c358082088e6cc710c3affd1b975601792ec701c80cdd39
Status: Downloaded newer image for ubuntu:16.04
 ---> a51debf7e1eb
Step 2/31 : USER root
 ---> Running in cf5d606130cc
Removing intermediate container cf5d606130cc
 ---> d5b11ed84885
Step 3/31 : WORKDIR /opt
 ---> Running in 1e6703cec6db
Removing intermediate container 1e6703cec6db
 ---> 045cf04ebc1d
Step 4/31 : ARG ANSIBLE_TOWER_VER=3.3.1-1
 ---> Running in 6d65bfe370d4
Removing intermediate container 6d65bfe370d4
 ---> d75c246c3a5c
Step 5/31 : ARG PG_DATA=/var/lib/postgresql/9.6/main
 ---> Running in e8856051aa92
Removing intermediate container e8856051aa92
 ---> 02e6d7593df8

...

PLAY [Install Tower isolated node(s)] ******************************************
skipping: no hosts matched

PLAY RECAP *********************************************************************
localhost                  : ok=125  changed=64   unreachable=0    failed=0

The setup process completed successfully.
Setup log saved to /var/log/tower/setup-2018-11-21-20:21:37.log
Removing intermediate container ad6401292444
 ---> 8f1eb28f16cb
Step 27/31 : ADD entrypoint.sh /entrypoint.sh
 ---> 8503e666ce9c
Step 28/31 : RUN chmod +x /entrypoint.sh
 ---> Running in 8b5ca24a320a
Removing intermediate container 8b5ca24a320a
 ---> 60810dc2a4e3
Step 29/31 : VOLUME ["${PG_DATA}", "${AWX_PROJECTS}","/certs"]
 ---> Running in d836e5455bd5
Removing intermediate container d836e5455bd5
 ---> 3968430a1814
Step 30/31 : EXPOSE 80
 ---> Running in 9a72815e365b
Removing intermediate container 9a72815e365b
 ---> 3613ced2a80c
Step 31/31 : ENTRYPOINT ["/entrypoint.sh", "ansible-tower"]
 ---> Running in 4611a90aff1a
Removing intermediate container 4611a90aff1a
 ---> ce89ea0753d4
Successfully built ce89ea0753d4
Successfully tagged berndonline/ansible-tower:latest

Continue to create a Docker Volume container to store the Postgres database:

sudo docker create -v /var/lib/postgresql/9.6/main --name tower-data berndonline/ansible-tower /bin/true

Start the Ansible Tower Docker container:

sudo docker run -d -p 32456:80 --volumes-from tower-data --name ansible-tower --privileged --restart unless-stopped berndonline/ansible-tower

Afterwards you can connect to http://<your-ip-address>:32456/ and import your Tower license. Ansible provides a free 10-node license which you can request here: https://www.ansible.com/license.

The Ansible Tower playbook installs an Nginx reverse proxy; you can enable SSL by setting the variable nginx_disable_https to false in the inventory file and publishing the container via 443 instead of 80.
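
A hedged example of the HTTPS variant, assuming nginx inside the container then listens on 443 and your certificates are mounted into the /certs volume defined in the Dockerfile:

sudo docker run -d -p 443:443 -v /path/to/certs:/certs --volumes-from tower-data --name ansible-tower --privileged --restart unless-stopped berndonline/ansible-tower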

Please share your feedback and leave a comment.