How to delegate Ansible host variables with set_fact

I ran into an interesting issue when making a service account token on OpenShift accessible to another group of nodes while running a playbook. When you run an oc command and register the output, you face the issue that the registered variable is stored under the hostvars of the node name.

Normally you can access hostvars from other nodes as shown below:

"{{ hostvars['hostname']['variable-name'] }}"

I came up with something different and more flexible: instead of accessing hostvars['hostname']['variable-name'], I delegate the variable to a group of nodes and make it more easily accessible there:

---
- hosts: avi-controller:masters
  gather_facts: false

  pre_tasks:
    - block:
      - name: Get OpenShift token
        command: "oc sa get-token <serveraccount-name> -n <project-name> --config=/etc/origin/master/admin.kubeconfig"
        register: token

      - name: Set serviceaccount token variable and delegate
        set_fact:
          serviceaccount_token: "{{ token.stdout }}"
        delegate_to: "{{ item }}"
        delegate_facts: true
        with_items: "{{ groups['avi-controller'] }}"
      when: ( inventory_hostname == groups["masters"][0] )

  roles:
    - { role: "config", when: "'avi-controller' in group_names" }

In the following Ansible role, which runs after the pre tasks, you are able to access the variable serviceaccount_token on any member of the group "avi-controller" and use it with the rest of your automation code.
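
For example, a task inside the "config" role could consume the delegated fact like this (a minimal sketch; the task itself is only illustrative):

- name: Use the delegated serviceaccount token
  debug:
    msg: "OpenShift serviceaccount token: {{ serviceaccount_token }}"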

If you like this article, please share your feedback and leave a comment.

Deploy OpenShift using Jenkins Pipeline and Terraform

I wanted to make my life a bit easier and created a simple Jenkins pipeline to spin up the AWS instances and deploy OpenShift. Read my previous article: Deploying OpenShift 3.11 Container Platform on AWS using Terraform. You will see in-between steps which require input; they pause the pipeline and keep the OpenShift cluster running instead of destroying it directly after installing OpenShift. Also check out the blog post I wrote about running Jenkins in a container with Ansible and Terraform.

The Jenkins pipeline requires a few environment variables for the credentials to access AWS and CloudFlare. You need to create the necessary credentials beforehand, and they are loaded when the pipeline starts.

Here are the pipeline steps, which are self-explanatory:

pipeline {
    agent any
    environment {
        AWS_ACCESS_KEY_ID = credentials('AWS_ACCESS_KEY_ID')
        AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
        TF_VAR_email = credentials('TF_VAR_email')
        TF_VAR_token = credentials('TF_VAR_token')
        TF_VAR_domain = credentials('TF_VAR_domain')
        TF_VAR_htpasswd = credentials('TF_VAR_htpasswd')
    }
    stages {
        stage('Prepare workspace') {
            steps {
                sh 'rm -rf *'
                git branch: 'aws-dev', url: 'https://github.com/berndonline/openshift-terraform.git'
                sh 'ssh-keygen -b 2048 -t rsa -f ./helper_scripts/id_rsa -q -N ""'
                sh 'chmod 600 ./helper_scripts/id_rsa'
                sh 'terraform init'
            }
        }
        stage('Run terraform apply') {
            steps {
                input 'Run terraform apply?'
            }
        }
        stage('terraform apply') {
            steps {
                sh 'terraform apply -auto-approve'
            }
        }
        stage('OpenShift Installation') {
            steps {
                sh 'sleep 600'
                sh 'scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -r ./helper_scripts/id_rsa centos@$(terraform output bastion):/home/centos/.ssh/'
                sh 'scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -r ./inventory/ansible-hosts  centos@$(terraform output bastion):/home/centos/ansible-hosts'
                sh 'ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -l centos $(terraform output bastion) -A "cd /openshift-ansible/ && ansible-playbook ./playbooks/openshift-pre.yml -i ~/ansible-hosts"'
                sh 'ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -l centos $(terraform output bastion) -A "cd /openshift-ansible/ && ansible-playbook ./playbooks/openshift-install.yml -i ~/ansible-hosts"'
            }
        }        
        stage('Run terraform destroy') {
            steps {
                input 'Run terraform destroy?'
            }
        }
        stage('terraform destroy') {
            steps {
                sh 'terraform destroy -force '
            }
        }
    }
}

Let’s trigger the pipeline and look at the progress of the different steps.

The first step, preparing the workspace, is very quick, and afterwards the pipeline waits for input to run terraform apply:

Just click on proceed to continue:

After the AWS and CloudFlare resources are created with Terraform, the pipeline continues with the next step and installs OpenShift 3.11 on the AWS instances:

At this point the OpenShift installation is complete.

You can now log in to the console-paas.. address and continue your testing on OpenShift.

Terraform not only created all the AWS resources, it also configured the necessary CNAMEs on CloudFlare DNS to point to the AWS load balancers.

Once you are finished with your OpenShift testing, you can go back into the Jenkins pipeline and confirm the input to destroy the environment again:

Running terraform destroy:

The pipeline completed successfully:

I hope this was an interesting post; let me know if you like it and want to see more of these. I am planning some improvements to integrate a validation step into the pipeline, to create a project and build, and to deploy a container on OpenShift automatically.

Please share your feedback and leave a comment.

Build Jenkins Container with Terraform and Ansible

I thought it might be interesting to show how to build a Docker container running Jenkins together with tools like Terraform and Ansible. I am planning to use a Jenkins pipeline to deploy my OpenShift 3.11 example on AWS using Terraform and Ansible, but more about this in the next post.

I took the source Dockerfile from the Jenkins project (https://github.com/jenkinsci/docker), modified it and added Ansible and Terraform. Below you see a few variables you might need to change, depending on the version you want to use or where to place the volume mount. Have a look here for the latest Jenkins version: https://updates.jenkins-ci.org/download/war/.

Here is my Dockerfile:

...
ARG JENKINS_HOME=/var/jenkins_home
...
ENV TERRAFORM_VERSION=0.11.10
... 
ARG JENKINS_VERSION=2.151
ENV JENKINS_VERSION $JENKINS_VERSION
...
ARG JENKINS_SHA=a4335cc626c1f64da61a20174af654283d171b255a928bbacb6402a315e213d7
...
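
Ansible and Terraform themselves are added with a couple of extra RUN instructions, roughly like the sketch below (the exact commands are in my repository and may differ):

# install Ansible via pip and Terraform from the HashiCorp release archive (illustrative sketch)
RUN apt-get update && apt-get install -y python-pip unzip \
    && pip install ansible \
    && rm -rf /var/lib/apt/lists/*
RUN curl -fsSL https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip -o /tmp/terraform.zip \
    && unzip /tmp/terraform.zip -d /usr/local/bin \
    && rm /tmp/terraform.zip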

Let's start by cloning my Jenkins Docker repository and running docker build:

git clone https://github.com/berndonline/jenkins-docker.git && cd ./jenkins-docker/
docker build -t berndonline/jenkins .

The docker build will take a few minutes; just wait and look out for errors you might get during the build:

$ docker build -t berndonline/jenkins .
Sending build context to Docker daemon  141.3kB
Step 1/51 : FROM openjdk:8-jdk
8-jdk: Pulling from library/openjdk
54f7e8ac135a: Pull complete
d6341e30912f: Pull complete
087a57faf949: Pull complete
5d71636fb824: Pull complete
9da6b28682cf: Pull complete
203f1094a1e2: Pull complete
ee38d9f85cf6: Pull complete
7f692fae02b6: Pull complete
eaa976dc543c: Pull complete
Digest: sha256:94bbc3357f995dd37986d8da0f079a9cd4b99969a3c729bad90f92782853dea7
Status: Downloaded newer image for openjdk:8-jdk
 ---> c14ba9d23b3a
Step 2/51 : USER root
 ---> Running in c78f75ca3d5a
Removing intermediate container c78f75ca3d5a
 ---> f2c6bb7538ea
Step 3/51 : RUN apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/*
 ---> Running in 4cc857e12f50
Ign:1 http://deb.debian.org/debian stretch InRelease
Get:2 http://security.debian.org/debian-security stretch/updates InRelease [94.3 kB]
Get:3 http://deb.debian.org/debian stretch-updates InRelease [91.0 kB]
Get:4 http://deb.debian.org/debian stretch Release [118 kB]
Get:5 http://security.debian.org/debian-security stretch/updates/main amd64 Packages [459 kB]
Get:6 http://deb.debian.org/debian stretch Release.gpg [2434 B]
Get:7 http://deb.debian.org/debian stretch-updates/main amd64 Packages [5152 B]
Get:8 http://deb.debian.org/debian stretch/main amd64 Packages [7089 kB]
Fetched 7859 kB in 1s (5540 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...

...

Step 49/51 : ENTRYPOINT ["/sbin/tini", "--", "/usr/local/bin/jenkins.sh"]
 ---> Running in 28da7c4bf90a
Removing intermediate container 28da7c4bf90a
 ---> f380f1a6f06f
Step 50/51 : COPY plugins.sh /usr/local/bin/plugins.sh
 ---> 82871f0df0dc
Step 51/51 : COPY install-plugins.sh /usr/local/bin/install-plugins.sh
 ---> feea9853af70
Successfully built feea9853af70
Successfully tagged berndonline/jenkins:latest
$

The Docker image is successfully built:

$ docker images
REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
berndonline/jenkins         latest              cd1742c317fa        6 days ago          1.28GB

Let’s start the Docker container:

docker run -d -v /var/jenkins_home:/var/jenkins_home -p 32771:8080 -p 32770:50000 berndonline/jenkins

A quick check that the container is up and running:

$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                                               NAMES
7073fa9c0cd4        berndonline/jenkins   "/sbin/tini -- /usr/…"   5 days ago          Up 7 seconds        0.0.0.0:32771->8080/tcp, 0.0.0.0:32770->50000/tcp   jenkins

Afterwards you can connect to http://<your-ip-address>:32771/ and do the initial Jenkins configuration, like changing the admin password and installing the needed plugins. I recommend putting an Nginx reverse proxy with SSL in front to secure Jenkins properly.
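
A minimal reverse proxy server block could look like the following sketch (hostname, certificate paths and upstream port are placeholders you need to adjust):

server {
    listen 443 ssl;
    server_name jenkins.example.com;

    ssl_certificate     /etc/nginx/ssl/jenkins.crt;
    ssl_certificate_key /etc/nginx/ssl/jenkins.key;

    location / {
        # forward requests to the Jenkins container published on port 32771
        proxy_pass http://127.0.0.1:32771;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}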

So what about updates or configuration changes? Pretty easy: because we are using a Docker bind mount for /var/jenkins_home/, all the Jenkins-related data is stored on the local file system of your server, and you can re-create or re-build the container at any time.
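
For example, to pick up a new Jenkins version you could rebuild the image and recreate the container while the bind-mounted data survives. A rough sketch (this assumes the container was started with --name jenkins; otherwise look up the container ID with docker ps):

docker build -t berndonline/jenkins .
docker rm -f jenkins
docker run -d --name jenkins -v /var/jenkins_home:/var/jenkins_home -p 32771:8080 -p 32770:50000 berndonline/jenkins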

I hope you like this article about how to create your own Jenkins Docker container. In my next post I will create a very simple Jenkins pipeline to deploy OpenShift 3.11 on AWS using Terraform.

Please share your feedback and leave a comment.

Deploy OpenShift 3.11 Container Platform on AWS using Terraform

I have made a few changes to my Terraform configuration for OpenShift 3.11 on Amazon AWS. I downsized the environment because I didn't need that many nodes for a quick test setup. I added CloudFlare DNS to automatically create CNAMEs for the AWS load balancers in the DNS zone, and I also added an AWS S3 bucket for storing the backend state. You can find the new Terraform configuration in my GitHub repository: https://github.com/berndonline/openshift-terraform/tree/aws-dev

From OpenShift 3.10 onwards some of the inventory variables have changed, and I modified the ansible-hosts template for the new configuration. You can see the changes in the hosts template: https://github.com/berndonline/openshift-terraform/blob/aws-dev/helper_scripts/ansible-hosts.template.txt
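
One of the main changes is how node roles are assigned: since 3.10 each node references a predefined node group instead of setting node labels directly. A simplified excerpt of what this looks like in the inventory (hostnames are placeholders; the full template is in the repository):

[nodes]
master0.example.com openshift_node_group_name='node-config-master'
infra0.example.com  openshift_node_group_name='node-config-infra'
node0.example.com   openshift_node_group_name='node-config-compute'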

OpenShift 3.11 has changed a few things and puts a focus on a cluster operator console, which is pretty nice and runs on Kubernetes 1.11. I recommend reading the release notes for the 3.11 release for more details: https://docs.openshift.com/container-platform/3.11/release_notes/ocp_3_11_release_notes.html

I don't want to go into too much detail here; just follow the steps below and start by cloning my repository and checking out the aws-dev branch:

git clone -b aws-dev https://github.com/berndonline/openshift-terraform.git
cd ./openshift-terraform/
ssh-keygen -b 2048 -t rsa -f ./helper_scripts/id_rsa -q -N ""
chmod 600 ./helper_scripts/id_rsa

You need to modify cloudflare.tf and add your CloudFlare API credentials, otherwise just delete the file. The same goes for the S3 backend: you find the configuration in main.tf, and it can be removed if not needed.
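
For orientation, the two blocks in question look roughly like this (the values are only illustrative; the actual definitions are in cloudflare.tf and main.tf in the repository):

provider "cloudflare" {
  email = "${var.email}"
  token = "${var.token}"
}

terraform {
  backend "s3" {
    bucket = "<your-s3-bucket>"
    key    = "openshift/terraform.tfstate"
    region = "eu-west-1"
  }
}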

CloudFlare and Amazon AWS credentials can be added through environment variables:

export AWS_ACCESS_KEY_ID='<-YOUR-AWS-ACCESS-KEY->'
export AWS_SECRET_ACCESS_KEY='<-YOUR-AWS-SECRET-KEY->'
export TF_VAR_email='<-YOUR-CLOUDFLARE-EMAIL-ADDRESS->'
export TF_VAR_token='<-YOUR-CLOUDFLARE-TOKEN->'
export TF_VAR_domain='<-YOUR-CLOUDFLARE-DOMAIN->'
export TF_VAR_htpasswd='<-YOUR-OPENSHIFT-DEMO-USER-HTPASSWD->'

Run terraform init and apply to create the environment.

terraform init && terraform apply -auto-approve

Copy the SSH key and the ansible-hosts file to the bastion host, from where you need to run the Ansible OpenShift playbooks.

scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -r ./helper_scripts/id_rsa centos@$(terraform output bastion):/home/centos/.ssh/
scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -r ./inventory/ansible-hosts  centos@$(terraform output bastion):/home/centos/ansible-hosts

I recommend waiting a few minutes as the AWS cloud-init script prepares the bastion host. Afterwards continue with the pre and install playbooks. You can connect to the bastion host and run the playbooks directly.

ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -l centos $(terraform output bastion) -A "cd /openshift-ansible/ && ansible-playbook ./playbooks/openshift-pre.yml -i ~/ansible-hosts"
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -l centos $(terraform output bastion) -A "cd /openshift-ansible/ && ansible-playbook ./playbooks/openshift-install.yml -i ~/ansible-hosts"

If for whatever reason the cluster deployment fails, you can run the uninstall playbook to bring the nodes back into a clean state, then start from the beginning and run deploy_cluster again.

ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -l centos $(terraform output bastion) -A "cd /openshift-ansible/ && ansible-playbook ./openshift-ansible/playbooks/adhoc/uninstall.yml -i ~/ansible-hosts"

Here are some screenshots of the new cluster console:

Let’s create a project and import my hello-openshift.yml build configuration:

The build completed successfully and deployed the hello-openshift container:

My example hello openshift application:

When you are finished with the testing, run terraform destroy.

terraform destroy -force 

Please share your feedback and leave a comment.

Automate Ansible AWX configuration using Tower-CLI

Some time has gone by since my article about Getting started with Ansible AWX (Open Source Tower version), and I wanted to continue focusing on AWX and show how to automate the configuration of an AWX Tower server.

Before we configure AWX, we should install tower-cli. You can find more information about the Tower CLI here: https://github.com/ansible/tower-cli. I also recommend having a look at the tower-cli documentation: https://tower-cli.readthedocs.io/en/latest/

sudo pip install ansible-tower-cli

The tower-cli is very useful when you want to monitor running jobs. The web console is not that great when it comes to large playbooks and is pretty slow at showing the running job state. Below is the basic configuration you need before you start using tower-cli:

$ tower-cli config host 94.130.51.22
Configuration updated successfully.
$ tower-cli login admin
Password:
{
 "id": 1,
 "type": "o_auth2_access_token",
 "url": "/api/v2/tokens/1/",
 "created": "2018-09-15T17:41:23.942572Z",
 "modified": "2018-09-15T17:41:23.955795Z",
 "description": "Tower CLI",
 "user": 1,
 "refresh_token": null,
 "application": null,
 "expires": "3018-01-16T17:41:23.937872Z",
 "scope": "write"
}
Configuration updated successfully.
$

But now let’s continue and show how we can use the tower-cli to configure and monitor Ansible AWX Tower.

Create a project:

tower-cli project create --name "My Project" --description "My project description" --organization "Default" --scm-type "git" --scm-url "https://github.com/ansible/ansible-tower-samples"

Create an inventory:

tower-cli inventory create --name "My Inventory" --organization "Default"

Add hosts to an inventory:

tower-cli host create --name "localhost" --inventory "My Inventory" --variables "ansible_connection: local"

Create credentials:

tower-cli credential create --name "My Credential" --credential-type "Machine" --user "admin"

Create a Project Job Template:

tower-cli job_template create --name "My Job Template" --project "My Project" --inventory "My Inventory" --job-type "run" --credential "My Credential" --playbook "hello_world.yml" --verbosity "default"

After we have successfully created everything, let's run the job template and monitor the output via tower-cli:

tower-cli job launch --job-template "My Job Template"
tower-cli job monitor <ID>

Command line output:

$ tower-cli job launch --job-template "My Job Template"
Resource changed.
== ============ =========================== ======= =======
id job_template           created           status  elapsed
== ============ =========================== ======= =======
26           15 2018-10-12T12:22:48.599748Z pending 0.0
== ============ =========================== ======= =======
$ tower-cli job monitor 26
------Starting Standard Out Stream------


PLAY [Hello World Sample] ******************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [Hello Message] ***********************************************************
ok: [localhost] => {
    "msg": "Hello World!"
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0

------End of Standard Out Stream--------
Resource changed.
== ============ =========================== ========== =======
id job_template           created             status   elapsed
== ============ =========================== ========== =======
26           15 2018-10-12T12:22:48.599748Z successful 8.861
== ============ =========================== ========== =======
$
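
Depending on your tower-cli version, you can also launch a job and stream its output in a single command instead of the two separate steps above (check tower-cli job launch --help to confirm the option is available):

tower-cli job launch --job-template "My Job Template" --monitor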

With these tower-cli commands we can write a simple playbook using the Ansible shell module.

Playbook site.yml:

---
- hosts: localhost
  gather_facts: 'no'

  tasks:
    - name: Add tower project
      shell: |
        tower-cli project create \
        --name "My Project" \
        --description "My project description" \
        --organization "Default" \
        --scm-type "git" \
        --scm-url "https://github.com/ansible/ansible-tower-samples"

    - name: Add tower inventory
      shell: |
        tower-cli inventory create \
        --name "My Inventory" \
        --organization "Default"

    - name: Add host to inventory
      shell: |
        tower-cli host create \
        --name "localhost" \
        --inventory "My Inventory" \
        --variables "ansible_connection: local"
    
    - name: Add credential
      shell: |
        tower-cli credential create \
        --name "My Credential" \
        --credential-type "Machine" \
        --user "admin"
        
    - name: wait 15 seconds to pull project SCM content
      wait_for: timeout=15
      delegate_to: localhost
 
    - name: Add job template
      shell: |
        tower-cli job_template create \
        --name "My Job Template" \
        --project "My Project" \
        --inventory "My Inventory" \
        --job-type "run" \
        --credential "My Credential" \
        --playbook "hello_world.yml" \
        --verbosity "default"

Let’s run the playbook:

$ ansible-playbook site.yml

PLAY [localhost] **************************************************************************************************************************************************

TASK [Add tower project] ******************************************************************************************************************************************
changed: [localhost]

TASK [Add tower inventory] ****************************************************************************************************************************************
changed: [localhost]

TASK [Add host to inventory] **************************************************************************************************************************************
changed: [localhost]

TASK [Add credential] *********************************************************************************************************************************************
changed: [localhost]

TASK [wait 15 seconds to pull project SCM content] ****************************************************************************************************************
ok: [localhost -> localhost]

TASK [Add job template] *******************************************************************************************************************************************
changed: [localhost]

PLAY RECAP ********************************************************************************************************************************************************
localhost : ok=6 changed=5 unreachable=0 failed=0

$

If you like this article, please share your feedback and leave a comment.