Typhoon Kubernetes Distribution

I stumbled across a very interesting Kubernetes distribution called Typhoon, which runs a self-hosted control plane bootstrapped with Bootkube on CoreOS Container Linux. Typhoon uses Terraform to deploy the required instances on various cloud providers or on bare-metal servers. I really like the concept of a minimal Kubernetes distribution with a simple bootstrap that deploys a full-featured cluster in a few minutes. Check out the official Typhoon website or their GitHub repository for more information.

To install Typhoon I followed the documentation; everything is pretty simple with a bit of Terraform knowledge. Here’s my GitHub repository with my cluster configuration: https://github.com/berndonline/typhoon-kubernetes/tree/aws

Before you start you need to install Terraform v0.11.x and terraform-provider-ct, and set up an AWS Route53 domain for the Kubernetes cluster.
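
Terraform 0.11 loads third-party providers from ~/.terraform.d/plugins, so installing terraform-provider-ct looks roughly like the sketch below; check the release page for the current version, as the v0.3.0 used here may not be what your Typhoon version expects:

# Download the terraform-provider-ct release and place it in the Terraform plugin directory
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.3.0/terraform-provider-ct-v0.3.0-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.3.0-linux-amd64.tar.gz
mkdir -p ~/.terraform.d/plugins
mv terraform-provider-ct-v0.3.0-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.3.0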

I created a new subdomain on Route53 and configured delegation on Cloudflare for the domain.
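
For reference, such a zone can also be created with the AWS CLI; the domain name below is a placeholder and the zone ID needs to be taken from the create output:

# Create the hosted zone for the cluster subdomain
aws route53 create-hosted-zone --name k8s.example.com --caller-reference "typhoon-$(date +%s)"

# Print the NS records to configure the delegation at your DNS provider (Cloudflare in my case)
aws route53 list-resource-record-sets --hosted-zone-id <-ZONE-ID-> --query "ResourceRecordSets[?Type=='NS']"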

Let’s check out the configuration, first the cluster.tf, which I have modified slightly because I use Jenkins to deploy the Kubernetes cluster.

module "aws-cluster" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.13.3"

  providers = {
    aws = "aws.default"
    local = "local.default"
    null = "null.default"
    template = "template.default"
    tls = "tls.default"
  }

  # AWS
  cluster_name = "typhoon"
  dns_zone     = "${var.dns}"
  dns_zone_id  = "${var.dns_id}"

  # configuration
  ssh_authorized_key = "${var.ssh_key}"
  asset_dir          = "./.secrets/clusters/typhoon"

  # optional
  worker_count = 2
  worker_type  = "t3.small"
}

In the provider.tf I have only added an S3 backend for the Terraform state; otherwise I’ve left the defaults.

provider "aws" {
  version = "~> 1.13.0"
  alias   = "default"

  region  = "eu-west-1"
}

terraform {
  backend "s3" {
    bucket = "techbloc-terraform-data"
    key    = "openshift-311"
    region = "eu-west-1"
  }
}

...
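
One thing to note: the S3 bucket for the backend state must exist before you run terraform init. If needed, it can be created with the AWS CLI using the bucket name configured above:

aws s3 mb s3://techbloc-terraform-data --region eu-west-1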

I added a variables.tf file for the DNS and SSH variables.

variable "dns" {
}
variable "dns_id" {
}
variable "ssh_key" {
}
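
Terraform populates these variables automatically from environment variables with the TF_VAR_ prefix, which is how the Jenkins pipeline below injects them. Outside of Jenkins you could export them manually; the values here are placeholders:

export TF_VAR_dns="k8s.example.com"
export TF_VAR_dns_id="Z2ABCDEFGHIJKL"
export TF_VAR_ssh_key="ssh-rsa AAAA..."
terraform init
terraform plan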

Let’s have a quick look at my simple Jenkins pipeline to deploy Typhoon Kubernetes. Apart from installing Kubernetes, I am deploying the Nginx Ingress controller and Heapster addons for the cluster. After the cluster is deployed, I also apply an example application I have used previously.

pipeline {
    agent any
    environment {
        AWS_ACCESS_KEY_ID = credentials('AWS_ACCESS_KEY_ID')
        AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
        TF_VAR_dns = credentials('TF_VAR_dns')
        TF_VAR_dns_id = credentials('TF_VAR_dns_id')
        TF_VAR_ssh_key = credentials('TF_VAR_ssh_key')
    }
    stages {
        stage('Prepare workspace') {
            steps {
                sh 'rm -rf *'
                git branch: 'aws', url: 'https://github.com/berndonline/typhoon-kubernetes.git'
                sh 'terraform init'
            }
        }
        stage('terraform apply') {
            steps {
                sshagent (credentials: ['fcdca8fa-aab9-3846-832f-4756392b7e2c']) {
                    sh 'terraform apply -auto-approve'
                    sh 'sleep 30'
                }
            }
        }
        stage('deploy nginx-ingress and heapster') {
            steps {    
                sh 'kubectl apply -R -f ./nginx-ingress/ --kubeconfig=./.secrets/clusters/typhoon/auth/kubeconfig'
                sh 'kubectl apply -R -f ./heapster/ --kubeconfig=./.secrets/clusters/typhoon/auth/kubeconfig'
                sh 'sleep 30'
            }
        }
        stage('deploy example application') {
            steps {    
                sh 'kubectl apply -f ./example/hello-kubernetes.yml --kubeconfig=./.secrets/clusters/typhoon/auth/kubeconfig'
            }
        }
        stage('Run terraform destroy') {
            steps {
                input 'Run terraform destroy?'
            }
        }
        stage('terraform destroy') {
            steps {
                sshagent (credentials: ['fcdca8fa-aab9-3846-832f-4756392b7e2c']) {
                    sh 'terraform destroy -force'
                }
            }
        }
    }
}

Let’s start the Jenkins pipeline:

Let’s check if I can access the hello-kubernetes application. For everyone who is interested, this is the link to the GitHub repository for the hello-kubernetes example application I have used.
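
A quick way to check this from the command line looks like the sketch below; the ingress hostname is a placeholder for whatever DNS record points at the workers:

export KUBECONFIG=./.secrets/clusters/typhoon/auth/kubeconfig
kubectl get pods,svc,ingress
curl -s http://hello-kubernetes.k8s.example.com/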

I really like the Typhoon Kubernetes distribution and the work that went into it to create an easy way for everyone to install a Kubernetes cluster and start using it in a few minutes. I also find the way they’ve used Terraform and Bootkube to deploy the platform on CoreOS very inspiring, and it gave me some ideas on how I can make use of it for production clusters.

I actually like CoreOS, and the easy bootstrapping with Terraform and Bootkube is something I had not used before; I’ve always deployed OpenShift/Kubernetes on either Red Hat or CentOS with Ansible. I find it a very interesting way to deploy a Kubernetes platform.

How to back up OpenShift with Heptio Velero (Ark)

I have found an interesting open source tool called Heptio Velero, previously known as Heptio Ark, which is able to back up Kubernetes and OpenShift container platforms. The tool mainly backs up namespace objects via the API, and it is additionally able to create snapshots for PVs on Azure, AWS and GCP.

You use the ark command line utility to create and restore backups.

The installation of Velero is super simple, just follow the steps below:

# Download and extract the latest Velero release from GitHub
wget https://github.com/heptio/velero/releases/download/v0.10.1/ark-v0.10.1-linux-amd64.tar.gz
mkdir -p ./velero/
tar -xzf ark-v0.10.1-linux-amd64.tar.gz -C ./velero/

# Move the ark binary to somewhere in your PATH
mv ./velero/ark /usr/sbin/

# The last two commands create the namespace and apply the configuration
oc create -f ./velero/config/common/00-prereqs.yaml
oc create -f ./velero/config/minio/

You can expose Minio to access the web console from the outside.

# Create route
oc expose service minio

# View the access and secret key to log in via the web console
oc describe deployment.apps/minio | grep -i Environment -A2
    Environment:
      MINIO_ACCESS_KEY:  minio
      MINIO_SECRET_KEY:  minio123

Here are a few command options for backing up objects:

# Create a backup for any object that matches a given label selector:
ark backup create <backup-name> --selector <key>=<value>

# Alternatively, if you want to back up all objects except those matching the label backup=ignore:
ark backup create <backup-name> --selector 'backup notin (ignore)'

# Create regularly scheduled backups based on a cron expression using a label selector:
ark schedule create <backup-name> --schedule="0 1 * * *" --selector <key>=<value>

# Create a backup for a namespace:
ark backup create <backup-name> --include-namespaces <namespace-name>

Let’s do a backup and restore test; I have created a new OpenShift project with a simple hello-openshift build and deployment config:

[root@master ~]# ark backup create mybackup --include-namespaces myapplication
Backup request "mybackup" submitted successfully.
Run `ark backup describe mybackup` or `ark backup logs mybackup` for more details.
[root@master ~]# ark backup get
NAME          STATUS      CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
mybackup      Completed   2019-02-08 17:14:09 +0000 UTC   29d       default            

Once the backup has completed we can delete the project.

[root@master ~]# oc delete project myapplication
project.project.openshift.io "myapplication" deleted

Now let’s restore the project namespace from the previously created backup:

[root@master ~]# ark restore create --from-backup mybackup
Restore request "mybackup-20190208171745" submitted successfully.
Run `ark restore describe mybackup-20190208171745` or `ark restore logs mybackup-20190208171745` for more details.
[root@master ~]# ark restore get
NAME                         BACKUP        STATUS       WARNINGS   ERRORS    CREATED                         SELECTOR
mybackup-20190208171745      mybackup      InProgress   0          0         2019-02-08 17:17:45 +0000 UTC   
[root@master ~]# ark restore get
NAME                         BACKUP        STATUS      WARNINGS   ERRORS    CREATED                         SELECTOR
mybackup-20190208171745      mybackup      Completed   1          0         2019-02-08 17:17:45 +0000 UTC   

The project is back in the state it was when we created the backup.

[root@master ~]# oc get pods
NAME                     READY     STATUS    RESTARTS   AGE
hello-app-http-1-qn8jj   1/1       Running   0          2m
[root@master ~]# curl -k --insecure https://hello-app-http-myapplication.aio.hostgate.net/
Hello OpenShift!

There are a few issues around the restore which I have seen and want to explain; I’m not sure if these are related to OpenShift in general or just to the latest 3.11 version. The secrets for the builder service account are missing or didn’t restore correctly and cannot be used.

[root@master ~]# oc get build
NAME                 TYPE      FROM         STATUS                               STARTED   DURATION
hello-build-http-1   Docker    Dockerfile   New (CannotRetrieveServiceAccount)
hello-build-http-2   Docker    Dockerfile   New
[root@master ~]# oc get events | grep Failed
1m          1m           2         hello-build-http.15816e39eefb637d         BuildConfig                                     Warning   BuildConfigInstantiateFailed   buildconfig-controller                                error instantiating Build from BuildConfig myapplication/hello-build-http (0): Error resolving ImageStreamTag hello-openshift-source:latest in namespace myapplication: imagestreams.image.openshift.io "hello-openshift-source" not found
1m          1m           6         hello-build-http.15816e39f446207f         BuildConfig                                     Warning   BuildConfigInstantiateFailed   buildconfig-controller                                error instantiating Build from BuildConfig myapplication/hello-build-http (0): Error resolving ImageStreamTag hello-openshift-source:latest in namespace myapplication: unable to find latest tagged image
1m          1m           1         hello-build-http.15816e3a49f21411         BuildConfig                                     Warning   BuildConfigInstantiateFailed   buildconfig-controller                                error instantiating Build from BuildConfig myapplication/hello-build-http (0): builds.build.openshift.io "hello-build-http-1" already exists
[root@master ~]# oc get secrets | grep builder
builder-token-5q646        kubernetes.io/service-account-token   4         5m

# OR
[root@master ~]# oc get build
NAME                 TYPE      FROM         STATUS                        STARTED   DURATION
hello-build-http-1   Docker    Dockerfile   Pending (MissingPushSecret)
hello-build-http-2   Docker    Dockerfile   New
[root@master ~]# oc get events | grep FailedMount
15m         19m          10        hello-build-http-1-build.15816cc22f35795c   Pod                                             Warning   FailedMount                    kubelet, ip-172-26-12-32.eu-west-1.compute.internal   MountVolume.SetUp failed for volume "builder-dockercfg-k55f6-push" : secrets "builder-dockercfg-k55f6" not found
15m         17m          2         hello-build-http-1-build.15816cdec9dc561a   Pod                                             Warning   FailedMount                    kubelet, ip-172-26-12-32.eu-west-1.compute.internal   Unable to mount volumes for pod "hello-build-http-1-build_myapplication(4c2f1113-2bb5-11e9-8a6b-0a007934f01e)": timeout expired waiting for volumes to attach or mount for pod "myapplication"/"hello-build-http-1-build". list of unmounted volumes=[builder-dockercfg-k55f6-push]. list of unattached volumes=[buildworkdir docker-socket crio-socket builder-dockercfg-k55f6-push builder-dockercfg-m6d2v-pull builder-token-sjvw5]
13m         13m          1         hello-build-http-1-build.15816d1e3e65ad2a   Pod                                             Warning   FailedMount                    kubelet, ip-172-26-12-32.eu-west-1.compute.internal   Unable to mount volumes for pod "hello-build-http-1-build_myapplication(4c2f1113-2bb5-11e9-8a6b-0a007934f01e)": timeout expired waiting for volumes to attach or mount for pod "myapplication"/"hello-build-http-1-build". list of unmounted volumes=[buildworkdir docker-socket crio-socket builder-dockercfg-k55f6-push builder-dockercfg-m6d2v-pull builder-token-sjvw5]. list of unattached volumes=[buildworkdir docker-socket crio-socket builder-dockercfg-k55f6-push builder-dockercfg-m6d2v-pull builder-token-sjvw5]
[root@master ~]# oc get secrets | grep builder
NAME                       TYPE                                  DATA      AGE
builder-dockercfg-m6d2v    kubernetes.io/dockercfg               1         5m
builder-token-4chx4        kubernetes.io/service-account-token   4         5m
builder-token-sjvw5        kubernetes.io/service-account-token   4         5m

The deployment config seems to be disconnected and doesn’t know the state of the running pod:

[root@master ~]# oc get dc
NAME             REVISION   DESIRED   CURRENT   TRIGGERED BY
hello-app-http   0          1         0         config,image(hello-openshift:latest)
[root@master ~]#

Here are the steps to recover out of this situation:

# First cancel all builds - the restore seems to have triggered a new build:
[root@master ~]# oc cancel-build $(oc get build --no-headers | awk '{ print $1 }')
build.build.openshift.io/hello-build-http-1 marked for cancellation, waiting to be cancelled
build.build.openshift.io/hello-build-http-2 marked for cancellation, waiting to be cancelled
build.build.openshift.io/hello-build-http-1 cancelled
build.build.openshift.io/hello-build-http-2 cancelled

# Delete all builds, otherwise you will later run into problems because of duplicate names:
[root@master ~]# oc delete build $(oc get build --no-headers | awk '{ print $1 }')
build.build.openshift.io "hello-build-http-1" deleted
build.build.openshift.io "hello-build-http-2" deleted

# Delete the project builder account - this triggers OpenShift to re-create the builder service account
[root@master ~]# oc delete sa builder
serviceaccount "builder" deleted
[root@master ~]# oc get secrets | grep builder
builder-dockercfg-vwckw    kubernetes.io/dockercfg               1         24s
builder-token-dpgj9        kubernetes.io/service-account-token   4         24s
builder-token-lt7z2        kubernetes.io/service-account-token   4         24s

# Start the build and afterwards do a rollout for the deployment config:
[root@master ~]# oc start-build hello-build-http
build.build.openshift.io/hello-build-http-3 started
[root@master ~]# oc rollout latest dc/hello-app-http
deploymentconfig.apps.openshift.io/hello-app-http rolled out

After doing all this, your build and deployment configs are back in sync.

[root@master ~]# oc get dc
NAME             REVISION   DESIRED   CURRENT   TRIGGERED BY
hello-app-http   3          1         1         config,image(hello-openshift:latest)

My feedback about Heptio Velero (Ark): apart from the restore issues with the build and deployment configs, I find the tool great, especially in scenarios where I accidentally deleted a namespace, or for DR where I need to recover a whole cluster. What makes the tool really worth it is the ability to create snapshots of PV disks on your cloud provider.

Check out the official documentation from Heptio for more information and if you like this article please leave a comment.

OpenShift Networking and Network Policies

This article is about OpenShift networking in general, but I also want to look at the Kubernetes CNI feature NetworkPolicy in a bit more detail. The latest OpenShift version 3.11 comes with three SDN deployment models, plus a fourth in development:

  • ovs-subnet – creates a single large VXLAN across all namespaces; every pod is able to talk to every other pod.
  • ovs-multitenant – as the name already says, this separates the namespaces into their own VXLANs, and only resources within the same namespace are able to talk to each other. You have the possibility to join namespaces together or make them global.
  • ovs-networkpolicy – the newest SDN deployment method for OpenShift, enabling micro-segmentation to control the communication between pods and namespaces.
  • ovs-ovn – the next-generation SDN for OpenShift, but not yet officially released. For more information visit the Open vSwitch GitHub repository ovn-kubernetes.

Here is an overview of the common ovs-multitenant software defined network:

On an OpenShift node the tun0 interface owns the default gateway and forwards traffic to external endpoints outside the OpenShift platform or routes internal traffic into the Open vSwitch overlay. Both Open vSwitch and iptables are central components which are very important for the networking on the platform.

Read the official OpenShift documentation managing networking or configuring the SDN for more information.

NetworkPolicy in Action

Let me first explain the example I use to test NetworkPolicy. We will have one hello-openshift pod behind a service, and a busybox pod for testing the internal communication. I will create a default ingress deny policy and specifically allow TCP port 8080 to my hello-openshift pod. I am not planning to restrict the busybox pod with an egress policy, so all egress traffic is allowed.

Here are the example YAML files to replicate the layout: busybox.yml and hello-openshift.yml

A short recap about Kubernetes services: they are just simple iptables entries, and for this reason you cannot restrict them with NetworkPolicy; the policies apply to the pods behind the service.
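
To find the cluster IP to grep for in the iptables output:

oc get svc hello-app-http -n myproject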

[root@master ~]# iptables-save | grep 172.30.231.77
-A KUBE-SERVICES ! -s 10.128.0.0/14 -d 172.30.231.77/32 -p tcp -m comment --comment "myproject/hello-app-http:web cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 172.30.231.77/32 -p tcp -m comment --comment "myproject/hello-app-http:web cluster IP" -m tcp --dport 80 -j KUBE-SVC-LFWXBQW674LJXLPD
[root@master ~]#

When you install OpenShift with ovs-networkpolicy, the default policy allows all traffic within a namespace. Let’s do a first test without a custom NetworkPolicy rule to see if I am able to connect to my hello-app-http service.

[root@master ~]# oc exec busybox-1-wn592 -- wget -S --spider http://hello-app-http
Connecting to hello-app-http (172.30.231.77:80)
  HTTP/1.1 200 OK
  Date: Tue, 19 Feb 2019 13:59:04 GMT
  Content-Length: 17
  Content-Type: text/plain; charset=utf-8
  Connection: close

[root@master ~]#

Now we add a default ingress deny policy to the namespace:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  ingress: []
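
Applying the policy to the project is a single command; the filename is arbitrary:

oc apply -n myproject -f deny-all-ingress.yml
oc get networkpolicy -n myproject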

After applying the default deny policy you are no longer able to connect to the hello-app-http service. The connection times out because no flow entries are defined yet in the OpenFlow table:

[root@master ~]# oc exec busybox-1-wn592 -- wget -S --spider http://hello-app-http
Connecting to hello-app-http (172.30.231.77:80)
wget: can't connect to remote host (172.30.231.77): Connection timed out
command terminated with exit code 1
[root@master ~]#

Let’s add a new policy that allows TCP port 8080, specifying a podSelector to match all pods with the label “role: web”.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-tcp8080
spec:
  podSelector:
    matchLabels:
      role: web
  ingress:
  - ports:
    - protocol: TCP
      port: 8080

This alone doesn’t do anything yet; you still need to patch the deployment config and add the label “role: web” to the pod template metadata.

oc patch dc/hello-app-http --patch '{"spec":{"template":{"metadata":{"labels":{"role":"web"}}}}}'
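
The patch triggers a new rollout; to quickly verify that the label landed on the pods:

oc get pods -l role=web -o wide -n myproject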

To roll back the previous change, simply use the ‘oc rollback dc/hello-app-http’ command.

Now let’s check the Open vSwitch flow table, and you will see that a new flow got added with the destination of my hello-openshift pod 10.128.0.103 on port 8080.

Afterwards we try again to connect to my hello-app-http service, and you see that we get a successful connect:

[root@master ~]# oc exec ovs-q4p8m -n openshift-sdn -- ovs-ofctl -O OpenFlow13 dump-flows br0 | grep '10.128.0.103.*8080'
 cookie=0x0, duration=221.251s, table=80, n_packets=15, n_bytes=1245, priority=150,tcp,reg1=0x2dfc74,nw_dst=10.128.0.103,tp_dst=8080 actions=output:NXM_NX_REG2[]
[root@master ~]#
[root@master ~]# oc exec busybox-1-wn592 -- wget -S --spider http://hello-app-http
Connecting to hello-app-http (172.30.231.77:80)
  HTTP/1.1 200 OK
  Date: Tue, 19 Feb 2019 14:21:57 GMT
  Content-Length: 17
  Content-Type: text/plain; charset=utf-8
  Connection: close

[root@master ~]#

The hello-openshift container publishes two TCP ports, 8080 and 8888. So finally, let’s try to connect to the pod IP address on port 8888; we will find that I am not able to connect, because I only allowed port 8080 in the policy.

[root@master ~]# oc exec busybox-1-wn592 -- wget -S --spider http://10.128.0.103:8888
Connecting to 10.128.0.103:8888 (10.128.0.103:8888)
wget: can't connect to remote host (10.128.0.103): Connection timed out
command terminated with exit code 1
[root@master ~]#

There are great posts on the Red Hat OpenShift blog which you should check out: networkpolicies-and-microsegmentation and openshift-and-network-security-zones-coexistence-approaches. Otherwise I can recommend having a look at Ahmet Alp Balkan’s GitHub repository of Kubernetes network policy recipes, where you can find some good examples.

Host and Container Monitoring with SysDig

After my previous articles about troubleshooting and validating OpenShift using Ansible, I wanted to continue and show how Sysdig helps you to identify potential issues on your nodes or container platform before they occur.

The open source version is a simple but very powerful tool to inspect your Linux host via the command line, but it has no capabilities to centrally monitor or store capture information. The enterprise version provides these capabilities, like a web console, and centrally stores metrics; it is also able to trigger remote captures without the need to connect to the host.

Sysdig Open Source

Let’s install sysdig open source; here is the official Sysdig installation guide.

# Host install
curl -s https://s3.amazonaws.com/download.draios.com/stable/install-sysdig | sudo bash

# Alternatively, the container-based install
yum -y install kernel-devel-$(uname -r)
docker pull sysdig/sysdig
docker run -i -t --name sysdig --privileged -v /var/run/docker.sock:/host/var/run/docker.sock -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro sysdig/sysdig

The csysdig command provides a nice and user-friendly menu-driven interface to see real-time system call information on your host. To collect information from Kubernetes or OpenShift, use the options [-kK] as seen in the example below:

csysdig -k https://localhost:8443 -K /etc/origin/master/admin.crt:/etc/origin/master/admin.key

For more information about how to use csysdig please have a look at the manual or watch the short YouTube video.

The main sysdig command shows output directly in the terminal session, and you are able to apply filters and chisels to see the system calls at a more granular level. Like with csysdig, the options [-kK] enable the Kubernetes integration:

sysdig -k https://localhost:8443 -K /etc/origin/master/admin.crt:/etc/origin/master/admin.key

Here are some useful commands to inspect Kubernetes or OpenShift events:

# Monitor Kubernetes namespace ip communication:
sudo sysdig -A -s8192 "fd.type in (ipv4, ipv6) and (k8s.ns.name=<-NAMESPACE-NAME->)" -k https://localhost:8443 -K /etc/origin/master/admin.crt:/etc/origin/master/admin.key

# Monitor namespace and pod name, the 2nd command filters to only show GET requests:
sudo sysdig -A -s8192 "fd.type in (ipv4, ipv6) and (k8s.ns.name=<-NAMESPACE-NAME-> and k8s.pod.name=<-POD-NAME->)" -k https://localhost:8443 -K /etc/origin/master/admin.crt:/etc/origin/master/admin.key
sudo sysdig -A -s8192 "fd.type in (ipv4, ipv6) and (k8s.ns.name=<-NAMESPACE-NAME-> and k8s.pod.name=<-POD-NAME->) and evt.buffer contains GET" -k https://localhost:8443 -K /etc/origin/master/admin.crt:/etc/origin/master/admin.key

# Monitor ns and pod names and apply chisel echo_fds:
sudo sysdig -A -s8192 "fd.type in (ipv4, ipv6) and (k8s.ns.name=<-NAMESPACE-NAME-> and k8s.pod.name=<-POD-NAME->)" -c echo_fds -k https://localhost:8443 -K /etc/origin/master/admin.crt:/etc/origin/master/admin.key

SysDig example

This capture is an HTTP request from a busybox pod (name: busybox-2-hjhq8, ip: 10.128.0.81) via a service (name: hello-app-http, ip: 172.30.43.111) to the hello-openshift pod (name: hello-app-http-1-8v57x, ip: 10.128.0.77) in the namespace myproject. I use a simple “wget -S --spider http://hello-app-http/” to simulate the request:

# Command to capture ip communication in myproject namespace including dnsmasq and wget processes:
sudo sysdig -s2000 -A -pk "fd.type in (ipv4, ipv6) and (k8s.ns.name=myproject or proc.name=dnsmasq) or proc.name=wget" -k https://localhost:8443 -K /etc/origin/master/admin.crt:/etc/origin/master/admin.key

# Output:
70739 19:36:51.401062017 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < socket fd=3(<4>)
70741 19:36:51.401062878 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) > connect fd=3(<4>)
70748 19:36:51.401072194 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < connect res=0 tuple=10.128.0.81:44993->172.26.11.254:53
70749 19:36:51.401074599 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) > sendto fd=3(<4u>10.128.0.81:44993->172.26.11.254:53) size=60 tuple=NULL
71083 19:36:51.401575859 0  (host) dnsmasq (20933:20933) > recvmsg fd=6(<4u>172.26.11.254:53)
71087 19:36:51.401582008 0  (host) dnsmasq (20933:20933) < recvmsg res=60 size=60 data= hello-app-httpmyprojectsvcclusterlocal tuple=10.128.0.81:44993->172.26.11.254:53
71088 19:36:51.401584101 0  (host) dnsmasq (20933:20933) > ioctl fd=6(<4u>10.128.0.81:44993->172.26.11.254:53) request=8910 argument=7FFE208E30C0
71089 19:36:51.401586692 0  (host) dnsmasq (20933:20933) < ioctl res=0
71108 19:36:51.401623408 0  (host) dnsmasq (20933:20933) < socket fd=58(<4>)
71109 19:36:51.401624563 0  (host) dnsmasq (20933:20933) > fcntl fd=58(<4>) cmd=4(F_GETFL)
71110 19:36:51.401625584 0  (host) dnsmasq (20933:20933) < fcntl res=2(/dev/null)
71111 19:36:51.401626259 0  (host) dnsmasq (20933:20933) > fcntl fd=58(<4>) cmd=5(F_SETFL)
71112 19:36:51.401626825 0  (host) dnsmasq (20933:20933) < fcntl res=0(/dev/null)
71113 19:36:51.401627787 0  (host) dnsmasq (20933:20933) > bind fd=58(<4>)
71129 19:36:51.401680355 0  (host) dnsmasq (20933:20933) < bind res=0 addr=0.0.0.0:22969
71130 19:36:51.401681698 0  (host) dnsmasq (20933:20933) > sendto fd=58(<4u>0.0.0.0:22969) size=60 tuple=0.0.0.0:22969->127.0.0.1:53
71131 19:36:51.401715726 0  (host) dnsmasq (20933:20933) < sendto res=60 data=
hello-app-httpmyprojectsvcclusterlocal
71469 19:36:51.402632442 1  (host) dnsmasq (20933:20933) > recvfrom fd=58(<4u>127.0.0.1:53->127.0.0.1:22969) size=5131
71474 19:36:51.402636604 1  (host) dnsmasq (20933:20933) < recvfrom res=114 data=
hello-app-httpmyprojectsvcclusterlocal)<*nsdns)
hostmaster)\`tp :< tuple=127.0.0.1:53->0.0.0.0:22969
71479 19:36:51.402643363 1  (host) dnsmasq (20933:20933) > sendmsg fd=6(<4u>10.128.0.81:44993->172.26.11.254:53) size=114 tuple=172.26.11.254:53->10.128.0.81:44993
71492 19:36:51.402666311 1  (host) dnsmasq (20933:20933) < sendmsg res=114 data=
hello-app-httpmyprojectsvcclusterlocal)<*nsdns)
hostmaster)\`tp :<
71493 19:36:51.402668199 1  (host) dnsmasq (20933:20933) > close fd=58(<4u>127.0.0.1:53->127.0.0.1:22969)
71494 19:36:51.402669009 1  (host) dnsmasq (20933:20933) < close res=0
80786 19:36:51.430143868 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < sendto res=60 data= hello-app-httpmyprojectsvcclusterlocal
80793 19:36:51.430153453 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) > recvfrom fd=3(<4u>10.128.0.81:44993->172.26.11.254:53) size=512
80794 19:36:51.430158626 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < recvfrom res=114 data=
hello-app-httpmyprojectsvcclusterlocal)<*nsdns)
hostmaster)\`tp :< tuple=NULL
80795 19:36:51.430160257 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) > close fd=3(<4u>10.128.0.81:44993->172.26.11.254:53)
80796 19:36:51.430161712 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < close res=0
80835 19:36:51.430260103 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < socket fd=3(<4>)
80838 19:36:51.430261013 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) > connect fd=3(<4>)
80840 19:36:51.430269080 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < connect res=0 tuple=10.128.0.81:41405->172.26.11.254:53
80841 19:36:51.430271011 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) > sendto fd=3(<4u>10.128.0.81:41405->172.26.11.254:53) size=60 tuple=NULL
80874 19:36:51.430433333 1  (host) dnsmasq (20933:20933) > recvmsg fd=6(<4u>10.128.0.81:44993->172.26.11.254:53)
80879 19:36:51.430439631 1  (host) dnsmasq (20933:20933) < recvmsg res=60 size=60 data= hello-app-httpmyprojectsvcclusterlocal tuple=10.128.0.81:41405->172.26.11.254:53
80881 19:36:51.430454839 1  (host) dnsmasq (20933:20933) > ioctl fd=6(<4u>10.128.0.81:41405->172.26.11.254:53) request=8910 argument=7FFE208E30C0
80885 19:36:51.430457716 1  (host) dnsmasq (20933:20933) < ioctl res=0
80895 19:36:51.430493317 1  (host) dnsmasq (20933:20933) < socket fd=58(<4>)
80896 19:36:51.430494522 1  (host) dnsmasq (20933:20933) > fcntl fd=58(<4>) cmd=4(F_GETFL)
80897 19:36:51.430495527 1  (host) dnsmasq (20933:20933) < fcntl res=2(/dev/null)
80898 19:36:51.430496189 1  (host) dnsmasq (20933:20933) > fcntl fd=58(<4>) cmd=5(F_SETFL)
80899 19:36:51.430496769 1  (host) dnsmasq (20933:20933) < fcntl res=0(/dev/null)
80900 19:36:51.430497538 1  (host) dnsmasq (20933:20933) > bind fd=58(<4>)
80913 19:36:51.430551876 1  (host) dnsmasq (20933:20933) < bind res=0 addr=0.0.0.0:64640
80914 19:36:51.430553226 1  (host) dnsmasq (20933:20933) > sendto fd=58(<4u>0.0.0.0:64640) size=60 tuple=0.0.0.0:64640->127.0.0.1:53
80922 19:36:51.430581962 1  (host) dnsmasq (20933:20933) < sendto res=60 data=
:=hello-app-httpmyprojectsvcclusterlocal
81032 19:36:51.430806106 1  (host) dnsmasq (20933:20933) > recvfrom fd=58(<4u>127.0.0.1:53->127.0.0.1:64640) size=5131
81035 19:36:51.430809074 1  (host) dnsmasq (20933:20933) < recvfrom res=76 data= :=hello-app-httpmyprojectsvcclusterlocal+o tuple=127.0.0.1:53->0.0.0.0:64640
81040 19:36:51.430818116 1  (host) dnsmasq (20933:20933) > sendmsg fd=6(<4u>10.128.0.81:41405->172.26.11.254:53) size=76 tuple=172.26.11.254:53->10.128.0.81:41405
81051 19:36:51.430840305 1  (host) dnsmasq (20933:20933) < sendmsg res=76 data=
hello-app-httpmyprojectsvcclusterlocal+o
81052 19:36:51.430842129 1  (host) dnsmasq (20933:20933) > close fd=58(<4u>127.0.0.1:53->127.0.0.1:64640)
81053 19:36:51.430842956 1  (host) dnsmasq (20933:20933) < close res=0
84676 19:36:51.436248790 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < sendto res=60 data= hello-app-httpmyprojectsvcclusterlocal
84683 19:36:51.436254334 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) > recvfrom fd=3(<4u>10.128.0.81:41405->172.26.11.254:53) size=512
84684 19:36:51.436256892 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < recvfrom res=76 data= hello-app-httpmyprojectsvcclusterlocal+o tuple=NULL
84685 19:36:51.436264998 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) > close fd=3(<4u>10.128.0.81:41405->172.26.11.254:53)
84686 19:36:51.436265743 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < close res=0
85420 19:36:51.437492301 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < socket fd=3(<4>)
85421 19:36:51.437493337 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) > connect fd=3(<4>)
86222 19:36:51.438494771 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < connect res=0 tuple=10.128.0.81:39656->172.30.43.111:80
86226 19:36:51.438497506 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) > fcntl fd=3(<4t>10.128.0.81:39656->172.30.43.111:80) cmd=4(F_GETFL)
86228 19:36:51.438498484 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < fcntl res=2(/dev/pts/1)
86229 19:36:51.438499943 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) > ioctl fd=3(<4t>10.128.0.81:39656->172.30.43.111:80) request=5401 argument=7FFDBF5E434C
86233 19:36:51.438501658 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < ioctl res=-25(ENOTTY)
86242 19:36:51.438509833 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) > write fd=3(<4t>10.128.0.81:39656->172.30.43.111:80) size=105
86285 19:36:51.438557309 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < write res=105 data= GET / HTTP/1.1 Host: hello-app-http.myproject.svc.cluster.local User-Agent: Wget Connection: close
86291 19:36:51.438561615 1 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) > read fd=3(<4t>10.128.0.81:39656->172.30.43.111:80) size=4096
107714 19:36:51.478518400 0 hello-app-http-1-8v57x (5145dc0ea61e) hello-openshift (11185:7) < accept fd=6(<4t>10.128.0.81:39656->10.128.0.77:8080) tuple=10.128.0.81:39656->10.128.0.77:8080 queuepct=0 queuelen=0 queuemax=128
107772 19:36:51.478636516 0 hello-app-http-1-8v57x (5145dc0ea61e) hello-openshift (11185:7) > read fd=6(<4t>10.128.0.81:39656->10.128.0.77:8080) size=4096
107773 19:36:51.478640241 0 hello-app-http-1-8v57x (5145dc0ea61e) hello-openshift (11185:7) < read res=105 data= GET / HTTP/1.1 Host: hello-app-http.myproject.svc.cluster.local User-Agent: Wget Connection: close
107857 19:36:51.478817861 0 hello-app-http-1-8v57x (5145dc0ea61e) hello-openshift (11185:7) > write fd=6(<4t>10.128.0.81:39656->10.128.0.77:8080) size=153
107869 19:36:51.478870349 0 hello-app-http-1-8v57x (5145dc0ea61e) hello-openshift (11185:7) < write res=153 data= HTTP/1.1 200 OK Date: Sun, 10 Feb 2019 19:36:51 GMT Content-Length: 17 Content-Type: text/plain; charset=utf-8 Connection: close Hello OpenShift!
107886 19:36:51.478892928 0 hello-app-http-1-8v57x (5145dc0ea61e) hello-openshift (11185:7) > close fd=6(<4t>10.128.0.81:39656->10.128.0.77:8080)
107887 19:36:51.478893676 0 hello-app-http-1-8v57x (5145dc0ea61e) hello-openshift (11185:7) < close res=0
107899 19:36:51.478998208 0 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < read res=153 data= HTTP/1.1 200 OK Date: Sun, 10 Feb 2019 19:36:51 GMT Content-Length: 17 Content-Type: text/plain; charset=utf-8 Connection: close Hello OpenShift!
108908 19:36:51.480114626 0 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) > close fd=3(<4t>10.128.0.81:39656->172.30.43.111:80)
108910 19:36:51.480115482 0 busybox-2-hjhq8 (4d84d98d46f1) wget (84856:26) < close res=0
112966 19:36:51.488041049 0 hello-app-http-1-8v57x (5145dc0ea61e) hello-openshift (11183:6) < accept fd=6(<4t>10.128.0.1:55052->10.128.0.77:8080) tuple=10.128.0.1:55052->10.128.0.77:8080 queuepct=0 queuelen=0 queuemax=128
113001 19:36:51.488096304 0 hello-app-http-1-8v57x (5145dc0ea61e) hello-openshift (11183:6) > read fd=6(<4t>10.128.0.1:55052->10.128.0.77:8080) size=4096
113002 19:36:51.488098693 0 hello-app-http-1-8v57x (5145dc0ea61e) hello-openshift (11183:6) < read res=0 data=
113005 19:36:51.488105730 0 hello-app-http-1-8v57x (5145dc0ea61e) hello-openshift (11183:6) > close fd=6(<4t>10.128.0.1:55052->10.128.0.77:8080)
113006 19:36:51.488106302 0 hello-app-http-1-8v57x (5145dc0ea61e) hello-openshift (11183:6) < close res=0

Below is a list of some more useful sysdig CLI examples:

# Sysdig Chisels and Filters:
sudo sysdig -cl

# To find out more information about a particular chisel:
sudo sysdig -i lscontainers

# To view a list of available field classes, fields and their description:
sudo sysdig -l

# Create and write sysdig trace files, 2nd option sets byte limit for trace file:
sudo sysdig -w mytrace.scap
sudo sysdig -s 8192 -w trace.scap 

# Read sysdig trace files, 2nd option read and filter based on proc.name:
sudo sysdig -r trace.scap
sudo sysdig -r trace.scap proc.name=dnsmasq

# Monitor linux processes:
sudo sysdig -c ps

# Monitor linux processes by CPU utilisation:
sudo sysdig -c topprocs_cpu

# Monitor network connections:
sudo sysdig -c netstat
sudo sysdig -c topconns
sudo sysdig -c topprocs_net

# Monitor system file i/o:
sudo sysdig -c echo_fds
sudo sysdig -c topprocs_file

# Troubleshoot system performance:
sudo sysdig -c bottlenecks

# Monitor process execution time
sudo sysdig -c proc_exec_time 

# Monitor network i/o performance
sudo sysdig -c netlower 1

# Watch log entries
sudo sysdig -c spy_logs

# Monitor http requests:
sudo sysdig -c httplog    
sudo sysdig -c httptop [Print Top HTTP Requests] 

SysDig Monitor Enterprise

The paid enterprise version provides a web console to centrally access metrics and events from your fleet of monitored nodes.

You can run Sysdig enterprise directly on OpenShift as a DaemonSet and deploy the agent to all nodes in the cluster. For more detailed information about the Kubernetes or OpenShift installation, read the official documentation.

oc adm new-project sysdig-agent --node-selector='app=sysdig-agent'
oc project sysdig-agent
oc label node --all "app=sysdig-agent"
oc create serviceaccount sysdig-agent
oc adm policy add-scc-to-user privileged -n sysdig-agent -z sysdig-agent
oc adm policy add-cluster-role-to-user cluster-reader -n sysdig-agent -z sysdig-agent

wget https://raw.githubusercontent.com/draios/sysdig-cloud-scripts/master/agent_deploy/kubernetes/sysdig-agent-daemonset-v2.yaml
wget https://raw.githubusercontent.com/draios/sysdig-cloud-scripts/master/agent_deploy/kubernetes/sysdig-agent-configmap.yaml
oc create secret generic sysdig-agent --from-literal=access-key=<-YOUR-ACCESS-KEY->

# Edit sysdig-agent-daemonset-v2.yaml to uncomment the line: serviceAccount: sysdig-agent and edit sysdig-agent-configmap.yaml to uncomment the line: new_k8s: true
# This allows kube-state-metrics to be automatically detected, monitored, and displayed in Sysdig Monitor. 
# Edit sysdig-agent-configmap.yaml to uncomment the line: k8s_cluster_name: and add your cluster name.

oc create -f sysdig-agent-daemonset-v2.yaml
oc create -f sysdig-agent-configmap.yaml
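
Once everything is applied, a quick check that the agent pods are running on every node:

oc get pods -n sysdig-agent -o wide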

Sysdig is a great tool to monitor your Linux hosts and container platforms, and it furthermore gives you the ability to troubleshoot them in depth.

How to validate OpenShift using Ansible

You might have seen my previous article about the OpenShift troubleshooting guide. With this blog post I want to show how to validate that an OpenShift container platform is fully functional. This is less of an issue when you do a fresh install, but it becomes more important when you apply changes or do in-place upgrades of your cluster. And because we all like automation, I’ll show how to do this with Ansible in an automated way.

Let’s jump right into it and look at the different steps. Here are the links to the Ansible role and playbook for more details:

  • Prepare workspace – create temp directories and copy the admin.kubeconfig there.
  • Check node state – run command to check for Not Ready nodes:
  • oc get nodes --no-headers=true | grep -v ' Ready' || true
    
  • Check node scheduling – run command to check for nodes where scheduling is disabled:
  • oc get nodes --no-headers=true | grep 'SchedulingDisabled' | awk '{ print $1 }'
    
  • Check master certificates – check validity of master API, controller and etcd certificates:
  • cat /etc/origin/master/ca.crt | openssl x509 -text | grep -i Validity -A2
    cat /etc/origin/master/master.server.crt | openssl x509 -text | grep -i Validity -A2
    cat /etc/origin/master/admin.crt | openssl x509 -text | grep -i Validity -A2
    cat /etc/etcd/ca.crt | openssl x509 -text | grep -i Validity -A2
    
  • Check nodes certificates – check validity of worker node certificate. This needs to run on all compute and infra nodes, not masters:
  • cat /etc/origin/node/server.crt | openssl x509 -text | grep -i Validity -A2
    
  • Check etcd health – run command for cluster health check:
  • /usr/bin/etcdctl --cert-file /etc/etcd/peer.crt --key-file /etc/etcd/peer.key --ca-file /etc/etcd/ca.crt -C https://{{ hostname.stdout }}:2379 cluster-health
    
  • Check important default projects for failed pods – look for pods that are not Running or Completed:
  • oc get pods -o wide --no-headers=true -n {{ item }} | grep -v " Running\|Completed" || true
    
  • Check registry health – send a GET to the docker-registry healthz path and expect HTTP 200:
  • curl -kv https://{{ registry_ip.stdout }}/healthz
    
  • Check SkyDNS resolution – try to resolve internal hostnames:
  • nslookup docker-registry.default.svc.cluster.local
    nslookup docker-registry.default.svc
    
  • Check upstream DNS resolution – try to resolve cluster-external DNS names.
  • Create test project
  • Run persistent volume test – create busybox container and claim volume
    1. apply imagestream, deploymentconfig and persistent volume claim configuration
    2. synchronise testfile to container pv
    3. check content of testfile
    4. delete testfile
  • Run test build – run the following steps to create multiple application pods on the OpenShift cluster:
    1. apply buildconfig, imagestream, deploymentconfig, service and route configuration
    2. check pods are running
    3. get route hostnames
    4. connect to routes and show output
    5. trigger new-build and check new build is created
    6. check if pods are running
    7. connect to routes and show output
  • Delete test project
  • Delete workspace folder – at the end of the validation, the role deletes the temporary folder and all its contents.

Next we run the playbook; the output is below:
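
The invocation itself is a single command; the inventory and playbook names here are hypothetical and depend on how the repository is laid out:

ansible-playbook -i ./inventory ./check.yml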

PLAY [Check OpenShift cluster installation] ***********************************************************************************************************************

TASK [check : make temp directory] ********************************************************************************************************************************
changed: [master1]

TASK [check : create temp directory] ******************************************************************************************************************************
ok: [master1]

TASK [check : create template folder] *****************************************************************************************************************************
changed: [master1]

TASK [check : create test file folder] ****************************************************************************************************************************
changed: [master1]

TASK [check : get hostname] ***************************************************************************************************************************************
changed: [master1]

TASK [check : copy admin config] **********************************************************************************************************************************
ok: [master1]

TASK [check : check for not ready nodes] **************************************************************************************************************************
changed: [master1]

TASK [check : not ready nodes] ************************************************************************************************************************************
ok: [master1] => {
    "notready.stdout_lines": []
}

TASK [check : check for scheduling disabled nodes] ****************************************************************************************************************
changed: [master1]

TASK [check : scheduling disabled nodes] **************************************************************************************************************************
ok: [master1] => {
    "schedulingdisabled.stdout_lines": []
}

TASK [check : validate master certificates] ***********************************************************************************************************************
changed: [master1] => (item=cat /etc/origin/master/ca.crt | openssl x509 -text | grep -i Validity -A2)
changed: [master1] => (item=cat /etc/origin/master/master.server.crt | openssl x509 -text | grep -i Validity -A2)
changed: [master1] => (item=cat /etc/origin/master/admin.crt | openssl x509 -text | grep -i Validity -A2)
changed: [master1] => (item=cat /etc/etcd/ca.crt | openssl x509 -text | grep -i Validity -A2)

TASK [check : ca certificate] *************************************************************************************************************************************
ok: [master1] => {
    "msg": [
        [
            "        Validity",
            "            Not Before: Jan 31 17:13:52 2019 GMT",
            "            Not After : Jan 30 17:13:53 2024 GMT"
        ],
        [
            "        Validity",
            "            Not Before: Jan 31 17:13:53 2019 GMT",
            "            Not After : Jan 30 17:13:54 2021 GMT"
        ],
        [
            "        Validity",
            "            Not Before: Jan 31 17:13:53 2019 GMT",
            "            Not After : Jan 30 17:13:54 2021 GMT"
        ],
        [
            "        Validity",
            "            Not Before: Jan 31 17:11:09 2019 GMT",
            "            Not After : Jan 30 17:11:09 2024 GMT"
        ]
    ]
}

TASK [check : check etcd state] ***********************************************************************************************************************************
changed: [master1]

TASK [check : show etcd state] ************************************************************************************************************************************
ok: [master1] => {
    "etcdstate.stdout_lines": [
        "member 335450512aab5650 is healthy: got healthy result from https://172.26.7.132:2379",
        "cluster is healthy"
    ]
}

TASK [check : check default openshift-infra and logging projects for failed pods] *********************************************************************************
ok: [master1] => (item=default)
ok: [master1] => (item=kube-system)
ok: [master1] => (item=kube-service-catalog)
ok: [master1] => (item=openshift-logging)
ok: [master1] => (item=openshift-infra)
ok: [master1] => (item=openshift-console)
ok: [master1] => (item=openshift-web-console)
ok: [master1] => (item=openshift-monitoring)
ok: [master1] => (item=openshift-node)
ok: [master1] => (item=openshift-sdn)

TASK [check : failed failedpods] **********************************************************************************************************************************
ok: [master1] => {
    "msg": [
        [],
        [],
        [],
        [],
        [],
        [],
        [],
        [],
        [],
        []
    ]
}

TASK [check : get container registry ip] **************************************************************************************************************************
changed: [master1]

TASK [check : check container registry health] ********************************************************************************************************************
ok: [master1]

TASK [check : check internal SysDNS resolution for cluster.local] *************************************************************************************************
changed: [master1] => (item=docker-registry.default.svc.cluster.local)
changed: [master1] => (item=docker-registry.default.svc)

TASK [check : check external DNS upstream resolution] *************************************************************************************************************
changed: [master1] => (item=www.google.com)
changed: [master1] => (item=www.google.co.uk)
changed: [master1] => (item=www.google.de)

TASK [check : create test project] ********************************************************************************************************************************
changed: [master1]

TASK [check : run test persistent volume] *************************************************************************************************************************
included: /var/jenkins_home/workspace/openshift/ansible/roles/check/tasks/pv.yml for master1

TASK [check : create sequence number list for pv] *****************************************************************************************************************
ok: [master1] => (item=1)
ok: [master1] => (item=2)
ok: [master1] => (item=3)
ok: [master1] => (item=4)
ok: [master1] => (item=5)

TASK [check : copy build templates] *******************************************************************************************************************************
changed: [master1] => (item={u'dest': u'busybox.yml', u'src': u'busybox.j2'})
changed: [master1] => (item={u'dest': u'pv.yml', u'src': u'pv.j2'})

TASK [check : copy testfile] **************************************************************************************************************************************
changed: [master1]

TASK [check : create pvs] *****************************************************************************************************************************************
ok: [master1]

TASK [check : deploy busybox pod] *********************************************************************************************************************************
changed: [master1]

TASK [check : check if all pods are running] **********************************************************************************************************************
FAILED - RETRYING: check if all pods are running (10 retries left).
changed: [master1] => (item=busybox)

TASK [check : get busybox pod name] *******************************************************************************************************************************
changed: [master1]

TASK [check : sync testfile to pod] *******************************************************************************************************************************
changed: [master1]

TASK [check : check testfile in pv] *******************************************************************************************************************************
changed: [master1]

TASK [check : delete testfile in pv] ******************************************************************************************************************************
changed: [master1]

TASK [check : run test build] *************************************************************************************************************************************
included: /var/jenkins_home/workspace/openshift/ansible/roles/check/tasks/build.yml for master1

TASK [check : create sequence number list for hello openshift] ****************************************************************************************************
ok: [master1] => (item=0)
ok: [master1] => (item=1)
ok: [master1] => (item=2)
ok: [master1] => (item=3)
ok: [master1] => (item=4)
ok: [master1] => (item=5)
ok: [master1] => (item=6)
ok: [master1] => (item=7)
ok: [master1] => (item=8)
ok: [master1] => (item=9)

TASK [check : create pod list] ************************************************************************************************************************************
ok: [master1] => (item=[u'0', {u'svc': u'http'}])
ok: [master1] => (item=[u'1', {u'svc': u'http'}])
ok: [master1] => (item=[u'2', {u'svc': u'http'}])
ok: [master1] => (item=[u'3', {u'svc': u'http'}])
ok: [master1] => (item=[u'4', {u'svc': u'http'}])
ok: [master1] => (item=[u'5', {u'svc': u'http'}])
ok: [master1] => (item=[u'6', {u'svc': u'http'}])
ok: [master1] => (item=[u'7', {u'svc': u'http'}])
ok: [master1] => (item=[u'8', {u'svc': u'http'}])
ok: [master1] => (item=[u'9', {u'svc': u'http'}])

TASK [check : copy build templates] *******************************************************************************************************************************
changed: [master1] => (item={u'dest': u'hello-openshift.yml', u'src': u'hello-openshift.j2'})

TASK [check : deploy hello openshift pods] ************************************************************************************************************************
changed: [master1]

TASK [check : check if all pods are running] **********************************************************************************************************************
FAILED - RETRYING: check if all pods are running (10 retries left).
changed: [master1] => (item={u'name': u'hello-http-0'})
FAILED - RETRYING: check if all pods are running (10 retries left).
changed: [master1] => (item={u'name': u'hello-http-1'})
changed: [master1] => (item={u'name': u'hello-http-2'})
changed: [master1] => (item={u'name': u'hello-http-3'})
changed: [master1] => (item={u'name': u'hello-http-4'})
changed: [master1] => (item={u'name': u'hello-http-5'})
changed: [master1] => (item={u'name': u'hello-http-6'})
changed: [master1] => (item={u'name': u'hello-http-7'})
changed: [master1] => (item={u'name': u'hello-http-8'})
changed: [master1] => (item={u'name': u'hello-http-9'})

TASK [check : get hello openshift pod hostnames] ******************************************************************************************************************
changed: [master1]

TASK [check : convert check_route string to json] *****************************************************************************************************************
ok: [master1]

TASK [check : set query to get pod hostname] **********************************************************************************************************************
ok: [master1]

TASK [check : get hostname list] **********************************************************************************************************************************
ok: [master1]

TASK [check : connect to route via curl] **************************************************************************************************************************
changed: [master1] => (item=hello-http-0-test.paas.domain.com)
changed: [master1] => (item=hello-http-1-test.paas.domain.com)
changed: [master1] => (item=hello-http-2-test.paas.domain.com)
changed: [master1] => (item=hello-http-3-test.paas.domain.com)
changed: [master1] => (item=hello-http-4-test.paas.domain.com)
changed: [master1] => (item=hello-http-5-test.paas.domain.com)
changed: [master1] => (item=hello-http-6-test.paas.domain.com)
changed: [master1] => (item=hello-http-7-test.paas.domain.com)
changed: [master1] => (item=hello-http-8-test.paas.domain.com)
changed: [master1] => (item=hello-http-9-test.paas.domain.com)

TASK [check : set json query] *************************************************************************************************************************************
ok: [master1]

TASK [check : show route http response] ***************************************************************************************************************************
ok: [master1] => {
    "msg": [
        "hello-http-0",
        "hello-http-1",
        "hello-http-2",
        "hello-http-3",
        "hello-http-4",
        "hello-http-5",
        "hello-http-6",
        "hello-http-7",
        "hello-http-8",
        "hello-http-9"
    ]
}

TASK [check : trigger new build] **********************************************************************************************************************************
changed: [master1]

TASK [check : check new build is created] *************************************************************************************************************************
changed: [master1]

TASK [check : check if all pods with new build are running] *******************************************************************************************************
FAILED - RETRYING: check if all pods with new build are running (10 retries left).
FAILED - RETRYING: check if all pods with new build are running (9 retries left).
changed: [master1] => (item={u'name': u'hello-http-0'})
changed: [master1] => (item={u'name': u'hello-http-1'})
changed: [master1] => (item={u'name': u'hello-http-2'})
changed: [master1] => (item={u'name': u'hello-http-3'})
changed: [master1] => (item={u'name': u'hello-http-4'})
changed: [master1] => (item={u'name': u'hello-http-5'})
changed: [master1] => (item={u'name': u'hello-http-6'})
changed: [master1] => (item={u'name': u'hello-http-7'})
changed: [master1] => (item={u'name': u'hello-http-8'})
changed: [master1] => (item={u'name': u'hello-http-9'})

TASK [check : connect to route via curl] **************************************************************************************************************************
changed: [master1] => (item=hello-http-0-test.paas.domain.com)
changed: [master1] => (item=hello-http-1-test.paas.domain.com)
changed: [master1] => (item=hello-http-2-test.paas.domain.com)
changed: [master1] => (item=hello-http-3-test.paas.domain.com)
changed: [master1] => (item=hello-http-4-test.paas.domain.com)
changed: [master1] => (item=hello-http-5-test.paas.domain.com)
changed: [master1] => (item=hello-http-6-test.paas.domain.com)
changed: [master1] => (item=hello-http-7-test.paas.domain.com)
changed: [master1] => (item=hello-http-8-test.paas.domain.com)
changed: [master1] => (item=hello-http-9-test.paas.domain.com)

TASK [check : show route http response] ***************************************************************************************************************************
ok: [master1] => {
    "msg": [
        "hello-http-0",
        "hello-http-1",
        "hello-http-2",
        "hello-http-3",
        "hello-http-4",
        "hello-http-5",
        "hello-http-6",
        "hello-http-7",
        "hello-http-8",
        "hello-http-9"
    ]
}

TASK [check : delete test project] ********************************************************************************************************************************
changed: [master1]

TASK [check : delete temp directory] ******************************************************************************************************************************
ok: [master1]

PLAY RECAP ********************************************************************************************************************************************************
master1                : ok=52   changed=30   unreachable=0    failed=0

The cluster validation playbook finished successfully without errors, and this is just a simple way to do a basic check of your OpenShift platform. Check out the OpenShift documentation about environment_health_checks.