Getting started with OpenShift 4.0 Container Platform

I had a first look at OpenShift 4.0 and wanted to share some information about what I have seen so far. The installation of the cluster is super easy, and Red Hat did a lot to improve the overall installation experience compared to the previous OpenShift v3.x Ansible-based installation, moving towards ephemeral cluster deployments.

There are many changes under the hood which are not as obvious, such as Bootkube for the self-hosted/self-healing control-plane, MachineSets, and the many internal operators that install and manage the OpenShift components (api-server, scheduler, controller-manager, cluster-autoscaler, cluster-monitoring, web-console, dns, ingress, networking, node-tuning, and authentication).

For the OpenShift 4.0 developer preview you need a Red Hat account because you require a pull-secret for the cluster installation. For more information please visit: https://cloud.openshift.com/clusters/install

First we need to download the openshift-install binary:

wget https://github.com/openshift/installer/releases/download/v0.16.1/openshift-install-linux-amd64
mv openshift-install-linux-amd64 openshift-install
chmod +x openshift-install
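A quick check that the binary works before continuing (the exact version string depends on the release you downloaded):

$ ./openshift-install version
openshift-install v0.16.1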

Then we create the install configuration. It is required that you already have AWS account credentials and a Route53 DNS domain set up:

$ ./openshift-install create install-config
INFO Platform aws
INFO AWS Access Key ID *********
INFO AWS Secret Access Key [? for help] *********
INFO Writing AWS credentials to "/home/centos/.aws/credentials" (https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html)
INFO Region eu-west-1
INFO Base Domain paas.domain.com
INFO Cluster Name cluster1
INFO Pull Secret [? for help] *********

Let’s look at the generated install-config.yaml:

apiVersion: v1beta4
baseDomain: paas.domain.com
compute:
- name: worker
  platform: {}
  replicas: 3
controlPlane:
  name: master
  platform: {}
  replicas: 3
metadata:
  creationTimestamp: null
  name: ew1
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineCIDR: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: eu-west-1
pullSecret: '{"auths":{...}'

Now we can continue and create the OpenShift v4 cluster, which takes around 30 minutes to complete. At the end the openshift-installer shows the auto-generated credentials to connect to the cluster:

$ ./openshift-install create cluster
INFO Consuming "Install Config" from target directory
INFO Creating infrastructure resources...
INFO Waiting up to 30m0s for the Kubernetes API at https://api.cluster1.paas.domain.com:6443...
INFO API v1.12.4+0ba401e up
INFO Waiting up to 30m0s for the bootstrap-complete event...
INFO Destroying the bootstrap resources...
INFO Waiting up to 30m0s for the cluster at https://api.cluster1.paas.domain.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO Run 'export KUBECONFIG=/home/centos/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI.
INFO The cluster is ready when 'oc login -u kubeadmin -p jMTSJ-F6KYy-mVVZ4-QVNPP' succeeds (wait a few minutes).
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.cluster1.paas.domain.com
INFO Login to the console with user: kubeadmin, password: jMTSJ-F6KYy-mVVZ4-QVNPP

The web-console has a very clean new design which I really like, in addition to all the other great improvements.

Under Administration -> Cluster Settings you can explore the new auto-upgrade functionality of OpenShift 4.0:

You choose the new version to upgrade to and everything else happens in the background, which is a massive improvement over OpenShift v3.x where you had to run the Ansible installer for this.

In the background the cluster operator upgrades the different platform components one by one.

Slowly you will see that the components move to the new build version.

Finished cluster upgrade:

You can only upgrade from one version to the next, for example from 4.0.0-0.9 to 4.0.0-0.10; it is not possible to skip a version and go straight from x-0.9 to x-0.11.
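The same upgrade status is also visible from the CLI through the ClusterVersion and ClusterOperator resources; a quick sketch, assuming your KUBECONFIG points at the cluster (output formats may differ between preview builds):

# show the current version and update status
$ oc get clusterversion
# watch the operators roll to the new version one by one
$ oc get clusteroperators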

Next, let’s deploy the Google Hipster Shop example and expose the frontend-external service for some more testing:

oc login -u kubeadmin -p jMTSJ-F6KYy-mVVZ4-QVNPP https://api.cluster1.paas.domain.com:6443 --insecure-skip-tls-verify=true
oc new-project myproject
oc create -f https://raw.githubusercontent.com/berndonline/openshift-ansible/master/examples/hipster-shop.yml
oc expose svc frontend-external

Getting the hostname for the exposed service:

$ oc get route
NAME                HOST/PORT                                                   PATH      SERVICES            PORT      TERMINATION   WILDCARD
frontend-external   frontend-external-myproject.apps.cluster1.paas.domain.com             frontend-external   http                    None

Use the browser to connect to our Hipster Shop:
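Alternatively, a quick curl against the route hostname confirms the frontend responds (using the hostname from the route above; a 200 is what we expect):

$ curl -s -o /dev/null -w '%{http_code}\n' http://frontend-external-myproject.apps.cluster1.paas.domain.com
200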

It is as easy to destroy the cluster as it is to create it, as you have seen previously:

$ ./openshift-install destroy cluster
INFO Disassociated                                 arn="arn:aws:ec2:eu-west-1:552276840222:route-table/rtb-083e2da5d1183efa7" id=rtbassoc-01d27db162fa45402
INFO Disassociated                                 arn="arn:aws:ec2:eu-west-1:552276840222:route-table/rtb-083e2da5d1183efa7" id=rtbassoc-057f593640067efc0
INFO Disassociated                                 arn="arn:aws:ec2:eu-west-1:552276840222:route-table/rtb-083e2da5d1183efa7" id=rtbassoc-05e821b451bead18f
INFO Disassociated                                 IAM instance profile="arn:aws:iam::552276840222:instance-profile/ocp4-bgx4c-worker-profile" arn="arn:aws:ec2:eu-west-1:552276840222:instance/i-0f64a911b1ffa3eff" id=i-0f64a911b1ffa3eff name=ocp4-bgx4c-worker-profile role=ocp4-bgx4c-worker-role
INFO Deleted                                       IAM instance profile="arn:aws:iam::552276840222:instance-profile/ocp4-bgx4c-worker-profile" arn="arn:aws:ec2:eu-west-1:552276840222:instance/i-0f64a911b1ffa3eff" id=i-0f64a911b1ffa3eff name=0xc00090f9a8
INFO Deleted                                       arn="arn:aws:ec2:eu-west-1:552276840222:instance/i-0f64a911b1ffa3eff" id=i-0f64a911b1ffa3eff
INFO Deleted                                       arn="arn:aws:ec2:eu-west-1:552276840222:instance/i-00b5eedc186ba26a7" id=i-00b5eedc186ba26a7
...
INFO Deleted                                       arn="arn:aws:ec2:eu-west-1:552276840222:security-group/sg-016d4c7d435a1c97f" id=sg-016d4c7d435a1c97f
INFO Deleted                                       arn="arn:aws:ec2:eu-west-1:552276840222:subnet/subnet-076348368858e9a82" id=subnet-076348368858e9a82
INFO Deleted                                       arn="arn:aws:ec2:eu-west-1:552276840222:vpc/vpc-00c611ae1b9b8e10a" id=vpc-00c611ae1b9b8e10a
INFO Deleted                                       arn="arn:aws:ec2:eu-west-1:552276840222:dhcp-options/dopt-0ce8b6a1c31e0ceac" id=dopt-0ce8b6a1c31e0ceac

The install experience of OpenShift 4.0 is great and makes it very easy for everyone to create a cluster and get started quickly with an enterprise container platform. From the operational perspective I still need to see how to run the new platform: all the operators are great and make the cluster easy to use, but what happens when one of the operators goes rogue, and how to debug this, is what I am most interested in.
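For exactly that case, my first steps would be to find the degraded operator and look at its conditions, pods and logs; a rough sketch (the monitoring operator and its namespace are just examples):

# find operators that are not available or degraded
$ oc get clusteroperators
# inspect the conditions and events of a single operator
$ oc describe clusteroperator monitoring
# drill into the operator's namespace and pod logs
$ oc get pods -n openshift-monitoring
$ oc logs <pod-name> -n openshift-monitoring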

Over the coming weeks I will look in more detail into OpenShift 4.0 and the different new features; I am especially interested in Service Mesh.

Typhoon Kubernetes Distribution

I stumbled across a very interesting Kubernetes distribution called Typhoon, which runs a self-hosted control-plane using Bootkube on CoreOS. Typhoon uses Terraform to deploy the required instances on various cloud providers or on bare-metal servers. I really like the concept of a minimal Kubernetes distribution and a simple bootstrap to deploy a fully featured cluster in a few minutes. Check out the official Typhoon website or their GitHub repository for more information.

To install Typhoon I followed the documentation; everything is pretty simple with a bit of Terraform knowledge. Here’s my GitHub repository with my cluster configuration: https://github.com/berndonline/typhoon-kubernetes/tree/aws

Before you start you need to install Terraform v0.11.x and terraform-provider-ct, and set up an AWS Route53 domain for the Kubernetes cluster.
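With Terraform v0.11, third-party providers like terraform-provider-ct have to be placed into the plugins folder manually; a sketch of the steps (the version is only an example, check the project's releases page for the current one):

wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.3.0/terraform-provider-ct-v0.3.0-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.3.0-linux-amd64.tar.gz
mkdir -p ~/.terraform.d/plugins
mv terraform-provider-ct-v0.3.0-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.3.0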

I created a new subdomain on Route53 and configured delegation on CloudFlare for the domain.
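The hosted zone can also be created with the AWS CLI; the NS records it returns are what you delegate to from CloudFlare (the domain is a placeholder):

$ aws route53 create-hosted-zone --name k8s.domain.com --caller-reference $(date +%s)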

Let’s check out the configuration, starting with the cluster.tf, which I have modified slightly because I use Jenkins to deploy the Kubernetes cluster.

module "aws-cluster" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.13.3"

  providers = {
    aws = "aws.default"
    local = "local.default"
    null = "null.default"
    template = "template.default"
    tls = "tls.default"
  }

  # AWS
  cluster_name = "typhoon"
  dns_zone     = "${var.dns}"
  dns_zone_id  = "${var.dns_id}"

  # configuration
  ssh_authorized_key = "${var.ssh_key}"
  asset_dir          = "./.secrets/clusters/typhoon"

  # optional
  worker_count = 2
  worker_type  = "t3.small"
}

In the provider.tf I have only added an S3 backend for the Terraform state, but otherwise I’ve left the defaults.

provider "aws" {
  version = "~> 1.13.0"
  alias   = "default"

  region  = "eu-west-1"
}

terraform {
  backend "s3" {
    bucket = "techbloc-terraform-data"
    key    = "openshift-311"
    region = "eu-west-1"
  }
}

...

I added a variables.tf file for the DNS and SSH variables.

variable "dns" {
}
variable "dns_id" {
}
variable "ssh_key" {
}
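Terraform automatically maps environment variables with a TF_VAR_ prefix onto these input variables, which is exactly how the Jenkins pipeline below injects its credentials. For a local run you can do the same (values are placeholders):

export TF_VAR_dns="k8s.domain.com"
export TF_VAR_dns_id="Z1EXAMPLE12345"
export TF_VAR_ssh_key="ssh-rsa AAAA..."
terraform init && terraform plan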

Let’s have a quick look at my simple Jenkins pipeline to deploy Typhoon Kubernetes. Apart from installing Kubernetes, I am deploying the Nginx Ingress controller and Heapster addons to the cluster. I’ve also added an example application, which I have used previously, that is deployed after the cluster is up.

pipeline {
    agent any
    environment {
        AWS_ACCESS_KEY_ID = credentials('AWS_ACCESS_KEY_ID')
        AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
        TF_VAR_dns = credentials('TF_VAR_dns')
        TF_VAR_dns_id = credentials('TF_VAR_dns_id')
        TF_VAR_ssh_key = credentials('TF_VAR_ssh_key')
    }
    stages {
        stage('Prepare workspace') {
            steps {
                sh 'rm -rf *'
                git branch: 'aws', url: 'https://github.com/berndonline/typhoon-kubernetes.git'
                sh 'terraform init'
            }
        }
        stage('terraform apply') {
            steps {
                sshagent (credentials: ['fcdca8fa-aab9-3846-832f-4756392b7e2c']) {
                    sh 'terraform apply -auto-approve'
                    sh 'sleep 30'
                }
            }
        }
        stage('deploy nginx-ingress and heapster') {
            steps {    
                sh 'kubectl apply -R -f ./nginx-ingress/ --kubeconfig=./.secrets/clusters/typhoon/auth/kubeconfig'
                sh 'kubectl apply -R -f ./heapster/ --kubeconfig=./.secrets/clusters/typhoon/auth/kubeconfig'
                sh 'sleep 30'
            }
        }
        stage('deploy example application') {
            steps {    
                sh 'kubectl apply -f ./example/hello-kubernetes.yml --kubeconfig=./.secrets/clusters/typhoon/auth/kubeconfig'
            }
        }
        stage('Run terraform destroy') {
            steps {
                input 'Run terraform destroy?'
            }
        }
        stage('terraform destroy') {
            steps {
                sshagent (credentials: ['fcdca8fa-aab9-3846-832f-4756392b7e2c']) {
                    sh 'terraform destroy -force'
                }
            }
        }
    }
}

Let’s start the Jenkins pipeline:

Let’s check if I can access the hello-kubernetes application. For everyone who is interested, this is the link to the GitHub repository of the hello-kubernetes example application I have used.
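To verify it from the command line instead of the browser, I would check the ingress resource and curl its hostname; a sketch, assuming the example manifest creates an ingress behind the Nginx Ingress controller (the hostname is a placeholder):

$ kubectl get ingress --kubeconfig=./.secrets/clusters/typhoon/auth/kubeconfig
$ curl -s http://hello-kubernetes.<your-domain> | grep -i hello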

I really like the Typhoon Kubernetes distribution and the work that went into it to create an easy way for everyone to install a Kubernetes cluster and start using it in a few minutes. I also find the way they’ve used Terraform and Bootkube to deploy the platform on CoreOS very inspiring, and it gave me some ideas on how I can make use of it for production clusters.

I actually like CoreOS and the easy bootstrapping with Terraform and Bootkube, which I have not used before; I’ve always deployed OpenShift/Kubernetes on either Red Hat or CentOS with Ansible, and I find this a very interesting way to deploy a Kubernetes platform.

How to display OpenShift/Kubernetes namespace on bash prompt

A very short but useful post about how to display the current Kubernetes namespace on the bash command prompt. I got used to adding -n <namespace-name> when I execute an oc command, but it is still very useful to have the current namespace displayed on the command prompt, especially when troubleshooting issues, to not get lost in the different platform namespaces.

Create a new file ~/.oc-prompt.sh in your user’s home folder.

#!/bin/bash
__oc_ps1()
{
    # Read the current context from the kubeconfig; the grep stops at the
    # first "/", which for OpenShift context names (<namespace>/<cluster>/<user>)
    # leaves just the current namespace.
    CONTEXT=$(cat ~/.kube/config 2>/dev/null | grep -o '^current-context: [^/]*' | cut -d' ' -f2)

    if [ -n "$CONTEXT" ]; then
        echo "(ocp:${CONTEXT})"
    fi
}
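You can test the function directly before wiring it into the prompt (example output; myproject is just the namespace of my current context):

$ source ~/.oc-prompt.sh
$ __oc_ps1
(ocp:myproject)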

Add the following lines at the end of your ~/.bashrc and re-connect your terminal session.

NORMAL="\[\033[00m\]"
BLUE="\[\033[01;34m\]"
YELLOW="\[\e[1;33m\]"
GREEN="\[\e[1;32m\]"

export PS1="${BLUE}\W ${GREEN}\u${YELLOW}\$(__oc_ps1)${NORMAL} \$ "
source ~/.oc-prompt.sh

The example bash prompt showing the current OpenShift/Kubernetes namespace:
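Rendered as plain text (username and namespace are just examples, and the colors are of course missing here):

~ centos(ocp:myproject) $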

Very useful when you need to administer a cluster with multiple namespaces.