Kubernetes in Docker (KinD) – Cluster Bootstrap Script for Continuous Integration

I have been using Kubernetes in Docker (KinD) for over a year, and it’s ideal when you need an ephemeral Kubernetes cluster for local development or testing. My focus with the bootstrap script was to create a Kubernetes cluster where I can easily customise the configuration: choose the required CNI plugin and Ingress controller, or enable a Service Mesh if needed, which is especially important in continuous integration pipelines. Below are two simple examples of how I use KinD for testing.

I created the ./kind.sh shell script, which does what I need: it creates a cluster in a couple of minutes and applies the configuration.

    • Customise cluster configuration such as the Kubernetes version and the number of worker nodes, change the service and pod IP address subnets, and adjust a couple of other cluster-level settings.
    • You can choose from different CNI plugins like KinD-CNI (default), Calico and Cilium, or optionally enable the Multus-CNI on top of the CNI plugin.
    • Install the well-known Nginx or Contour Kubernetes ingress controllers. Contour is interesting because it is an Envoy-based Ingress controller and can be used with the Kubernetes Gateway API.
    • Enable Istio Service Mesh, which is also available as a Gateway API option, or install MetalLB, a load-balancer implementation for Kubernetes Services of type LoadBalancer.
    • Install Operator Lifecycle Manager (OLM) to install Kubernetes community operators from OperatorHub.io.
$ ./kind.sh --help
usage: kind.sh [--name <cluster-name>]
               [--num-workers <number>]
               [--config-file <file>]
               [--kubernetes-version <version>]
               [--cluster-apiaddress <address>]
               [--cluster-apiport <port>]
               [--cluster-loglevel <level>]
               [--cluster-podsubnet <subnet>]
               [--cluster-svcsubnet <subnet>]
               [--disable-default-cni]
               [--install-calico-cni]
               [--install-cilium-cni]
               [--install-multus-cni]
               [--install-istio]
               [--install-metallb]
               [--install-nginx-ingress]
               [--install-contour-ingress]
               [--install-istio-gateway-api]
               [--install-contour-gateway-api]
               [--install-olm]
               [--help]

--name                          Name of the KIND cluster
                                DEFAULT: kind
--num-workers                   Number of worker nodes.
                                DEFAULT: 0 worker nodes.
--config-file                   Name of the KIND J2 configuration file.
                                DEFAULT: ./kind.yaml.j2
--kubernetes-version            Flag to specify the Kubernetes version.
                                DEFAULT: Kubernetes v1.21.1
--cluster-apiaddress            Kubernetes API IP address for kind (master).
                                DEFAULT: 0.0.0.0.
--cluster-apiport               Kubernetes API port for kind (master).
                                DEFAULT: 6443.
--cluster-loglevel              Log level for kind (master).
                                DEFAULT: 4.
--cluster-podsubnet             Pod subnet IP address range.
                                DEFAULT: 10.128.0.0/14.
--cluster-svcsubnet             Service subnet IP address range.
                                DEFAULT: 172.30.0.0/16.
--disable-default-cni           Flag to disable Kind default CNI - required to install custom cni plugin.
                                DEFAULT: Default CNI used.
--install-calico-cni            Flag to install Calico CNI Components.
                                DEFAULT: Don't install calico cni components.
--install-cilium-cni            Flag to install Cilium CNI Components.
                                DEFAULT: Don't install cilium cni components.
--install-multus-cni            Flag to install Multus CNI Components.
                                DEFAULT: Don't install multus cni components.
--install-istio                 Flag to install Istio Service Mesh Components.
                                DEFAULT: Don't install istio components.
--install-metallb               Flag to install MetalLB Components.
                                DEFAULT: Don't install loadbalancer components.
--install-nginx-ingress         Flag to install Ingress Components - can't be used in combination with istio.
                                DEFAULT: Don't install ingress components.
--install-contour-ingress       Flag to install Ingress Components - can't be used in combination with istio.
                                DEFAULT: Don't install ingress components.
--install-istio-gateway-api     Flag to install Istio Service Mesh Gateway API Components.
                                DEFAULT: Don't install istio components.
--install-contour-gateway-api   Flag to install Ingress Components - can't be used in combination with istio.
                                DEFAULT: Don't install ingress components.
--install-olm                   Flag to install Operator Lifecycle Manager
                                DEFAULT: Don't install olm components.
                                Visit https://operatorhub.io to install available operators
--delete                        Delete Kind cluster.
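
To illustrate how the flags combine, a hypothetical CI invocation could look like this (the values are examples, not defaults):

# Example: three worker nodes, default CNI disabled,
# Calico as the CNI plugin and Contour as the ingress controller
./kind.sh --name ci-cluster \
          --num-workers 3 \
          --disable-default-cni \
          --install-calico-cni \
          --install-contour-ingress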

Based on the options you choose, the script renders the required KinD config YAML file and creates the cluster locally in a couple of minutes. To install Istio Service Mesh on KinD you also need the Istio profile, which you can find together with the bootstrap script in my GitHub Gists.
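
For reference, the rendered config follows the standard KinD v1alpha4 cluster schema. A minimal sketch of what the template output might look like with the defaults above (an illustration, not the script’s literal output):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0"     # --cluster-apiaddress
  apiServerPort: 6443             # --cluster-apiport
  podSubnet: "10.128.0.0/14"      # --cluster-podsubnet
  serviceSubnet: "172.30.0.0/16"  # --cluster-svcsubnet
  disableDefaultCNI: false        # true when --disable-default-cni is set
nodes:
  - role: control-plane
  - role: worker                  # one entry per worker (--num-workers)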

Let’s look into how I use KinD and the bootstrap script in Jenkins for continuous integration (CI). I have a pipeline which executes the bootstrap script to create the cluster on my Jenkins agent.

For now I have kept the configuration very simple and only need the Nginx Ingress controller in this example:

stages {
    stage('Prepare workspace') {
        steps {
            git credentialsId: 'github-ssh', url: 'git@gist.github.com:ab7fb36162f39dbed08f7bd90072a3d2.git'
        }
    }

    stage('Create Kind cluster') {
        steps {
            sh '''#!/bin/bash
            bash ./kind.sh --kubernetes-version v1.21.1 \
                           --install-nginx-ingress
            '''
        }
    }
    stage('Clean-up workspace') {
        steps {
            sh 'rm -rf *'
        }
    }
}
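
Since the help output above also lists a --delete flag, a tear-down stage that removes the cluster after the tests could be added along these lines (a sketch; the stage name is my choice):

stage('Delete Kind cluster') {
    steps {
        sh 'bash ./kind.sh --delete'
    }
}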

Log output of the script parameters:

I have written a Go Helloworld application and a Jenkins pipeline which runs the Go unit tests and builds the container image. It also triggers the build job for the create-kind-cluster pipeline to spin up the Kubernetes cluster.

...
stage ('Create Kind cluster') {
    steps {
        build job: 'create-kind-cluster'
    }
}
...

It then continues to deploy the newly built Helloworld container image and executes a simple end-to-end ingress test.
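
The end-to-end test itself boils down to something like the following sketch; the manifest, deployment name and hostname are placeholders, and it assumes the KinD node’s ingress ports are mapped to the host:

# deploy the newly built Helloworld image and wait for the rollout
kubectl apply -f ./helloworld-deployment.yaml
kubectl rollout status deployment/helloworld --timeout=120s

# simple end-to-end test through the Nginx ingress controller
curl -s -o /dev/null -w "%{http_code}\n" \
     -H "Host: helloworld.local" http://localhost/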

I also use this same approach for my Go Helloworld Kubernetes operator build pipeline. It builds the Go operator and again triggers the build job to create the KinD cluster. It then deploys the Helloworld operator, applies the Custom Resources, and finishes with a simple end-to-end ingress test.
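
A deployment stage in that pipeline might look roughly like this (the manifest paths are illustrative):

stage('Deploy operator and custom resources') {
    steps {
        sh '''#!/bin/bash
        kubectl apply -f ./deploy/operator.yaml
        kubectl apply -f ./deploy/crds/
        '''
    }
}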

I hope this is an interesting and useful article. Visit my GitHub Gists to download the KinD bootstrap script.

Getting started with GKE – Google Kubernetes Engine

I have not spent much time with Google Cloud Platform because I have mostly used AWS cloud services like EKS, but I wanted to give Google’s GKE (Kubernetes Engine) a try to compare both offerings. My first impression is great: it is remarkably easy to create a cluster and to enable options for NetworkPolicy or Istio Service Mesh without having to install these manually, compared to AWS EKS.

The GKE integration into the rest of the cloud offering is excellent: there is no need for a Kubernetes dashboard or custom monitoring/logging solutions, as everything is nicely integrated into the Google Cloud services and can be used straight away once you have created the cluster.

I created a new project called Kubernetes for deploying the GKE cluster. The command below creates a GKE cluster with the defined settings and options, and I really like the simplicity of a single command to create and manage the cluster, similar to what eksctl does:

gcloud beta container --project "kubernetes-xxxxxx" clusters create "cluster-1" \
  --region "europe-west1" \
  --no-enable-basic-auth \
  --cluster-version "1.15.4-gke.22" \
  --machine-type "n1-standard-2" \
  --image-type "COS" \
  --disk-type "pd-standard" \
  --disk-size "100" \
  --metadata disable-legacy-endpoints=true \
  --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
  --num-nodes "1" \
  --enable-stackdriver-kubernetes \
  --enable-ip-alias \
  --network "projects/kubernetes-xxxxxx/global/networks/default" \
  --subnetwork "projects/kubernetes-xxxxxx/regions/europe-west1/subnetworks/default" \
  --default-max-pods-per-node "110" \
  --enable-network-policy \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,Istio \
  --istio-config auth=MTLS_PERMISSIVE \
  --enable-autoupgrade \
  --enable-autorepair \
  --maintenance-window-start "2019-12-29T00:00:00Z" \
  --maintenance-window-end "2019-12-30T00:00:00Z" \
  --maintenance-window-recurrence "FREQ=WEEKLY;BYDAY=MO,TU,WE,TH,FR,SA,SU" \
  --enable-vertical-pod-autoscaling

With the gcloud command you can authenticate and generate a kubeconfig file for your cluster and start using kubectl directly to deploy your applications.

gcloud beta container clusters get-credentials cluster-1 --region europe-west1 --project kubernetes-xxxxxx
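
To confirm that the credentials work and the node pool is ready, a couple of standard checks:

kubectl cluster-info
kubectl get nodes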

As I mentioned, there is no need for a Kubernetes dashboard because it is integrated into the Google Kubernetes Engine console. You are able to see cluster information and deployed workloads, and you can drill down to detailed information about running pods:

Google offers the Kubernetes control plane for free, which is a massive advantage for GKE; AWS, on the other hand, charges around $144 per month for the EKS control plane.

You can keep your GKE control-plane running and scale down your instance pool to zero if no compute capacity is needed and scale up later if required:

# scale down node pool
gcloud container clusters resize cluster-1 --num-nodes=0 --region "europe-west1"

# scale up node pool 
gcloud container clusters resize cluster-1 --num-nodes=1 --region "europe-west1"

Let’s deploy the Google microservices demo application with Istio Service Mesh enabled:

# label default namespace to inject Envoy sidecar
kubectl label namespace default istio-injection=enabled

# check istio sidecar injector label
kubectl get namespace -L istio-injection

# deploy Google microservices demo manifests
kubectl create -f https://raw.githubusercontent.com/berndonline/microservices-demo/master/kubernetes-manifests/hipster-shop.yml
kubectl create -f https://raw.githubusercontent.com/berndonline/microservices-demo/master/istio-manifests/istio.yml

Get the public IP addresses for the frontend service and ingress gateway to connect with your browser:

# get frontend-external service IP address
kubectl get svc frontend-external --no-headers | awk '{ print $4 }'

# get istio ingress gateway service IP address
kubectl get svc istio-ingressgateway -n istio-system --no-headers | awk '{ print $4 }'

To delete the GKE cluster simply run the following gcloud command:

gcloud beta container --project "kubernetes-xxxxxx" clusters delete "cluster-1" --region "europe-west1"

Google’s Kubernetes Engine is, in my opinion, the better offering compared to AWS EKS, which seems a bit too basic.

Running Istio Service Mesh on Amazon EKS

I have not spent too much time with Istio in the last few weeks, but after my previous article about running Istio Service Mesh on OpenShift I wanted to do the same and deploy Istio Service Mesh on an Amazon EKS cluster. This time I used the recommended way of deploying Istio with a Helm template, which is more flexible than the Ansible operator used for the OpenShift deployment.

Once you have created your EKS cluster you can start. There are not many prerequisites for EKS: you basically create the istio-system namespace and a secret for Kiali, and then deploy the Helm template:

kubectl create namespace istio-system

USERNAME=$(echo -n 'admin' | base64)
PASSPHRASE=$(echo -n 'supersecretpassword!!' | base64)
NAMESPACE=istio-system

cat <<EOF | kubectl apply -n istio-system -f -
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: $NAMESPACE
  labels:
    app: kiali
type: Opaque
data:
  username: $USERNAME
  passphrase: $PASSPHRASE
EOF

You then create the Custom Resource Definitions (CRDs) for Istio:

helm template istio-1.1.4/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -  

# Check the created Istio CRDs (CRDs are cluster-scoped, so no namespace is needed)
kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l

At this point you can deploy the main Istio Helm template. See the installation options for more detail about customizing the installation:

helm template istio-1.1.4/install/kubernetes/helm/istio --name istio --namespace istio-system  --set grafana.enabled=true --set tracing.enabled=true --set kiali.enabled=true --set kiali.dashboard.secretName=kiali --set kiali.dashboard.usernameKey=username --set kiali.dashboard.passphraseKey=passphrase | kubectl apply -f -
 
# Validate and see that all components start
kubectl get pods -n istio-system -w  

The Kiali service is of type ClusterIP, which we need to change to type LoadBalancer:

kubectl patch svc kiali -n istio-system --patch '{"spec": {"type": "LoadBalancer" }}'

# Get the created AWS ELB hostname for the Kiali service
$ kubectl get svc kiali -n istio-system --no-headers | awk '{ print $4 }'
abbf8224773f111e99e8a066e034c3d4-78576474.eu-west-1.elb.amazonaws.com

Now we are able to access the Kiali dashboard and log in with the credentials I specified earlier in the Kiali secret.

We didn’t deploy anything else yet so the default namespace is empty:

I recommend having a look at the Istio sidecar injection. If your Istio sidecar containers are not getting deployed you might have forgotten to allow TCP port 443 from your control plane to the worker nodes. Have a look at the GitHub issue about this: Admission control webhooks (e.g. sidecar injector) don’t work on EKS.
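
On EKS this typically means adding an ingress rule that allows the control plane security group to reach the worker nodes on TCP port 443. A sketch (sg-WORKER and sg-CONTROLPLANE are placeholders for your actual security group IDs):

# allow the EKS control plane to reach the admission webhooks on the workers
aws ec2 authorize-security-group-ingress \
    --group-id sg-WORKER \
    --protocol tcp --port 443 \
    --source-group sg-CONTROLPLANE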

We can continue and deploy the Google Hipster Shop example.

# Label default namespace to inject Envoy sidecar
kubectl label namespace default istio-injection=enabled

# Check istio sidecar injector label
kubectl get namespace -L istio-injection

# Deploy Google hipster shop manifests
kubectl create -f https://raw.githubusercontent.com/berndonline/aws-eks-terraform/master/example/istio-hipster-shop.yml
kubectl create -f https://raw.githubusercontent.com/berndonline/aws-eks-terraform/master/example/istio-manifest.yml

# Wait a few minutes before deploying the load generator
kubectl create -f https://raw.githubusercontent.com/berndonline/aws-eks-terraform/master/example/istio-loadgenerator.yml

Once the application is deployed and healthy, we can check the Kiali dashboard again. If there are issues with the Envoy sidecar you will see a “Missing Sidecar” warning:

We are also able to see the graph which shows detailed traffic flows within the microservice application.

Let’s get the hostname for the istio-ingressgateway service and connect via the web browser:

$ kubectl get svc istio-ingressgateway -n istio-system --no-headers | awk '{ print $4 }'
a16f7090c74ca11e9a1fb02cd763ca9e-362893117.eu-west-1.elb.amazonaws.com

Before you destroy your EKS cluster you should remove all installed components, because Kubernetes services of type LoadBalancer create AWS ELBs which will not get deleted automatically and would otherwise be left behind when you delete the EKS cluster:

kubectl label namespace default istio-injection-
kubectl delete -f https://raw.githubusercontent.com/berndonline/aws-eks-terraform/master/example/istio-loadgenerator.yml
kubectl delete -f https://raw.githubusercontent.com/berndonline/aws-eks-terraform/master/example/istio-hipster-shop.yml
kubectl delete -f https://raw.githubusercontent.com/berndonline/aws-eks-terraform/master/example/istio-manifest.yml

Finally, to remove Istio from EKS, you run the same Helm template command but pipe the output to kubectl delete:

helm template istio-1.1.4/install/kubernetes/helm/istio --name istio --namespace istio-system  --set grafana.enabled=true --set tracing.enabled=true --set kiali.enabled=true --set kiali.dashboard.secretName=kiali --set kiali.dashboard.usernameKey=username --set kiali.dashboard.passphraseKey=passphrase | kubectl delete -f -

It is very simple to get started with Istio Service Mesh on EKS. If I find some time I will give Istio Multicluster a try and see how it works to span the service mesh across multiple Kubernetes clusters.

Running Istio Service Mesh on OpenShift

In the Kubernetes/OpenShift community everyone is talking about Istio service mesh, so I wanted to share my experience of installing and running a sample microservice application with Istio on OpenShift 3.11 and 4.0. Service mesh on OpenShift is still at least a few months away from being generally available to run in production, but this gives you the possibility to start testing and exploring Istio. I have found good documentation about installing Istio on OCP and OKD; have a look for more information.

To install Istio on OpenShift 3.11 you need to apply the node and master prerequisites you see below; for OpenShift 4.0 and above you can skip these steps and go directly to the istio-operator installation:

sudo bash -c 'cat << EOF > /etc/origin/master/master-config.patch
admissionConfig:
  pluginConfig:
    MutatingAdmissionWebhook:
      configuration:
        apiVersion: apiserver.config.k8s.io/v1alpha1
        kubeConfigFile: /dev/null
        kind: WebhookAdmission
    ValidatingAdmissionWebhook:
      configuration:
        apiVersion: apiserver.config.k8s.io/v1alpha1
        kubeConfigFile: /dev/null
        kind: WebhookAdmission
EOF'
        
sudo cp -p /etc/origin/master/master-config.yaml /etc/origin/master/master-config.yaml.prepatch
sudo bash -c 'oc ex config patch /etc/origin/master/master-config.yaml.prepatch -p "$(cat /etc/origin/master/master-config.patch)" > /etc/origin/master/master-config.yaml'
sudo su -
master-restart api
master-restart controllers
exit       

sudo bash -c 'cat << EOF > /etc/sysctl.d/99-elasticsearch.conf 
vm.max_map_count = 262144
EOF'

sudo sysctl vm.max_map_count=262144

The Istio installation is straightforward; start by installing the istio-operator:

oc new-project istio-operator
oc new-app -f https://raw.githubusercontent.com/Maistra/openshift-ansible/maistra-0.9/istio/istio_community_operator_template.yaml --param=OPENSHIFT_ISTIO_MASTER_PUBLIC_URL=<-master-public-hostname->

Verify the operator deployment:

oc logs -n istio-operator $(oc -n istio-operator get pods -l name=istio-operator --output=jsonpath={.items..metadata.name})

Once the operator is running we can start deploying Istio components by creating a custom resource:

cat << EOF >  ./istio-installation.yaml
apiVersion: "istio.openshift.com/v1alpha1"
kind: "Installation"
metadata:
  name: "istio-installation"
  namespace: istio-operator
EOF

oc create -n istio-operator -f ./istio-installation.yaml

Check and watch the Istio installation progress which might take a while to complete:

oc get pods -n istio-system -w

# The installation of the core components is finished when you see:
...
openshift-ansible-istio-installer-job-cnw72   0/1       Completed   0         4m

Afterwards, to finish off the Istio installation, we need to install the Kiali web console:

bash <(curl -L https://git.io/getLatestKialiOperator)
oc get route -n istio-system -l app=kiali

Verifying that all Istio components are running:

$ oc get pods -n istio-system
NAME                                          READY     STATUS      RESTARTS   AGE
elasticsearch-0                               1/1       Running     0          9m
grafana-74b5796d94-4ll5d                      1/1       Running     0          9m
istio-citadel-db879c7f8-kfxfk                 1/1       Running     0          11m
istio-egressgateway-6d78858d89-58lsd          1/1       Running     0          11m
istio-galley-6ff54d9586-8r7cl                 1/1       Running     0          11m
istio-ingressgateway-5dcf9fdf4b-4fjj5         1/1       Running     0          11m
istio-pilot-7ccf64f659-ghh7d                  2/2       Running     0          11m
istio-policy-6c86656499-v45zr                 2/2       Running     3          11m
istio-sidecar-injector-6f696b8495-8qqjt       1/1       Running     0          11m
istio-telemetry-686f78b66b-v7ljf              2/2       Running     3          11m
jaeger-agent-k4tpz                            1/1       Running     0          9m
jaeger-collector-64bc5678dd-wlknc             1/1       Running     0          9m
jaeger-query-776d4d754b-8z47d                 1/1       Running     0          9m
kiali-5fd946b855-7lw2h                        1/1       Running     0          2m
openshift-ansible-istio-installer-job-cnw72   0/1       Completed   0          13m
prometheus-75b849445c-l7rlr                   1/1       Running     0          11m

Let’s start deploying the microservice application example using the Google Hipster Shop; it contains multiple microservices, which makes it great for testing with Istio:

# Create new project
oc new-project hipster-shop

# Set permissions to allow Istio to deploy the Envoy-Proxy side-car container
oc adm policy add-scc-to-user anyuid -z default -n hipster-shop
oc adm policy add-scc-to-user privileged -z default -n hipster-shop

# Create Hipster Shop deployments and Istio services
oc create -f https://raw.githubusercontent.com/berndonline/openshift-ansible/master/examples/istio-hipster-shop.yml
oc create -f https://raw.githubusercontent.com/berndonline/openshift-ansible/master/examples/istio-manifest.yml

# Wait and check that all pods are running before creating the load generator
oc get pods -n hipster-shop -w

# Create load generator deployment
oc create -f https://raw.githubusercontent.com/berndonline/openshift-ansible/master/examples/istio-loadgenerator.yml

As you can see below, each pod has a sidecar container with the Istio Envoy proxy which handles the pod traffic:

$ oc get pods
NAME                                     READY     STATUS    RESTARTS   AGE
adservice-7894dbfd8c-g4m9v               2/2       Running   0          49m
cartservice-758d66c648-79fj4             2/2       Running   4          49m
checkoutservice-7b9dc8b755-h2b2v         2/2       Running   0          49m
currencyservice-7b5c5f48fc-gtm9x         2/2       Running   0          49m
emailservice-79578566bb-jvwbw            2/2       Running   0          49m
frontend-6497c5f748-5fc4f                2/2       Running   0          49m
loadgenerator-764c5547fc-sw6mg           2/2       Running   0          40m
paymentservice-6b989d657c-klp4d          2/2       Running   0          49m
productcatalogservice-5bfbf4c77c-cw676   2/2       Running   0          49m
recommendationservice-c947d84b5-svbk8    2/2       Running   0          49m
redis-cart-79d84748cf-cvg86              2/2       Running   0          49m
shippingservice-6ccb7d8ff7-66v8m         2/2       Running   0          49m

The Kiali web console answers the question of which microservices are part of the service mesh and how they are connected, giving you a great level of detail about the traffic flows:

Detailed traffic flow view:

The Istio installation comes with Jaeger, an open source tracing tool to monitor and troubleshoot transactions:

Enough about this, let’s connect to our cool Hipster Shop and happy shopping:

Additionally there is another example, the Istio Bookinfo, if you want to try something smaller and less complex:

oc new-project myproject

oc adm policy add-scc-to-user anyuid -z default -n myproject
oc adm policy add-scc-to-user privileged -z default -n myproject

oc apply -n myproject -f https://raw.githubusercontent.com/Maistra/bookinfo/master/bookinfo.yaml
oc apply -n myproject -f https://raw.githubusercontent.com/Maistra/bookinfo/master/bookinfo-gateway.yaml
export GATEWAY_URL=$(oc get route -n istio-system istio-ingressgateway -o jsonpath='{.spec.host}')
curl -o /dev/null -s -w "%{http_code}\n" http://$GATEWAY_URL/productpage

curl -o destination-rule-all.yaml https://raw.githubusercontent.com/istio/istio/release-1.0/samples/bookinfo/networking/destination-rule-all.yaml
oc apply -f destination-rule-all.yaml

curl -o destination-rule-all-mtls.yaml https://raw.githubusercontent.com/istio/istio/release-1.0/samples/bookinfo/networking/destination-rule-all-mtls.yaml
oc apply -f destination-rule-all-mtls.yaml

oc get destinationrules -o yaml

I hope this is a useful article for getting started with Istio service mesh on OpenShift.