Getting started with OpenShift Hive

If you don’t know OpenShift Hive yet, I recommend having a look at the video of my talk at RedHat OpenShift Commons, where I explain how you can provision and manage the lifecycle of OpenShift 4 clusters using the Kubernetes API and the OpenShift Hive operator.

The Hive operator has three main components: the admission controller, the Hive controller and the Hive operator itself. For more information about the Hive architecture, visit the Hive docs:

You can use an OpenShift or a native Kubernetes cluster to run the operator; in my case I use an EKS cluster. Let’s go through the prerequisites which are required to generate the manifests and the hiveutil binary:

$ curl -s "https://raw.githubusercontent.com/\
> kubernetes-sigs/kustomize/master/hack/install_kustomize.sh"  | bash
$ sudo mv ./kustomize /usr/bin/
$ wget https://dl.google.com/go/go1.13.3.linux-amd64.tar.gz
$ tar -xvf go1.13.3.linux-amd64.tar.gz
$ sudo mv go /usr/local

To set up the Go environment, copy the content below and add it to your .profile:

export GOPATH="${HOME}/.go"
export PATH="$PATH:/usr/local/go/bin"
export PATH="$PATH:${GOPATH}/bin:${GOROOT}/bin"

Continue with installing the Go dependencies and clone the OpenShift Hive GitHub repository:

$ mkdir -p ~/.go/src/github.com/openshift/
$ go get github.com/golang/mock/mockgen
$ go get github.com/golang/mock/gomock
$ go get github.com/cloudflare/cfssl/cmd/cfssl
$ go get github.com/cloudflare/cfssl/cmd/cfssljson
$ cd ~/.go/src/github.com/openshift/
$ git clone https://github.com/openshift/hive.git
$ cd hive/
$ git checkout remotes/origin/master

Before we run make deploy I would recommend modifying the Makefile so that we only generate the Hive manifests without deploying them to Kubernetes:

$ sed -i -e 's#oc apply -f config/crds# #' -e 's#kustomize build overlays/deploy | oc apply -f -#kustomize build overlays/deploy > hive.yaml#' Makefile
$ make deploy
# The apis-path is explicitly specified so that CRDs are not created for v1alpha1
go run tools/vendor/sigs.k8s.io/controller-tools/cmd/controller-gen/main.go crd --apis-path=pkg/apis/hive/v1
CRD files generated, files can be found under path /home/ubuntu/.go/src/github.com/openshift/hive/config/crds.
go generate ./pkg/... ./cmd/...
hack/update-bindata.sh
# Deploy the operator manifests:
mkdir -p overlays/deploy
cp overlays/template/kustomization.yaml overlays/deploy
cd overlays/deploy && kustomize edit set image registry.svc.ci.openshift.org/openshift/hive-v4.0:hive=registry.svc.ci.openshift.org/openshift/hivev1:hive
kustomize build overlays/deploy > hive.yaml
rm -rf overlays/deploy

A quick look at the content of the hive.yaml manifest:

$ cat hive.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: hive
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hive-operator
  namespace: hive

...

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    control-plane: hive-operator
    controller-tools.k8s.io: "1.0"
  name: hive-operator
  namespace: hive
spec:
  replicas: 1
  revisionHistoryLimit: 4
  selector:
    matchLabels:
      control-plane: hive-operator
      controller-tools.k8s.io: "1.0"
  template:
    metadata:
      labels:
        control-plane: hive-operator
        controller-tools.k8s.io: "1.0"
    spec:
      containers:
      - command:
        - /opt/services/hive-operator
        - --log-level
        - info
        env:
        - name: CLI_CACHE_DIR
          value: /var/cache/kubectl
        image: registry.svc.ci.openshift.org/openshift/hive-v4.0:hive
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 1
          httpGet:
            path: /debug/health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
        name: hive-operator
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
        volumeMounts:
        - mountPath: /var/cache/kubectl
          name: kubectl-cache
      serviceAccountName: hive-operator
      terminationGracePeriodSeconds: 10
      volumes:
      - emptyDir: {}
        name: kubectl-cache

Now we can apply the Hive custom resource definitions (CRDs):

$ kubectl apply -f ./config/crds/
customresourcedefinition.apiextensions.k8s.io/checkpoints.hive.openshift.io created
customresourcedefinition.apiextensions.k8s.io/clusterdeployments.hive.openshift.io created
customresourcedefinition.apiextensions.k8s.io/clusterdeprovisions.hive.openshift.io created
customresourcedefinition.apiextensions.k8s.io/clusterimagesets.hive.openshift.io created
customresourcedefinition.apiextensions.k8s.io/clusterprovisions.hive.openshift.io created
customresourcedefinition.apiextensions.k8s.io/clusterstates.hive.openshift.io created
customresourcedefinition.apiextensions.k8s.io/dnszones.hive.openshift.io created
customresourcedefinition.apiextensions.k8s.io/hiveconfigs.hive.openshift.io created
customresourcedefinition.apiextensions.k8s.io/machinepools.hive.openshift.io created
customresourcedefinition.apiextensions.k8s.io/selectorsyncidentityproviders.hive.openshift.io created
customresourcedefinition.apiextensions.k8s.io/selectorsyncsets.hive.openshift.io created
customresourcedefinition.apiextensions.k8s.io/syncidentityproviders.hive.openshift.io created
customresourcedefinition.apiextensions.k8s.io/syncsets.hive.openshift.io created
customresourcedefinition.apiextensions.k8s.io/syncsetinstances.hive.openshift.io created

Then continue and apply the hive.yaml manifest to deploy the OpenShift Hive operator and its components:

$ kubectl apply -f hive.yaml
namespace/hive created
serviceaccount/hive-operator created
clusterrole.rbac.authorization.k8s.io/hive-frontend created
clusterrole.rbac.authorization.k8s.io/hive-operator-role created
clusterrole.rbac.authorization.k8s.io/manager-role created
clusterrole.rbac.authorization.k8s.io/system:openshift:hive:hiveadmission created
rolebinding.rbac.authorization.k8s.io/extension-server-authentication-reader-hiveadmission created
clusterrolebinding.rbac.authorization.k8s.io/auth-delegator-hiveadmission created
clusterrolebinding.rbac.authorization.k8s.io/hive-frontend created
clusterrolebinding.rbac.authorization.k8s.io/hive-operator-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/hiveadmission-hive-hiveadmission created
clusterrolebinding.rbac.authorization.k8s.io/hiveapi-cluster-admin created
clusterrolebinding.rbac.authorization.k8s.io/manager-rolebinding created
deployment.apps/hive-operator created

For the Hive admission controller you need to generate an SSL certificate:

$ ./hack/hiveadmission-dev-cert.sh
~/Dropbox/hive/hiveadmission-certs ~/Dropbox/hive
2020/02/03 22:17:30 [INFO] generate received request
2020/02/03 22:17:30 [INFO] received CSR
2020/02/03 22:17:30 [INFO] generating key: ecdsa-256
2020/02/03 22:17:30 [INFO] encoded CSR
certificatesigningrequest.certificates.k8s.io/hiveadmission.hive configured
certificatesigningrequest.certificates.k8s.io/hiveadmission.hive approved
-----BEGIN CERTIFICATE-----
MIICaDCCAVCgAwIBAgIQHvvDPncIWHRcnDzzoWGjQDANBgkqhkiG9w0BAQsFADAv
MS0wKwYDVQQDEyRiOTk2MzhhNS04OWQyLTRhZTAtYjI4Ny1iMWIwOGNmOGYyYjAw
HhcNMjAwMjAzMjIxNTA3WhcNMjUwMjAxMjIxNTA3WjAhMR8wHQYDVQQDExZoaXZl
YWRtaXNzaW9uLmhpdmUuc3ZjMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEea4N
UPbvzM3VdtOkdJ7lBytekRTvwGMqs9HgG14CtqCVCOFq8f+BeqqyrRbJsX83iBfn
gMc54moElb5kIQNjraNZMFcwDAYDVR0TAQH/BAIwADBHBgNVHREEQDA+ghZoaXZl
YWRtaXNzaW9uLmhpdmUuc3ZjgiRoaXZlYWRtaXNzaW9uLmhpdmUuc3ZjLmNsdXN0
ZXIubG9jYWwwDQYJKoZIhvcNAQELBQADggEBADhgT3tNnFs6hBIZFfWmoESe6nnZ
fy9GmlmF9qEBo8FZSk/LYvV0peOdgNZCHqsT2zaJjxULqzQ4zfSb/koYpxeS4+Bf
xwgHzIB/ylzf54wVkILWUFK3GnYepG5dzTXS7VHc4uiNJe0Hwc5JI4HBj7XdL3C7
cbPm7T2cBJi2jscoCWELWo/0hDxkcqZR7rdeltQQ+Uhz87LhTTqlknAMFzL7tM/+
pJePZMQgH97vANsbk97bCFzRZ4eABYSiN0iAB8GQM5M+vK33ZGSVQDJPKQQYH6th
Kzi9wrWEeyEtaWozD5poo9s/dxaLxFAdPDICkPB2yr5QZB+NuDgA+8IYffo=
-----END CERTIFICATE-----
secret/hiveadmission-serving-cert created
~/Dropbox/hive

Afterwards we can check if all the pods are running; this might take a few seconds:

$ kubectl get pods -n hive
NAME                                READY   STATUS    RESTARTS   AGE
hive-controllers-7c6ccc84b9-q7k7m   1/1     Running   0          31s
hive-operator-f9f4447fd-jbmkh       1/1     Running   0          55s
hiveadmission-6766c5bc6f-9667g      1/1     Running   0          27s
hiveadmission-6766c5bc6f-gvvlq      1/1     Running   0          27s

The Hive operator is now successfully installed on your Kubernetes cluster, but we are not finished yet. To create the required Cluster Deployment manifests we need to build the hiveutil binary:

$ make hiveutil
go generate ./pkg/... ./cmd/...
hack/update-bindata.sh
go build -o bin/hiveutil github.com/openshift/hive/contrib/cmd/hiveutil

To generate the Hive Cluster Deployment manifests, run the hiveutil command below; with -o yaml I output the definitions as YAML:

$ bin/hiveutil create-cluster --base-domain=mydomain.example.com --cloud=aws mycluster -o yaml
apiVersion: v1
items:
- apiVersion: hive.openshift.io/v1
  kind: ClusterImageSet
  metadata:
    creationTimestamp: null
    name: mycluster-imageset
  spec:
    releaseImage: quay.io/openshift-release-dev/ocp-release:4.3.2-x86_64
  status: {}
- apiVersion: v1
  kind: Secret
  metadata:
    creationTimestamp: null
    name: mycluster-aws-creds
  stringData:
    aws_access_key_id: <-YOUR-AWS-ACCESS-KEY->
    aws_secret_access_key: <-YOUR-AWS-SECRET-KEY->
  type: Opaque
- apiVersion: v1
  data:
    install-config.yaml: <-BASE64-ENCODED-OPENSHIFT4-INSTALL-CONFIG->
  kind: Secret
  metadata:
    creationTimestamp: null
    name: mycluster-install-config
  type: Opaque
- apiVersion: hive.openshift.io/v1
  kind: ClusterDeployment
  metadata:
    creationTimestamp: null
    name: mycluster
  spec:
    baseDomain: mydomain.example.com
    clusterName: mycluster
    controlPlaneConfig:
      servingCertificates: {}
    installed: false
    platform:
      aws:
        credentialsSecretRef:
          name: mycluster-aws-creds
        region: us-east-1
    provisioning:
      imageSetRef:
        name: mycluster-imageset
      installConfigSecretRef:
        name: mycluster-install-config
  status:
    clusterVersionStatus:
      availableUpdates: null
      desired:
        force: false
        image: ""
        version: ""
      observedGeneration: 0
      versionHash: ""
- apiVersion: hive.openshift.io/v1
  kind: MachinePool
  metadata:
    creationTimestamp: null
    name: mycluster-worker
  spec:
    clusterDeploymentRef:
      name: mycluster
    name: worker
    platform:
      aws:
        rootVolume:
          iops: 100
          size: 22
          type: gp2
        type: m4.xlarge
    replicas: 3
  status:
    replicas: 0
kind: List
metadata: {}
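
hiveutil only prints the manifests; applying them is what actually starts the provisioning. As a rough sketch (assuming valid AWS credentials and install config in the generated secrets), you could pipe the output straight into kubectl and then watch the ClusterDeployment:

$ bin/hiveutil create-cluster --base-domain=mydomain.example.com --cloud=aws mycluster -o yaml | kubectl apply -f -
$ kubectl get clusterdeployment mycluster
$ kubectl get clusterprovisions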

I hope this post is useful in getting you started with OpenShift Hive. In my next article I will go through the details of the OpenShift 4 cluster deployment with Hive.

Read my new article about OpenShift / OKD 4.x Cluster Deployment using OpenShift Hive

OpenShift Hive – API driven OpenShift cluster provisioning and management operator

RedHat invited me and my colleague Matt to speak at RedHat OpenShift Commons in London about the API driven OpenShift cluster provisioning and management operator called OpenShift Hive. We have been using OpenShift Hive for the past few months to provision and manage the OpenShift 4 estate across multiple environments. Below is the video recording of our talk at OpenShift Commons London:

The Hive operator needs to run on a separate Kubernetes cluster to centrally provision and manage the OpenShift 4 clusters. With Hive you can manage hundreds of cluster deployments and configurations with a single operator. Nothing is required on the OpenShift 4 clusters themselves; Hive only needs access to the cluster API:

The ClusterDeployment custom resource is the definition of the cluster specification, similar to the openshift-installer install-config, where you define the cluster details, cloud credentials and image pull secrets. Below is an example of a ClusterDeployment manifest:

---
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: mycluster
  namespace: mynamespace
spec:
  baseDomain: hive.example.com
  clusterName: mycluster
  platform:
    aws:
      credentialsSecretRef:
        name: mycluster-aws-creds
      region: eu-west-1
  provisioning:
    imageSetRef:
      name: openshift-v4.3.0
    installConfigSecretRef:
      name: mycluster-install-config
    sshPrivateKeySecretRef:
      name: mycluster-ssh-key
  pullSecretRef:
    name: mycluster-pull-secret
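
Once the ClusterDeployment and its referenced secrets are applied, Hive starts provisioning the cluster. A quick way to follow the progress from the Hive cluster (a sketch, assuming the manifests live in mynamespace):

$ kubectl get clusterdeployment -n mynamespace
$ kubectl get clusterprovisions -n mynamespace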

The SyncSet custom resource defines the cluster configuration and regularly reconciles the manifests to keep all clusters synchronised. With SyncSets you can apply resources and patches as you see in the example below:

---
apiVersion: hive.openshift.io/v1
kind: SyncSet
metadata:
  name: mygroup
spec:
  clusterDeploymentRefs:
  - name: ClusterName
  resourceApplyMode: Upsert
  resources:
  - apiVersion: user.openshift.io/v1
    kind: Group
    metadata:
      name: mygroup
    users:
    - myuser
  patches:
  - kind: ConfigMap
    apiVersion: v1
    name: foo
    namespace: default
    patch: |-
      { "data": { "foo": "new-bar" } }
    patchType: merge
  secretReferences:
  - source:
      name: ad-bind-password
      namespace: default
    target:
      name: ad-bind-password
      namespace: openshift-config
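
Hive records the sync status per cluster, so you can check from the Hive cluster whether the resources and patches were applied (a rough sketch, assuming your manifests live in mynamespace):

$ kubectl get syncsets -n mynamespace
$ kubectl get syncsetinstances -n mynamespace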

Depending on the number of resources and patches you want to apply, a SyncSet can get pretty large and is not very easy to manage. My colleague Matt wrote a SyncSet generator; please check out this GitHub repository.

In one of my next articles I will go into more detail on how to deploy OpenShift Hive and provide more examples of how to use ClusterDeployments and SyncSets. In the meantime please check out the OpenShift Hive repository for more details; additionally, here are links to the Hive documentation on using Hive and SyncSets.

Read my new article about installing OpenShift Hive.

Using the Operator Lifecycle Manager and creating a custom Operator Catalog for Kubernetes

At the beginning of 2019 RedHat announced the launch of OperatorHub.io, and a lot has happened since then: OpenShift version 4 was released, which is fully managed by Kubernetes operators, and other vendors started to release their own operators to deploy their applications to Kubernetes. Even creating your own operators is becoming more popular and state-of-the-art if you run your own Kubernetes clusters.

I want to go into the details of how the Operator Lifecycle Manager (OLM) works and how you can create your own operator Catalog Server and use it with Kubernetes to install operators. Before we start, a quick overview: the OLM is responsible for installing and managing the lifecycle of Kubernetes operators, and it uses a CatalogSource from which it installs the operators.

We start by creating our own Catalog Server, which is straightforward: we need to clone the community-operators repository and build a new catalog-server container image. If you would like to modify which operators should be in the catalog, just delete the operators in the ./upstream-community-operators/ folder or add your own. In my example I only want to keep the SysDig operator:

git clone https://github.com/operator-framework/community-operators
cd community-operators/
rm ci.Dockerfile
mv upstream.Dockerfile Dockerfile
cd upstream-community-operators/
rm -rfv !("sysdig")
cd ..

Now we can build the new catalog container image and push to the registry:

$ docker build . --rm -t berndonline/catalog-server
Sending build context to Docker daemon  23.34MB
Step 1/10 : FROM quay.io/operator-framework/upstream-registry-builder:v1.3.0 as builder
 ---> e08ceacda476
Step 2/10 : COPY upstream-community-operators manifests
 ---> 9e4b4e98a968
Step 3/10 : RUN ./bin/initializer -o ./bundles.db
 ---> Running in b11415b71497
time="2020-01-11T15:14:02Z" level=info msg="loading Bundles" dir=manifests
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=manifests load=bundles
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=sysdig load=bundles
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=1.4.0 load=bundles
time="2020-01-11T15:14:02Z" level=info msg="found csv, loading bundle" dir=manifests file=sysdig-operator.v1.4.0.clusterserviceversion.yaml load=bundles
time="2020-01-11T15:14:02Z" level=info msg="loading bundle file" dir=manifests file=sysdig-operator.v1.4.0.clusterserviceversion.yaml load=bundle
time="2020-01-11T15:14:02Z" level=info msg="loading bundle file" dir=manifests file=sysdigagents.sysdig.com.crd.yaml load=bundle
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=1.4.7 load=bundles
time="2020-01-11T15:14:02Z" level=info msg="found csv, loading bundle" dir=manifests file=sysdig-operator.v1.4.7.clusterserviceversion.yaml load=bundles
time="2020-01-11T15:14:02Z" level=info msg="loading bundle file" dir=manifests file=sysdig-operator.v1.4.7.clusterserviceversion.yaml load=bundle
time="2020-01-11T15:14:02Z" level=info msg="loading bundle file" dir=manifests file=sysdigagents.sysdig.com.crd.yaml load=bundle
time="2020-01-11T15:14:02Z" level=info msg="loading Packages and Entries" dir=manifests
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=manifests load=package
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=sysdig load=package
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=1.4.0 load=package
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=1.4.7 load=package
Removing intermediate container b11415b71497
 ---> d3e1417fd1ee
Step 4/10 : FROM scratch
 --->
Step 5/10 : COPY --from=builder /build/bundles.db /bundles.db
 ---> 32c4b0ba7422
Step 6/10 : COPY --from=builder /build/bin/registry-server /registry-server
 ---> 5607183f50e7
Step 7/10 : COPY --from=builder /bin/grpc_health_probe /bin/grpc_health_probe
 ---> 6e612705cab1
Step 8/10 : EXPOSE 50051
 ---> Running in 5930349a782e
Removing intermediate container 5930349a782e
 ---> 2a0e6d01f7f5
Step 9/10 : ENTRYPOINT ["/registry-server"]
 ---> Running in 1daf50f151ae
Removing intermediate container 1daf50f151ae
 ---> 9fe3fed7cc2a
Step 10/10 : CMD ["--database", "/bundles.db"]
 ---> Running in 154a8d3bb346
Removing intermediate container 154a8d3bb346
 ---> f4d99376cbef
Successfully built f4d99376cbef
Successfully tagged berndonline/catalog-server:latest
$ docker push berndonline/catalog-server
The push refers to repository [docker.io/berndonline/catalog-server]
0516ee590bf5: Pushed
3bbd78f51bb3: Pushed
e4bd72ca23da: Pushed
latest: digest: sha256:b2251ebb6049a1ea994fd710c9182c89866255011ee50fd2a6eeb55c6de2fa21 size: 947

Next we need to install the Operator Lifecycle Manager: go to the release page on GitHub and install the latest version. This first adds the Custom Resource Definitions for OLM and afterwards deploys the required OLM operator resources:

kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.13.0/crds.yaml
kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.13.0/olm.yaml

Next we need to add the new CatalogSource and delete the default OperatorHub one to limit which operators can be installed:

cat <<EOF | kubectl apply -n olm -f -
---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: custom-catalog
  namespace: olm
spec:
  sourceType: grpc
  image: docker.io/berndonline/catalog-server:latest
  displayName: Custom Operators
  publisher: techbloc.net
EOF

kubectl delete catalogsource operatorhubio-catalog -n olm

Do a quick check to make sure that the OLM components are running; you will see a pod for the custom-catalog which we previously created:

$ kubectl get pods -n olm
NAME                               READY   STATUS    RESTARTS   AGE
catalog-operator-5bdf7fc7b-wcbcs   1/1     Running   0          100s
custom-catalog-4hrbg               1/1     Running   0          32s
olm-operator-5ff565fcfc-2j9gt      1/1     Running   0          100s
packageserver-7fcbddc745-6s666     1/1     Running   0          88s
packageserver-7fcbddc745-jkfxs     1/1     Running   0          88s

Now we can look for the available operator manifests and see that our Custom Operator catalog only has the SysDig operator available:

$ kubectl get packagemanifests
NAME     CATALOG            AGE
sysdig   Custom Operators   36s

To install the SysDig operator we need to create the namespace, the operator group and the subscription, which instructs OLM to install the operator:

cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: Namespace
metadata:
  name: sysdig
---
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: operatorgroup
  namespace: sysdig
spec:
  targetNamespaces:
  - sysdig
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sysdig
  namespace: sysdig
spec:
  channel: stable
  name: sysdig
  source: custom-catalog
  sourceNamespace: olm
EOF

At the end we need to check whether OLM installed the SysDig operator:

# Check that the subscription is created
$ kubectl get sub -n sysdig
NAME     PACKAGE   SOURCE           CHANNEL
sysdig   sysdig    custom-catalog   stable

# Check that OLM created an InstallPlan for installing the operator
$ kubectl get ip -n sysdig
NAME            CSV                      APPROVAL    APPROVED
install-sf6dl   sysdig-operator.v1.4.7   Automatic   true

# Check that the InstallPlan created the Cluster Service Version and installed the operator
$ kubectl get csv -n sysdig
NAME                     DISPLAY                 VERSION   REPLACES                 PHASE
sysdig-operator.v1.4.7   Sysdig Agent Operator   1.4.7     sysdig-operator.v1.4.0   Succeeded

# Check that the SysDig operator is running
$ kubectl get pod -n sysdig
NAME                               READY   STATUS    RESTARTS   AGE
sysdig-operator-74c9f665d9-bb8l9   1/1     Running   0          46s

Now you can install the SysDig agent by adding the following custom resource:

cat <<EOF | kubectl apply -f -
---
apiVersion: sysdig.com/v1alpha1
kind: SysdigAgent
metadata:
  name: agent
  namespace: sysdig
spec:
  ebpf:
    enabled: true
  secure:
    enabled: true
  sysdig:
    accessKey: XXXXXXX
EOF
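
After a short while the operator should reconcile the SysdigAgent resource and start the agent pods; a quick way to keep an eye on it (just a sanity check, pod names will differ):

$ kubectl get sysdigagents -n sysdig
$ kubectl get pods -n sysdig -w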

To delete the SysDig operator, just delete the namespace or run the following commands to delete the subscription, operator group and cluster service version:

kubectl delete sub sysdig -n sysdig
kubectl delete operatorgroup operatorgroup -n sysdig
kubectl delete csv sysdig-operator.v1.4.7 -n sysdig

Thinking ahead, you can let the Flux CD operator manage all of these resources and use GitOps alone to apply the cluster configuration.
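
As a rough sketch (folder and file names are only illustrative), you would commit the CatalogSource, namespace, OperatorGroup and Subscription manifests to the repository that Flux watches and let the operator apply them:

olm/
├── catalogsource.yaml
├── namespace.yaml
├── operatorgroup.yaml
└── subscription.yaml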

I hope this article is interesting and useful. If you want to read more about the Operator Lifecycle Manager, please have a look at the olm-book, which has some useful information.

How to manage Kubernetes clusters the GitOps way with Flux CD

Kubernetes is becoming more and more popular, and so is managing clusters at scale. This article is about how to manage Kubernetes clusters the GitOps way using the Flux CD operator.

Flux can monitor the container image and code repositories that you specify and trigger deployments to automatically change the configuration state of your Kubernetes cluster. The cluster configuration is centrally managed and stored in declarative form in Git, and there is no need for an administrator to manually apply manifests; the Flux operator synchronises the cluster to apply or delete the cluster configuration.
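
As a small sketch of how a workload opts in to this automation with Flux v1 annotations (the tag filter below is only an example and should match your own tagging scheme):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
  namespace: default
  annotations:
    # opt this workload in to automated image releases
    fluxcd.io/automated: "true"
    # optional filter, keyed by container name
    fluxcd.io/tag.hello-kubernetes: semver:~1.5
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080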

Before we start deploying the operator we need to install the fluxctl command-line utility and create the namespace:

sudo wget -O /usr/local/bin/fluxctl https://github.com/fluxcd/flux/releases/download/1.18.0/fluxctl_linux_amd64
sudo chmod 755 /usr/local/bin/fluxctl
kubectl create ns flux

Deploying the Flux operator is straightforward and requires a few options like the git repository and the git path. The path is important for my example because it tells the operator in which folders to look for manifests:

$ fluxctl install --git-email=<your-email> --git-url=git@github.com:berndonline/flux-cd.git --git-path=clusters/gke,common/stage --manifest-generation=true --git-branch=master --namespace=flux --registry-disable-scanning | kubectl apply -f -
deployment.apps/memcached created
service/memcached created
serviceaccount/flux created
clusterrole.rbac.authorization.k8s.io/flux created
clusterrolebinding.rbac.authorization.k8s.io/flux created
deployment.apps/flux created
secret/flux-git-deploy created

After you have applied the configuration, wait until the Flux pods are up and running:

$ kubectl get pods -n flux
NAME                       READY   STATUS    RESTARTS   AGE
flux-85cd9cd746-hnb4f      1/1     Running   0          74m
memcached-5dcd7579-d6vwh   1/1     Running   0          20h

The last step is to get the Flux operator deploy key and copy the output to add it to your Git repository:

fluxctl identity --k8s-fwd-ns flux

Now you are ready to synchronise the Flux operator with the repository. By default Flux automatically synchronises every 5 minutes to apply configuration changes:

$ fluxctl sync --k8s-fwd-ns flux
Synchronizing with git@github.com:berndonline/flux-cd.git
Revision of master to apply is 726944d
Waiting for 726944d to be applied ...
Done.

You are able to list workloads which are managed by the Flux operator:

$ fluxctl list-workloads --k8s-fwd-ns=flux -a
WORKLOAD                             CONTAINER         IMAGE                            RELEASE  POLICY
default:deployment/hello-kubernetes  hello-kubernetes  paulbouwer/hello-kubernetes:1.5  ready    automated

How do we manage the configuration for multiple Kubernetes clusters?

I want to show you a simple example using Kustomize to manage multiple clusters across two environments (staging and production) with Flux. Basically you have a single repository and multiple clusters synchronising the configuration, depending on how you configure the --git-path option of the Flux operator. The option --manifest-generation enables Kustomize for the operator, and it requires a .flux.yaml which tells Flux to run kustomize build on the cluster directories and apply the generated manifests.
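
For reference, a minimal .flux.yaml roughly looks like the following (based on the Flux v1 manifest-generation feature; check the docs for the version you run):

---
version: 1
commandUpdated:
  generators:
    # run kustomize in the directory Flux is pointed at via --git-path
    - command: kustomize build .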

Let’s look at the repository file and folder structure. We have the base folder containing the common deployment configuration, the common folder with the environment separation for the stage and prod overlays, and the clusters folder which contains more cluster-specific configuration:

├── .flux.yaml 
├── base
│   └── common
│       ├── deployment.yaml
│       ├── kustomization.yaml
│       ├── namespace.yaml
│       └── service.yaml
├── clusters
│   ├── eks
|   |   ├── eks-app1
│   │   |   ├── deployment.yaml
|   |   |   ├── kustomization.yaml
│   │   |   └── service.yaml
|   |   └── kustomization.yaml
│   ├── gke
|   |   ├── gke-app1
│   │   |   ├── deployment.yaml
|   |   |   ├── kustomization.yaml
│   │   |   └── service.yaml
|   |   ├── gke-app2
│   │   |   ├── deployment.yaml
|   |   |   ├── kustomization.yaml
│   │   |   └── service.yaml
|   |   └── kustomization.yaml
└── common
    ├── prod
    |   ├── prod.yaml
    |   └── kustomization.yaml
    └── stage
        ├──  team1
        |    ├── deployment.yaml
        |    ├── kustomization.yaml
        |    ├── namespace.yaml
        |    └── service.yaml
        ├── stage.yaml
        └── kustomization.yaml

If you are new to Kustomize I would recommend reading the article Kustomize – The right way to do templating in Kubernetes.

The last thing we need to do is to deploy the Flux operator to the two Kubernetes clusters. The only difference between the two is the --git-path option, which points the operator to the cluster and common directories where Kustomize applies the overlays based on what is specified in the kustomization.yaml files. You can find more details about the configuration in my example repository: https://github.com/berndonline/flux-cd
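
As an illustration (the actual files in the repository may differ), a cluster-level kustomization.yaml simply pulls in the app folders of that cluster:

# clusters/gke/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- gke-app1
- gke-app2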

Flux config for Google GKE staging cluster:

fluxctl install --git-email=<your-email> --git-url=git@github.com:berndonline/flux-cd.git --git-path=clusters/gke,common/stage --manifest-generation=true --git-branch=master --namespace=flux | kubectl apply -f -

Flux config for Amazon EKS production cluster:

fluxctl install --git-email=<your-email> --git-url=git@github.com:berndonline/flux-cd.git --git-path=clusters/eks,common/prod --manifest-generation=true --git-branch=master --namespace=flux | kubectl apply -f -

After a few minutes the configuration is applied to the two clusters and you can validate the configuration.

Google GKE stage workloads:

$ fluxctl list-workloads --k8s-fwd-ns=flux -a
WORKLOAD                   CONTAINER         IMAGE                            RELEASE  POLICY
common:deployment/common   hello-kubernetes  paulbouwer/hello-kubernetes:1.5  ready    automated
default:deployment/gke1    hello-kubernetes  paulbouwer/hello-kubernetes:1.5  ready    
default:deployment/gke2    hello-kubernetes  paulbouwer/hello-kubernetes:1.5  ready    
team1:deployment/team1     hello-kubernetes  paulbouwer/hello-kubernetes:1.5  ready
$ kubectl get svc --all-namespaces | grep LoadBalancer
common        common                 LoadBalancer   10.91.14.186   35.240.53.46     80:31537/TCP    16d
default       gke1                   LoadBalancer   10.91.7.169    35.195.241.46    80:30218/TCP    16d
default       gke2                   LoadBalancer   10.91.10.239   35.195.144.68    80:32589/TCP    16d
team1         team1                  LoadBalancer   10.91.1.178    104.199.107.56   80:31049/TCP    16d

GKE common stage application:

Amazon EKS prod workloads:

$ fluxctl list-workloads --k8s-fwd-ns=flux -a
WORKLOAD                          CONTAINER         IMAGE                                                                RELEASE  POLICY
common:deployment/common          hello-kubernetes  paulbouwer/hello-kubernetes:1.5                                      ready    automated
default:deployment/eks1           hello-kubernetes  paulbouwer/hello-kubernetes:1.5                                      ready
$ kubectl get svc --all-namespaces | grep LoadBalancer
common        common       LoadBalancer   10.100.254.171   a4caafcbf2b2911ea87370a71555111a-958093179.eu-west-1.elb.amazonaws.com    80:32318/TCP    3m8s
default       eks1         LoadBalancer   10.100.170.10    a4caeada52b2911ea87370a71555111a-1261318311.eu-west-1.elb.amazonaws.com   80:32618/TCP    3m8s

EKS common prod application:

I hope this article is useful to get started with GitOps and the Flux operator. In the future, I would like to see Flux being able to watch git tags, which would make it easier to promote changes and manage clusters with version tags.

For more technical information have a look at the Flux CD documentation.

Create and run an Ansible Operator on OpenShift

When RedHat announced the new OpenShift version 4.0 they said it would be a very different experience to install and operate the platform, mostly because of Operators managing the components of the cluster. A few months back RedHat officially released the Operator-SDK and the Operator Hub to create your own operators and to share them.

I did some testing around the Ansible Operator which I wanted to share in this article, but before we dig into creating our own operator we first need to install the operator-sdk:

# Make sure you are able to use docker commands
sudo groupadd docker
sudo usermod -aG docker centos
ls -l /var/run/docker.sock
sudo chown root:docker /var/run/docker.sock

# Download Go
wget https://dl.google.com/go/go1.10.3.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.10.3.linux-amd64.tar.gz

# Modify bash_profile
vi ~/.bash_profile
export PATH=$PATH:/usr/local/go/bin:$HOME/go
export GOPATH=$HOME/go

# Load bash_profile
source ~/.bash_profile

# Install Go dep
mkdir -p /home/centos/go/bin
curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
sudo cp /home/centos/go/bin/dep /usr/local/go/bin/

# Download and install operator framework
mkdir -p $GOPATH/src/github.com/operator-framework
cd $GOPATH/src/github.com/operator-framework
git clone https://github.com/operator-framework/operator-sdk
cd operator-sdk
git checkout master
make dep
make install
sudo cp /home/centos/go/bin/operator-sdk /usr/local/bin/

Let’s start creating our Ansible Operator using the operator-sdk command line, which creates a blank operator template that we will modify. You can create three different types of operators: Go, Helm or Ansible – check out the operator-sdk repository:

operator-sdk new helloworld-operator --api-version=hello.world.com/v1alpha1 --kind=Helloworld --type=ansible --cluster-scoped
cd ./helloworld-operator/

I am using the Ansible k8s module to create a Hello OpenShift deployment configuration in tasks/main.yml.

---
# tasks file for helloworld

- name: create deployment config
  k8s:
    definition:
      apiVersion: apps.openshift.io/v1
      kind: DeploymentConfig
      metadata:
        name: '{{ meta.name }}'
        labels:
          app: '{{ meta.name }}'
        namespace: '{{ meta.namespace }}'
...

Please have a look at my Github repository openshift-helloworld-operator for more details.
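
Besides the role, the scaffolding also generates a watches.yaml which maps the custom resource to the Ansible role; for this operator it would look roughly like this (illustrative, the generated file may differ slightly):

---
- version: v1alpha1
  group: hello.world.com
  kind: Helloworld
  # path of the role inside the operator container image
  role: /opt/ansible/roles/helloworld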

After we have modified the Ansible role we can build the operator, which creates a container image that we can afterwards push to a container registry like Docker Hub:

$ operator-sdk build berndonline/openshift-helloworld-operator:v0.1
INFO[0000] Building Docker image berndonline/openshift-helloworld-operator:v0.1
Sending build context to Docker daemon   192 kB
Step 1/3 : FROM quay.io/operator-framework/ansible-operator:v0.5.0
Trying to pull repository quay.io/operator-framework/ansible-operator ...
v0.5.0: Pulling from quay.io/operator-framework/ansible-operator
a02a4930cb5d: Already exists
1bdeea372afe: Pull complete
3b057581d180: Pull complete
12618e5abaa7: Pull complete
6f75beb67357: Pull complete
b241f86d9d40: Pull complete
e990bcb94ae6: Pull complete
3cd07ac53955: Pull complete
3fdda52e2c22: Pull complete
0fd51cfb1114: Pull complete
feaebb94b4da: Pull complete
4ff9620dce03: Pull complete
a428b645a85e: Pull complete
5daaf234bbf2: Pull complete
8cbdd2e4d624: Pull complete
fa8517b650e0: Pull complete
a2a83ad7ba5a: Pull complete
d61b9e9050fe: Pull complete
Digest: sha256:9919407a30b24d459e1e4188d05936b52270cafcd53afc7d73c89be02262f8c5
Status: Downloaded newer image for quay.io/operator-framework/ansible-operator:v0.5.0
 ---> 1e857f3522b5
Step 2/3 : COPY roles/ ${HOME}/roles/
 ---> 6e073916723a
Removing intermediate container cb3f89ba1ed6
Step 3/3 : COPY watches.yaml ${HOME}/watches.yaml
 ---> 8f0ee7ba26cb
Removing intermediate container 56ece5b800b2
Successfully built 8f0ee7ba26cb
INFO[0018] Operator build complete.

$ docker push berndonline/openshift-helloworld-operator:v0.1
The push refers to a repository [docker.io/berndonline/openshift-helloworld-operator]
2233d56d407b: Pushed
d60aa100721d: Pushed
a3a57fad5e76: Pushed
ab38e57f8581: Pushed
79b113b67633: Pushed
9cf5b154cadd: Pushed
b191ffbd3c8d: Pushed
5e21ced2d28b: Pushed
cdadb746680d: Pushed
d105c72f21c1: Pushed
1a899839ab25: Pushed
be81e9b31e54: Pushed
63d9d56008cb: Pushed
56a62cb9d96c: Pushed
3f9dc45a1d02: Pushed
dac20332f7b5: Pushed
24f8e5ff1817: Pushed
1bdae1c8263a: Pushed
bc08b53be3d4: Pushed
071d8bd76517: Mounted from openshift/origin-node
v0.1: digest: sha256:50fb222ec47c0d0a7006ff73aba868dfb3369df8b0b16185b606c10b2e30b111 size: 4495

After we have pushed the container to the registry we can continue on OpenShift and create the operator project together with the custom resource definition:

oc new-project helloworld-operator
oc create -f deploy/crds/hello_v1alpha1_helloworld_crd.yaml

Before we apply the resources, let’s review and edit the operator image configuration to point to our newly created operator container image:

$ cat deploy/operator.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: helloworld-operator
  template:
    metadata:
      labels:
        name: helloworld-operator
    spec:
      serviceAccountName: helloworld-operator
      containers:
        - name: helloworld-operator
          # Replace this with the built image name
          image: berndonline/openshift-helloworld-operator:v0.1
          imagePullPolicy: Always
          env:
            - name: WATCH_NAMESPACE
              value: ""
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "helloworld-operator"

$ cat deploy/role_binding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: helloworld-operator
subjects:
- kind: ServiceAccount
  name: helloworld-operator
  # Replace this with the namespace the operator is deployed in.
  namespace: helloworld-operator
roleRef:
  kind: ClusterRole
  name: helloworld-operator
  apiGroup: rbac.authorization.k8s.io

$ cat deploy/role_user.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: helloworld-operator-execute
rules:
- apiGroups:
  - hello.world.com
  resources:
  - '*'
  verbs:
  - '*'

Afterwards we can deploy the required resources:

oc create -f deploy/operator.yaml \
          -f deploy/role_binding.yaml \
          -f deploy/role.yaml \
          -f deploy/service_account.yaml
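
A quick check that the operator pod comes up before we continue (just a sanity check, the pod name will differ):

$ oc get pods -n helloworld-operator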

Create a cluster role for the custom resource definition and bind your user to that cluster role to be able to create a custom resource:

oc create -f deploy/role_user.yaml 
oc adm policy add-cluster-role-to-user helloworld-operator-execute berndonline

If you forget to do this you will see the following error message:

Now we can log in as our OpenShift user and create the custom resource in the namespace myproject:

$ oc create -n myproject -f deploy/crds/hello_v1alpha1_helloworld_cr.yaml
helloworld.hello.world.com/hello-openshift created
$ oc describe Helloworld/hello-openshift -n myproject
Name:         hello-openshift
Namespace:    myproject
Labels:       <none>
Annotations:  <none>
API Version:  hello.world.com/v1alpha1
Kind:         Helloworld
Metadata:
  Creation Timestamp:  2019-03-16T15:33:25Z
  Generation:          1
  Resource Version:    19692
  Self Link:           /apis/hello.world.com/v1alpha1/namespaces/myproject/helloworlds/hello-openshift
  UID:                 d6ce75d7-4800-11e9-b6a8-0a238ec78c2a
Spec:
  Size:  1
Status:
  Conditions:
    Last Transition Time:  2019-03-16T15:33:25Z
    Message:               Running reconciliation
    Reason:                Running
    Status:                True
    Type:                  Running
Events:                    <none>

You can also create the custom resource via the web console:

You will get a security warning which you need to confirm to apply the custom resource:

After a few minutes the operator will create the deploymentconfig and will deploy the hello-openshift pod:

$ oc get dc
NAME              REVISION   DESIRED   CURRENT   TRIGGERED BY
hello-openshift   1          1         1         config,image(hello-openshift:latest)

$ oc get pods
NAME                      READY     STATUS    RESTARTS   AGE
hello-openshift-1-pjhm4   1/1       Running   0          2m

We can modify the custom resource and change the spec size to three:

$ oc edit Helloworld/hello-openshift
...
spec:
  size: 3
...

$ oc describe Helloworld/hello-openshift
Name:         hello-openshift
Namespace:    myproject
Labels:       <none>
Annotations:  <none>
API Version:  hello.world.com/v1alpha1
Kind:         Helloworld
Metadata:
  Creation Timestamp:  2019-03-16T15:33:25Z
  Generation:          2
  Resource Version:    24902
  Self Link:           /apis/hello.world.com/v1alpha1/namespaces/myproject/helloworlds/hello-openshift
  UID:                 d6ce75d7-4800-11e9-b6a8-0a238ec78c2a
Spec:
  Size:  3
Status:
  Conditions:
    Last Transition Time:  2019-03-16T15:33:25Z
    Message:               Running reconciliation
    Reason:                Running
    Status:                True
    Type:                  Running
Events:                    <none>

The operator will update the deployment config and change the desired state to three pods:

$ oc get dc
NAME              REVISION   DESIRED   CURRENT   TRIGGERED BY
hello-openshift   1          3         3         config,image(hello-openshift:latest)

$ oc get pods
NAME                      READY     STATUS    RESTARTS   AGE
hello-openshift-1-pjhm4   1/1       Running   0          32m
hello-openshift-1-qhqgx   1/1       Running   0          3m
hello-openshift-1-qlb2q   1/1       Running   0          3m

To clean up and remove the deployment config you need to delete the custom resource:

oc delete Helloworld/hello-openshift -n myproject
oc adm policy remove-cluster-role-from-user helloworld-operator-execute berndonline

I hope this is a good and simple example to show how powerful operators are on OpenShift / Kubernetes.