OpenShift Hive v1.1.x – Latest updates & new features

Over a year has gone by since my first article about Getting started with OpenShift Hive and my talk at the Red Hat OpenShift Gathering, when the first stable OpenShift Hive v1 version was released. A lot has happened since then, and OpenShift Hive v1.1.1 came out a few weeks ago, so I wanted to take a look at the new functionality in OpenShift Hive.

  • Operator Lifecycle Manager (OLM) installation

Hive is now available through the OperatorHub community catalog and can be installed on both OpenShift and native Kubernetes clusters through the OLM. The installation is straightforward: apply the OperatorGroup and Subscription manifests:

---
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: operatorgroup
  namespace: hive
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hive
  namespace: hive
spec:
  channel: alpha
  name: hive-operator
  source: operatorhubio-catalog
  sourceNamespace: olm

Alternatively, the Hive subscription can be configured with a manual install plan. In this case the OLM will not automatically upgrade the Hive operator when a new version is released – I highly recommend this for production deployments!

---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hive
  namespace: hive
spec:
  channel: alpha
  name: hive-operator
  installPlanApproval: Manual
  source: operatorhubio-catalog
  sourceNamespace: olm

After a few seconds you will see an install plan being added:

$ k get installplan
NAME            CSV                    APPROVAL   APPROVED
install-9drmh   hive-operator.v1.1.0   Manual     false

Edit the install plan and set the approved value to true – the OLM will then install or upgrade the Hive operator automatically:

...
spec:
  approval: Manual
  approved: true
  clusterServiceVersionNames:
  - hive-operator.v1.1.0
  generation: 1
...
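
Instead of editing the install plan by hand, you could also patch it; a minimal sketch using the example install plan name from above:

$ kubectl patch installplan install-9drmh -n hive --type merge -p '{"spec":{"approved":true}}'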

After the Hive operator is installed you need to apply the HiveConfig object so that the operator installs all of the needed Hive components. On non-OpenShift installs (native Kubernetes) you still need to generate the hiveadmission certificates for the admission controller pods to start; otherwise they are missing the hiveadmission-serving-cert secret.

  • HiveConfig – Velero backup and delete protection

There are a few small but very useful changes in the HiveConfig object. You can now enable the deleteProtection option, which prevents administrators from accidentally deleting ClusterDeployments or SyncSets. Another great addition is that you can enable the automatic configuration of Velero to back up your cluster namespaces, meaning you are not required to configure backups separately.

---
apiVersion: hive.openshift.io/v1
kind: HiveConfig
metadata:
  name: hive
spec:
  logLevel: info
  targetNamespace: hive
  deleteProtection: enabled
  backup:
    velero:
      enabled: true
      namespace: velero

Backups are created in the Velero namespace specified in the HiveConfig:

$ k get backups -n velero
NAME                              AGE
backup-okd-2021-03-26t11-57-32z   3h12m
backup-okd-2021-03-26t12-00-32z   3h9m
backup-okd-2021-03-26t12-35-44z   154m
backup-okd-2021-03-26t12-38-44z   151m
...

With delete protection enabled in the HiveConfig, the controller automatically adds the annotation hive.openshift.io/protected-delete: "true" to all resources and prevents them from being deleted accidentally:

$ k delete cd okd --wait=0
The ClusterDeployment "okd" is invalid: metadata.annotations.hive.openshift.io/protected-delete: Invalid value: "true": cannot delete while annotation is present
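
If you really do need to delete a protected cluster deployment, remove the annotation first (note that the controller may re-add it as long as delete protection stays enabled in the HiveConfig); a quick sketch:

$ kubectl annotate clusterdeployment okd hive.openshift.io/protected-delete-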

  • ClusterSync and scaling the Hive controller

To check the status of resources applied through SyncSets and SelectorSyncSets, Hive previously used SyncSetInstance objects, but these no longer exist. This has now moved to the ClusterSync object, which collects status information about applied resources:

$ k get clustersync okd -o yaml
apiVersion: hiveinternal.openshift.io/v1alpha1
kind: ClusterSync
metadata:
  name: okd
  namespace: okd
spec: {}
status:
  conditions:
  - lastProbeTime: "2021-03-26T16:13:57Z"
    lastTransitionTime: "2021-03-26T16:13:57Z"
    message: All SyncSets and SelectorSyncSets have been applied to the cluster
    reason: Success
    status: "False"
    type: Failed
  firstSuccessTime: "2021-03-26T16:13:57Z"
...

It is also possible to horizontally scale the clustersync controller and increase the number of concurrent reconciles when running larger OpenShift deployments:

---
apiVersion: hive.openshift.io/v1
kind: HiveConfig
metadata:
  name: hive
spec:
  logLevel: info
  targetNamespace: hive
  deleteProtection: enabled
  backup:
    velero:
      enabled: true
      namespace: velero
  controllersConfig:
    controllers:
    - config:
        concurrentReconciles: 10
        replicas: 3
      name: clustersync

Please check out the scaling test script which I found in the GitHub repo; you can simulate fake clusters by adding the annotation hive.openshift.io/fake-cluster=true to your ClusterDeployment, as sketched below.
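
A minimal sketch of where the annotation goes when creating a test ClusterDeployment (the name fake-okd is just a made-up example and the rest of the spec is omitted):

---
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: fake-okd
  namespace: fake-okd
  annotations:
    hive.openshift.io/fake-cluster: "true"
...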

  • Hibernating clusters

With OpenShift 4.5, Red Hat introduced the ability to hibernate (shut down) clusters when they are not needed and easily switch them back on later. This is now possible with OpenShift Hive: you can hibernate a cluster by changing the power state of its cluster deployment.

$ kubectl patch cd okd --type='merge' -p $'spec:\n powerState: Hibernating'

Checking the cluster deployment, you can see the power state change to Stopping:

$ kubectl get cd
NAME   PLATFORM   REGION      CLUSTERTYPE   INSTALLED   INFRAID     VERSION   POWERSTATE   AGE
okd    aws        eu-west-1                 true        okd-jpqgb   4.7.0     Stopping     44m

After a couple of minutes the power state of the cluster deployment changes to Hibernating:

$ kubectl get cd
NAME   PLATFORM   REGION      CLUSTERTYPE   INSTALLED   INFRAID     VERSION   POWERSTATE    AGE
okd    aws        eu-west-1                 true        okd-jpqgb   4.7.0     Hibernating   47m

In the AWS console you will see the cluster instances as stopped.

To bring the cluster back online, change the power state in the cluster deployment to Running:

$ kubectl patch cd okd --type='merge' -p $'spec:\n powerState: Running'

Again, the power state first changes to Resuming:

$ kubectl get cd
NAME   PLATFORM   REGION      CLUSTERTYPE   INSTALLED   INFRAID     VERSION   POWERSTATE   AGE
okd    aws        eu-west-1                 true        okd-jpqgb   4.7.0     Resuming     49m

A few minutes later the cluster changes to Running and is ready to use again:

$ k get cd
NAME   PLATFORM   REGION      CLUSTERTYPE   INSTALLED   INFRAID     VERSION   POWERSTATE   AGE
okd    aws        eu-west-1                 true        okd-jpqgb   4.7.0     Running      61m

  • Cluster pools

Cluster pools came along with the hibernation feature and allow you to pre-provision OpenShift clusters without actually allocating them; after provisioning they hibernate until you claim one. This is another nice feature and an ideal fit for ephemeral development or integration test environments, where you want clusters ready to claim when needed and to dispose of them afterwards.

Create a ClusterPool custom resource, which looks similar to a cluster deployment:

apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: okd-eu-west-1
  namespace: hive
spec:
  baseDomain: okd.domain.com
  imageSetRef:
    name: okd-4.7-imageset
  installConfigSecretTemplateRef: 
    name: install-config
  skipMachinePools: true
  platform:
    aws:
      credentialsSecretRef:
        name: aws-creds
      region: eu-west-1
  pullSecretRef:
    name: pull-secret
  size: 3
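
Hive will then provision the pool clusters in their own generated namespaces and hibernate them once installed; you can keep an eye on the pool with something like:

$ kubectl get clusterpool -n hive
$ kubectl get clusterdeployments --all-namespaces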

To claim a cluster from a pool, apply the ClusterClaim resource.

apiVersion: hive.openshift.io/v1
kind: ClusterClaim
metadata:
  name: okd-claim
  namespace: hive
spec:
  clusterPoolName: okd-eu-west-1
  lifetime: 8h

I haven’t tested this yet but will definitely start using this in the coming weeks. Have a look at the Hive documentation on using ClusterPool and ClusterClaim.

  • Cluster relocation

For me, having used OpenShift Hive for over one and a half years to run OpenShift 4 clusters, this is a very useful feature because at some point you might need to rebuild or move your management services to a new Hive cluster. The ClusterRelocate object gives you the option to do this.

First, store the kubeconfig of the new (target) Hive cluster in a secret on the source Hive cluster:

$ kubectl create secret generic new-hive-cluster-kubeconfig -n hive --from-file=kubeconfig=./new-hive-cluster.kubeconfig

Create the ClusterRelocate object, reference the kubeconfig secret of the remote Hive cluster, and add a clusterDeploymentSelector:

apiVersion: hive.openshift.io/v1
kind: ClusterRelocate
metadata:
  name: migrate
spec:
  kubeconfigSecretRef:
    namespace: hive
    name: new-hive-cluster-kubeconfig
  clusterDeploymentSelector:
    matchLabels:
      migrate: cluster

To move cluster deployments, add the label migrate=cluster to the OpenShift clusters you want to move.

$ kubectl label clusterdeployment okd migrate=cluster

The cluster deployment will move to the new Hive cluster and will be removed from the source Hive cluster without being de-provisioned. Keep in mind that you need to copy any other resources you need, such as secrets, SyncSets, SelectorSyncSets and SyncIdentityProviders, before moving the clusters. Take a look at the Hive documentation for the exact steps.

  • Useful annotations

You can pause SyncSets by adding the annotation hive.openshift.io/syncset-pause="true" to the ClusterDeployment, which stops the reconciliation of the defined resources – great for troubleshooting.
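
A quick way to set and later remove the annotation, for example:

$ kubectl annotate clusterdeployment okd hive.openshift.io/syncset-pause="true" --overwrite
$ kubectl annotate clusterdeployment okd hive.openshift.io/syncset-pause-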

In a cluster deployment you can also set the preserveOnDelete option, which allows you to disconnect a cluster from Hive without de-provisioning it.

$ kubectl patch cd okd --type='merge' -p $'spec:\n preserveOnDelete: true'

This sums up the new features and functionalities you can use with the latest OpenShift Hive version.

Using the Operator Lifecycle Manager and creating a custom Operator Catalog for Kubernetes

At the beginning of 2019, Red Hat announced the launch of OperatorHub.io, and a lot has happened since then: OpenShift version 4 was released, which is fully managed by Kubernetes operators, and other vendors started to release their own operators to deploy their applications to Kubernetes. Even writing your own operators is becoming more popular and increasingly state of the art if you run your own Kubernetes clusters.

I want to go into the details of how the Operator Lifecycle Manager (OLM) works and how you can create your own operator Catalog Server and use it with Kubernetes to install operators. Before we start, a short overview: the OLM is responsible for installing and managing the lifecycle of Kubernetes operators, and it uses a CatalogSource from which it installs those operators.

We start by creating our own Catalog Server, which is straightforward: clone the community-operators repository and build a new catalog-server container image. If you would like to change which operators end up in the catalog, just delete operators from the ./upstream-community-operators/ folder or add your own; in my example I only want to keep the SysDig operator:

git clone https://github.com/operator-framework/community-operators
cd community-operators/
rm ci.Dockerfile
mv upstream.Dockerfile Dockerfile
cd upstream-community-operators/
shopt -s extglob   # enable extended globbing for the !() pattern below
rm -rfv !("sysdig")
cd ..

Now we can build the new catalog container image and push to the registry:

$ docker build . --rm -t berndonline/catalog-server
Sending build context to Docker daemon  23.34MB
Step 1/10 : FROM quay.io/operator-framework/upstream-registry-builder:v1.3.0 as builder
 ---> e08ceacda476
Step 2/10 : COPY upstream-community-operators manifests
 ---> 9e4b4e98a968
Step 3/10 : RUN ./bin/initializer -o ./bundles.db
 ---> Running in b11415b71497
time="2020-01-11T15:14:02Z" level=info msg="loading Bundles" dir=manifests
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=manifests load=bundles
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=sysdig load=bundles
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=1.4.0 load=bundles
time="2020-01-11T15:14:02Z" level=info msg="found csv, loading bundle" dir=manifests file=sysdig-operator.v1.4.0.clusterserviceversion.yaml load=bundles
time="2020-01-11T15:14:02Z" level=info msg="loading bundle file" dir=manifests file=sysdig-operator.v1.4.0.clusterserviceversion.yaml load=bundle
time="2020-01-11T15:14:02Z" level=info msg="loading bundle file" dir=manifests file=sysdigagents.sysdig.com.crd.yaml load=bundle
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=1.4.7 load=bundles
time="2020-01-11T15:14:02Z" level=info msg="found csv, loading bundle" dir=manifests file=sysdig-operator.v1.4.7.clusterserviceversion.yaml load=bundles
time="2020-01-11T15:14:02Z" level=info msg="loading bundle file" dir=manifests file=sysdig-operator.v1.4.7.clusterserviceversion.yaml load=bundle
time="2020-01-11T15:14:02Z" level=info msg="loading bundle file" dir=manifests file=sysdigagents.sysdig.com.crd.yaml load=bundle
time="2020-01-11T15:14:02Z" level=info msg="loading Packages and Entries" dir=manifests
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=manifests load=package
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=sysdig load=package
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=1.4.0 load=package
time="2020-01-11T15:14:02Z" level=info msg=directory dir=manifests file=1.4.7 load=package
Removing intermediate container b11415b71497
 ---> d3e1417fd1ee
Step 4/10 : FROM scratch
 --->
Step 5/10 : COPY --from=builder /build/bundles.db /bundles.db
 ---> 32c4b0ba7422
Step 6/10 : COPY --from=builder /build/bin/registry-server /registry-server
 ---> 5607183f50e7
Step 7/10 : COPY --from=builder /bin/grpc_health_probe /bin/grpc_health_probe
 ---> 6e612705cab1
Step 8/10 : EXPOSE 50051
 ---> Running in 5930349a782e
Removing intermediate container 5930349a782e
 ---> 2a0e6d01f7f5
Step 9/10 : ENTRYPOINT ["/registry-server"]
 ---> Running in 1daf50f151ae
Removing intermediate container 1daf50f151ae
 ---> 9fe3fed7cc2a
Step 10/10 : CMD ["--database", "/bundles.db"]
 ---> Running in 154a8d3bb346
Removing intermediate container 154a8d3bb346
 ---> f4d99376cbef
Successfully built f4d99376cbef
Successfully tagged berndonline/catalog-server:latest
$ docker push berndonline/catalog-server
The push refers to repository [docker.io/berndonline/catalog-server]
0516ee590bf5: Pushed
3bbd78f51bb3: Pushed
e4bd72ca23da: Pushed
latest: digest: sha256:b2251ebb6049a1ea994fd710c9182c89866255011ee50fd2a6eeb55c6de2fa21 size: 947
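
Optionally, you can sanity-check the catalog image locally before wiring it into the OLM; a quick sketch, assuming you have grpcurl installed:

$ docker run -d -p 50051:50051 berndonline/catalog-server
$ grpcurl -plaintext localhost:50051 api.Registry/ListPackages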

Next we need to install the Operator Lifecycle Manager: go to the release page on GitHub and install the latest version. This first adds the Custom Resource Definitions for the OLM and afterwards deploys the required OLM operator resources:

kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.13.0/crds.yaml
kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.13.0/olm.yaml

Next we add the new CatalogSource and delete the default OperatorHub one to limit which operators can be installed:

cat <<EOF | kubectl apply -n olm -f -
---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: custom-catalog
  namespace: olm
spec:
  sourceType: grpc
  image: docker.io/berndonline/catalog-server:latest
  displayName: Custom Operators
  publisher: techbloc.net
EOF

kubectl delete catalogsource operatorhubio-catalog -n olm

Do a quick check to make sure that the OLM components are running; you will see a pod for the custom-catalog source we created earlier:

$ kubectl get pods -n olm
NAME                               READY   STATUS    RESTARTS   AGE
catalog-operator-5bdf7fc7b-wcbcs   1/1     Running   0          100s
custom-catalog-4hrbg               1/1     Running   0          32s
olm-operator-5ff565fcfc-2j9gt      1/1     Running   0          100s
packageserver-7fcbddc745-6s666     1/1     Running   0          88s
packageserver-7fcbddc745-jkfxs     1/1     Running   0          88s

Now we can look for the available operator manifests and see that our Custom Operator catalog only has the SysDig operator available:

$ kubectl get packagemanifests
NAME     CATALOG            AGE
sysdig   Custom Operators   36s
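
You can also inspect which channels the package provides before subscribing; for example (the exact output shape may vary):

$ kubectl get packagemanifest sysdig -o jsonpath='{.status.channels[*].name}'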

To install the SysDig operator we need to create the namespace, the operator group and the subscription, which instructs the OLM to install it:

cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: Namespace
metadata:
  name: sysdig
---
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: operatorgroup
  namespace: sysdig
spec:
  targetNamespaces:
  - sysdig
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sysdig
  namespace: sysdig
spec:
  channel: stable
  name: sysdig
  source: custom-catalog
  sourceNamespace: olm
EOF

Finally, we check that the OLM has installed the SysDig operator:

# Check that the subscription is created
$ kubectl get sub -n sysdig
NAME     PACKAGE   SOURCE           CHANNEL
sysdig   sysdig    custom-catalog   stable

# Check that OLM created an InstallPlan for installing the operator
$ kubectl get ip -n sysdig
NAME            CSV                      APPROVAL    APPROVED
install-sf6dl   sysdig-operator.v1.4.7   Automatic   true

# Check that the InstallPlan created the Cluster Service Version and installed the operator
$ kubectl get csv -n sysdig
NAME                     DISPLAY                 VERSION   REPLACES                 PHASE
sysdig-operator.v1.4.7   Sysdig Agent Operator   1.4.7     sysdig-operator.v1.4.0   Succeeded

# Check that the SysDig operator is running
$ kubectl get pod -n sysdig
NAME                               READY   STATUS    RESTARTS   AGE
sysdig-operator-74c9f665d9-bb8l9   1/1     Running   0          46s

Now you can install the SysDig agent by adding the following custom resource:

cat <<EOF | kubectl apply -f -
---
apiVersion: sysdig.com/v1alpha1
kind: SysdigAgent
metadata:
  name: agent
  namespace: sysdig
spec:
  ebpf:
    enabled: true
  secure:
    enabled: true
  sysdig:
    accessKey: XXXXXXX
EOF

To delete the SysDig operator, either delete the namespace or run the following commands to delete the subscription, operator group and cluster service version:

kubectl delete sub sysdig -n sysdig
kubectl delete operatorgroup operatorgroup -n sysdig
kubectl delete csv sysdig-operator.v1.4.7 -n sysdig

Thinking ahead, you can let the Flux-CD operator manage all of these resources and simply use GitOps to apply your cluster configuration.

I hope this article is interesting and helpful. If you want to read more about the Operator Lifecycle Manager, have a look at the olm-book, which has some useful information.