New Kubernetes GitOps Toolkit – Flux CD v2

I have been using the Flux CD operator for a few months to manage Kubernetes clusters in dev and prod, and it is a great tool. When I first reviewed Flux, I liked it because of its simplicity, but it was missing some important features, such as the ability to synchronise based on tags instead of a single branch, and configuring the operator through its deployment wasn't very intuitive and caused some headaches.

A few days ago I stumbled across the new Flux CD GitOps Toolkit, and it got my attention when I saw the new Flux v2 operator architecture. The operator's functions have been split into separate controllers, with CRDs to configure the Source, Kustomize and Helm behaviour:

The feature I was really waiting for is support for Semantic Versioning (semver) in the GitRepository source. With this I am able to create platform releases and separate non-prod and prod clusters more cleanly, which makes the deployment of configuration more controlled and flexible than it was with Flux v1.

You can see below the different release versions I’ve created in my cluster management repository:

Below are two GitRepository examples: the first syncs on a static release tag 0.0.1, and the second syncs within the semver range >=0.0.1 <0.1.0:

---
apiVersion: source.toolkit.fluxcd.io/v1alpha1
kind: GitRepository
metadata:
  creationTimestamp: null
  name: gitops-system
  namespace: gitops-system
spec:
  interval: 1m0s
  ref:
    tag: 0.0.1
  secretRef:
    name: gitops-system
  url: ssh://github.com/berndonline/gitops-toolkit
status: {}
---
apiVersion: source.toolkit.fluxcd.io/v1alpha1
kind: GitRepository
metadata:
  creationTimestamp: null
  name: gitops-system
  namespace: gitops-system
spec:
  interval: 1m0s
  ref:
    semver: '>=0.0.1 <0.1.0'
  secretRef:
    name: gitops-system
  url: ssh://github.com/berndonline/gitops-toolkit
status: {}
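
To cut a release that either of the sources above can pick up, you simply tag the repository and push the tag (the tag name here is just an example):

$ git tag 0.0.1
$ git push origin 0.0.1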

There are also improvements to the Kustomize configuration: you can add additional overlays depending on your repository folder structure, or combine this with another GitRepository source. In my example repository I have a cluster folder cluster-dev and a folder for common configuration:

.
|____cluster-dev
| |____kustomization.yaml
| |____hello-world_base
| | |____kustomization.yaml
| | |____deploy.yaml
|____common
  |____kustomization.yaml
  |____nginx-service.yaml
  |____nginx_base
    |____kustomization.yaml
    |____service.yaml
    |____nginx.yaml
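
For illustration, the cluster-dev/kustomization.yaml from the tree above could simply pull in the base folder. This is a minimal sketch; the actual resource list in my repository may differ:

---
# ./cluster-dev/kustomization.yaml (illustrative sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - hello-world_base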

You can add multiple Kustomization custom resources, as you can see in my examples: one for the cluster-specific config and a second for the common configuration which can be applied to multiple clusters:

---
apiVersion: kustomize.toolkit.fluxcd.io/v1alpha1
kind: Kustomization
metadata:
  creationTimestamp: null
  name: cluster-conf
  namespace: gitops-system
spec:
  interval: 5m0s
  path: ./cluster-dev
  prune: true
  sourceRef:
    kind: GitRepository
    name: gitops-system
status: {}
---
apiVersion: kustomize.toolkit.fluxcd.io/v1alpha1
kind: Kustomization
metadata:
  creationTimestamp: null
  name: common-conf
  namespace: gitops-system
spec:
  interval: 5m0s
  path: ./common
  prune: true
  sourceRef:
    kind: GitRepository
    name: gitops-system
status: {}

Let’s install the Flux CD GitOps Toolkit. The toolkit again comes with its own command-line utility, tk, which you use to install and configure the operator. You can find the available CLI versions on the GitHub release page.
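
For example, on Linux you might download a release binary roughly like this; the exact download URL and asset name are assumptions, so adjust them to whatever the release page actually offers:

$ VERSION=0.0.1  # pick a release from the GitHub release page
$ curl -sL https://github.com/fluxcd/toolkit/releases/download/v${VERSION}/tk_${VERSION}_linux_amd64.tar.gz | tar xz
$ sudo mv tk /usr/local/bin/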

Set up a new repository to store your k8s configuration:

$ git clone ssh://github.com/berndonline/gitops-toolkit
$ cd gitops-toolkit
$ mkdir -p ./cluster-dev/gitops-system

Generate the GitOps Toolkit manifests, store them under the gitops-system folder, and afterwards apply the configuration to your k8s cluster:

$ tk install --version=latest \
    --export > ./cluster-dev/gitops-system/toolkit-components.yaml
$ kubectl apply -f ./cluster-dev/gitops-system/toolkit-components.yaml 
namespace/gitops-system created
customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io created
customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io created
customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io created
customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io created
customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io created
customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io created
customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io created
customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io created
role.rbac.authorization.k8s.io/crd-controller-gitops-system created
rolebinding.rbac.authorization.k8s.io/crd-controller-gitops-system created
clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-gitops-system created
service/notification-controller created
service/source-controller created
service/webhook-receiver created
deployment.apps/helm-controller created
deployment.apps/kustomize-controller created
deployment.apps/notification-controller created
deployment.apps/source-controller created
networkpolicy.networking.k8s.io/deny-ingress created

Check that all the pods are running, and use the command tk check to verify that the toolkit is working correctly:

$ kubectl get pod -n gitops-system
NAME                                       READY   STATUS    RESTARTS   AGE
helm-controller-64f846df8c-g4mhv           1/1     Running   0          19s
kustomize-controller-6d9745c8cd-n8tth      1/1     Running   0          19s
notification-controller-587c49f7fc-ldcg2   1/1     Running   0          18s
source-controller-689dcd8bd7-rzp55         1/1     Running   0          18s
$ tk check
► checking prerequisites
✔ kubectl 1.18.3 >=1.18.0
✔ Kubernetes 1.18.6 >=1.16.0
► checking controllers
✔ source-controller is healthy
✔ kustomize-controller is healthy
✔ helm-controller is healthy
✔ notification-controller is healthy
✔ all checks passed

Now you can create a GitRepository custom resource; this generates an SSH key pair locally and displays the public key, which you need to add to your repository's deploy keys:

$ tk create source git gitops-system \
  --url=ssh://github.com/berndonline/gitops-toolkit \
  --ssh-key-algorithm=ecdsa \
  --ssh-ecdsa-curve=p521 \
  --branch=master \
  --interval=1m
► generating deploy key pair
ecdsa-sha2-nistp521 xxxxxxxxxxx
Have you added the deploy key to your repository: y
► collecting preferred public key from SSH server
✔ collected public key from SSH server:
github.com ssh-rsa xxxxxxxxxxx
► applying secret with keys
✔ authentication configured
✚ generating source
► applying source
✔ source created
◎ waiting for git sync
✗ git clone error: remote repository is empty
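
The last error simply means the repository has no commits yet; an initial (even empty) commit and push makes the source ready:

$ git commit --allow-empty -m "initial commit"
$ git push origin master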

Continue with adding the Kustomize configuration:

$ tk create kustomization gitops-system \
  --source=gitops-system \
  --path="./cluster-dev" \
  --prune=true \
  --interval=5m
✚ generating kustomization
► applying kustomization
✔ kustomization created
◎ waiting for kustomization sync
✗ Source is not ready

Afterwards you can add your Kubernetes manifests to the repository, and the operator will start synchronising it and applying the configuration you've defined.

You can export the Source and Kustomize configuration:

$ tk export source git gitops-system \
 > ./cluster-dev/gitops-system/toolkit-source.yaml
$ tk export kustomization gitops-system \
 > ./cluster-dev/gitops-system/toolkit-kustomization.yaml
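
Commit and push the exported manifests, so the toolkit effectively manages itself from the repository:

$ git add ./cluster-dev/gitops-system
$ git commit -m "Add GitOps Toolkit manifests"
$ git push origin master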

That's basically it: the GitOps Toolkit is installed. Below are some useful commands to manually reconcile the configured custom resources:

$ tk reconcile source git gitops-system
$ tk reconcile kustomization gitops-system
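
You can also inspect the toolkit's custom resources directly with kubectl; the Ready condition tells you whether the last sync succeeded:

$ kubectl -n gitops-system get gitrepositories,kustomizations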

I am thinking of explaining how to set up a Kubernetes platform repository and do release versioning with the Flux GitOps Toolkit in one of my next articles. Please let me know if you have questions.

How to manage Kubernetes clusters the GitOps way with Flux CD

Kubernetes is becoming more and more popular, and so is managing clusters at scale. This article is about how to manage Kubernetes clusters the GitOps way using the Flux CD operator.

Flux can monitor the container image and code repositories you specify and trigger deployments to automatically change the configuration state of your Kubernetes cluster. The cluster configuration is centrally managed and stored in declarative form in Git; there is no need for an administrator to manually apply manifests, because the Flux operator synchronises the cluster to apply or delete configuration.

Before we start deploying the operator we need to install the fluxctl command-line utility and create the namespace:

sudo wget -O /usr/local/bin/fluxctl https://github.com/fluxcd/flux/releases/download/1.18.0/fluxctl_linux_amd64
sudo chmod 755 /usr/local/bin/fluxctl
kubectl create ns flux

Deploying the Flux operator is straightforward and requires a few options such as the Git repository and the Git path. The path is important in my example because it tells the operator in which folders to look for manifests:

$ fluxctl install --git-email=<your-email> --git-url=git@github.com:berndonline/flux-cd.git --git-path=clusters/gke,common/stage --manifest-generation=true --git-branch=master --namespace=flux --registry-disable-scanning | kubectl apply -f -
deployment.apps/memcached created
service/memcached created
serviceaccount/flux created
clusterrole.rbac.authorization.k8s.io/flux created
clusterrolebinding.rbac.authorization.k8s.io/flux created
deployment.apps/flux created
secret/flux-git-deploy created

After you have applied the configuration, wait until the Flux pods are up and running:

$ kubectl get pods -n flux
NAME                       READY   STATUS    RESTARTS   AGE
flux-85cd9cd746-hnb4f      1/1     Running   0          74m
memcached-5dcd7579-d6vwh   1/1     Running   0          20h

The last step is to get the Flux operator's deploy key and add the output to your Git repository's deploy keys:

fluxctl identity --k8s-fwd-ns flux

Now you are ready to synchronise the Flux operator with the repository. By default Flux automatically synchronises every 5 minutes to apply configuration changes:

$ fluxctl sync --k8s-fwd-ns flux
Synchronizing with git@github.com:berndonline/flux-cd.git
Revision of master to apply is 726944d
Waiting for 726944d to be applied ...
Done.

You can list the workloads which are managed by the Flux operator:

$ fluxctl list-workloads --k8s-fwd-ns=flux -a
WORKLOAD                             CONTAINER         IMAGE                            RELEASE  POLICY
default:deployment/hello-kubernetes  hello-kubernetes  paulbouwer/hello-kubernetes:1.5  ready    automated
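
The automated policy in the output above is driven by an annotation on the workload itself. Below is a minimal sketch of a deployment that opts in to automated releases; the name and image are taken from the output above, the rest of the manifest is illustrative:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
  annotations:
    fluxcd.io/automated: "true"  # Flux v1 annotation enabling automated image releases
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.5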

How do we manage the configuration for multiple Kubernetes clusters?

I want to show you a simple example using Kustomize to manage multiple clusters across two environments (staging and production) with Flux. Basically you have a single repository, and multiple clusters synchronise the configuration depending on how you set the --git-path option of the Flux operator. The option --manifest-generation enables Kustomize support in the operator; it requires a .flux.yaml which runs kustomize build in the cluster directories and applies the generated manifests.
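
A minimal .flux.yaml could look like the following; this is a sketch based on the Flux v1 documentation, and the actual file in my repository may differ:

---
# .flux.yaml (minimal sketch) - Flux runs the generator in each --git-path directory
version: 1
commandUpdated:
  generators:
    - command: kustomize build .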

Let’s look at the repository file and folder structure. We have the base folder containing the common deployment configuration, the common folder with the environment separation into stage and prod overlays, and the clusters folder which contains cluster-specific configuration:

├── .flux.yaml
├── base
│   └── common
│       ├── deployment.yaml
│       ├── kustomization.yaml
│       ├── namespace.yaml
│       └── service.yaml
├── clusters
│   ├── eks
│   │   ├── eks-app1
│   │   │   ├── deployment.yaml
│   │   │   ├── kustomization.yaml
│   │   │   └── service.yaml
│   │   └── kustomization.yaml
│   └── gke
│       ├── gke-app1
│       │   ├── deployment.yaml
│       │   ├── kustomization.yaml
│       │   └── service.yaml
│       ├── gke-app2
│       │   ├── deployment.yaml
│       │   ├── kustomization.yaml
│       │   └── service.yaml
│       └── kustomization.yaml
└── common
    ├── prod
    │   ├── prod.yaml
    │   └── kustomization.yaml
    └── stage
        ├── team1
        │   ├── deployment.yaml
        │   ├── kustomization.yaml
        │   ├── namespace.yaml
        │   └── service.yaml
        ├── stage.yaml
        └── kustomization.yaml
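
For illustration, the stage overlay's kustomization.yaml might pull in the shared base and apply the stage patch; the exact contents here are an assumption, see the example repository for the real files:

---
# common/stage/kustomization.yaml (illustrative sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base/common
resources:
  - team1
patchesStrategicMerge:
  - stage.yaml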

If you are new to Kustomize I would recommend reading the article Kustomize – The right way to do templating in Kubernetes.

The last thing we need to do is deploy the Flux operator to the two Kubernetes clusters. The only difference between the two is the --git-path option, which points the operator to the cluster and common directories where Kustomize applies the overlays based on what is specified in the kustomization.yaml. You can find more details about the configuration in my example repository: https://github.com/berndonline/flux-cd

Flux config for Google GKE staging cluster:

fluxctl install --git-email=<your-email> --git-url=git@github.com:berndonline/flux-cd.git --git-path=clusters/gke,common/stage --manifest-generation=true --git-branch=master --namespace=flux | kubectl apply -f -

Flux config for Amazon EKS production cluster:

fluxctl install --git-email=<your-email> --git-url=git@github.com:berndonline/flux-cd.git --git-path=clusters/eks,common/prod --manifest-generation=true --git-branch=master --namespace=flux | kubectl apply -f -

After a few minutes the configuration is applied to the two clusters and you can validate the configuration.

Google GKE stage workloads:

$ fluxctl list-workloads --k8s-fwd-ns=flux -a
WORKLOAD                   CONTAINER         IMAGE                            RELEASE  POLICY
common:deployment/common   hello-kubernetes  paulbouwer/hello-kubernetes:1.5  ready    automated
default:deployment/gke1    hello-kubernetes  paulbouwer/hello-kubernetes:1.5  ready    
default:deployment/gke2    hello-kubernetes  paulbouwer/hello-kubernetes:1.5  ready    
team1:deployment/team1     hello-kubernetes  paulbouwer/hello-kubernetes:1.5  ready
$ kubectl get svc --all-namespaces | grep LoadBalancer
common        common                 LoadBalancer   10.91.14.186   35.240.53.46     80:31537/TCP    16d
default       gke1                   LoadBalancer   10.91.7.169    35.195.241.46    80:30218/TCP    16d
default       gke2                   LoadBalancer   10.91.10.239   35.195.144.68    80:32589/TCP    16d
team1         team1                  LoadBalancer   10.91.1.178    104.199.107.56   80:31049/TCP    16d

GKE common stage application:

Amazon EKS prod workloads:

$ fluxctl list-workloads --k8s-fwd-ns=flux -a
WORKLOAD                          CONTAINER         IMAGE                                                                RELEASE  POLICY
common:deployment/common          hello-kubernetes  paulbouwer/hello-kubernetes:1.5                                      ready    automated
default:deployment/eks1           hello-kubernetes  paulbouwer/hello-kubernetes:1.5                                      ready
$ kubectl get svc --all-namespaces | grep LoadBalancer
common        common       LoadBalancer   10.100.254.171   a4caafcbf2b2911ea87370a71555111a-958093179.eu-west-1.elb.amazonaws.com    80:32318/TCP    3m8s
default       eks1         LoadBalancer   10.100.170.10    a4caeada52b2911ea87370a71555111a-1261318311.eu-west-1.elb.amazonaws.com   80:32618/TCP    3m8s

EKS common prod application:

I hope this article is useful to get started with GitOps and the Flux operator. In the future, I would like to see Flux being able to watch Git tags, which would make it easier to promote changes and manage clusters with version tags.

For more technical information have a look at the Flux CD documentation.

Automate Ansible AWX configuration using Tower-CLI

Some time has gone by since my article Getting started with Ansible AWX (Open Source Tower version), and I wanted to continue focusing on AWX and show how to automate the configuration of an AWX Tower server.

Before we configure AWX we should install the tower-cli. You can find more information about the Tower CLI here: https://github.com/ansible/tower-cli. I also recommend having a look at the tower-cli documentation: https://tower-cli.readthedocs.io/en/latest/

sudo pip install ansible-tower-cli

The tower-cli is very useful when you want to monitor running jobs. The web console is not that great when it comes to large playbooks and is pretty slow at showing the running job state. Below is the basic configuration you need before you start using the tower-cli:

$ tower-cli config host 94.130.51.22
Configuration updated successfully.
$ tower-cli login admin
Password:
{
 "id": 1,
 "type": "o_auth2_access_token",
 "url": "/api/v2/tokens/1/",
 "created": "2018-09-15T17:41:23.942572Z",
 "modified": "2018-09-15T17:41:23.955795Z",
 "description": "Tower CLI",
 "user": 1,
 "refresh_token": null,
 "application": null,
 "expires": "3018-01-16T17:41:23.937872Z",
 "scope": "write"
}
Configuration updated successfully.

But now let’s continue and show how we can use the tower-cli to configure and monitor Ansible AWX Tower.

Create a project:

tower-cli project create --name "My Project" --description "My project description" --organization "Default" --scm-type "git" --scm-url "https://github.com/ansible/ansible-tower-samples"

Create an inventory:

tower-cli inventory create --name "My Inventory" --organization "Default"

Add hosts to an inventory:

tower-cli host create --name "localhost" --inventory "My Inventory" --variables "ansible_connection: local"

Create credentials:

tower-cli credential create --name "My Credential" --credential-type "Machine" --user "admin"

Create a Project Job Template:

tower-cli job_template create --name "My Job Template" --project "My Project" --inventory "My Inventory" --job-type "run" --credential "My Credential" --playbook "hello_world.yml" --verbosity "default"

After we successfully created everything let’s now run the job template and monitor the output via the tower-cli:

tower-cli job launch --job-template "My Job Template"
tower-cli job monitor <ID>

Command line output:

$ tower-cli job launch --job-template "My Job Template"
Resource changed.
== ============ =========================== ======= =======
id job_template           created           status  elapsed
== ============ =========================== ======= =======
26           15 2018-10-12T12:22:48.599748Z pending 0.0
== ============ =========================== ======= =======
$ tower-cli job monitor 26
------Starting Standard Out Stream------


PLAY [Hello World Sample] ******************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [Hello Message] ***********************************************************
ok: [localhost] => {
    "msg": "Hello World!"
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0

------End of Standard Out Stream--------
Resource changed.
== ============ =========================== ========== =======
id job_template           created             status   elapsed
== ============ =========================== ========== =======
26           15 2018-10-12T12:22:48.599748Z successful 8.861
== ============ =========================== ========== =======
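Alternatively, launching and monitoring can be combined in one step; the --monitor flag waits for the job to finish and streams its output (check that your tower-cli version supports it):

tower-cli job launch --job-template "My Job Template" --monitor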

With these tower-cli commands we can write a simple provisioning playbook using the Ansible shell module.

Playbook site.yml:

---
- hosts: localhost
  gather_facts: 'no'

  tasks:
    - name: Add tower project
      shell: |
        tower-cli project create \
        --name "My Project" \
        --description "My project description" \
        --organization "Default" \
        --scm-type "git" \
        --scm-url "https://github.com/ansible/ansible-tower-samples"

    - name: Add tower inventory
      shell: |
        tower-cli inventory create \
        --name "My Inventory" \
        --organization "Default"

    - name: Add host to inventory
      shell: |
        tower-cli host create \
        --name "localhost" \
        --inventory "My Inventory" \
        --variables "ansible_connection: local"
    
    - name: Add credential
      shell: |
        tower-cli credential create \
        --name "My Credential" \
        --credential-type "Machine" \
        --user "admin"
        
    - name: wait 15 seconds to pull project SCM content
      wait_for: timeout=15
      delegate_to: localhost
 
    - name: Add job template
      shell: |
        tower-cli job_template create \
        --name "My Job Template" \
        --project "My Project" \
        --inventory "My Inventory" \
        --job-type "run" \
        --credential "My Credential" \
        --playbook "hello_world.yml" \
        --verbosity "default"

Let’s run the playbook:

$ ansible-playbook site.yml

PLAY [localhost] **************************************************************************************************************************************************

TASK [Add tower project] ******************************************************************************************************************************************
changed: [localhost]

TASK [Add tower inventory] ****************************************************************************************************************************************
changed: [localhost]

TASK [Add host to inventory] **************************************************************************************************************************************
changed: [localhost]

TASK [Add credential] *********************************************************************************************************************************************
changed: [localhost]

TASK [wait 15 seconds to pull project SCM content] ****************************************************************************************************************
ok: [localhost -> localhost]

TASK [Add job template] *******************************************************************************************************************************************
changed: [localhost]

PLAY RECAP ********************************************************************************************************************************************************
localhost : ok=6 changed=5 unreachable=0 failed=0


If you like this article, please share your feedback and leave a comment.

Getting started with Ansible AWX (Open Source Tower version)

Ansible released AWX a few weeks ago, an open source (community-supported) version of their commercial Ansible Tower product. It is a web-based graphical interface for managing Ansible playbooks and inventories, and for scheduling jobs to run playbooks.

You can find the GitHub repository here: https://github.com/ansible/awx

Let’s start with the installation of Ansible AWX. It is very easy because everything is dockerised; see the install guide for more information.

Modify the inventory file under the installer folder and change the Postgres data folder, which is otherwise located under /tmp; also change the Postgres DB username and password if needed. I would recommend binding AWX to localhost and putting an Nginx reverse proxy in front with SSL encryption.

Changes in the inventory file:

postgres_data_dir=/var/lib/postgresql/data/
host_port=127.0.0.1:8052

Start the build of the Docker containers:

ansible-playbook -i inventory install.yml

After the Ansible playbook run completes, you see the following Docker containers:

$ docker ps
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                NAMES
26a73c91cb04        ansible/awx_task:latest   "/tini -- /bin/sh ..."   2 days ago          Up 24 hours         8052/tcp                             awx_task
07774696a7f2        ansible/awx_web:latest    "/tini -- /bin/sh ..."   2 days ago          Up 24 hours         127.0.0.1:8052->8052/tcp             awx_web
981f4f02c759        memcached:alpine          "docker-entrypoint..."   2 days ago          Up 24 hours         11211/tcp                            memcached
4f4a3141b54d        rabbitmq:3                "docker-entrypoint..."   2 days ago          Up 24 hours         4369/tcp, 5671-5672/tcp, 25672/tcp   rabbitmq
faf07f7b4682        postgres:9.6              "docker-entrypoint..."   2 days ago          Up 24 hours         5432/tcp                             postgres

Install Nginx:

sudo apt-get update
sudo apt-get install nginx
sudo rm /etc/nginx/sites-enabled/default

Create the Nginx vhost configuration:

sudo vi /etc/nginx/sites-available/awx
server {
    listen 443 ssl;
    server_name awx.domain.com;

    ssl on;
    ssl_certificate /etc/nginx/ssl/awx.domain.com-cert.pem;
    ssl_certificate_key /etc/nginx/ssl/awx.domain.com-key.pem;

    location / {
        proxy_pass http://127.0.0.1:8052;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Create a symlink in sites-enabled pointing to the awx config:

sudo ln -s /etc/nginx/sites-available/awx /etc/nginx/sites-enabled/awx

Reload Nginx to apply configuration:

sudo systemctl reload nginx

Afterwards you are able to log in with username “admin” and password “password”:

I created a simple job for testing with AWX. You first create a project, credentials and inventories; the project points to your Git repository:

Under the job you configure which project, credentials and inventories to use:

Once saved, you can manually trigger the job; it first pulls the latest playbook from your version control repository and afterwards executes the configured Ansible playbook:

The job details look very similar to running a playbook on the CLI:

Ansible AWX is a very useful tool if you need to manage different Ansible playbooks and do job scheduling, and you are not already using other tools like Jenkins or GitLab CI. But even then, AWX is a good addition for running ad-hoc playbooks.

Check out my new articles about Automate Ansible AWX configuration using Tower-CLI and Build Ansible Tower Container.
