Software-Defined Load Balancing with AVI Networks

Throughout my career I have used various load balancing platforms, from commercial products like F5 or Citrix NetScaler to open-source software like HAProxy. All of them do their job of balancing traffic between servers, but the biggest problem is scalability: yes, you can deploy more load balancers, but the configuration is statically bound to the appliance.

AVI Networks has a very interesting concept that moves away from the traditional idea of load balancing and solves this problem by decoupling the control plane from the data plane. The load balancing Service Engines basically just forward traffic and can be scaled out more easily when needed. Another nice advantage is that these Service Engines are container based and can run on almost every type of infrastructure, from bare metal and VMs to modern containerised platforms like Kubernetes or OpenShift:

All the AVI components run as container images on any type of infrastructure or platform architecture, which makes the deployment very easy on both on-premise and cloud systems.
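
As a quick illustration, the controller image can even be pulled manually with Docker before running the installer. This is optional since the setup script loads the image itself; the tag below simply matches the version used later in this post:

[vagrant@localhost ~]$ sudo docker pull avinetworks/controller:17.2.11-9014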

The Service Engines on hypervisor or bare-metal servers need network cards which support Intel’s DPDK for better packet-forwarding performance. Have a look at the AVI Linux server deployment guide: https://avinetworks.com/docs/latest/installing-avi-vantage-for-a-linux-server-cloud/
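
If you are not sure whether your network cards are on the DPDK-supported list, a quick way to identify the NIC models on a server is lspci; this is only a first check, the actual supported NIC list is in the deployment guide above:

[vagrant@localhost ~]$ lspci | grep -i ethernet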

What follows is a basic step-by-step guide on how to install the AVI Vantage Controller and additional Service Engines. Have a look at the AVI Knowledge Base where the installation is explained in detail: https://avinetworks.com/docs/latest/installing-avi-vantage-for-a-linux-server-cloud/

Here is the link to my Vagrant environment: https://github.com/berndonline/avi-lab-vagrant
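
If you want to follow along, clone the repository and bring up the virtual machines with Vagrant. This assumes Vagrant and a provider like VirtualBox are already installed; check the README in the repository for any environment-specific steps:

git clone https://github.com/berndonline/avi-lab-vagrant.git
cd avi-lab-vagrant/
vagrant up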

Let’s start with the manual AVI Controller installation:

[vagrant@localhost ~]$ sudo ./avi_baremetal_setup.py
AviVantage Version Tag: 17.2.11-9014
Found disk with largest capacity at [/]

Welcome to Avi Initialization Script

Pre-requisites: This script assumes the below utilities are installed:
                  docker (yum -y install docker/apt-get install docker.io)
Supported Vers: OEL - 6.5,6.7,6.9,7.0,7.1,7.2,7.3,7.4 Centos/RHEL - 7.0,7.1,7.2,7.3,7.4, Ubuntu - 14.04,16.04

Do you want to run Avi Controller on this Host [y/n] y
Do you want to run Avi SE on this Host [y/n] n
Enter The Number Of Cores For Avi Controller. Range [4, 4] 4
Please Enter Memory (in GB) for Avi Controller. Range [12, 7]
Please enter directory path for Avi Controller Config (Default [/opt/avi/controller/data/])
Please enter disk size (in GB) for Avi Controller Config (Default [30G]) 10
Do you have separate partition for Avi Controller Metrics ? If yes, please enter directory path, else leave it blank
Do you have separate partition for Avi Controller Client Logs ? If yes, please enter directory path, else leave it blank
Please enter Controller IP (Default [10.255.1.232])
Enter the Controller SSH port. (Default [5098])
Enter the Controller system-internal portal port. (Default [8443])
AviVantage Version Tag: 17.2.11-9014
AviVantage Version Tag: 17.2.11-9014
Run SE           : No
Run Controller   : Yes
Controller Cores : 4
Memory(GB)       : 7
Disk(GB)         : 10
Controller IP    : 10.255.1.232
Disabling Avi Services...
Loading Avi CONTROLLER Image. Please Wait..
Installation Successful. Starting Services..
[vagrant@localhost ~]$
[vagrant@localhost ~]$ sudo systemctl start avicontroller

Alternatively, run it as a single command in non-interactive mode:

[vagrant@localhost ~]$ sudo ./avi_baremetal_setup.py -c -cd 10 -cc 4 -cm 7 -i 10.255.1.232
AviVantage Version Tag: 17.2.11-9014
Found disk with largest capacity at [/]
AviVantage Version Tag: 17.2.11-9014
AviVantage Version Tag: 17.2.11-9014
Run SE           : No
Run Controller   : Yes
Controller Cores : 4
Memory(GB)       : 7
Disk(GB)         : 10
Controller IP    : 10.255.1.232
Disabling Avi Services...
Loading Avi CONTROLLER Image. Please Wait..
Installation Successful. Starting Services..
[vagrant@localhost ~]$
[vagrant@localhost ~]$ sudo systemctl start avicontroller
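
It is also worth making sure the controller service comes back after a reboot. A small addition, assuming the installer does not already enable the systemd unit:

[vagrant@localhost ~]$ sudo systemctl enable avicontroller
[vagrant@localhost ~]$ sudo systemctl status avicontroller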

The installer basically loaded a container image onto the server, which now runs the AVI Controller:

[vagrant@localhost ~]$ sudo docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED              STATUS              PORTS                                                                                                                                    NAMES
c689435f74fd        avinetworks/controller:17.2.11-9014                   "/opt/avi/scripts/do…"   About a minute ago   Up About a minute   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:5054->5054/tcp, 0.0.0.0:5098->5098/tcp, 0.0.0.0:8443->8443/tcp, 0.0.0.0:161->161/udp   avicontroller
[vagrant@localhost ~]$
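
Before opening the browser, you can quickly verify that the controller web interface is responding. A simple check, using the controller IP from the installation above and ignoring the self-signed certificate:

[vagrant@localhost ~]$ curl -k -I https://10.255.1.232/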

Next, you can connect to the web console to change the admin password and finalise the initial configuration by setting DNS, NTP and SMTP:

When you get to the Orchestrator integration menu, you can enter the details for the controller to install additional Service Engines:

In the meantime, the AVI Controller installs the specified Service Engines in the background; once this is completed, they automatically appear under the Infrastructure menu:

Like the AVI Controller, the Service Engines run as container images:

[vagrant@localhost ~]$ sudo docker ps
CONTAINER ID        IMAGE                                         COMMAND                  CREATED             STATUS              PORTS               NAMES
2c6b207ed376        avinetworks/se:17.2.11-9014                   "/opt/avi/scripts/do…"   51 seconds ago      Up 50 seconds                           avise
[vagrant@localhost ~]$
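
If a Service Engine does not show up in the controller, the container logs are a good first place to look; for example, using the container name from the output above:

[vagrant@localhost ~]$ sudo docker logs --tail 20 avise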

The next article will be about automatically deploying the AVI Controller and Service Engines via Ansible, and looking into how to integrate AVI with OpenShift.

Please share your feedback and leave a comment.

Deploy OpenShift 3.7 Origin Cluster with Ansible

Something completely different from my more network-related posts: this time it is about Platform as a Service with OpenShift Origin. There is a big push from development teams towards containerised platform services.

I tested the official OpenShift Origin Ansible playbook by installing a small five-node cluster and created an OpenShift Vagrant environment for this.

Cluster overview:

I recommend having a look at the official Red Hat OpenShift documentation to understand the architecture, because it is quite a complex platform.

As a prerequisite, you need to install the vagrant-hostmanager plugin because OpenShift needs to resolve hostnames and I don’t want to install a separate DNS server. You can find more information here: https://github.com/devopsgroup-io/vagrant-hostmanager

vagrant plugin install vagrant-hostmanager

sudo bash -c 'cat << EOF > /etc/sudoers.d/vagrant_hostmanager2
Cmnd_Alias VAGRANT_HOSTMANAGER_UPDATE = /bin/cp <your-home-folder>/.vagrant.d/tmp/hosts.local /etc/hosts
%sudo ALL=(root) NOPASSWD: VAGRANT_HOSTMANAGER_UPDATE
EOF'
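
To confirm that the plugin is installed and that the sudoers drop-in has no syntax errors, you can run two quick checks (visudo -c only validates the file, it does not change anything):

vagrant plugin list
sudo visudo -c -f /etc/sudoers.d/vagrant_hostmanager2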

Next, clone my Vagrant repository and the official OpenShift Ansible repository:

git clone git@github.com:berndonline/openshift-origin-vagrant.git
git clone git@github.com:openshift/openshift-ansible.git

Let’s start by booting the OpenShift Vagrant environment:

cd openshift-origin-vagrant/
./vagrant_up.sh

The vagrant-hostmanager plugin will dynamically update the /etc/hosts file on both the guest and the host machine:

...
## vagrant-hostmanager-start id: 55ed9acf-25e9-4b19-bfab-e0812a292dc0
10.255.1.81	origin-master
10.255.1.231	origin-etcd
10.255.1.182	origin-infra
10.255.1.72	origin-node-1
10.255.1.145	origin-node-2
## vagrant-hostmanager-end
...

Let’s have a quick look at the OpenShift inventory file. This has settings for the different node types and custom OpenShift and Vagrant variables. You need to modify a few things like public hostname and default subdomain:

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=vagrant
ansible_become=yes

deployment_type=origin
openshift_release=v3.7.0
containerized=true
openshift_install_examples=true
enable_excluders=false
openshift_check_min_host_memory_gb=4
openshift_disable_check=docker_image_availability,docker_storage,disk_availability

# use htpasswd authentication with demo/demo
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'demo': '$apr1$.MaA77kd$Rlnn6RXq9kCjnEfh5I3w/.'}

# put the router on dedicated infra node
openshift_hosted_router_selector='region=infra'
openshift_master_default_subdomain=origin.paas.domain.com

# put the image registry on dedicated infra node
openshift_hosted_registry_selector='region=infra'

# project pods should be placed on primary nodes
osm_default_node_selector='region=primary'

# Vagrant variables
ansible_port='22' 
ansible_user='vagrant'
ansible_ssh_private_key_file='/home/berndonline/.vagrant.d/insecure_private_key'

[masters]
origin-master  openshift_public_hostname="console.paas.domain.com"

[etcd]
origin-etcd

[nodes]
# the master needs to be included in the [nodes] group to be configured in the SDN
origin-master
origin-infra openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
origin-node-[1:2] openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
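
Before kicking off the full installation, it is worth checking that Ansible can reach all the nodes with this inventory. A quick connectivity test, run from the openshift-origin-vagrant directory and assuming the inventory file is simply called inventory, as used below:

ansible -i inventory all -m ping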

Now that we are ready, we need to check out the latest release and execute the Ansible Playbook:

cd openshift-ansible/
git checkout release-3.7
ansible-playbook ./playbooks/byo/config.yml -i ../openshift-origin-vagrant/inventory

The playbook takes forever to run, so do something else for the next 10 to 15 mins.

...

PLAY RECAP **********************************************************************************************************************************************************
localhost                  : ok=13   changed=0    unreachable=0    failed=0
origin-etcd                : ok=147  changed=47   unreachable=0    failed=0
origin-infra               : ok=202  changed=61   unreachable=0    failed=0
origin-master              : ok=561  changed=224  unreachable=0    failed=0
origin-node                : ok=201  changed=61   unreachable=0    failed=0


INSTALLER STATUS ****************************************************************************************************************************************************
Initialization             : Complete
Health Check               : Complete
etcd Install               : Complete
Master Install             : Complete
Master Additional Install  : Complete
Node Install               : Complete
Hosted Install             : Complete
Service Catalog Install    : Complete

Sunday 21 January 2018  20:55:16 +0100 (0:00:00.011)       0:11:56.549 ********
===============================================================================
etcd : Pull etcd container ---------------------------------------------------------------------------------------------------------------------------------- 79.51s
openshift_hosted : Ensure OpenShift pod correctly rolls out (best-effort today) ----------------------------------------------------------------------------- 31.54s
openshift_node : Pre-pull node image when containerized ----------------------------------------------------------------------------------------------------- 31.28s
template_service_broker : Verify that TSB is running -------------------------------------------------------------------------------------------------------- 30.87s
docker : Install Docker ------------------------------------------------------------------------------------------------------------------------------------- 30.41s
docker : Install Docker ------------------------------------------------------------------------------------------------------------------------------------- 26.32s
openshift_cli : Pull CLI Image ------------------------------------------------------------------------------------------------------------------------------ 23.03s
openshift_service_catalog : wait for api server to be ready ------------------------------------------------------------------------------------------------- 21.32s
openshift_hosted : Ensure OpenShift pod correctly rolls out (best-effort today) ----------------------------------------------------------------------------- 16.27s
restart master api ------------------------------------------------------------------------------------------------------------------------------------------ 10.69s
restart master controllers ---------------------------------------------------------------------------------------------------------------------------------- 10.62s
openshift_node : Start and enable node ---------------------------------------------------------------------------------------------------------------------- 10.42s
openshift_node : Start and enable node ---------------------------------------------------------------------------------------------------------------------- 10.30s
openshift_master : Start and enable master api on first master ---------------------------------------------------------------------------------------------- 10.21s
openshift_master : Start and enable master controller service ----------------------------------------------------------------------------------------------- 10.19s
os_firewall : Install iptables packages --------------------------------------------------------------------------------------------------------------------- 10.15s
os_firewall : Wait 10 seconds after disabling firewalld ----------------------------------------------------------------------------------------------------- 10.07s
os_firewall : need to pause here, otherwise the iptables service starting can sometimes cause ssh to fail --------------------------------------------------- 10.05s
openshift_node : Pre-pull node image when containerized ------------------------------------------------------------------------------------------------------ 7.85s
openshift_service_catalog : oc_process ----------------------------------------------------------------------------------------------------------------------- 7.44s
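
Once the playbook has finished, you can do a quick sanity check of the cluster from the master node, for example via vagrant ssh from the Vagrant project directory. This assumes the playbook installed the oc client on the master, which should be the case with the settings above:

vagrant ssh origin-master -c "sudo oc get nodes"
vagrant ssh origin-master -c "sudo oc get pods --all-namespaces"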

To publish both the openshift_public_hostname and the openshift_master_default_subdomain, I have an Nginx reverse proxy running which publishes port 8443 from the origin-master and ports 80 and 443 from the origin-infra node.

Here is an Nginx example:

server {
  listen 8443 ssl;
  listen [::]:8443 ssl;
  server_name console.paas.domain.com;

  ssl on;
  ssl_certificate /etc/nginx/ssl/paas.domain.com-cert.pem;
  ssl_certificate_key /etc/nginx/ssl/paas.domain.com-key.pem;

  access_log  /var/log/nginx/openshift-console_access.log;
  error_log   /var/log/nginx/openshift-console_error.log;

  location / {
    proxy_pass https://10.255.1.81:8443;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }
}
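
With the reverse proxy in place, you can log in to the cluster through the published hostname using the demo user from the htpasswd identity provider in the inventory. Add --insecure-skip-tls-verify=true if you are using a self-signed certificate:

oc login https://console.paas.domain.com:8443 -u demo -p demo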

I will try to write more about OpenShift and Platform as a Service and how to deploy small applications like WordPress.

Have fun testing OpenShift and please share your feedback.


VMware NSX-T 2.0 First Impression

Over the past two days I spent some time with VMware NSX-T 2.0, which has multi-hypervisor (KVM and ESXi) support and is aimed at containerised platform environments like Kubernetes and Red Hat OpenShift. VMware also has an NSX-T cloud version which can run in Amazon AWS and Google cloud services.

The first big change is the new HTML5 web client, which looks nice and clean. The menu structure is different from NSX-V for vSphere, which you have to get used to first. I have heard that NSX-V will also get the new HTML5 web client soon:

VMware made quite a few changes in NSX-T: they moved over to Geneve encapsulation, replacing the VXLAN encapsulation which is currently used in NSX-V. That makes it impossible at the moment to connect NSX-V and NSX-T because of the different overlay technologies.

Routing works differently from the previous NSX for vSphere version, with Tier 0 (edge/aggregation) and Tier 1 (tenant) routers. Previously, in NSX-V, you used Edge appliances as tenant routers, which are now replaced with Tier 1 distributed routing. On the Tier 1 tenant router you don’t need to configure BGP anymore; you just specify which connected routes to advertise, and the connection between Tier 1 and Tier 0 also pushes down the default gateway.

The Edge appliance can be deployed as a virtual machine or on bare-metal servers. This makes the transport zoning different from NSX-V, because Edge appliances need to be part of transport zones to connect to both the overlay and the physical VLANs:

On the Edge itself you have two functions: Distributed Routing (DR) for stateless forwarding and Service Routing (SR) for stateful forwarding such as NAT:

Load balancing is currently missing in the Edge appliance, but this is coming in one of the next NSX-T releases.

Here is a network design with Tier 0 and Tier 1 routing in NSX-T:

I will write another post in the coming weeks about the detailed routing configuration in NSX-T. I am also curious to integrate Kubernetes with NSX-T to try out the integration for containerised platform environments.