Ansible Playbook to deploy AVI Controller and Service Engines

After my first blog post about Software defined Load Balancing with AVI Networks, here is how to automatically deploy the AVI Controller and Service Engines via Ansible.

Here are the links to my repositories. AVI Vagrant environment: https://github.com/berndonline/avi-lab-vagrant and AVI Ansible playbook: https://github.com/berndonline/avi-lab-provision

Make sure that your Vagrant environment is running:

berndonline@lab:~/avi-lab-vagrant$ vagrant status
Current machine states:

avi-controller-1          running (libvirt)
avi-controller-2          running (libvirt)
avi-controller-3          running (libvirt)
avi-se-1                  running (libvirt)
avi-se-2                  running (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

I needed to modify the ansible.cfg to integrate a filter plugin:

[defaults]
inventory = ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
host_key_checking=False

library = /home/berndonline/avi-lab-provision/lib
filter_plugins = /home/berndonline/avi-lab-provision/lib/filter_plugins

The controller installation is actually very simple: I took it from the official AVI Ansible controller role and added a second role that checks once the controller nodes have successfully booted:

---
- hosts: avi-controller
  user: '{{ ansible_ssh_user }}'
  gather_facts: "true"
  roles:
    - {role: ansible-role-avicontroller, become: true}
    - {role: avi-post-controller, become: false}
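
The check in the avi-post-controller role boils down to polling the controller API until every node answers. Here is a minimal sketch of such a task; the endpoint and timing values are assumptions for illustration and not copied from my repository:

# Sketch of a readiness check; endpoint and timing values are assumptions
- name: wait for cluster nodes up
  uri:
    url: "https://{{ ansible_host }}/api/initial-data"
    method: GET
    validate_certs: false
  register: api_result
  until: api_result.status == 200
  retries: 30
  delay: 30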

There’s one important thing to know before we run the playbook. With an AVI subscription you get custom container images with a predefined default password, which makes it easier to fully automate the cluster setup. You find the default password variable in group_vars/all.yml, where you also set whether the password should be changed.
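
As a hedged sketch, the relevant part of group_vars/all.yml could look like the following; apart from avi_change_password, which the cluster setup playbook references later, the variable names and values here are assumptions for illustration:

# Illustrative values only; apart from avi_change_password the names are assumptions
avi_default_password: "CHANGE_ME_DEFAULT"   # default admin password of the subscription image (assumed variable name)
avi_admin_password: "CHANGE_ME_NEW"         # new admin password to set when changing it (assumed variable name)
avi_change_password: true                   # referenced later by the cluster setup playbook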

Let’s execute the Ansible playbook; it takes a bit of time for the three nodes to boot up:

berndonline@lab:~/avi-lab-vagrant$ ansible-playbook ../avi-lab-provision/playbooks/avi-controller-install.yml

PLAY [avi-controller] *********************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************
ok: [avi-controller-3]
ok: [avi-controller-2]
ok: [avi-controller-1]

TASK [ansible-role-avicontroller : Avi Controller | Deployment] ***************************************************************************************************
included: /home/berndonline/avi-lab-provision/roles/ansible-role-avicontroller/tasks/docker/main.yml for avi-controller-1, avi-controller-2, avi-controller-3

TASK [ansible-role-avicontroller : Avi Controller | Services | systemd | Check if Avi Controller installed] *******************************************************
included: /home/berndonline/avi-lab-provision/roles/ansible-role-avicontroller/tasks/docker/services/systemd/check.yml for avi-controller-1, avi-controller-2, avi-controller-3

TASK [ansible-role-avicontroller : Avi Controller | Check if Avi Controller installed] ****************************************************************************
ok: [avi-controller-3]
ok: [avi-controller-2]
ok: [avi-controller-1]

TASK [ansible-role-avicontroller : Avi Controller | Services | init.d | Check if Avi Controller installed] ********************************************************
skipping: [avi-controller-1]
skipping: [avi-controller-2]
skipping: [avi-controller-3]

TASK [ansible-role-avicontroller : Avi Controller | Check minimum requirements] ***********************************************************************************
included: /home/berndonline/avi-lab-provision/roles/ansible-role-avicontroller/tasks/docker/requirements.yml for avi-controller-1, avi-controller-2, avi-controller-3

TASK [ansible-role-avicontroller : Avi Controller | Requirements | Check for docker] ******************************************************************************
ok: [avi-controller-2]
ok: [avi-controller-3]
ok: [avi-controller-1]

...

TASK [avi-post-controller : wait for cluster nodes up] ************************************************************************************************************
FAILED - RETRYING: wait for cluster nodes up (30 retries left).
FAILED - RETRYING: wait for cluster nodes up (30 retries left).
FAILED - RETRYING: wait for cluster nodes up (30 retries left).

...

FAILED - RETRYING: wait for cluster nodes up (7 retries left).
FAILED - RETRYING: wait for cluster nodes up (8 retries left).
FAILED - RETRYING: wait for cluster nodes up (7 retries left).
FAILED - RETRYING: wait for cluster nodes up (7 retries left).
ok: [avi-controller-2]
ok: [avi-controller-3]
ok: [avi-controller-1]

PLAY RECAP ********************************************************************************************************************************************************
avi-controller-1           : ok=36   changed=6    unreachable=0    failed=0
avi-controller-2           : ok=35   changed=5    unreachable=0    failed=0
avi-controller-3           : ok=35   changed=5    unreachable=0    failed=0

berndonline@lab:~/avi-lab-vagrant$

We are not finished yet: we still need to set basic settings like NTP and DNS, and configure the three-node AVI controller cluster with another playbook:

---
- hosts: localhost
  connection: local
  roles:
    - {role: avi-cluster-setup, become: false}
    - {role: avi-change-password, become: false, when: avi_change_password == true}

The first role uses the REST API to make the configuration changes and requires the AVI Ansible SDK role. For this reason it is very useful to use the custom subscription images, because you know the default password; otherwise you need to modify the main setup.json file.
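
To give an idea of what happens under the hood, here is a simplified sketch of a cluster configuration task done directly with Ansible's uri module against the controller REST API. The actual role uses the Avi Ansible SDK modules and setup.json; the request body below is an assumption reduced to the essentials:

# Simplified sketch; the real role relies on the avisdk modules and the setup.json file
- name: configure AVI cluster
  uri:
    url: "https://{{ hostvars['avi-controller-1'].ansible_host }}/api/cluster"
    method: PUT
    user: admin
    password: "{{ avi_default_password }}"
    force_basic_auth: true
    validate_certs: false
    body_format: json
    body:
      name: avi-controller-cluster
      nodes:
        - ip: { type: V4, addr: "{{ hostvars['avi-controller-1'].ansible_host }}" }
        - ip: { type: V4, addr: "{{ hostvars['avi-controller-2'].ansible_host }}" }
        - ip: { type: V4, addr: "{{ hostvars['avi-controller-3'].ansible_host }}" }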

Let’s run the AVI cluster setup playbook:

berndonline@lab:~/avi-lab-vagrant$ ansible-playbook ../avi-lab-provision/playbooks/avi-cluster-setup.yml

PLAY [localhost] **************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************
ok: [localhost]

TASK [ansible-role-avisdk : Checking if avisdk python library is present] *****************************************************************************************
ok: [localhost] => {
    "msg": "Please make sure avisdk is installed via pip. 'pip install avisdk --upgrade'"
}

TASK [avi-cluster-setup : set AVI dns and ntp facts] **************************************************************************************************************
ok: [localhost]

TASK [avi-cluster-setup : set AVI cluster facts] ******************************************************************************************************************
ok: [localhost]

TASK [avi-cluster-setup : configure ntp and dns controller nodes] *************************************************************************************************
changed: [localhost]

TASK [avi-cluster-setup : configure AVI cluster] ******************************************************************************************************************
changed: [localhost]

TASK [avi-cluster-setup : wait for cluster become active] *********************************************************************************************************
FAILED - RETRYING: wait for cluster become active (30 retries left).
FAILED - RETRYING: wait for cluster become active (29 retries left).
FAILED - RETRYING: wait for cluster become active (28 retries left).

...

FAILED - RETRYING: wait for cluster become active (14 retries left).
FAILED - RETRYING: wait for cluster become active (13 retries left).
FAILED - RETRYING: wait for cluster become active (12 retries left).
ok: [localhost]

TASK [avi-change-password : change default admin password on cluster build when subscription] *********************************************************************
skipping: [localhost]

PLAY RECAP ********************************************************************************************************************************************************
localhost                  : ok=7    changed=2    unreachable=0    failed=0

berndonline@lab:~/avi-lab-vagrant$

We can check in the web console whether the cluster has booted and is correctly set up.

Last but not least we need the Ansible playbook for the AVI Service Engine installation, which relies on the official AVI Ansible SE role:

---
- hosts: avi-se
  user: '{{ ansible_ssh_user }}'
  gather_facts: "true"
  roles:
    - {role: ansible-role-avise, become: true}
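
For the Service Engines to register themselves with the controller cluster, the role needs to know the controller address and credentials. A sketch of the corresponding group variables is shown below; the se_master_ctl_* names follow the official role's naming as far as I know, but treat them as assumptions and check the role defaults:

# Sketch; verify the variable names against the ansible-role-avise defaults
se_master_ctl_ip: "{{ hostvars['avi-controller-1'].ansible_host }}"
se_master_ctl_username: admin
se_master_ctl_password: "{{ avi_admin_password }}"   # assumed variable from group_vars/all.yml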

Let’s run the playbook for the Service Engine installation:

berndonline@lab:~/avi-lab-vagrant$ ansible-playbook ../avi-lab-provision/playbooks/avi-se-install.yml

PLAY [avi-se] *****************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************
ok: [avi-se-2]
ok: [avi-se-1]

TASK [ansible-role-avisdk : Checking if avisdk python library is present] *****************************************************************************************
ok: [avi-se-1] => {
    "msg": "Please make sure avisdk is installed via pip. 'pip install avisdk --upgrade'"
}
ok: [avi-se-2] => {
    "msg": "Please make sure avisdk is installed via pip. 'pip install avisdk --upgrade'"
}

TASK [ansible-role-avise : Avi SE | Set facts] ********************************************************************************************************************
skipping: [avi-se-1]
skipping: [avi-se-2]

TASK [ansible-role-avise : Avi SE | Deployment] *******************************************************************************************************************
included: /home/berndonline/avi-lab-provision/roles/ansible-role-avise/tasks/docker/main.yml for avi-se-1, avi-se-2

TASK [ansible-role-avise : Avi SE | Check minimum requirements] ***************************************************************************************************
included: /home/berndonline/avi-lab-provision/roles/ansible-role-avise/tasks/docker/requirements.yml for avi-se-1, avi-se-2

TASK [ansible-role-avise : Avi SE | Requirements | Check for docker] **********************************************************************************************
ok: [avi-se-2]
ok: [avi-se-1]

TASK [ansible-role-avise : Avi SE | Requirements | Set facts] *****************************************************************************************************
ok: [avi-se-1]
ok: [avi-se-2]

TASK [ansible-role-avise : Avi SE | Requirements | Validate Parameters] *******************************************************************************************
ok: [avi-se-1] => {
    "changed": false,
    "msg": "All assertions passed"
}
ok: [avi-se-2] => {
    "changed": false,
    "msg": "All assertions passed"
}

...

TASK [ansible-role-avise : Avi SE | Services | systemd | Start the service since it's not running] ****************************************************************
changed: [avi-se-1]
changed: [avi-se-2]

RUNNING HANDLER [ansible-role-avise : Avi SE | Services | systemd | Daemon reload] ********************************************************************************
ok: [avi-se-2]
ok: [avi-se-1]

RUNNING HANDLER [ansible-role-avise : Avi SE | Services | Restart the avise service] ******************************************************************************
changed: [avi-se-2]
changed: [avi-se-1]

PLAY RECAP ********************************************************************************************************************************************************
avi-se-1                   : ok=47   changed=7    unreachable=0    failed=0
avi-se-2                   : ok=47   changed=7    unreachable=0    failed=0

berndonline@lab:~/avi-lab-vagrant$

After a few minutes you will see the AVI Service Engines automatically register with the controller cluster, and you are ready to start with the detailed load balancing configuration.
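
If you prefer the API over the web console, a quick way to verify the registration is to query the serviceengine collection on the controller, for example from the Ansible control host. The following tasks are a hedged sketch and assume basic authentication is enabled for the API:

# Sketch; lists the Service Engines known to the controller via the REST API
- name: list registered service engines
  uri:
    url: "https://{{ hostvars['avi-controller-1'].ansible_host }}/api/serviceengine"
    method: GET
    user: admin
    password: "{{ avi_admin_password }}"
    force_basic_auth: true
    validate_certs: false
    return_content: true
  register: se_list

- name: show service engine names
  debug:
    msg: "{{ se_list.json.results | map(attribute='name') | list }}"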

Please share your feedback and leave a comment.

Software defined Load Balancing with AVI Networks

Throughout my career I have used various load balancing platforms, from commercial products like F5 or Citrix NetScaler to open-source software like HAProxy. All of them do their job of balancing traffic between servers, but the biggest problem is scalability: yes, you can deploy more load balancers, but the configuration is statically bound to the appliance.

AVI Networks has a very interesting concept that moves away from the traditional idea of load balancing and solves this problem by decoupling the control plane from the data plane. The load balancing Service Engines basically just forward traffic and can be scaled out more easily when needed. Another nice advantage is that these Service Engines are container based and can run on basically every type of infrastructure, from bare metal and VMs to modern container platforms like Kubernetes or OpenShift.

All the AVI components run as container images on any type of infrastructure or platform architecture, which makes the deployment very easy to run on-premise or in the cloud.

The Service Engines on hypervisor or bare-metal servers need network cards that support Intel’s DPDK for better packet forwarding. Have a look at the AVI Linux server deployment guide: https://avinetworks.com/docs/latest/installing-avi-vantage-for-a-linux-server-cloud/

Here now is a basic step-by-step guide on how to install the AVI Vantage Controller and additional Service Engines. Have a look at the AVI knowledge base, where the installation is explained in detail: https://avinetworks.com/docs/latest/installing-avi-vantage-for-a-linux-server-cloud/

Here is the link to my Vagrant environment: https://github.com/berndonline/avi-lab-vagrant

Let’s start with the manual AVI Controller installation:

[vagrant@localhost ~]$ sudo ./avi_baremetal_setup.py
AviVantage Version Tag: 17.2.11-9014
Found disk with largest capacity at [/]

Welcome to Avi Initialization Script

Pre-requisites: This script assumes the below utilities are installed:
                  docker (yum -y install docker/apt-get install docker.io)
Supported Vers: OEL - 6.5,6.7,6.9,7.0,7.1,7.2,7.3,7.4 Centos/RHEL - 7.0,7.1,7.2,7.3,7.4, Ubuntu - 14.04,16.04

Do you want to run Avi Controller on this Host [y/n] y
Do you want to run Avi SE on this Host [y/n] n
Enter The Number Of Cores For Avi Controller. Range [4, 4] 4
Please Enter Memory (in GB) for Avi Controller. Range [12, 7]
Please enter directory path for Avi Controller Config (Default [/opt/avi/controller/data/])
Please enter disk size (in GB) for Avi Controller Config (Default [30G]) 10
Do you have separate partition for Avi Controller Metrics ? If yes, please enter directory path, else leave it blank
Do you have separate partition for Avi Controller Client Logs ? If yes, please enter directory path, else leave it blank
Please enter Controller IP (Default [10.255.1.232])
Enter the Controller SSH port. (Default [5098])
Enter the Controller system-internal portal port. (Default [8443])
AviVantage Version Tag: 17.2.11-9014
AviVantage Version Tag: 17.2.11-9014
Run SE           : No
Run Controller   : Yes
Controller Cores : 4
Memory(GB)       : 7
Disk(GB)         : 10
Controller IP    : 10.255.1.232
Disabling Avi Services...
Loading Avi CONTROLLER Image. Please Wait..
Installation Successful. Starting Services..
[vagrant@localhost ~]$
[vagrant@localhost ~]$ sudo systemctl start avicontroller

Or as a single command without interactive mode:

[vagrant@localhost ~]$ sudo ./avi_baremetal_setup.py -c -cd 10 -cc 4 -cm 7 -i 10.255.1.232
AviVantage Version Tag: 17.2.11-9014
Found disk with largest capacity at [/]
AviVantage Version Tag: 17.2.11-9014
AviVantage Version Tag: 17.2.11-9014
Run SE           : No
Run Controller   : Yes
Controller Cores : 4
Memory(GB)       : 7
Disk(GB)         : 10
Controller IP    : 10.255.1.232
Disabling Avi Services...
Loading Avi CONTROLLER Image. Please Wait..
Installation Successful. Starting Services..
[vagrant@localhost ~]$
[vagrant@localhost ~]$ sudo systemctl start avicontroller

The installer basically loads a container image onto the server, which runs the AVI Controller:

[vagrant@localhost ~]$ sudo docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED              STATUS              PORTS                                                                                                                                    NAMES
c689435f74fd        avinetworks/controller:17.2.11-9014                   "/opt/avi/scripts/do…"   About a minute ago   Up About a minute   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:5054->5054/tcp, 0.0.0.0:5098->5098/tcp, 0.0.0.0:8443->8443/tcp, 0.0.0.0:161->161/udp   avicontroller
[vagrant@localhost ~]$

Next you can connect via the web console to change the password and finalise the initial setup of DNS, NTP and SMTP.

When you get to the Orchestrator Integration menu, you can enter the details for the controller to install additional Service Engines.

In the meantime the AVI Controller installs the specified Service Engines in the background; once this is completed, they automatically appear under the Infrastructure menu.

Like the AVI Controller, the Service Engines run as a container image:

[vagrant@localhost ~]$ sudo docker ps
CONTAINER ID        IMAGE                                         COMMAND                  CREATED             STATUS              PORTS               NAMES
2c6b207ed376        avinetworks/se:17.2.11-9014                   "/opt/avi/scripts/do…"   51 seconds ago      Up 50 seconds                           avise
[vagrant@localhost ~]$

The next article will be about automatically deploying the AVI Controller and Service Engines via Ansible, and looking into how to integrate AVI with OpenShift.

Please share your feedback and leave a comment.

Citrix NetScaler (Update)

Almost 3 years ago I evaluated and implemented F5 BIG-IP 8950 load balancers for my previous company; now, for my new company, I am starting to implement Citrix NetScaler VPX for our Windows infrastructure (Lync, Exchange and Citrix). Here is a short overview of how it is integrated into the two data centres.

From the first look I like the NetScaler; the CLI is a bit easier to understand, I think, but we will see how it goes over the next weeks regarding balancing compared to F5 😉

I like the policy-based routing implementation on the NetScaler. Here is a short example:

add ns pbr mgmt-access ALLOW -srcIP = 10.1.0.200 -destIP = 192.168.0.1-192.168.0.254 -nextHop 10.1.0.254 -priority 10

Traffic from 10.1.0.200 towards the range 192.168.0.1-192.168.0.254 will be routed via the gateway 10.1.0.254, even if the configured default gateway points in another direction.

In my setup you need to configure policy-based routing when servers in the Windows backend network try to access virtual server IPs in the LB-Transfer network; otherwise you get asymmetric routing.

add ns pbr VIP-WIND-DC02 ALLOW -srcIP = 10.2.0.1-10.2.0.100 -destIP = 10.1.0.1-10.1.0.254 -nextHop 10.2.0.254 -priority 11