Ansible Playbook for deploying AVI Controller nodes and Service Engines

Following my first blog post about Software defined Load Balancing with AVI Networks, here is how to automatically deploy the AVI Controller nodes and Service Engines via Ansible.

Here are the links to my repositories: the AVI Vagrant environment (https://github.com/berndonline/avi-lab-vagrant) and the AVI Ansible playbooks (https://github.com/berndonline/avi-lab-provision).

Make sure that your Vagrant environment is running:

[email protected]:~/avi-lab-vagrant$ vagrant status
Current machine states:

avi-controller-1          running (libvirt)
avi-controller-2          running (libvirt)
avi-controller-3          running (libvirt)
avi-se-1                  running (libvirt)
avi-se-2                  running (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

I needed to modify the ansible.cfg to integrate a filter plugin:

[defaults]
inventory = ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
host_key_checking=False

library = /home/berndonline/avi-lab-provision/lib
filter_plugins = /home/berndonline/avi-lab-provision/lib/filter_plugins

The controller installation itself is very simple: I took it from the official AVI Ansible controller role and added a second role that checks whether the controller nodes have booted successfully:

---
- hosts: avi-controller
  user: '{{ ansible_ssh_user }}'
  gather_facts: "true"
  roles:
    - {role: ansible-role-avicontroller, become: true}
    - {role: avi-post-controller, become: false}

There’s one important thing to know before we run the playbook: with an AVI subscription you get custom container images with a predefined default password, which makes it easier to fully automate the cluster setup. You find the default password variable in group_vars/all.yml, where you also set whether the password should be changed.
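
As a rough illustration, the group_vars/all.yml could look something like the sketch below. Only avi_change_password is referenced by the playbooks in this post; the other variable names and values are just placeholders:

---
# group_vars/all.yml - illustrative sketch only
avi_default_password: avi123        # predefined default password from the subscription image (placeholder value)
avi_admin_password: MyNewPassword   # new password to set if it should be changed (placeholder)
avi_change_password: true           # whether the default admin password should be changed after the cluster setup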

Let’s execute the Ansible playbook; it takes a bit of time for the three nodes to boot up:

[email protected]:~/avi-lab-vagrant$ ansible-playbook ../avi-lab-provision/playbooks/avi-controller-install.yml

PLAY [avi-controller] *********************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************
ok: [avi-controller-3]
ok: [avi-controller-2]
ok: [avi-controller-1]

TASK [ansible-role-avicontroller : Avi Controller | Deployment] ***************************************************************************************************
included: /home/berndonline/avi-lab-provision/roles/ansible-role-avicontroller/tasks/docker/main.yml for avi-controller-1, avi-controller-2, avi-controller-3

TASK [ansible-role-avicontroller : Avi Controller | Services | systemd | Check if Avi Controller installed] *******************************************************
included: /home/berndonline/avi-lab-provision/roles/ansible-role-avicontroller/tasks/docker/services/systemd/check.yml for avi-controller-1, avi-controller-2, avi-controller-3

TASK [ansible-role-avicontroller : Avi Controller | Check if Avi Controller installed] ****************************************************************************
ok: [avi-controller-3]
ok: [avi-controller-2]
ok: [avi-controller-1]

TASK [ansible-role-avicontroller : Avi Controller | Services | init.d | Check if Avi Controller installed] ********************************************************
skipping: [avi-controller-1]
skipping: [avi-controller-2]
skipping: [avi-controller-3]

TASK [ansible-role-avicontroller : Avi Controller | Check minimum requirements] ***********************************************************************************
included: /home/berndonline/avi-lab-provision/roles/ansible-role-avicontroller/tasks/docker/requirements.yml for avi-controller-1, avi-controller-2, avi-controller-3

TASK [ansible-role-avicontroller : Avi Controller | Requirements | Check for docker] ******************************************************************************
ok: [avi-controller-2]
ok: [avi-controller-3]
ok: [avi-controller-1]

...

TASK [avi-post-controller : wait for cluster nodes up] ************************************************************************************************************
FAILED - RETRYING: wait for cluster nodes up (30 retries left).
FAILED - RETRYING: wait for cluster nodes up (30 retries left).
FAILED - RETRYING: wait for cluster nodes up (30 retries left).

...

FAILED - RETRYING: wait for cluster nodes up (7 retries left).
FAILED - RETRYING: wait for cluster nodes up (8 retries left).
FAILED - RETRYING: wait for cluster nodes up (7 retries left).
FAILED - RETRYING: wait for cluster nodes up (7 retries left).
ok: [avi-controller-2]
ok: [avi-controller-3]
ok: [avi-controller-1]

PLAY RECAP ********************************************************************************************************************************************************
avi-controller-1           : ok=36   changed=6    unreachable=0    failed=0
avi-controller-2           : ok=35   changed=5    unreachable=0    failed=0
avi-controller-3           : ok=35   changed=5    unreachable=0    failed=0

[email protected]:~/avi-lab-vagrant$

We are not finished yet: we still need to set basic settings like NTP and DNS, and configure the AVI three-node controller cluster with another playbook:

---
- hosts: localhost
  connection: local
  roles:
    - {role: avi-cluster-setup, become: false}
    - {role: avi-change-password, become: false, when: avi_change_password == true}

The first role uses the REST API to make the configuration changes and requires the AVI Ansible SDK role. This is another reason the custom subscription images are very useful: you already know the default password; otherwise you would need to modify the main setup.json file.
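
As a small, hedged sketch of what such a REST API task can look like, the following playbook uses the avi_api_session module from the Avi Ansible SDK to read the current cluster configuration; the controller IP, credentials and variable names are placeholders:

---
- hosts: localhost
  connection: local
  tasks:
    - name: Read the current cluster configuration via the REST API
      avi_api_session:
        controller: 10.255.1.232               # placeholder controller IP
        username: admin
        password: "{{ avi_default_password }}"  # placeholder variable
        http_method: get
        path: cluster
      register: cluster_config

    - name: Show the cluster configuration
      debug:
        var: cluster_config.obj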

Let’s run the AVI cluster setup playbook:

[email protected]:~/avi-lab-vagrant$ ansible-playbook ../avi-lab-provision/playbooks/avi-cluster-setup.yml

PLAY [localhost] **************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************
ok: [localhost]

TASK [ansible-role-avisdk : Checking if avisdk python library is present] *****************************************************************************************
ok: [localhost] => {
    "msg": "Please make sure avisdk is installed via pip. 'pip install avisdk --upgrade'"
}

TASK [avi-cluster-setup : set AVI dns and ntp facts] **************************************************************************************************************
ok: [localhost]

TASK [avi-cluster-setup : set AVI cluster facts] ******************************************************************************************************************
ok: [localhost]

TASK [avi-cluster-setup : configure ntp and dns controller nodes] *************************************************************************************************
changed: [localhost]

TASK [avi-cluster-setup : configure AVI cluster] ******************************************************************************************************************
changed: [localhost]

TASK [avi-cluster-setup : wait for cluster become active] *********************************************************************************************************
FAILED - RETRYING: wait for cluster become active (30 retries left).
FAILED - RETRYING: wait for cluster become active (29 retries left).
FAILED - RETRYING: wait for cluster become active (28 retries left).

...

FAILED - RETRYING: wait for cluster become active (14 retries left).
FAILED - RETRYING: wait for cluster become active (13 retries left).
FAILED - RETRYING: wait for cluster become active (12 retries left).
ok: [localhost]

TASK [avi-change-password : change default admin password on cluster build when subscription] *********************************************************************
skipping: [localhost]

PLAY RECAP ********************************************************************************************************************************************************
localhost                  : ok=7    changed=2    unreachable=0    failed=0

[email protected]:~/avi-lab-vagrant$

We can check in the web console whether the cluster has booted and is correctly set up.

Last but not least, we need the Ansible playbook for the AVI Service Engine installation, which relies on the official AVI Ansible SE role:

---
- hosts: avi-se
  user: '{{ ansible_ssh_user }}'
  gather_facts: "true"
  roles:
    - {role: ansible-role-avise, become: true}

Let’s run the playbook for the Service Engine installation:

[email protected]:~/avi-lab-vagrant$ ansible-playbook ../avi-lab-provision/playbooks/avi-se-install.yml

PLAY [avi-se] *****************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************
ok: [avi-se-2]
ok: [avi-se-1]

TASK [ansible-role-avisdk : Checking if avisdk python library is present] *****************************************************************************************
ok: [avi-se-1] => {
    "msg": "Please make sure avisdk is installed via pip. 'pip install avisdk --upgrade'"
}
ok: [avi-se-2] => {
    "msg": "Please make sure avisdk is installed via pip. 'pip install avisdk --upgrade'"
}

TASK [ansible-role-avise : Avi SE | Set facts] ********************************************************************************************************************
skipping: [avi-se-1]
skipping: [avi-se-2]

TASK [ansible-role-avise : Avi SE | Deployment] *******************************************************************************************************************
included: /home/berndonline/avi-lab-provision/roles/ansible-role-avise/tasks/docker/main.yml for avi-se-1, avi-se-2

TASK [ansible-role-avise : Avi SE | Check minimum requirements] ***************************************************************************************************
included: /home/berndonline/avi-lab-provision/roles/ansible-role-avise/tasks/docker/requirements.yml for avi-se-1, avi-se-2

TASK [ansible-role-avise : Avi SE | Requirements | Check for docker] **********************************************************************************************
ok: [avi-se-2]
ok: [avi-se-1]

TASK [ansible-role-avise : Avi SE | Requirements | Set facts] *****************************************************************************************************
ok: [avi-se-1]
ok: [avi-se-2]

TASK [ansible-role-avise : Avi SE | Requirements | Validate Parameters] *******************************************************************************************
ok: [avi-se-1] => {
    "changed": false,
    "msg": "All assertions passed"
}
ok: [avi-se-2] => {
    "changed": false,
    "msg": "All assertions passed"
}

...

TASK [ansible-role-avise : Avi SE | Services | systemd | Start the service since it's not running] ****************************************************************
changed: [avi-se-1]
changed: [avi-se-2]

RUNNING HANDLER [ansible-role-avise : Avi SE | Services | systemd | Daemon reload] ********************************************************************************
ok: [avi-se-2]
ok: [avi-se-1]

RUNNING HANDLER [ansible-role-avise : Avi SE | Services | Restart the avise service] ******************************************************************************
changed: [avi-se-2]
changed: [avi-se-1]

PLAY RECAP ********************************************************************************************************************************************************
avi-se-1                   : ok=47   changed=7    unreachable=0    failed=0
avi-se-2                   : ok=47   changed=7    unreachable=0    failed=0

[email protected]:~/avi-lab-vagrant$

After a few minutes you will see the AVI Service Engines automatically register with the controller cluster, and you are ready to start with the detailed load balancing configuration.
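
If you prefer to verify the registration from the command line instead of the web console, a small sketch like the following can query the Service Engines via the REST API; again, the controller IP, credentials and variable names are placeholders:

---
- hosts: localhost
  connection: local
  tasks:
    - name: List the Service Engines registered with the controller
      avi_api_session:
        controller: 10.255.1.232              # placeholder controller IP
        username: admin
        password: "{{ avi_admin_password }}"   # placeholder variable
        http_method: get
        path: serviceengine
      register: se_list

    - name: Show the registered Service Engines
      debug:
        var: se_list.obj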

Please share your feedback and leave a comment.

Software defined Load Balancing with AVI Networks

Throughout my career I have used various load balancing platforms, from commercial products like F5 or Citrix NetScaler to open source software like HAProxy. All of them do their job of balancing traffic between servers, but the biggest problem is scalability: yes, you can deploy more load balancers, but the configuration is statically bound to the appliance.

AVI Networks has a very interesting concept of moving away from the traditional idea of load balancing and solves this problem by decoupling the control plane from the data plane: the load balancing Service Engines basically just forward traffic and can be scaled out more easily when needed. Another nice advantage is that these Service Engines are container based and can run on basically every type of infrastructure, from bare metal and VMs to modern container platforms like Kubernetes or OpenShift.

All the AVI components run as container images on any type of infrastructure or platform architecture, which makes the deployment very easy, whether on-premises or in the cloud.

The Service Engines on hypervisor or bare-metal servers need network cards that support Intel’s DPDK for better packet forwarding. Have a look at the AVI Linux server deployment guide: https://avinetworks.com/docs/latest/installing-avi-vantage-for-a-linux-server-cloud/

Here now is a basic step-by-step guide on how to install the AVI Vantage Controller and additional Service Engines. Have a look at the AVI knowledge base, where the installation is explained in detail: https://avinetworks.com/docs/latest/installing-avi-vantage-for-a-linux-server-cloud/

Here is the link to my Vagrant environment: https://github.com/berndonline/avi-lab-vagrant

Let’s start with the manual AVI Controller installation:

[[email protected] ~]$ sudo ./avi_baremetal_setup.py
AviVantage Version Tag: 17.2.11-9014
Found disk with largest capacity at [/]

Welcome to Avi Initialization Script

Pre-requisites: This script assumes the below utilities are installed:
                  docker (yum -y install docker/apt-get install docker.io)
Supported Vers: OEL - 6.5,6.7,6.9,7.0,7.1,7.2,7.3,7.4 Centos/RHEL - 7.0,7.1,7.2,7.3,7.4, Ubuntu - 14.04,16.04

Do you want to run Avi Controller on this Host [y/n] y
Do you want to run Avi SE on this Host [y/n] n
Enter The Number Of Cores For Avi Controller. Range [4, 4] 4
Please Enter Memory (in GB) for Avi Controller. Range [12, 7]
Please enter directory path for Avi Controller Config (Default [/opt/avi/controller/data/])
Please enter disk size (in GB) for Avi Controller Config (Default [30G]) 10
Do you have separate partition for Avi Controller Metrics ? If yes, please enter directory path, else leave it blank
Do you have separate partition for Avi Controller Client Logs ? If yes, please enter directory path, else leave it blank
Please enter Controller IP (Default [10.255.1.232])
Enter the Controller SSH port. (Default [5098])
Enter the Controller system-internal portal port. (Default [8443])
AviVantage Version Tag: 17.2.11-9014
AviVantage Version Tag: 17.2.11-9014
Run SE           : No
Run Controller   : Yes
Controller Cores : 4
Memory(GB)       : 7
Disk(GB)         : 10
Controller IP    : 10.255.1.232
Disabling Avi Services...
Loading Avi CONTROLLER Image. Please Wait..
Installation Successful. Starting Services..
[[email protected] ~]$
[[email protected] ~]$ sudo systemctl start avicontroller

Or as a single command without interactive mode:

[[email protected] ~]$ sudo ./avi_baremetal_setup.py -c -cd 10 -cc 4 -cm 7 -i 10.255.1.232
AviVantage Version Tag: 17.2.11-9014
Found disk with largest capacity at [/]
AviVantage Version Tag: 17.2.11-9014
AviVantage Version Tag: 17.2.11-9014
Run SE           : No
Run Controller   : Yes
Controller Cores : 4
Memory(GB)       : 7
Disk(GB)         : 10
Controller IP    : 10.255.1.232
Disabling Avi Services...
Loading Avi CONTROLLER Image. Please Wait..
Installation Successful. Starting Services..
[[email protected] ~]$
[[email protected] ~]$ sudo systemctl start avicontroller

The installer basically installs a container image on the server, which runs the AVI Controller:

[[email protected] ~]$ sudo docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED              STATUS              PORTS                                                                                                                                    NAMES
c689435f74fd        avinetworks/controller:17.2.11-9014                   "/opt/avi/scripts/do…"   About a minute ago   Up About a minute   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:5054->5054/tcp, 0.0.0.0:5098->5098/tcp, 0.0.0.0:8443->8443/tcp, 0.0.0.0:161->161/udp   avicontroller
[[email protected] ~]$

Next you can connect via the web console to change the password and finalise the configuration of DNS, NTP and SMTP.

When you get to the Orchestrator Integration menu, you can put in the details for the controller to install the additional Service Engines.

In the meantime, the AVI Controller installs the specified Service Engines in the background; once this is completed, they automatically appear under the Infrastructure menu.

Like the AVI Controller, the Service Engines run as container images:

[[email protected] ~]$ sudo docker ps
CONTAINER ID        IMAGE                                         COMMAND                  CREATED             STATUS              PORTS               NAMES
2c6b207ed376        avinetworks/se:17.2.11-9014                   "/opt/avi/scripts/do…"   51 seconds ago      Up 50 seconds                           avise
[[email protected] ~]$

The next article will be about automatically deploying the AVI Controller and Service Engines via Ansible, and looking into how to integrate AVI with OpenShift.

Please share your feedback and leave a comment.

NetScaler HTTP-to-HTTPS Redirect Configuration Example

Here is a quick and easy example of how to redirect HTTP to HTTPS. You can also do the redirect within the virtual server, but then the virtual server is shown as down.

The following example is a nicer way to implement the redirect.

add responder action responder-HTTP-HTTPS redirect "\"https://\"+http.REQ.HEADER(\"Host\").HTTP_HEADER_SAFE+http.REQ.URL.PATH_AND_QUERY.HTTP_URL_SAFE"
add responder policy responder-POLICY-EXCHANGE "http.REQ.HOSTNAME.EQ(\"owa.domain.com\") && client.TCP.DSTPORT.EQ(80)" responder-HTTP-HTTPS
set responder param -undefAction NOOP

add serviceGroup service-EXCHANGE-OWA_80 HTTP -maxClient 0 -maxReq 0 -cip DISABLED -usip YES -useproxyport YES -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP YES -appflowLog DISABLED
bind serviceGroup service-EXCHANGE-OWA_80 EXCHANGE-CAS01 80 -CustomServerID "\"None\""
add lb vserver vserver-EXCHANGE-OWA_80 HTTP 192.168.0.1 80 -persistenceType NONE -cltTimeout 180
bind lb vserver vserver-EXCHANGE-OWA_80 service-EXCHANGE-OWA_80
bind lb vserver vserver-EXCHANGE-OWA_80 -policyName responder-POLICY-EXCHANGE -priority 100 -gotoPriorityExpression END -type REQUEST

Howto Update Citrix NetScaler Firmware

Log on to the NetScaler appliance with an SSH client, such as PuTTY, using the nsroot credentials.

Switch to the shell prompt.

Last login: Tue Mar  4 00:03:13 2014 from 10.49.9.110
Copyright (c) 1980, 1983, 1986, 1988, 1990, 1991, 1993, 1994
        The Regents of the University of California.  All rights reserved.
 Done
> 
> shell
Copyright (c) 1992-2008 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
[email protected]#

Run the following command to change to the default installation directory:

[email protected]# cd /var/nsinstall/
[email protected]# ls
10.1nsinstall   installns_state
[email protected]# cd 10.1nsinstall/
[email protected]# ls
build_123.9
[email protected]# mkdir build_124.13

Upload the new firmware file build-10.1-124.13_nc.tgz into the build directory.

Run the following command to extract the firmware:

[email protected]# cd build_124.13/
[email protected]# ls
build-10.1-124.13_nc.tgz
[email protected]# tar xzvf build-10.1-124.13_nc.tgz 
.ns.version
ns-10.1-124.13.gz
ns-10.1-124.13.sha2
installns
nsconfig
bootloader.tgz
help.tgz
CitrixNetScalerManagementPackSCOM2012.msi
CitrixNetScalerLoadBalancer.msi
BaltimoreCyberTrustRoot.cert
BaltimoreCyberTrustRoot_CH.cert
Citrix_Access_Gateway.dmg
macversion.txt
apidoc.tgz
NSConfig.wsdl
NSStat.wsdl
ns-10.1-124.13-gui.tar
ns-10.1-124.13-nitro-java.tgz
ns-10.1-124.13-nitro-csharp.tgz
ns-10.1-124.13-nitro-rest.tgz
ns-10.1-124.13-nitro-perl-samples.tgz
ns-10.1-124.13-nitro-python-samples.tgz
vmware-tools.tgz

Run the following command to install the software you have downloaded:

[email protected]# ./installns 

installns version (10.1-124.13) kernel (ns-10.1-124.13.gz)

  The Netscaler version 10.1-124.13 checksum file is located on 
  http://www.mycitrix.com under Support > Downloads > Citrix NetScaler.
  Select the Release 10.1-124.13 link and expand the "Show Documentation" link
  to view the SHA2 checksum file for build 10.1-124.13.

  There may be a pause of up to 3 minutes while data is written to the flash.
  Do not interrupt the installation process once it has begun.

Installation will proceed in 5 seconds, CTRL-C to abort
Installation is starting ...
VPX platform. Skipping CallHome checks.

Copying ns-10.1-124.13.gz to /flash/ns-10.1-124.13.gz ... 
.......................................................
Installing XML API documentation...
Installing NSConfig.wsdl...
Installing NSStat.wsdl...
Installing online help...
Installing SCOM Management Pack...
Installing LoadBalancer Pack...
Installing GUI...
Installing Mac binary and Mac version file...
Installing NITRO...
Installing Jazz certificate ...
Installing Call Home certificate ...
Creating after upgrade script ...

Installation has completed.

Reboot NOW? [Y/N] Y
Rebooting ...

NetScaler Global Server Load Balancing (GSLB) Configuration

It’s been some months since I started working with Citrix NetScaler, and so far I really like it. I will not go into depth on how Global Server Load Balancing (GSLB) works and will only explain my configuration. I use Exchange OWA as an example for GSLB; I will also not explain how to set up a virtual server for Exchange OWA, so please have a look at my previous blog post: NetScaler Exchange 2013 Load Balancing.

In my configuration I use the same GSLB virtual server for internal and external access to Exchange OWA. The NetScaler sees whether you are coming from the internal network and gives you a private IP address back; when you are external, you get a public IP address back for the same DNS entry.

Internal GSLB

External GSLB

Before you start, you have to delegate a subdomain in Microsoft DNS or BIND for Global Server Load Balancing on the NetScaler appliance; more information on how to do that can be found here: http://support.citrix.com/article/CTX121713

VPX A

Enable GSLB on the NetScaler in location A and configure the sites and the ADNS service:

enable ns feature GSLB

add gslb site site-A 10.1.0.200
add gslb site site-B 10.2.0.200

set ns rpcNode 10.1.0.200 -password ***key*** -srcIP * -secure YES
set ns rpcNode 10.2.0.200 -password ***key*** -srcIP * -secure YES

add service service-ADNS_53 10.1.0.240 ADNS 53 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip YES -useproxyport NO -sp OFF -cltTimeout 120 -svrTimeout 120 -CustomServerID "\"None\"" -CKA NO -TCPB NO -CMP NO
add service service-ADNS_TCP53 10.1.0.240 ADNS_TCP 53 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip YES -useproxyport YES -sp OFF -cltTimeout 180 -svrTimeout 360 -CustomServerID "\"None\"" -CKA NO -TCPB NO -CMP NO

add dns addRec ns01-a.gslb.domain.com 217.100.100.101
add dns soaRec gslb.domain.com -originServer ns01-a.gslb.domain.com -contact hostmaster.gslb.domain.com 
add dns nsRec gslb.domain.com ns01-a.gslb.domain.com -TTL 300 
add dns zone gslb.domain.com -proxyMode NO

VPX B

Enable GSLB on the NetScaler in location B and configure the sites and the ADNS service:

enable ns feature GSLB

add gslb site site-A 10.1.0.200
add gslb site site-B 10.2.0.200

set ns rpcNode 10.1.0.200 -password ***key*** -srcIP * -secure YES
set ns rpcNode 10.2.0.200 -password ***key*** -srcIP * -secure YES

add service service-ADNS_53 10.2.0.240 ADNS 53 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip YES -useproxyport NO -sp OFF -cltTimeout 120 -svrTimeout 120 -CustomServerID "\"None\"" -CKA NO -TCPB NO -CMP NO
add service service-ADNS_TCP53 10.2.0.240 ADNS_TCP 53 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip YES -useproxyport YES -sp OFF -cltTimeout 180 -svrTimeout 360 -CustomServerID "\"None\"" -CKA NO -TCPB NO -CMP NO

add dns addRec ns01-b.gslb.domain.com 217.100.200.101
add dns soaRec gslb.domain.com -originServer ns01-b.gslb.domain.com -contact hostmaster.gslb.domain.com 
add dns nsRec gslb.domain.com ns01-b.gslb.domain.com -TTL 300 
add dns zone gslb.domain.com -proxyMode NO

VPX A

Configure the GSLB services and virtual server in location A:

add server vserver-EXCHANGE-OWA-A 10.1.0.100
add server vserver-EXCHANGE-OWA-B 10.2.0.100

add gslb vserver vserver-GSLB-EXCHANGE-OWA_443 SSL -backupLBMethod ROUNDROBIN -tolerance 0 -EDR ENABLED -appflowLog DISABLED
set gslb vserver vserver-GSLB-EXCHANGE-OWA_443 -backupLBMethod ROUNDROBIN -tolerance 0 -EDR ENABLED -appflowLog DISABLED

add gslb service service-GSLB-EXCHANGE-OWA-A_443 vserver-EXCHANGE-OWA-A SSL 443 -publicIP 217.100.100.102 -publicPort 443 -maxClient 0 -siteName site-A -cltTimeout 180 -svrTimeout 360 -downStateFlush DISABLED
add gslb service service-GSLB-EXCHANGE-OWA-B_443 vserver-EXCHANGE-OWA-B SSL 443 -publicIP 217.100.200.102 -publicPort 443 -maxClient 0 -siteName site-B -cltTimeout 180 -svrTimeout 360 -downStateFlush DISABLED

bind gslb vserver vserver-GSLB-EXCHANGE-OWA_443 -serviceName service-GSLB-EXCHANGE-OWA-A_443
bind gslb vserver vserver-GSLB-EXCHANGE-OWA_443 -serviceName service-GSLB-EXCHANGE-OWA-B_443
bind gslb vserver vserver-GSLB-EXCHANGE-OWA_443 -domainName owa.gslb.domain.com -TTL 5 -sitedomainTTL 300
bind gslb service service-GSLB-EXCHANGE-OWA-A_443 -monitorName https
bind gslb service service-GSLB-EXCHANGE-OWA-B_443 -monitorName https

VPX B

Configure the GSLB services and virtual server in location B:

add server vserver-EXCHANGE-OWA-A 10.1.0.100
add server vserver-EXCHANGE-OWA-B 10.2.0.100

add gslb vserver vserver-GSLB-EXCHANGE-OWA_443 SSL -backupLBMethod ROUNDROBIN -tolerance 0 -EDR ENABLED -appflowLog DISABLED
set gslb vserver vserver-GSLB-EXCHANGE-OWA_443 -backupLBMethod ROUNDROBIN -tolerance 0 -EDR ENABLED -appflowLog DISABLED

add gslb service service-GSLB-EXCHANGE-OWA-A_443 vserver-EXCHANGE-OWA-A SSL 443 -publicIP 217.100.100.102 -publicPort 443 -maxClient 0 -siteName site-A -cltTimeout 180 -svrTimeout 360 -downStateFlush DISABLED
add gslb service service-GSLB-EXCHANGE-OWA-B_443 vserver-EXCHANGE-OWA-B SSL 443 -publicIP 217.100.200.102 -publicPort 443 -maxClient 0 -siteName site-B -cltTimeout 180 -svrTimeout 360 -downStateFlush DISABLED

bind gslb vserver vserver-GSLB-EXCHANGE-OWA_443 -serviceName service-GSLB-EXCHANGE-OWA-A_443
bind gslb vserver vserver-GSLB-EXCHANGE-OWA_443 -serviceName service-GSLB-EXCHANGE-OWA-B_443
bind gslb vserver vserver-GSLB-EXCHANGE-OWA_443 -domainName owa.gslb.domain.com -TTL 5 -sitedomainTTL 300
bind gslb service service-GSLB-EXCHANGE-OWA-A_443 -monitorName https
bind gslb service service-GSLB-EXCHANGE-OWA-B_443 -monitorName https

Now you need to create a DNS view, because we assigned the public IP addresses to the GSLB services and everybody gets the public IP as the DNS response. With the internal DNS view, internal users get the internal private IP address back.

VPX A

add dns view view-INTERNAL
add dns action action-DNS-INTERNAL ViewName -viewName view-INTERNAL
add dns policy policy-DNS-INTERNAL "client.IP.SRC.IN_SUBNET(10.0.0.0/8)" action-DNS-INTERNAL
bind dns global policy-DNS-INTERNAL 100 -gotoPriorityExpression END -type REQ_DEFAULT

bind gslb service service-GSLB-EXCHANGE-OWA-A_443 -viewName view-INTERNAL 10.1.0.100
bind gslb service service-GSLB-EXCHANGE-OWA-B_443 -viewName view-INTERNAL 10.2.0.100

VPX B

add dns view view-INTERNAL
add dns action action-DNS-INTERNAL ViewName -viewName view-INTERNAL
add dns policy policy-DNS-INTERNAL "client.IP.SRC.IN_SUBNET(10.0.0.0/8)" action-DNS-INTERNAL
bind dns global policy-DNS-INTERNAL 100 -gotoPriorityExpression END -type REQ_DEFAULT

bind gslb service service-GSLB-EXCHANGE-OWA-A_443 -viewName view-INTERNAL 10.1.0.100
bind gslb service service-GSLB-EXCHANGE-OWA-B_443 -viewName view-INTERNAL 10.2.0.100

That’s it for the GSLB configuration, quite easy and straightforward 🙂

Here you find a very detailed PDF from Citrix about GSLB: http://support.citrix.com/servlet/KbServlet/download/22506-102-671576/gslb-primer_FINAL_1019.pdf