F5 iRule for layer 7 balancing

This is an iRule for the F5 BIG-IP that balances on the requested HTTP path, e.g. http://domain.com/appversion1-0 or http://domain.com/appversion2-0 . The benefit is that you can run different server pools under the same domain.

With the iRule you can also reach every individual member and open a status.txt or servername.txt via a path like http://domain.com/monitor/servername.txt?app=appversion1-0&node=1 . This is useful for monitoring, as you can check every single node over the external URL.

I also use the status.txt for an HTTP monitor to see whether a server is online or offline. That way you can take a web server offline without going through the F5 management console.

Send string:
GET /monitor/status.txt HTTP/1.1\r\nHOST:\r\nConnection: Close\r\n\r\n
Receive string (member enabled): ONLINE
Receive disable string (member disabled): OFFLINE
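Outside the F5, the same check can be scripted. Here is a minimal Python sketch; the hostname, the /monitor path, and the query parameters are the examples from above, not fixed names:

```python
import urllib.request

def is_online(body):
    # Same rule as the monitor's receive string: the member counts as
    # enabled when the body contains ONLINE.
    return "ONLINE" in body

def check_member(base_url, app, node):
    """Probe one node through the iRule's monitor path, e.g.
    check_member("http://domain.com", "appversion1-0", 1)."""
    url = "%s/monitor/status.txt?app=%s&node=%d" % (base_url, app, node)
    with urllib.request.urlopen(url, timeout=5) as resp:
        return is_online(resp.read().decode("utf-8", "replace"))
```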

In the servername.txt I write the name of the server.

This is the iRule:

when HTTP_REQUEST {
    switch -glob [string tolower [HTTP::path]] {
        "*appversion1-0*" { pool pool_http80_domain.com-appversion1-0 }
        "*appversion2-0*" { pool pool_http80_domain.com-appversion2-0 }
        "*monitor*" {
            switch [URI::query [HTTP::uri] "app"] {
                appversion1-0 {
                    switch [URI::query [HTTP::uri] "node"] {
                        1 { pool pool_http80_domain.com-appversion1-0 member 10.0.1.1 80 }
                        2 { pool pool_http80_domain.com-appversion1-0 member 10.0.1.2 80 }
                        default { HTTP::respond 200 content "<html><head><title>Member Status</title></head><body>INVALID OR MISSING QUERYSTRING: You must enter the URL in the following format http://domain.com/monitor/(status|servername).txt?app=AppVersionNumber&node=NodeNumber</body></html>" "Content-Type" "text/html" }
                    }
                }
                appversion2-0 {
                    switch [URI::query [HTTP::uri] "node"] {
                        1 { pool pool_http80_domain.com-appversion2-0 member 10.0.2.1 80 }
                        2 { pool pool_http80_domain.com-appversion2-0 member 10.0.2.2 80 }
                        default { HTTP::respond 200 content "<html><head><title>Member Status</title></head><body>INVALID OR MISSING QUERYSTRING: You must enter the URL in the following format http://domain.com/monitor/(status|servername).txt?app=AppVersionNumber&node=NodeNumber</body></html>" "Content-Type" "text/html" }
                    }
                }
                default { HTTP::respond 200 content "<html><head><title>Member Status</title></head><body>INVALID OR MISSING QUERYSTRING: You must enter the URL in the following format http://domain.com/monitor/(status|servername).txt?app=AppVersionNumber&node=NodeNumber</body></html>" "Content-Type" "text/html" }
            }
        }
        default { pool pool_http80_domain.com-appversion1-0 }
    }
}

The default pool is appversion1-0, so every request that doesn't match is forwarded to that pool.
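For readers without a BIG-IP to test on, the routing logic above can be sketched in plain Python. This is only an illustration of the decision tree, not F5 code: `fnmatch` stands in for Tcl's `switch -glob`, and the pool names and member addresses are the ones from the iRule:

```python
from fnmatch import fnmatch
from urllib.parse import parse_qs, urlsplit

# Pool members as listed in the iRule above.
MEMBERS = {
    "appversion1-0": {"1": ("10.0.1.1", 80), "2": ("10.0.1.2", 80)},
    "appversion2-0": {"1": ("10.0.2.1", 80), "2": ("10.0.2.2", 80)},
}

def route(uri):
    """Return (pool, member): member is None for normal pool balancing,
    and pool is "ERROR" where the iRule answers with the help page."""
    parts = urlsplit(uri)
    path = parts.path.lower()
    # Same order as the iRule's switch -glob on HTTP::path.
    for app in ("appversion1-0", "appversion2-0"):
        if fnmatch(path, "*%s*" % app):
            return ("pool_http80_domain.com-" + app, None)
    if fnmatch(path, "*monitor*"):
        query = parse_qs(parts.query)
        app = query.get("app", [""])[0]
        node = query.get("node", [""])[0]
        member = MEMBERS.get(app, {}).get(node)
        if member is None:
            return ("ERROR", None)  # invalid or missing query string
        return ("pool_http80_domain.com-" + app, member)
    # Everything else goes to the default pool.
    return ("pool_http80_domain.com-appversion1-0", None)
```

So a request for /appversion2-0 lands in the appversion2-0 pool, a /monitor request with valid app and node parameters is pinned to one member, and anything else falls through to the default pool.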

F5 BIGIP Software upgrade procedure

Here is a procedure for upgrading the software on an F5 BIG-IP cluster.

  1. Download the latest ISO version for the F5 BIG-IP LTM
  2. Upload the ISO to the standby unit
  3. Install the ISO to a free boot partition on the standby unit
  4. Change the boot location to the new partition; the device reboots automatically.

After 6 to 7 minutes the device boots with the new software version. From now on you won't be able to sync your configuration between the devices anymore, because config sync is only possible within the same software release. F5 does not recommend running a cluster with different software versions.

Now initiate the failover on the active unit, ideally during a low-traffic period. Normally there are no side effects and the failover is not visible to clients, because sessions are mirrored.

In case of problems with the new software version you should be able to switch back to the other cluster member with the old software version at any time.

  1. Once you have verified that everything works after the failover, upgrade the other device
  2. Upload the ISO file
  3. Install the ISO file
  4. Change the boot location

You use the same procedure to install hotfixes on the BIG-IP.
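On the command line, steps 2 to 4 roughly correspond to the following tmsh commands. This is only a sketch: the ISO file name and the volume HD1.2 are placeholders for your environment, so verify the exact syntax against the documentation for your version:

```
# On the standby unit: install the uploaded image into a free volume
(tmos)# install sys software image BIGIP-<version>.iso volume HD1.2

# Watch the installation progress
(tmos)# show sys software status

# Boot into the new volume (the unit reboots)
(tmos)# reboot volume HD1.2

# Later, on the active unit: hand traffic over to the upgraded standby
(tmos)# run sys failover standby
```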

F5 Load Balancer

My company decided to buy two F5 BIG-IP Local Traffic Manager 8950 appliances for our web cluster systems, with a maximum balanced capacity of 20 Gbit/s. We also got the F5 Local Traffic Manager Virtual Edition for our test environment, to integrate it into the test network.

In the beginning the F5 looked very complicated, but after playing around with it and setting things up for some time, it got easier and easier. I only need to take a deeper look at iRules, because this feature allows you to do almost everything with the F5 load balancer.

So for now I can't really write more about it, but I will post some updates soon.

Next week I will integrate the F5 BIG-IP into my company's production network and start setting up the first new Linux web cluster, though only with layer 4 balancing.

Update:

Here is a nice picture of the new F5 BIG-IP boxes in my company's data center:

For my company's new web cluster we set up both layer 4 balanced and layer 7 balanced web server pools.

The layer 7 balancing was necessary because different server pools with different applications had to run under the same domain and were routed based on the requested HTTP/HTTPS path. I will post the iRule for everybody soon.

Here are two interesting resources for more information:

Ask F5

F5 Devcentral


Bug in Cisco Catalyst 2960S

I found a new bug in IOS 12.2(55) on our C2960S switches, which caused high CPU usage and a lot of traceback syslog messages.

Here is the info from the Cisco TAC about the problem:

Symptom:
A C2960S switch logs the following message:

%SUPERVISOR-4-UNEXPECTED: rfd_idx = 56B hwptr 64D75AC queue 8 
-Traceback= 12ECA7C 12EF254 12EF5DC 1382180 137C680 137C628 13821F0 1383128 137C730 184AA64 184AA3C 1848974

Conditions:
WS-C2960S running IOS earlier than 12.2(58)SE

Workaround:
A temporary workaround is to reload the switch; the fix is in 12.2(58)SE.

Action Plan:
1. As a temporary workaround, reload the switch.
2. For a permanent solution, upgrade the switch to 12.2(58)SE.

Cisco FlexLink Configuration Examples

Here you can find some configuration examples for Cisco FlexLinks.

This example shows how to configure an interface with a backup interface and to verify the configuration:

Switch# configure terminal
Switch(conf)# interface fastethernet1/1
Switch(conf-if)# switchport backup interface fastethernet1/2
Switch(conf-if)# end
Switch# show interface switchport backup

Switch Backup Interface Pairs:

Active Interface        Backup Interface        State
------------------------------------------------------------------------
FastEthernet1/1         FastEthernet1/2         Active Up/Backup Standby
FastEthernet1/3         FastEthernet1/4         Active Up/Backup Standby
Port-channel1           GigabitEthernet1/1      Active Up/Backup Standby

This example shows how to configure the preemption mode as forced for a backup interface pair and to verify the configuration:

Switch# configure terminal
Switch(conf)# interface gigabitethernet1/21
Switch(conf-if)# switchport backup interface gigabitethernet1/2
Switch(conf-if)# switchport backup interface gigabitethernet1/2 preemption mode forced
Switch(conf-if)# switchport backup interface gigabitethernet1/2 preemption delay 50
Switch(conf-if)# end
Switch# show interface switchport backup detail

Active Interface     Backup Interface     State
------------------------------------------------------------------------
GigabitEthernet1/21     GigabitEthernet1/2     Active Down/Backup Down

Interface Pair : Gi1/21, Gi1/2
Preemption Mode : forced
Preemption Delay : 50 seconds
Bandwidth : 10000 Kbit (Gi1/21), 10000 Kbit (Gi1/2)
Mac Address Move Update Vlan : auto

To configure VLAN load balancing on Flex Links, follow these steps. In this example, VLANs 1 to 50, 60, and 100 to 120 are configured on the switch:

Switch(config)# interface fastethernet 1/6
Switch(config-if)# switchport backup interface fastethernet 1/8 prefer vlan 60,100-120

When both interfaces are up, Fast Ethernet port 1/8 forwards traffic for VLANs 60 and 100 to 120, and Fast Ethernet port 1/6 forwards traffic for VLANs 1 to 50.

Switch# show interfaces switchport backup

Switch Backup Interface Pairs:

Active Interface     Backup Interface     State
------------------------------------------------------------------------
FastEthernet1/6     FastEthernet1/8     Active Up/Backup Standby

Vlans Preferred on Active Interface: 1-50
Vlans Preferred on Backup Interface: 60, 100-120