Cumulus Networks Case Study

Cumulus Networks has published a new case study about my work with them on my recent data centre network rebuild, using Cumulus Linux on Dell Open Networking switches, and about how we have used Cumulus NetQ as a fabric validation system.

Have a look and read the case study here:

Link: https://cumulusnetworks.com/customers/smartgames-technologies/

Cumulus also published a press release a few weeks ago with a quote about NetQ that I gave while we were working on the case study.

“With NetQ, we can run small check commands and see what really is going on in our network,” said Bernd Malmqvist, Tech Lead Systems Operations at SmartGames Technologies. “The benefits to us are early alerting and validating the entire state of the fabric. Monitoring is one thing, but with NetQ, the knowledge is instant. NetQ is really unique; it’s a tool that tells us exactly what is wrong in our environment, and the insight to know where an issue is stemming from.”

You can read the full press release here:

https://cumulusnetworks.com/about/press-releases/cumulus-networks-bolsters-cumulus-netq-kubernetes-integration-provide-network-operators-actionable-insight-container-networking/

Using Cumulus NetQ fabric validation with Ansible

Here is a new post about Cumulus NetQ: I have built a small Ansible playbook to validate the state of MLAG within a Cumulus Linux fabric using automation.

In this case I use the command “netq check clag json” to check for nodes in a failed or warning state. This can be used to validate the configuration after automated changes to MLAG, or as a pre-check before I execute the main playbook (see the note below the playbook).

---
- hosts: spine leaf
  gather_facts: False
  user: cumulus

  tasks:
     - name: Gather Clag info in JSON
       command: netq check clag json
       register: result
       run_once: true
       failed_when: "'ERROR' in result.stdout"

     - name: stdout string into json
       set_fact: json_output="{{result.stdout | from_json }}"
       run_once: true

     - name: output of json_output variable
       debug:
         var: json_output
       run_once: true

     - name: check failed clag members
       debug: msg="Check failed clag members"
       when: json_output["failedNodes"]|length == 0
       run_once: true

     - name: clag members status failed
       fail: msg="Device {{item['node']}}, Why node is in failed state? {{item['reason']}}"
       with_items:  "{{json_output['failedNodes']}}"
       run_once: true

     - name: clag members status warning
       fail: msg="Device {{item['node']}}, Why node is in warning state? {{item['reason']}}"
       when: json_output["warningNodes"] is defined
       with_items:  "{{json_output['warningNodes']}}"
       run_once: true
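
To use the playbook as a pre-check, the simplest way is to chain it with the main playbook on the command line; main_playbook.yml here is only a placeholder name for whatever change playbook you want to run:

ansible-playbook netq_check_clag.yml -i hosts && ansible-playbook main_playbook.yml -i hosts

The main playbook only starts if the NetQ check playbook finished without a failed host.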

Here is the output when MLAG is healthy:

PLAY [spine leaf] *********************************************************************************************************************************************************************************************************************

TASK [Gather Clag info in JSON] *******************************************************************************************************************************************************************************************************
Friday 20 October 2017  17:56:35 +0200 (0:00:00.017)       0:00:00.017 ********
changed: [spine-1]

TASK [stdout string into json] ********************************************************************************************************************************************************************************************************
Friday 20 October 2017  17:56:35 +0200 (0:00:00.325)       0:00:00.343 ********
ok: [spine-1]

TASK [output of json_output variable] *************************************************************************************************************************************************************************************************
Friday 20 October 2017  17:56:35 +0200 (0:00:00.010)       0:00:00.353 ********
ok: [spine-1] => {
    "json_output": {
        "failedNodes": [],
        "summary": {
            "checkedNodeCount": 4,
            "failedNodeCount": 0,
            "warningNodeCount": 0
        }
    }
}

TASK [check failed clag members] ******************************************************************************************************************************************************************************************************
Friday 20 October 2017  17:56:35 +0200 (0:00:00.010)       0:00:00.363 ********
ok: [spine-1] => {
    "msg": "Check failed clag members"
}

TASK [clag members status failed] *****************************************************************************************************************************************************************************************************
Friday 20 October 2017  17:56:35 +0200 (0:00:00.011)       0:00:00.374 ********

TASK [clag members status warning] ****************************************************************************************************************************************************************************************************
Friday 20 October 2017  17:56:35 +0200 (0:00:00.007)       0:00:00.382 ********
skipping: [spine-1]

PLAY RECAP ****************************************************************************************************************************************************************************************************************************
spine-1                    : ok=4    changed=1    unreachable=0    failed=0

Friday 20 October 2017  17:56:35 +0200 (0:00:00.008)       0:00:00.391 ********
===============================================================================
Gather Clag info in JSON ------------------------------------------------ 0.33s
check failed clag members ----------------------------------------------- 0.01s
stdout string into json ------------------------------------------------- 0.01s
output of json_output variable ------------------------------------------ 0.01s
clag members status warning --------------------------------------------- 0.01s
clag members status failed ---------------------------------------------- 0.01s

In the following example the leaf-1 node is in a warning state because of a missing “clagd-backup-ip”; another cause for a warning could be a singly attached bond interface:

PLAY [spine leaf] *********************************************************************************************************************************************************************************************************************

TASK [Gather Clag info in JSON] *******************************************************************************************************************************************************************************************************
Friday 20 October 2017  18:02:05 +0200 (0:00:00.016)       0:00:00.016 ********
changed: [spine-1]

TASK [stdout string into json] ********************************************************************************************************************************************************************************************************
Friday 20 October 2017  18:02:05 +0200 (0:00:00.225)       0:00:00.241 ********
ok: [spine-1]

TASK [output of json_output variable] *************************************************************************************************************************************************************************************************
Friday 20 October 2017  18:02:05 +0200 (0:00:00.010)       0:00:00.251 ********
ok: [spine-1] => {
    "json_output": {
        "failedNodes": [],
        "summary": {
            "checkedNodeCount": 4,
            "failedNodeCount": 0,
            "warningNodeCount": 1
        },
        "warningNodes": [
            {
                "node": "leaf-1",
                "reason": "Backup IP Failed"
            }
        ]
    }
}

TASK [check failed clag members] ******************************************************************************************************************************************************************************************************
Friday 20 October 2017  18:02:05 +0200 (0:00:00.010)       0:00:00.261 ********
ok: [spine-1] => {
    "msg": "Check failed clag members"
}

TASK [clag members status failed] *****************************************************************************************************************************************************************************************************
Friday 20 October 2017  18:02:05 +0200 (0:00:00.011)       0:00:00.273 ********

TASK [clag members status warning] ****************************************************************************************************************************************************************************************************
Friday 20 October 2017  18:02:05 +0200 (0:00:00.007)       0:00:00.281 ********
failed: [spine-1] (item={u'node': u'leaf-1', u'reason': u'Backup IP Failed'}) => {"failed": true, "item": {"node": "leaf-1", "reason": "Backup IP Failed"}, "msg": "Device leaf-1, Why node is in warning state? Backup IP Failed"}

NO MORE HOSTS LEFT ********************************************************************************************************************************************************************************************************************
	to retry, use: --limit @/home/berndonline/cumulus-lab-vagrant/netq_check_clag.retry

PLAY RECAP ****************************************************************************************************************************************************************************************************************************
spine-1                    : ok=4    changed=1    unreachable=0    failed=1

Friday 20 October 2017  18:02:05 +0200 (0:00:00.015)       0:00:00.297 ********
===============================================================================
Gather Clag info in JSON ------------------------------------------------ 0.23s
clag members status warning --------------------------------------------- 0.02s
check failed clag members ----------------------------------------------- 0.01s
output of json_output variable ------------------------------------------ 0.01s
stdout string into json ------------------------------------------------- 0.01s
clag members status failed ---------------------------------------------- 0.01s
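
The “Backup IP Failed” warning points at the clagd-backup-ip parameter on the MLAG peer link. As a reference, here is a minimal sketch of the relevant stanza in /etc/network/interfaces on leaf-1; the interface name and all addresses are only placeholders for this lab, not the actual configuration:

auto peerlink.4094
iface peerlink.4094
    address 169.254.1.1/30
    clagd-peer-ip 169.254.1.2
    clagd-backup-ip 192.168.100.208
    clagd-sys-mac 44:38:39:ff:40:94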

Another example is that NetQ reports a problem where leaf-1 has no matching clag-id on its peer; in this case the interface bond1 is missing from the configuration on leaf-2:

PLAY [spine leaf] ***********************************************************************************************************************************************************************************************************************

TASK [Gather Clag info in JSON] *********************************************************************************************************************************************************************************************************
Monday 23 October 2017  18:49:15 +0200 (0:00:00.016)       0:00:00.016 ********
changed: [spine-1]

TASK [stdout string into json] **********************************************************************************************************************************************************************************************************
Monday 23 October 2017  18:49:15 +0200 (0:00:00.223)       0:00:00.240 ********
ok: [spine-1]

TASK [output of json_output variable] ***************************************************************************************************************************************************************************************************
Monday 23 October 2017  18:49:15 +0200 (0:00:00.010)       0:00:00.250 ********
ok: [spine-1] => {
    "json_output": {
        "failedNodes": [
            {
                "node": "leaf-1",
                "reason": "Conflicted Bonds: bond1:matching clagid not configured on peer"
            }
        ],
        "summary": {
            "checkedNodeCount": 4,
            "failedNodeCount": 1,
            "warningNodeCount": 1
        },
        "warningNodes": [
            {
                "node": "leaf-1",
                "reason": "Singly Attached Bonds: bond1"
            }
        ]
    }
}

TASK [check failed clag members] ********************************************************************************************************************************************************************************************************
Monday 23 October 2017  18:49:15 +0200 (0:00:00.010)       0:00:00.260 ********
skipping: [spine-1]

TASK [clag members status failed] *******************************************************************************************************************************************************************************************************
Monday 23 October 2017  18:49:15 +0200 (0:00:00.009)       0:00:00.269 ********
failed: [spine-1] (item={u'node': u'leaf-1', u'reason': u'Conflicted Bonds: bond1:matching clagid not configured on peer'}) => {"failed": true, "item": {"node": "leaf-1", "reason": "Conflicted Bonds: bond1:matching clagid not configured on peer"}, "msg": "Device leaf-1, Why node is in failed state? Conflicted Bonds: bond1:matching clagid not configured on peer"}

NO MORE HOSTS LEFT **********************************************************************************************************************************************************************************************************************
	to retry, use: --limit @/home/berndonline/cumulus-lab-vagrant/netq_check_clag.retry

PLAY RECAP ******************************************************************************************************************************************************************************************************************************
spine-1                    : ok=3    changed=1    unreachable=0    failed=1

Monday 23 October 2017  18:49:15 +0200 (0:00:00.014)       0:00:00.284 ********
===============================================================================
Gather Clag info in JSON ------------------------------------------------ 0.22s
clag members status failed ---------------------------------------------- 0.02s
stdout string into json ------------------------------------------------- 0.01s
output of json_output variable ------------------------------------------ 0.01s
check failed clag members ----------------------------------------------- 0.01s
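
To clear this failure, bond1 needs to exist on leaf-2 with the same clag-id as on leaf-1. A minimal sketch of the missing stanza in /etc/network/interfaces on leaf-2; the member port is only a placeholder:

auto bond1
iface bond1
    bond-slaves swp3
    clag-id 1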

This is just an example to show the possibilities Cumulus NetQ gives me when I use automation to validate my changes.

There is more information in the Cumulus NetQ documentation about taking preventative steps with your network: https://docs.cumulusnetworks.com/display/NETQ/Taking+Preventative+Steps+with+Your+Network

New Cumulus Linux leaf switches

Just added another pair of Cumulus Linux leaf (top of rack) switches and more VMware compute nodes to our production system.

We use a Layer 2 network fabric; see my recent post for an example: Ansible Playbook for Cumulus Linux (Layer 2 Fabric)

It got a bit tight in the rack with the DAC cables; basically each server has 2x 10Gbit/s data and 2x 10Gbit/s storage network connections.

Front rack view:

Another two racks will follow soon in the same configuration.

 

Ansible Playbook for Cumulus Linux (Layer 2 Fabric)

Here is a basic Ansible playbook for a Cumulus Linux lab which I use for testing. It is similar to the spine and leaf configuration I used in my recent data centre redesign; the playbook includes one VRF, and all SVIs are joined to that VRF.

I use the Cumulus VX appliance under GNS3, which you can get for free from Cumulus: https://cumulusnetworks.com/products/cumulus-vx/

The first step is to configure the management interface of the Cumulus switches: edit /etc/network/interfaces and afterwards run “ifreload -a” to apply the config changes:

auto eth0
iface eth0
	address 192.168.100.20x/24
	gateway 192.168.100.2

Hosts file:

[spine]
spine-1 
spine-2
[leaf]
leaf-1 
leaf-2 

Before you start, you should push your SSH keys and set the hostnames to prepare the switches.
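
You can also do both steps with a small bootstrap playbook instead of doing it by hand; this is only a hypothetical sketch (the play file name and the key path are assumptions), run with “ansible-playbook bootstrap.yml -i hosts -k -K” so the first login can use password authentication:

---
- hosts: all
  remote_user: cumulus
  gather_facts: no
  become: yes
  tasks:
    - name: push ssh public key for the cumulus user
      authorized_key:
        user: cumulus
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

    - name: set hostname to match the inventory name
      hostname:
        name: "{{ inventory_hostname }}"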

Now we are ready to deploy the interface configuration for the Layer 2 fabric; below is the interfaces.yml file.

---
- hosts: all
  remote_user: cumulus
  gather_facts: no
  become: yes
  vars:
    ansible_become_pass: "CumulusLinux!"
    spine_interfaces:
      - { clag: bond1, desc: downlink-leaf, clagid: 1, port: swp1 swp2 }
    spine_bridge_ports: "peerlink bond1"
    bridge_vlans: "100-199"
    spine_vrf: "vrf-prod"
    spine_bridge:
      - { desc: web, vlan: 100, address: "{{ vlan100_address }}", address_virtual: "00:00:5e:00:01:00 10.1.0.254/24", vrf: vrf-prod }
      - { desc: app, vlan: 101, address: "{{ vlan101_address }}", address_virtual: "00:00:5e:00:01:01 10.1.1.254/24", vrf: vrf-prod }
      - { desc: db, vlan: 102, address: "{{ vlan102_address }}", address_virtual: "00:00:5e:00:01:02 10.1.2.254/24", vrf: vrf-prod }
    leaf_interfaces:
      - { clag: bond1, desc: uplink-spine, clagid: 1, port: swp1 swp2 }
    leaf_access_interfaces:
      - { desc: web-server, vlan: 100, port: swp3 }
      - { desc: app-server, vlan: 101, port: swp4 }
      - { desc: db-server, vlan: 102, port: swp5 }
    leaf_bridge_ports: "bond1 swp3 swp4 swp5"
  handlers:
    - name: ifreload
      command: ifreload -a
  tasks:
    - name: deploys spine interface configuration
      template: src=templates/spine_interfaces.j2 dest=/etc/network/interfaces
      when: "'spine' in group_names"
      notify: ifreload
    - name: deploys leaf interface configuration
      template: src=templates/leaf_interfaces.j2 dest=/etc/network/interfaces
      when: "'leaf' in group_names"
      notify: ifreload

I use Jinja2 templates for the interfaces configuration.
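
The templates themselves are not listed in this post, but as an illustration here is a simplified sketch of what templates/leaf_interfaces.j2 could look like, based on the variables defined above; the exact attributes are assumptions rather than my actual template, and the loopback, management and peerlink stanzas are left out:

{# templates/leaf_interfaces.j2 - simplified sketch #}
{% for bond in leaf_interfaces %}
auto {{ bond.clag }}
iface {{ bond.clag }}
    # {{ bond.desc }}
    bond-slaves {{ bond.port }}
    clag-id {{ bond.clagid }}
{% endfor %}

{% for access in leaf_access_interfaces %}
auto {{ access.port }}
iface {{ access.port }}
    # {{ access.desc }}
    bridge-access {{ access.vlan }}
{% endfor %}

auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports {{ leaf_bridge_ports }}
    bridge-vids {{ bridge_vlans }}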

Here is the output from the Ansible playbook, which only takes a few seconds to run:

[root@ansible cumulus]$ ansible-playbook interfaces.yml -i hosts

PLAY [all] *********************************************************************

TASK [deploys spine interface configuration] ***********************************
skipping: [leaf-2]
skipping: [leaf-1]
changed: [spine-2]
changed: [spine-1]

TASK [deploys leaf interface configuration] ************************************
skipping: [spine-1]
skipping: [spine-2]
changed: [leaf-1]
changed: [leaf-2]

RUNNING HANDLER [ifreload] *****************************************************
changed: [leaf-2]
changed: [leaf-1]
changed: [spine-2]
changed: [spine-1]

PLAY RECAP *********************************************************************
leaf-1                     : ok=2    changed=2    unreachable=0    failed=0
leaf-2                     : ok=2    changed=2    unreachable=0    failed=0
spine-1                    : ok=2    changed=2    unreachable=0    failed=0
spine-2                    : ok=2    changed=2    unreachable=0    failed=0

[root@ansible cumulus]$

Let's quickly verify the configuration:

cumulus@spine-1:~$ net show int

    Name           Master    Speed      MTU  Mode           Remote Host    Remote Port    Summary
--  -------------  --------  -------  -----  -------------  -------------  -------------  ---------------------------------
UP  lo             None      N/A      65536  Loopback                                     IP: 127.0.0.1/8, ::1/128
UP  eth0           None      1G        1500  Mgmt           cumulus        eth0           IP: 192.168.100.205/24
UP  bond1          bridge    2G        1500  Bond/Trunk                                   Bond Members: swp1(UP), swp2(UP)
UP  bridge         None      N/A       1500  Bridge/L2                                    Untagged Members: bond1, peerlink
UP  bridge-100-v0  vrf-prod  N/A       1500  Interface/L3                                 IP: 10.1.0.254/24
UP  bridge-101-v0  vrf-prod  N/A       1500  Interface/L3                                 IP: 10.1.1.254/24
UP  bridge-102-v0  vrf-prod  N/A       1500  Interface/L3                                 IP: 10.1.2.254/24
UP  bridge.100     vrf-prod  N/A       1500  SVI/L3                                       IP: 10.1.0.252/24
UP  bridge.101     vrf-prod  N/A       1500  SVI/L3                                       IP: 10.1.1.252/24
UP  bridge.102     vrf-prod  N/A       1500  SVI/L3                                       IP: 10.1.2.252/24
UP  peerlink       bridge    1G        1500  Bond/Trunk                                   Bond Members: swp11(UP)
UP  peerlink.4094  None      1G        1500  SubInt/L3                                    IP: 169.254.1.1/30
UP  vrf-prod       None      N/A      65536  NotConfigured

cumulus@spine-1:~$

cumulus@spine-1:~$ net show lldp

LocalPort    Speed    Mode        RemotePort    RemoteHost    Summary
-----------  -------  ----------  ------------  ------------  ----------------------
eth0         1G       Mgmt        eth0          cumulus       IP: 192.168.100.205/24
                                  ====          eth0          spine-2
                                  ====          eth0          leaf-1
                                  ====          eth0          leaf-2
swp1         1G       BondMember  swp1          leaf-1        Master: bond1(UP)
swp2         1G       BondMember  swp2          leaf-2        Master: bond1(UP)
swp11        1G       BondMember  swp11         spine-2       Master: peerlink(UP)
cumulus@spine-1:~$
cumulus@leaf-1:~$ net show int

    Name           Master    Speed      MTU  Mode        Remote Host    Remote Port    Summary
--  -------------  --------  -------  -----  ----------  -------------  -------------  --------------------------------
UP  lo             None      N/A      65536  Loopback                                  IP: 127.0.0.1/8, ::1/128
UP  eth0           None      1G        1500  Mgmt        cumulus        eth0           IP: 192.168.100.207/24
UP  swp3           bridge    1G        1500  Access/L2                                 Untagged VLAN: 100
UP  swp4           bridge    1G        1500  Access/L2                                 Untagged VLAN: 101
UP  swp5           bridge    1G        1500  Access/L2                                 Untagged VLAN: 102
UP  bond1          bridge    2G        1500  Bond/Trunk                                Bond Members: swp1(UP), swp2(UP)
UP  bridge         None      N/A       1500  Bridge/L2                                 Untagged Members: bond1, swp3-5
UP  peerlink       None      1G        1500  Bond                                      Bond Members: swp11(UP)
UP  peerlink.4093  None      1G        1500  SubInt/L3                                 IP: 169.254.1.1/30

cumulus@leaf-1:~$ net show lldp

LocalPort    Speed    Mode        RemotePort    RemoteHost    Summary
-----------  -------  ----------  ------------  ------------  ----------------------
eth0         1G       Mgmt        eth0          cumulus       IP: 192.168.100.207/24
                                  ====          eth0          spine-2
                                  ====          eth0          spine-1
                                  ====          eth0          leaf-2
swp1         1G       BondMember  swp1          spine-1       Master: bond1(UP)
swp2         1G       BondMember  swp1          spine-2       Master: bond1(UP)
swp11        1G       BondMember  swp11         leaf-2        Master: peerlink(UP)
cumulus@leaf-1:~$

As you can see, the configuration is deployed correctly and you can start testing.

The configuration for a real data centre fabric is, of course, more complex (multiple VRFs, SVIs and more complex routing), but with Ansible you can quickly deploy and manage hundreds of switches.

In one of the next posts I will write an Ansible playbook for a Layer 3 data centre fabric configuration using BGP and ECMP routing on Quagga.

Read my new post about an Ansible Playbook for Cumulus Linux BGP IP-Fabric and Cumulus NetQ Validation.

Cumulus Networks (Open Networking)

In my recent data centre network redesign project I used Cumulus Linux on Open Networking switches from Dell (S3048-ON and S4048-ON). I first heard about Cumulus Networks around two and a half years ago and did some testing with the VX appliance back then, but I was waiting for the official release of the VRF feature. Once I got the go-ahead for the data centre redesign project, it was clear that I would replace Cisco and use Cumulus Linux on all the data centre switches.

Because of my extensive use of VRFs, I was one of the first Cumulus users to find a bug with VRFs in Quagga:

RN-518 (CM-13328) - Quagga sometimes installs a duplicate static route

After a Cumulus Linux switch with VRF routes installed was rebooted, the VRF routes were present in the kernel but not installed into hardware. Restarting Quagga resolved the issue.

This issue has been fixed in the update to the 3.1.2 repository.

As in my latest post, the idea was to use a converged network stack, which needed some more advanced configuration on the switches themselves, with different VRFs to split the data centre into Corporate (office) and Production.

So far I have had a very good experience with Cumulus Networks; the support team is awesome and very skilled!