VMware NSX Edge Routing

I recently deployed VMware NSX (software-defined networking) in our datacentre.

The NSX Edge cluster has some specific requirements when it comes to physical connectivity. You can find all of this information in the VMware NSX reference design guide as well.

On the Cumulus Linux side I am using BGP in Quagga, and traffic is distributed via ECMP (equal-cost multi-path) across multiple Edge nodes within NSX.

See the overview below:

It is very important to have a dedicated VLAN per core switch towards the Edge nodes. In my tests a shared VLAN across the Cumulus core did not work: the BGP neighbor relationships were established correctly, but packet forwarding via the peerlink failed.
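As a minimal sketch of what I mean (the interface name, VLAN ID and the spine's own host address are hypothetical, not from my actual deployment; only the 10.100.254.0/24 edge subnet matches the BGP config below), each spine gets its own edge-facing VLAN subinterface:

```text
# /etc/network/interfaces on spine-1 (sketch; swp10, VLAN 254 and .254 are assumed)
auto swp10.254
iface swp10.254
    # dedicated VLAN towards the NSX Edge nodes - not shared with spine-2
    address 10.100.254.254/24
```

spine-2 would carry a second, separate VLAN and subnet towards the same Edge nodes.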

Here is the example Quagga BGP configuration from spine-1:

router bgp 65001 vrf vrf-nsx
 neighbor 10.100.254.1 remote-as 65002
 neighbor 10.100.254.1 password verystrongpassword!!
 neighbor 10.100.254.1 timers 1 3
 neighbor 10.100.254.2 remote-as 65002
 neighbor 10.100.254.2 password verystrongpassword!!
 neighbor 10.100.254.2 timers 1 3
 neighbor 10.100.254.3 remote-as 65002
 neighbor 10.100.254.3 password verystrongpassword!!
 neighbor 10.100.254.3 timers 1 3
 neighbor 10.100.254.4 remote-as 65002
 neighbor 10.100.254.4 password verystrongpassword!!
 neighbor 10.100.254.4 timers 1 3
 neighbor 10.100.255.2 remote-as 65001
 neighbor 10.100.255.2 password verystrongpassword!!

 address-family ipv4 unicast
  network 0.0.0.0/0
  neighbor 10.100.254.1 route-map bgp-in in
  neighbor 10.100.254.2 route-map bgp-in in
  neighbor 10.100.254.3 route-map bgp-in in
  neighbor 10.100.254.4 route-map bgp-in in
  neighbor 10.100.255.2 next-hop-self
  neighbor 10.100.255.2 route-map bgp-in in
 exit-address-family

ip route 0.0.0.0/0 10.100.255.14 vrf vrf-nsx

access-list bgp-in permit 10.100.0.0/17

route-map bgp-in permit 10
 match ip address bgp-in

The configuration of the second core switch, spine-2, looks exactly the same; only the IP addresses differ.

More about my experience with VMware NSX will follow soon.

Ansible Playbook for Cumulus Linux (Layer 3 Fabric)

As promised, here is a basic Ansible Playbook for a Cumulus Linux layer 3 fabric running BGP, as you see it in large-scale data centre deployments.

You push the layer 2 network as close as possible to the servers and use ECMP (equal-cost multi-path) routing to distribute your traffic across multiple uplinks.

These kinds of network designs are highly scalable. My example is a 2-tier deployment, but you can easily extend it to 3 tiers, where the leaf switches become the distribution layer and you add additional ToR (top of rack) switches.

Here is some interesting information about Facebook's next-generation data centre fabric: Introducing data center fabric, the next-generation Facebook data center network

I use the same hosts file as in my previous blog post, Ansible Playbook for Cumulus Linux (Layer 2 Fabric).

Hosts file:

[spine]
spine-1
spine-2
[leaf]
leaf-1
leaf-2
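The playbook further down fills in per-host variables such as swp1_address, swp2_address and peer_address; Ansible typically picks these up from host_vars files. A sketch for spine-1 and leaf-1, with the /30 addresses inferred from the BGP route output at the end of this post (treat them as an illustration, not authoritative values):

```yaml
# host_vars/spine-1.yml (sketch; addresses inferred from the route output)
swp1_address: 10.0.1.1/30   # link to leaf-1
swp2_address: 10.0.2.5/30   # link to leaf-2
peer_address: 10.0.0.1/30   # layer 3 peerlink to spine-2

# host_vars/leaf-1.yml (sketch)
swp1_address: 10.0.1.2/30   # link to spine-1
swp2_address: 10.0.1.6/30   # link to spine-2
```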
Ansible Playbook:

---
- hosts: all
  remote_user: cumulus
  gather_facts: no
  become: yes
  vars:
    ansible_become_pass: "CumulusLinux!"
    spine_interfaces:
      - { port: swp1, desc: leaf-1, address: "{{ swp1_address }}" }
      - { port: swp2, desc: leaf-2, address: "{{ swp2_address }}" }
      - { port: swp6, desc: layer3_peerlink, address: "{{ peer_address }}" }
    leaf_interfaces:
      - { port: swp1, desc: spine-1, address: "{{ swp1_address }}" }
      - { port: swp2, desc: spine-2, address: "{{ swp2_address }}" }
  handlers:
    - name: ifreload
      command: ifreload -a
    - name: restart quagga
      service: name=quagga state=restarted
  tasks:
    - name: deploys spine interface configuration
      template: src=templates/spine_routing_interfaces.j2 dest=/etc/network/interfaces
      when: "'spine' in group_names"
      notify: ifreload
    - name: deploys leaf interface configuration
      template: src=templates/leaf_routing_interfaces.j2 dest=/etc/network/interfaces
      when: "'leaf' in group_names"
      notify: ifreload
    - name: deploys quagga configuration
      template: src=templates/quagga.conf.j2 dest=/etc/quagga/Quagga.conf
      notify: restart quagga
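The Jinja2 templates referenced by the playbook are not shown in this post. As a sketch of what templates/spine_routing_interfaces.j2 could look like (the loop structure and the lo/eth0 stanzas are assumptions; only the variable names come from the playbook's spine_interfaces list):

```jinja
{# templates/spine_routing_interfaces.j2 - sketch, not the original template #}
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

{% for i in spine_interfaces %}
auto {{ i.port }}
iface {{ i.port }}
    # link to {{ i.desc }}
    address {{ i.address }}
{% endfor %}
```

The leaf template would iterate over leaf_interfaces in the same way.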

Let’s run the Playbook and see the output:

[[email protected] cumulus]$ ansible-playbook routing.yml -i hosts

PLAY [all] *********************************************************************

TASK [deploys spine interface configuration] ***********************************
skipping: [leaf-2]
skipping: [leaf-1]
changed: [spine-2]
changed: [spine-1]

TASK [deploys leaf interface configuration] ************************************
skipping: [spine-1]
skipping: [spine-2]
changed: [leaf-2]
changed: [leaf-1]

TASK [deploys quagga configuration] ********************************************
changed: [leaf-2]
changed: [spine-2]
changed: [spine-1]
changed: [leaf-1]

RUNNING HANDLER [ifreload] *****************************************************
changed: [leaf-2]
changed: [leaf-1]
changed: [spine-2]
changed: [spine-1]

RUNNING HANDLER [restart quagga] ***********************************************
changed: [leaf-1]
changed: [leaf-2]
changed: [spine-1]
changed: [spine-2]

PLAY RECAP *********************************************************************
leaf-1                     : ok=4    changed=4    unreachable=0    failed=0
leaf-2                     : ok=4    changed=4    unreachable=0    failed=0
spine-1                    : ok=4    changed=4    unreachable=0    failed=0
spine-2                    : ok=4    changed=4    unreachable=0    failed=0

[[email protected] cumulus]$

To verify the configuration, let's look at the BGP routes on the leaf switches:

cumulus@leaf-1:/home/cumulus# net show route bgp
RIB entry for bgp
=================
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, P - PIM, T - Table, v - VNC,
       V - VPN,
       > - selected route, * - FIB route

B>* 10.0.0.0/30 [20/0] via 10.0.1.1, swp1, 00:02:14
  *                    via 10.0.1.5, swp2, 00:02:14
B   10.0.1.0/30 [20/0] via 10.0.1.1 inactive, 00:02:14
                       via 10.0.1.5, swp2, 00:02:14
B   10.0.1.4/30 [20/0] via 10.0.1.5 inactive, 00:02:14
                       via 10.0.1.1, swp1, 00:02:14
B>* 10.0.2.0/30 [20/0] via 10.0.1.5, swp2, 00:02:14
  *                    via 10.0.1.1, swp1, 00:02:14
B>* 10.0.2.4/30 [20/0] via 10.0.1.1, swp1, 00:02:14
  *                    via 10.0.1.5, swp2, 00:02:14
B>* 10.200.0.0/24 [20/0] via 10.0.1.1, swp1, 00:02:14
  *                      via 10.0.1.5, swp2, 00:02:14
cumulus@leaf-1:/home/cumulus#
cumulus@leaf-2:/home/cumulus# net show route bgp
RIB entry for bgp
=================
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, P - PIM, T - Table, v - VNC,
       V - VPN,
       > - selected route, * - FIB route

B>* 10.0.0.0/30 [20/0] via 10.0.2.5, swp1, 00:02:22
  *                    via 10.0.2.1, swp2, 00:02:22
B>* 10.0.1.0/30 [20/0] via 10.0.2.5, swp1, 00:02:22
  *                    via 10.0.2.1, swp2, 00:02:22
B>* 10.0.1.4/30 [20/0] via 10.0.2.1, swp2, 00:02:22
  *                    via 10.0.2.5, swp1, 00:02:22
B   10.0.2.0/30 [20/0] via 10.0.2.1 inactive, 00:02:22
                       via 10.0.2.5, swp1, 00:02:22
B   10.0.2.4/30 [20/0] via 10.0.2.5 inactive, 00:02:22
                       via 10.0.2.1, swp2, 00:02:22
B>* 10.100.0.0/24 [20/0] via 10.0.2.5, swp1, 00:02:22
  *                      via 10.0.2.1, swp2, 00:02:22
cumulus@leaf-2:/home/cumulus#

Have fun!

Read my new post about an Ansible Playbook for Cumulus Linux BGP IP-Fabric and Cumulus NetQ Validation.