Ansible Playbook for Cumulus Linux (Layer 3 Fabric)

As promised, here is a basic Ansible Playbook for a Cumulus Linux Layer 3 fabric running BGP, the kind of design you see in large-scale data centre deployments.

The idea is to push the layer 2 domain as close to the server as possible and use ECMP (equal-cost multi-path) routing to distribute traffic across multiple uplinks.

These kinds of network designs are highly scalable. My example is a 2-tier deployment, but you can easily extend it to 3 tiers, where the leaf switches become the distribution layer and additional ToR (Top of Rack) switches are added below them.

Here is some interesting information about Facebook’s next-generation data centre fabric: Introducing data center fabric, the next-generation Facebook data center network

I use the same hosts file as in my previous blog post, Ansible Playbook for Cumulus Linux (Layer 2 Fabric).

Hosts file:

[spine]
spine-1
spine-2
[leaf]
leaf-1
leaf-2

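The playbook below references per-host variables (swp1_address, swp2_address, peer_address) which are not shown in this post. Here is a minimal sketch of how they could be supplied as host_vars files; the file layout and the /30 assignments are my assumptions, derived from the BGP routing output further down:

# host_vars/spine-1 -- assumed layout, addresses derived from the routing output below
swp1_address: 10.0.1.1/30    # link to leaf-1
swp2_address: 10.0.2.5/30    # link to leaf-2
peer_address: 10.0.0.1/30    # layer 3 peerlink to spine-2

# host_vars/leaf-1
swp1_address: 10.0.1.2/30    # link to spine-1
swp2_address: 10.0.1.6/30    # link to spine-2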

Ansible Playbook:

---
- hosts: all
  remote_user: cumulus
  gather_facts: no
  become: yes
  vars:
    ansible_become_pass: "CumulusLinux!"
    spine_interfaces:
      - { port: swp1, desc: leaf-1, address: "{{ swp1_address }}" }
      - { port: swp2, desc: leaf-2, address: "{{ swp2_address }}" }
      - { port: swp6, desc: layer3_peerlink, address: "{{ peer_address }}" }
    leaf_interfaces:
      - { port: swp1, desc: spine-1, address: "{{ swp1_address }}" }
      - { port: swp2, desc: spine-2, address: "{{ swp2_address }}" }
  handlers:
    - name: ifreload
      command: ifreload -a
    - name: restart quagga
      service: name=quagga state=restarted
  tasks:
    - name: deploys spine interface configuration
      template: src=templates/spine_routing_interfaces.j2 dest=/etc/network/interfaces
      when: "'spine' in group_names"
      notify: ifreload
    - name: deploys leaf interface configuration
      template: src=templates/leaf_routing_interfaces.j2 dest=/etc/network/interfaces
      when: "'leaf' in group_names"
      notify: ifreload
    - name: deploys quagga configuration
      template: src=templates/quagga.conf.j2 dest=/etc/quagga/Quagga.conf
      notify: restart quagga
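
The Jinja2 templates themselves are not included in this post. A minimal sketch of what leaf_routing_interfaces.j2 could look like, looping over the leaf_interfaces variable defined above (the exact template contents are my assumption):

{# leaf_routing_interfaces.j2 -- minimal sketch, my assumption #}
auto lo
iface lo inet loopback

{% for interface in leaf_interfaces %}
auto {{ interface.port }}
iface {{ interface.port }}
    alias {{ interface.desc }}
    address {{ interface.address }}
{% endfor %}

The spine template would follow the same pattern, looping over spine_interfaces instead.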

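Similarly, a minimal sketch of quagga.conf.j2 for eBGP with ECMP; the variables bgp_as, router_id, bgp_neighbors and bgp_networks are my assumptions and would live in host_vars as well:

{# quagga.conf.j2 -- minimal sketch, variable names are my assumptions #}
router bgp {{ bgp_as }}
 bgp router-id {{ router_id }}
 maximum-paths 2
{% for neighbor in bgp_neighbors %}
 neighbor {{ neighbor.ip }} remote-as {{ neighbor.remote_as }}
{% endfor %}
{% for prefix in bgp_networks %}
 network {{ prefix }}
{% endfor %}

The maximum-paths 2 statement is what allows two equal-cost BGP paths to be installed, as you will see in the routing table below. Keep in mind that the zebra and bgpd daemons need to be enabled in /etc/quagga/daemons before the quagga service will start.
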
Let’s run the Playbook and see the output:

[root@ansible cumulus]$ ansible-playbook routing.yml -i hosts

PLAY [all] *********************************************************************

TASK [deploys spine interface configuration] ***********************************
skipping: [leaf-2]
skipping: [leaf-1]
changed: [spine-2]
changed: [spine-1]

TASK [deploys leaf interface configuration] ************************************
skipping: [spine-1]
skipping: [spine-2]
changed: [leaf-2]
changed: [leaf-1]

TASK [deploys quagga configuration] ********************************************
changed: [leaf-2]
changed: [spine-2]
changed: [spine-1]
changed: [leaf-1]

RUNNING HANDLER [ifreload] *****************************************************
changed: [leaf-2]
changed: [leaf-1]
changed: [spine-2]
changed: [spine-1]

RUNNING HANDLER [restart quagga] ***********************************************
changed: [leaf-1]
changed: [leaf-2]
changed: [spine-1]
changed: [spine-2]

PLAY RECAP *********************************************************************
leaf-1                     : ok=4    changed=4    unreachable=0    failed=0
leaf-2                     : ok=4    changed=4    unreachable=0    failed=0
spine-1                    : ok=4    changed=4    unreachable=0    failed=0
spine-2                    : ok=4    changed=4    unreachable=0    failed=0

[root@ansible cumulus]$

To verify the configuration, let’s look at the BGP routes on the leaf switches. Each remote prefix should show up with two next hops, one via each spine, confirming that ECMP is distributing traffic across both uplinks:

root@leaf-1:/home/cumulus# net show route bgp
RIB entry for bgp
=================
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, P - PIM, T - Table, v - VNC,
       V - VPN,
       > - selected route, * - FIB route

B>* 10.0.0.0/30 [20/0] via 10.0.1.1, swp1, 00:02:14
  *                    via 10.0.1.5, swp2, 00:02:14
B   10.0.1.0/30 [20/0] via 10.0.1.1 inactive, 00:02:14
                       via 10.0.1.5, swp2, 00:02:14
B   10.0.1.4/30 [20/0] via 10.0.1.5 inactive, 00:02:14
                       via 10.0.1.1, swp1, 00:02:14
B>* 10.0.2.0/30 [20/0] via 10.0.1.5, swp2, 00:02:14
  *                    via 10.0.1.1, swp1, 00:02:14
B>* 10.0.2.4/30 [20/0] via 10.0.1.1, swp1, 00:02:14
  *                    via 10.0.1.5, swp2, 00:02:14
B>* 10.200.0.0/24 [20/0] via 10.0.1.1, swp1, 00:02:14
  *                      via 10.0.1.5, swp2, 00:02:14
root@leaf-1:/home/cumulus#
root@leaf-2:/home/cumulus# net show route bgp
RIB entry for bgp
=================
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, P - PIM, T - Table, v - VNC,
       V - VPN,
       > - selected route, * - FIB route

B>* 10.0.0.0/30 [20/0] via 10.0.2.5, swp1, 00:02:22
  *                    via 10.0.2.1, swp2, 00:02:22
B>* 10.0.1.0/30 [20/0] via 10.0.2.5, swp1, 00:02:22
  *                    via 10.0.2.1, swp2, 00:02:22
B>* 10.0.1.4/30 [20/0] via 10.0.2.1, swp2, 00:02:22
  *                    via 10.0.2.5, swp1, 00:02:22
B   10.0.2.0/30 [20/0] via 10.0.2.1 inactive, 00:02:22
                       via 10.0.2.5, swp1, 00:02:22
B   10.0.2.4/30 [20/0] via 10.0.2.5 inactive, 00:02:22
                       via 10.0.2.1, swp2, 00:02:22
B>* 10.100.0.0/24 [20/0] via 10.0.2.5, swp1, 00:02:22
  *                      via 10.0.2.1, swp2, 00:02:22
root@leaf-2:/home/cumulus#

The routes marked inactive are the leaf switch’s own uplink subnets, which are learned back via BGP but lose out to the directly connected routes.

Have fun!

Read my new post about an Ansible Playbook for Cumulus Linux BGP IP-Fabric and Cumulus NetQ Validation.

Ansible Playbook for Cumulus Linux (Layer 2 Fabric)

Here is a basic Ansible Playbook for a Cumulus Linux lab which I use for testing. It is similar to the spine and leaf configuration I used in my recent data centre redesign; the playbook includes one VRF, and all SVIs are joined to that VRF.

I use the Cumulus VX appliance under GNS3, which you can get for free from Cumulus: https://cumulusnetworks.com/products/cumulus-vx/

The first step is to configure the management interface of the Cumulus switches: edit /etc/network/interfaces and afterwards run “ifreload -a” to apply the config changes:

auto eth0
iface eth0
	address 192.168.100.20x/24
	gateway 192.168.100.2

Hosts file:

[spine]
spine-1 
spine-2
[leaf]
leaf-1 
leaf-2 

Before you start, you should push your SSH keys and set the hostname to prepare the switches; one way to do this is shown below.
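
For example, from the Ansible host (the management address is taken from the spine-1 output further down; hostnamectl is just one way of setting the hostname on the Debian-based Cumulus Linux):

ssh-copy-id cumulus@192.168.100.205                                  # push your public SSH key to spine-1
ssh -t cumulus@192.168.100.205 sudo hostnamectl set-hostname spine-1 # set the hostname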

Now we are ready to deploy the interface configuration for the Layer 2 fabric. Below is the interfaces.yml file:

---
- hosts: all
  remote_user: cumulus
  gather_facts: no
  become: yes
  vars:
    ansible_become_pass: "CumulusLinux!"
    spine_interfaces:
      - { clag: bond1, desc: downlink-leaf, clagid: 1, port: swp1 swp2 }
    spine_bridge_ports: "peerlink bond1"
    bridge_vlans: "100-199"
    spine_vrf: "vrf-prod"
    spine_bridge:
      - { desc: web, vlan: 100, address: "{{ vlan100_address }}", address_virtual: "00:00:5e:00:01:00 10.1.0.254/24", vrf: vrf-prod }
      - { desc: app, vlan: 101, address: "{{ vlan101_address }}", address_virtual: "00:00:5e:00:01:01 10.1.1.254/24", vrf: vrf-prod }
      - { desc: db, vlan: 102, address: "{{ vlan102_address }}", address_virtual: "00:00:5e:00:01:02 10.1.2.254/24", vrf: vrf-prod }
    leaf_interfaces:
      - { clag: bond1, desc: uplink-spine, clagid: 1, port: swp1 swp2 }
    leaf_access_interfaces:
      - { desc: web-server, vlan: 100, port: swp3 }
      - { desc: app-server, vlan: 101, port: swp4 }
      - { desc: db-server, vlan: 102, port: swp5 }
    leaf_bridge_ports: "bond1 swp3 swp4 swp5"
  handlers:
    - name: ifreload
      command: ifreload -a
  tasks:
    - name: deploys spine interface configuration
      template: src=templates/spine_interfaces.j2 dest=/etc/network/interfaces
      when: "'spine' in group_names"
      notify: ifreload
    - name: deploys leaf interface configuration
      template: src=templates/leaf_interfaces.j2 dest=/etc/network/interfaces
      when: "'leaf' in group_names"
      notify: ifreload

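The vlan100_address, vlan101_address and vlan102_address variables are again per host. A minimal host_vars sketch for spine-1, with the addresses taken from the interface output further down (the file layout is my assumption):

# host_vars/spine-1 -- assumed layout, addresses from the "net show int" output below
vlan100_address: 10.1.0.252/24
vlan101_address: 10.1.1.252/24
vlan102_address: 10.1.2.252/24
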
I use Jinja2 templates for the interfaces configuration.
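
The templates are not part of this post, but here is a minimal sketch of what spine_interfaces.j2 could look like, built from the variables above; the exact stanzas are my assumption, using standard ifupdown2 syntax, and the peerlink/CLAG stanzas are omitted:

{# spine_interfaces.j2 -- minimal sketch, my assumption #}
{% for bond in spine_interfaces %}
auto {{ bond.clag }}
iface {{ bond.clag }}
    alias {{ bond.desc }}
    bond-slaves {{ bond.port }}
    clag-id {{ bond.clagid }}
{% endfor %}

auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports {{ spine_bridge_ports }}
    bridge-vids {{ bridge_vlans }}

auto {{ spine_vrf }}
iface {{ spine_vrf }}
    vrf-table auto

{% for svi in spine_bridge %}
auto bridge.{{ svi.vlan }}
iface bridge.{{ svi.vlan }}
    alias {{ svi.desc }}
    address {{ svi.address }}
    address-virtual {{ svi.address_virtual }}
    vrf {{ svi.vrf }}
{% endfor %}

The leaf template would follow the same pattern with the leaf_interfaces, leaf_access_interfaces and leaf_bridge_ports variables.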

Here is the output from the Ansible Playbook, which only takes a few seconds to run:

[root@ansible cumulus]$ ansible-playbook interfaces.yml -i hosts

PLAY [all] *********************************************************************

TASK [deploys spine interface configuration] ***********************************
skipping: [leaf-2]
skipping: [leaf-1]
changed: [spine-2]
changed: [spine-1]

TASK [deploys leaf interface configuration] ************************************
skipping: [spine-1]
skipping: [spine-2]
changed: [leaf-1]
changed: [leaf-2]

RUNNING HANDLER [ifreload] *****************************************************
changed: [leaf-2]
changed: [leaf-1]
changed: [spine-2]
changed: [spine-1]

PLAY RECAP *********************************************************************
leaf-1                     : ok=2    changed=2    unreachable=0    failed=0
leaf-2                     : ok=2    changed=2    unreachable=0    failed=0
spine-1                    : ok=2    changed=2    unreachable=0    failed=0
spine-2                    : ok=2    changed=2    unreachable=0    failed=0

[root@ansible cumulus]$

Let’s quickly verify the configuration:

cumulus@spine-1:~$ net show int

    Name           Master    Speed      MTU  Mode           Remote Host    Remote Port    Summary
--  -------------  --------  -------  -----  -------------  -------------  -------------  ---------------------------------
UP  lo             None      N/A      65536  Loopback                                     IP: 127.0.0.1/8, ::1/128
UP  eth0           None      1G        1500  Mgmt           cumulus        eth0           IP: 192.168.100.205/24
UP  bond1          bridge    2G        1500  Bond/Trunk                                   Bond Members: swp1(UP), swp2(UP)
UP  bridge         None      N/A       1500  Bridge/L2                                    Untagged Members: bond1, peerlink
UP  bridge-100-v0  vrf-prod  N/A       1500  Interface/L3                                 IP: 10.1.0.254/24
UP  bridge-101-v0  vrf-prod  N/A       1500  Interface/L3                                 IP: 10.1.1.254/24
UP  bridge-102-v0  vrf-prod  N/A       1500  Interface/L3                                 IP: 10.1.2.254/24
UP  bridge.100     vrf-prod  N/A       1500  SVI/L3                                       IP: 10.1.0.252/24
UP  bridge.101     vrf-prod  N/A       1500  SVI/L3                                       IP: 10.1.1.252/24
UP  bridge.102     vrf-prod  N/A       1500  SVI/L3                                       IP: 10.1.2.252/24
UP  peerlink       bridge    1G        1500  Bond/Trunk                                   Bond Members: swp11(UP)
UP  peerlink.4094  None      1G        1500  SubInt/L3                                    IP: 169.254.1.1/30
UP  vrf-prod       None      N/A      65536  NotConfigured

cumulus@spine-1:~$

cumulus@spine-1:~$ net show lldp

LocalPort    Speed    Mode        RemotePort    RemoteHost    Summary
-----------  -------  ----------  ------------  ------------  ----------------------
eth0         1G       Mgmt        eth0          cumulus       IP: 192.168.100.205/24
                                  ====          eth0          spine-2
                                  ====          eth0          leaf-1
                                  ====          eth0          leaf-2
swp1         1G       BondMember  swp1          leaf-1        Master: bond1(UP)
swp2         1G       BondMember  swp2          leaf-2        Master: bond1(UP)
swp11        1G       BondMember  swp11         spine-2       Master: peerlink(UP)
cumulus@spine-1:~$
cumulus@leaf-1:~$ net show int

    Name           Master    Speed      MTU  Mode        Remote Host    Remote Port    Summary
--  -------------  --------  -------  -----  ----------  -------------  -------------  --------------------------------
UP  lo             None      N/A      65536  Loopback                                  IP: 127.0.0.1/8, ::1/128
UP  eth0           None      1G        1500  Mgmt        cumulus        eth0           IP: 192.168.100.207/24
UP  swp3           bridge    1G        1500  Access/L2                                 Untagged VLAN: 100
UP  swp4           bridge    1G        1500  Access/L2                                 Untagged VLAN: 101
UP  swp5           bridge    1G        1500  Access/L2                                 Untagged VLAN: 102
UP  bond1          bridge    2G        1500  Bond/Trunk                                Bond Members: swp1(UP), swp2(UP)
UP  bridge         None      N/A       1500  Bridge/L2                                 Untagged Members: bond1, swp3-5
UP  peerlink       None      1G        1500  Bond                                      Bond Members: swp11(UP)
UP  peerlink.4093  None      1G        1500  SubInt/L3                                 IP: 169.254.1.1/30

cumulus@leaf-1:~$ net show lldp

LocalPort    Speed    Mode        RemotePort    RemoteHost    Summary
-----------  -------  ----------  ------------  ------------  ----------------------
eth0         1G       Mgmt        eth0          cumulus       IP: 192.168.100.207/24
                                  ====          eth0          spine-2
                                  ====          eth0          spine-1
                                  ====          eth0          leaf-2
swp1         1G       BondMember  swp1          spine-1       Master: bond1(UP)
swp2         1G       BondMember  swp1          spine-2       Master: bond1(UP)
swp11        1G       BondMember  swp11         leaf-2        Master: peerlink(UP)
cumulus@leaf-1:~$

As you can see, the configuration is correctly deployed and you can start testing.

The configuration for a real data centre fabric is, of course, more complex (multiple VRFs, SVIs and more elaborate routing), but with Ansible you can quickly deploy and manage hundreds of switches.

In one of the next posts, I will write an Ansible Playbook for a Layer 3 data centre fabric configuration using BGP and ECMP routing on Quagga.

Read my new post about an Ansible Playbook for Cumulus Linux BGP IP-Fabric and Cumulus NetQ Validation.