NetBox Open Source DCIM and IPAM tool

I want to share some information about an open source tool I found a while ago which helps you keep track of your infrastructure assets and configuration items. The tool is called NetBox and is a DCIM (data center infrastructure management) and IPAM (IP address management) tool. NetBox was started by the network engineering team at DigitalOcean, specifically to address the needs of network and infrastructure engineers.

We all know that documentation is something no one wants to do, and no one has time for. What makes NetBox interesting is that it not only focuses on infrastructure documentation with a clean web console, it also comes with a REST API to push changes programmatically, or to use NetBox as a dynamic inventory for Ansible.

Here are a few screenshots showing the look and feel of NetBox:

The rack overview:

The IPAM module:

Here is an example of how to add a device via the REST API. This is very useful if you use ZTP (zero touch provisioning) and want to add your switches or servers to NetBox automatically, or in your automation scripts when you deploy configurations:

[email protected]:~$ curl -X POST -H "Authorization: Token fde02a67ca0c248bf5695bbf5cd56975add33655" -H "Content-Type: application/json" -H "Accept: application/json; indent=4" http://localhost:80/api/dcim/devices/ --data '{ "name": "server-9", "display_name": "server-9", "device_type": 5, "device_role": 8, "site": 1 }'
{
    "id": 21,
    "name": "server-9",
    "device_type": 5,
    "device_role": 8,
    "tenant": null,
    "platform": null,
    "serial": "",
    "asset_tag": null,
    "site": 1,
    "rack": null,
    "position": null,
    "face": null,
    "status": 1,
    "primary_ip4": null,
    "primary_ip6": null,
    "cluster": null,
    "virtual_chassis": null,
    "vc_position": null,
    "vc_priority": null,
    "comments": "",
    "created": "2018-04-16",
    "last_updated": "2018-04-16T14:40:47.787862Z"
}
[email protected]:~$
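
You can also fetch the new device back via the API to verify it was created. A quick sketch using the same token and the device ID returned above (21 in this case):

curl -H "Authorization: Token fde02a67ca0c248bf5695bbf5cd56975add33655" -H "Accept: application/json; indent=4" http://localhost:80/api/dcim/devices/21/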

In the web console you can see the device I just added via the REST API:

On the main NetBox Github repository page you find links to an Ansible role and a Vagrant environment.

I am personally very interested in using NetBox as a dynamic inventory for Ansible and will write a separate article about this in the coming weeks.

Please share your feedback and leave a comment.

Getting started with Ansible AWX (Open Source Tower version)

Ansible released AWX a few weeks ago, an open source (community supported) version of their commercial Ansible Tower product. It is a web-based graphical interface to manage Ansible playbooks and inventories, and to schedule jobs that run playbooks.

You find the Github repository here: https://github.com/ansible/awx

Let’s start with the installation of Ansible AWX. It is very easy because everything is dockerized; see the install guide for more information.

Modify the inventory file under the installer folder and change the Postgres data folder, which is otherwise located under /tmp; also change the Postgres DB username and password if needed. I would recommend binding AWX to localhost and putting an Nginx reverse proxy with SSL encryption in front.

Changes in the inventory file:

postgres_data_dir=/var/lib/postgresql/data/
host_port=127.0.0.1:8052

Start the build of the Docker containers:

ansible-playbook -i inventory install.yml

After the Ansible playbook run completes, you see the following Docker containers:

[email protected]:~/awx/installer$ docker ps
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                NAMES
26a73c91cb04        ansible/awx_task:latest   "/tini -- /bin/sh ..."   2 days ago          Up 24 hours         8052/tcp                             awx_task
07774696a7f2        ansible/awx_web:latest    "/tini -- /bin/sh ..."   2 days ago          Up 24 hours         127.0.0.1:8052->8052/tcp             awx_web
981f4f02c759        memcached:alpine          "docker-entrypoint..."   2 days ago          Up 24 hours         11211/tcp                            memcached
4f4a3141b54d        rabbitmq:3                "docker-entrypoint..."   2 days ago          Up 24 hours         4369/tcp, 5671-5672/tcp, 25672/tcp   rabbitmq
faf07f7b4682        postgres:9.6              "docker-entrypoint..."   2 days ago          Up 24 hours         5432/tcp                             postgres
[email protected]:~/awx/installer$

Install Nginx:

sudo apt-get update
sudo apt-get install nginx
sudo rm /etc/nginx/sites-enabled/default

Create the Nginx vhost configuration:

sudo vi /etc/nginx/sites-available/awx
server {
    listen 443 ssl;
    server_name awx.domain.com;

    ssl on;
    ssl_certificate /etc/nginx/ssl/awx.domain.com-cert.pem;
    ssl_certificate_key /etc/nginx/ssl/awx.domain.com-key.pem;

    location / {
        proxy_pass http://127.0.0.1:8052;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Create a symlink in sites-enabled pointing to the awx config:

sudo ln -s /etc/nginx/sites-available/awx /etc/nginx/sites-enabled/awx

Reload Nginx to apply configuration:

sudo systemctl reload nginx

Afterwards you are able to log in with username “admin” and password “password”:

I created a simple job for testing with AWX. You first create a project, credentials and inventories. The project points to your Git repository:

Under the job you configure which project, credentials and inventories to use:

Once saved, you can manually trigger the job; it first pulls the latest playbook from your version control repository and then executes the configured Ansible playbook:
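
Besides the web console, you can also trigger the job via the AWX REST API. A rough sketch, assuming basic authentication is enabled and the job template has ID 5 (check /api/v2/job_templates/ for the actual ID and adjust the credentials):

curl -X POST -u admin:password http://127.0.0.1:8052/api/v2/job_templates/5/launch/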

The job details look very similar to running a playbook on the CLI:

Ansible AWX is a very useful tool if you need to manage different Ansible playbooks and do job scheduling, and you are not already using other tools like Jenkins or Gitlab-CI. But even then AWX is a good addition for running ad-hoc playbooks.

Cumulus Linux network simulation using Vagrant

I was using GNS3 for quite some time, but it was not very flexible if you quickly wanted to test something, and even more complicated if you used a different computer or wanted to share your projects.

I spent some time with Vagrant to build a virtual Cumulus Linux lab environment which can run on basically every computer. Simulating network environments is the future when you want to test and validate your automation scripts.

My lab diagram:

I created different topology.dot files and used the Cumulus topology converter from Github to create my lab with Virtualbox or Libvirt (KVM). I made some modifications to the initialisation scripts for the switches and the management server. You find everything in my Github repo: https://github.com/berndonline/cumulus-lab-vagrant.

The topology file basically defines your network and the converter creates the Vagrantfile.
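
To give an idea of the format, here is a minimal sketch of a topology.dot file. It follows the topology converter's DOT syntax, but the node attributes shown are only an illustration and may differ from the actual files in my repo:

graph network {
 "spine-1" [function="spine" os="CumulusCommunity/cumulus-vx" memory="512"]
 "leaf-1" [function="leaf" os="CumulusCommunity/cumulus-vx" memory="512"]
   "spine-1":"swp1" -- "leaf-1":"swp1"
}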

In the management topology file (topology-mgmt.dot) you have all servers (incl. management) as in the network diagram above. The Cumulus switches can only be accessed via the management server.

The next topology file is very similar to topology-mgmt.dot, but in this one the management server runs Cumulus NetQ, which you first need to import into Vagrant. Here is the link to the Cumulus NetQ demo on Github.

The staging topology file (topology-staging.dot) gives you a basic staging lab without servers, where you can access the Cumulus switches directly via their Vagrant IP. I mainly use this to quickly test something, like updating Cumulus switches or validating Ansible playbooks.

The production topology file gives you a basic production lab where you can also access the Cumulus switches directly via their Vagrant IP and have Cumulus NetQ as the management server.

To convert a topology into a Vagrantfile you basically just need to run the following command:

python topology_converter.py topology-staging.dot -p libvirt --ansible-hostfile

I use KVM in my example and want Vagrant to create an Ansible inventory file so that I can run playbooks directly against the switches.
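
Once the environment is up, you can run Ansible directly against the lab switches using the generated inventory. A hedged example, assuming the Vagrant Ansible provisioner writes its inventory to the default location and a playbook.yml exists in the repository:

ansible all -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory -m ping
ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml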

Check the status of the vagrant environment:

[email protected]:~/cumulus-lab-vagrant$ vagrant status
Current machine states:

spine-1                   not created (libvirt)
spine-2                   not created (libvirt)
leaf-1                    not created (libvirt)
leaf-3                    not created (libvirt)
leaf-2                    not created (libvirt)
leaf-4                    not created (libvirt)
mgmt-1                    not created (libvirt)
edge-2                    not created (libvirt)
edge-1                    not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
[email protected]:~/cumulus-lab-vagrant$

To start the devices run:

vagrant up

If you use the topology files with a management server, you need to start the management server first and then the management switch before you boot the rest of the switches:

vagrant up mgmt-server
vagrant up mgmt-1
vagrant up

The switches will pull some part of their configuration from the management server.

Output if you start the environment:

[email protected]:~/cumulus-lab-vagrant$ vagrant up spine-1
Bringing machine 'spine-1' up with 'libvirt' provider...
==> spine-1: Creating image (snapshot of base box volume).
==> spine-1: Creating domain with the following settings...
==> spine-1:  -- Name:              cumulus-lab-vagrant_spine-1
==> spine-1:  -- Domain type:       kvm
==> spine-1:  -- Cpus:              1
==> spine-1:  -- Feature:           acpi
==> spine-1:  -- Feature:           apic
==> spine-1:  -- Feature:           pae
==> spine-1:  -- Memory:            512M
==> spine-1:  -- Management MAC:
==> spine-1:  -- Loader:
==> spine-1:  -- Base box:          CumulusCommunity/cumulus-vx
==> spine-1:  -- Storage pool:      default
==> spine-1:  -- Image:             /var/lib/libvirt/images/cumulus-lab-vagrant_spine-1.img (4G)
==> spine-1:  -- Volume Cache:      default
==> spine-1:  -- Kernel:
==> spine-1:  -- Initrd:
==> spine-1:  -- Graphics Type:     vnc
==> spine-1:  -- Graphics Port:     5900
==> spine-1:  -- Graphics IP:       127.0.0.1
==> spine-1:  -- Graphics Password: Not defined
==> spine-1:  -- Video Type:        cirrus
==> spine-1:  -- Video VRAM:        9216
==> spine-1:  -- Sound Type:
==> spine-1:  -- Keymap:            en-us
==> spine-1:  -- TPM Path:
==> spine-1:  -- INPUT:             type=mouse, bus=ps2
==> spine-1: Creating shared folders metadata...
==> spine-1: Starting domain.
==> spine-1: Waiting for domain to get an IP address...
==> spine-1: Waiting for SSH to become available...
    spine-1:
    spine-1: Vagrant insecure key detected. Vagrant will automatically replace
    spine-1: this with a newly generated keypair for better security.
    spine-1:
    spine-1: Inserting generated public key within guest...
    spine-1: Removing insecure key from the guest if it's present...
    spine-1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> spine-1: Setting hostname...
==> spine-1: Configuring and enabling network interfaces...
....
==> spine-1: #################################
==> spine-1:   Running Switch Post Config (config_vagrant_switch.sh)
==> spine-1: #################################
==> spine-1:  ###Creating SSH keys for cumulus user ###
==> spine-1: #################################
==> spine-1:    Finished
==> spine-1: #################################
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: a0:00:00:00:00:21 --> eth0
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:30 --> swp1
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:04 --> swp2
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:26 --> swp3
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:0a --> swp4
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:22 --> swp51
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:0d --> swp52
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:10 --> swp53
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:23 --> swp54
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: Vagrant interface = eth1
==> spine-1: #### UDEV Rules (/etc/udev/rules.d/70-persistent-net.rules) ####
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="a0:00:00:00:00:21", NAME="eth0", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:30", NAME="swp1", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:04", NAME="swp2", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:26", NAME="swp3", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:0a", NAME="swp4", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:22", NAME="swp51", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:0d", NAME="swp52", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:10", NAME="swp53", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:23", NAME="swp54", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{ifindex}=="2", NAME="eth1", SUBSYSTEMS=="pci"
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1: ### RUNNING CUMULUS EXTRA CONFIG ###
==> spine-1:   INFO: Detected a 3.x Based Release
==> spine-1: ### Disabling default remap on Cumulus VX...
==> spine-1: ### Disabling ZTP service...
==> spine-1: Removed symlink /etc/systemd/system/multi-user.target.wants/ztp.service.
==> spine-1: ### Resetting ZTP to work next boot...
==> spine-1: Created symlink from /etc/systemd/system/multi-user.target.wants/ztp.service to /lib/systemd/system/ztp.service.
==> spine-1:   INFO: Detected Cumulus Linux v3.3.2 Release
==> spine-1: ### Fixing ONIE DHCP to avoid Vagrant Interface ###
==> spine-1:      Note: Installing from ONIE will undo these changes.
==> spine-1: ### Giving Vagrant User Ability to Run NCLU Commands ###
==> spine-1: ### DONE ###
==> spine-1: ### Rebooting Device to Apply Remap...

At the end you are able to connect to the Cumulus switch:

[email protected]:~/cumulus-lab-vagrant$ vagrant ssh spine-1

Welcome to Cumulus VX (TM)

Cumulus VX (TM) is a community supported virtual appliance designed for
experiencing, testing and prototyping Cumulus Networks' latest technology.
For any questions or technical support, visit our community site at:
http://community.cumulusnetworks.com

The registered trademark Linux (R) is used pursuant to a sublicense from LMI,
the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide
basis.
[email protected]:~$

To destroy the Vagrant environment:

[email protected]:~/cumulus-lab-vagrant$ vagrant destroy spine-1
==> spine-2: Remove stale volume...
==> spine-2: Domain is not created. Please run `vagrant up` first.
==> spine-1: Removing domain...

My goal is to adopt some NetDevOps practices and use them in networking (NetOps); I am currently working on a Continuous Integration and Delivery (CI/CD) pipeline for Cumulus Linux network environments. The Vagrant lab was one of the prerequisites to simulate changes before deploying them to production, but more will follow in my next blog post.

Read my new post about an Ansible Playbook for Cumulus Linux BGP IP-Fabric and Cumulus NetQ Validation.

Cumulus Networks NetQ telemetry-based validation system

I had some time to play around with the new NetQ tool from Cumulus which checks your Cumulus Linux switch fabric.

I did some testing with my Cumulus Layer 2 Fabric example: Ansible Playbook for Cumulus Linux (Layer 2 Fabric)

You need to download the NetQ VM from Cumulus as a VMware or VirtualBox template: here

It is a great tool to centrally check your Cumulus switches and keep a history of changes in your environment. NetQ can send out notifications about changes in your fabric, which is nice because you are always up to date on what is going on in your network.

Installing the NetQ agent on a Cumulus Linux switch:

[email protected]:~$ sudo apt-get update
[email protected]:~$ sudo apt-get install cumulus-netq -y

Configuring the NetQ Agent on a switch:

[email protected]:~$ sudo systemctl restart rsyslog
[email protected]:~$ netq add server 192.168.100.133
[email protected]:~$ netq agent restart

I will write a small Ansible playbook in the next few days to automate the agent installation and configuration.
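
As a rough preview, here is a minimal sketch of what such a playbook could look like, simply wrapping the commands from above (the host group name "cumulus" is an assumption):

---
- hosts: cumulus
  become: true
  tasks:
    - name: Install the NetQ agent
      apt:
        name: cumulus-netq
        state: present
        update_cache: yes

    - name: Restart rsyslog
      service:
        name: rsyslog
        state: restarted

    - name: Point the NetQ agent at the NetQ server
      command: netq add server 192.168.100.133

    - name: Restart the NetQ agent
      command: netq agent restart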

Connect to the Cumulus NetQ VM and check the agent connectivity:

[email protected]:~$ netq-shell

Welcome to Cumulus (R) NetQ Command Line Interface
TIP: Type `netq help` to get started.

[email protected]:/$ netq show agents
Node     Status    Sys Uptime    Agent Uptime
-------  --------  ------------  --------------
leaf-1   Fresh     1h ago        1h ago
leaf-2   Fresh     1h ago        1h ago
spine-1  Fresh     1h ago        1h ago
spine-2  Fresh     1h ago        1h ago
[email protected]:/$

Basic Show Commands:

[email protected]:/$ netq show clag
Matching CLAG session records are:
Node             Peer             SysMac            State Backup #Links #Dual Last Changed
---------------- ---------------- ----------------- ----- ------ ------ ----- --------------
leaf-1           leaf-2(P)        44:38:39:ff:40:93 up    up     1      1     8m ago
leaf-2(P)        leaf-1           44:38:39:ff:40:93 up    up     1      1     8m ago
spine-1(P)       spine-2          44:38:39:ff:40:94 up    up     1      1     8m ago
spine-2          spine-1(P)       44:38:39:ff:40:94 up    up     1      1     9m ago
[email protected]:/$
 
[email protected]:/$ netq show lldp
LLDP peer info for *:*
Node     Interface    LLDP Peer    Peer Int    Last Changed
-------  -----------  -----------  ----------  --------------
leaf-1   eth0         cumulus      eth0        1h ago
leaf-1   eth0         leaf-2       eth0        1h ago
leaf-1   eth0         spine-1      eth0        1h ago
leaf-1   eth0         spine-2      eth0        1h ago
leaf-1   swp1         spine-1      swp1        1h ago
leaf-1   swp11        leaf-2       swp11       9m ago
leaf-1   swp2         spine-2      swp1        1h ago
leaf-2   eth0         cumulus      eth0        1h ago
leaf-2   eth0         leaf-1       eth0        1h ago
leaf-2   eth0         spine-1      eth0        1h ago
leaf-2   eth0         spine-2      eth0        1h ago
leaf-2   swp1         spine-2      swp2        1h ago
leaf-2   swp11        leaf-1       swp11       8m ago
leaf-2   swp2         spine-1      swp2        1h ago
spine-1  eth0         cumulus      eth0        1h ago
spine-1  eth0         leaf-1       eth0        1h ago
spine-1  eth0         leaf-2       eth0        1h ago
spine-1  eth0         spine-2      eth0        1h ago
spine-1  swp1         leaf-1       swp1        1h ago
spine-1  swp11        spine-2      swp11       1h ago
spine-1  swp2         leaf-2       swp2        8m ago
spine-2  eth0         cumulus      eth0        1h ago
spine-2  eth0         leaf-1       eth0        1h ago
spine-2  eth0         leaf-2       eth0        1h ago
spine-2  eth0         spine-1      eth0        1h ago
spine-2  swp1         leaf-1       swp2        1h ago
spine-2  swp11        spine-1      swp11       1h ago
spine-2  swp2         leaf-2       swp1        8m ago
[email protected]:/$
 
[email protected]:/$ netq show interfaces type bond
Matching interface records are:
Node             Interface        Type     State Details                     Last Changed
---------------- ---------------- -------- ----- --------------------------- --------------
leaf-1           bond1            bond     up    Slave: swp1(spine-1:swp1),  10m ago
                                                 Slave: swp2(spine-2:swp1),
                                                 VLANs:  100-199, PVID: 1,
                                                 Master: bridge, MTU: 1500
leaf-1           peerlink         bond     up    Slave: swp11(leaf-2:swp11), 10m ago
                                                 VLANs: , PVID: 0,
                                                 Master: peerlink, MTU: 1500
leaf-2           bond1            bond     up    Slave: swp1(spine-2:swp2),  10m ago
                                                 Slave: swp2(spine-1:swp2),
                                                 VLANs:  100-199, PVID: 1,
                                                 Master: bridge, MTU: 1500
leaf-2           peerlink         bond     up    Slave: swp11(leaf-1:swp11), 10m ago
                                                 VLANs: , PVID: 0,
                                                 Master: peerlink, MTU: 1500
spine-1          bond1            bond     up    Slave: swp1(leaf-1:swp1),   10m ago
                                                 Slave: swp2(leaf-2:swp2),
                                                 VLANs:  100-199, PVID: 1,
                                                 Master: bridge, MTU: 1500
spine-1          peerlink         bond     up    Slave: swp11(spine-2:swp11) 1h ago
                                                 , VLANs:  100-199,
                                                 PVID: 1, Master: bridge,
                                                 MTU: 1500
spine-2          bond1            bond     up    Slave: swp1(leaf-1:swp2),   10m ago
                                                 Slave: swp2(leaf-2:swp1),
                                                 VLANs:  100-199, PVID: 1,
                                                 Master: bridge, MTU: 1500
spine-2          peerlink         bond     up    Slave: swp11(spine-1:swp11) 1h ago
                                                 , VLANs:  100-199,
                                                 PVID: 1, Master: bridge,
                                                 MTU: 1500
[email protected]:/$
 
[email protected]:/$ netq show ip routes
Matching IP route records are:
Origin Table            IP               Node             Nexthops                   Last Changed
------ ---------------- ---------------- ---------------- -------------------------- ----------------
1      default          169.254.1.0/30   leaf-1           peerlink.4093              11m ago
1      default          169.254.1.0/30   leaf-2           peerlink.4093              11m ago
1      default          169.254.1.0/30   spine-1          peerlink.4094              1h ago
1      default          169.254.1.0/30   spine-2          peerlink.4094              1h ago
1      default          169.254.1.1/32   leaf-1           peerlink.4093              11m ago
1      default          169.254.1.1/32   spine-1          peerlink.4094              1h ago
1      default          169.254.1.2/32   leaf-2           peerlink.4093              11m ago
1      default          169.254.1.2/32   spine-2          peerlink.4094              1h ago
1      default          192.168.100.0/24 leaf-1           eth0                       1h ago
1      default          192.168.100.0/24 leaf-2           eth0                       1h ago
1      default          192.168.100.0/24 spine-1          eth0                       1h ago
1      default          192.168.100.0/24 spine-2          eth0                       1h ago
1      default          192.168.100.205/ spine-1          eth0                       1h ago
                        32
1      default          192.168.100.206/ spine-2          eth0                       1h ago
                        32
1      default          192.168.100.207/ leaf-1           eth0                       1h ago
                        32
1      default          192.168.100.208/ leaf-2           eth0                       1h ago
                        32
0      vrf-prod         0.0.0.0/0        spine-1          Blackhole                  1h ago
0      vrf-prod         0.0.0.0/0        spine-2          Blackhole                  1h ago
1      vrf-prod         10.1.0.0/24      spine-1          bridge.100                 1h ago
1      vrf-prod         10.1.0.0/24      spine-2          bridge.100                 1h ago
1      vrf-prod         10.1.0.252/32    spine-1          bridge.100                 1h ago
1      vrf-prod         10.1.0.253/32    spine-2          bridge.100                 1h ago
1      vrf-prod         10.1.0.254/32    spine-1          bridge.100                 1h ago
1      vrf-prod         10.1.0.254/32    spine-2          bridge.100                 1h ago
1      vrf-prod         10.1.1.0/24      spine-1          bridge.101                 1h ago
1      vrf-prod         10.1.1.0/24      spine-2          bridge.101                 1h ago
1      vrf-prod         10.1.1.252/32    spine-1          bridge.101                 1h ago
1      vrf-prod         10.1.1.253/32    spine-2          bridge.101                 1h ago
1      vrf-prod         10.1.1.254/32    spine-1          bridge.101                 1h ago
1      vrf-prod         10.1.1.254/32    spine-2          bridge.101                 1h ago
1      vrf-prod         10.1.2.0/24      spine-1          bridge.102                 1h ago
1      vrf-prod         10.1.2.0/24      spine-2          bridge.102                 1h ago
1      vrf-prod         10.1.2.252/32    spine-1          bridge.102                 1h ago
1      vrf-prod         10.1.2.253/32    spine-2          bridge.102                 1h ago
1      vrf-prod         10.1.2.254/32    spine-1          bridge.102                 1h ago
1      vrf-prod         10.1.2.254/32    spine-2          bridge.102                 1h ago
[email protected]:/$

See Changes in Switch Fabric:

[email protected]:/$ netq leaf-1 show interfaces type bond
Matching interface records are:
Node             Interface        Type     State Details                     Last Changed
---------------- ---------------- -------- ----- --------------------------- --------------
leaf-1           bond1            bond     up    Slave: swp1(spine-1:swp1),  2s ago
                                                 Slave: swp2(spine-2:swp1),
                                                 VLANs:  100-199, PVID: 1,
                                                 Master: bridge, MTU: 1500
leaf-1           peerlink         bond     up    Slave: swp11(leaf-2:swp11), 21m ago
                                                 VLANs: , PVID: 0,
                                                 Master: peerlink, MTU: 1500
[email protected]:/$
 
[email protected]:~$ sudo ifdown bond1
[email protected]:~$
 
[email protected]:/$ netq leaf-1 show interfaces type bond
Matching interface records are:
Node             Interface        Type     State Details                     Last Changed
---------------- ---------------- -------- ----- --------------------------- --------------
leaf-1           peerlink         bond     up    Slave: swp11(leaf-2:swp11), 22m ago
                                                 VLANs: , PVID: 0,
                                                 Master: peerlink, MTU: 1500
[email protected]:/$
 
[email protected]:/$ netq leaf-1 show interfaces type bond changes
Matching interface records are:
Node             Interface        Type     State Details                     DbState Last Changed
---------------- ---------------- -------- ----- --------------------------- ------- --------------
leaf-1           bond1            bond     down  VLANs: , PVID: 0,           Del     21s ago
                                                 Master: bridge, MTU: 1500
leaf-1           bond1            bond     down  Slave: swp1(),              Add     21s ago
                                                 Slave: swp2(),
                                                 VLANs:  100-199, PVID: 1,
                                                 Master: bridge, MTU: 1500
leaf-1           bond1            bond     up    Slave: swp1(spine-1:swp1),  Add     1m ago
                                                 Slave: swp2(spine-2:swp1),
                                                 VLANs:  100-199, PVID: 1,
                                                 Master: bridge, MTU: 1500 

You can find more information in the Cumulus NetQ documentation: https://docs.cumulusnetworks.com/display/NETQ/NetQ

Ansible Semaphore

I spent a lot of time working with Ansible in the last weeks to automate the deployment of Cisco routers and Cumulus switches. (Waiting for Ansible 2.2 to support Cisco ASA devices...)
Ansible is a great tool, but if you have multiple YAML files and various roles it can get pretty messy, and it would be nice to have a central tool to trigger your tasks and structure your environment variables and inventories.

I found exactly this tool in Ansible Semaphore: https://github.com/ansible-semaphore/semaphore

The install is pretty easy, and Semaphore provides an API to trigger your tasks remotely.

You can create different projects and include your Ansible YAML files.

The source is a Git repository where your files are stored:

Here are your environment variables:

Inventory definition:

Finally you can execute your Ansible YAML files via the Web UI or API:

Have fun playing around with Semaphore 🙂