Cumulus Linux network simulation using Vagrant

I used GNS3 for quite some time, but it was not very flexible when I quickly wanted to test something, and it got even more complicated when I switched to a different computer or wanted to share my projects.

I spent some time with Vagrant building a virtual Cumulus Linux lab environment which can run on basically any computer. Simulating network environments is the future when it comes to testing and validating your automation scripts.

My lab diagram:

I created different topology.dot files and used the Cumulus topology converter on GitHub to build my lab with VirtualBox or libvirt (KVM). I made some modifications to the initialisation scripts for the switches and the management server. You can find everything in my GitHub repo: https://github.com/berndonline/cumulus-lab-vagrant.

The topology file basically defines your network, and the converter generates the Vagrantfile from it.
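
To give an idea of the format, a minimal topology file for two directly connected switches could look roughly like this (a sketch based on the topology converter examples, not one of my actual lab files):

graph network {
  "spine-1" [function="spine" os="CumulusCommunity/cumulus-vx" memory="512"]
  "leaf-1"  [function="leaf" os="CumulusCommunity/cumulus-vx" memory="512"]

  "spine-1":"swp1" -- "leaf-1":"swp1"
  "spine-1":"swp2" -- "leaf-1":"swp2"
}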

In the management topology file (topology-mgmt.dot) you have all the servers (including the management server), just like in the network diagram above. The Cumulus switches can only be accessed via the management server.

The next one is very similar to topology-mgmt.dot, but here the management server runs Cumulus NetQ, which you first need to import into Vagrant. Here is the link to the Cumulus NetQ demo on GitHub.

In this topology file you find a basic staging lab without servers, where you can access the Cumulus switches directly via their Vagrant IP. I mainly use this to quickly test something like updating Cumulus switches or validating Ansible playbooks.

In this topology file you find a basic production lab where you can also access the Cumulus switches directly via their Vagrant IP and have Cumulus NetQ as the management server.

To convert a topology into a Vagrantfile you just need to run the following command:

python topology_converter.py topology-staging.dot -p libvirt --ansible-hostfile

I use KVM (libvirt) in my example and want Vagrant to create an Ansible inventory file so that I can run playbooks directly against the switches.
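
Later, once the environment is up, this lets you run a quick connectivity check against the generated inventory. The inventory path below is the default location used by the Vagrant Ansible provisioner and may differ in your setup:

ansible all -m ping -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory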

Check the status of the vagrant environment:

berndonline@lab:~/cumulus-lab-vagrant$ vagrant status
Current machine states:

spine-1                   not created (libvirt)
spine-2                   not created (libvirt)
leaf-1                    not created (libvirt)
leaf-3                    not created (libvirt)
leaf-2                    not created (libvirt)
leaf-4                    not created (libvirt)
mgmt-1                    not created (libvirt)
edge-2                    not created (libvirt)
edge-1                    not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
berndonline@lab:~/cumulus-lab-vagrant$

To start the devices run:

vagrant up

If you use one of the topology files with a management server, you need to start the management server first and then the management switch before you boot the rest of the switches:

vagrant up mgmt-server
vagrant up mgmt-1
vagrant up

The switches will pull part of their configuration from the management server.

Output when you start the environment:

berndonline@lab:~/cumulus-lab-vagrant$ vagrant up spine-1
Bringing machine 'spine-1' up with 'libvirt' provider...
==> spine-1: Creating image (snapshot of base box volume).
==> spine-1: Creating domain with the following settings...
==> spine-1:  -- Name:              cumulus-lab-vagrant_spine-1
==> spine-1:  -- Domain type:       kvm
==> spine-1:  -- Cpus:              1
==> spine-1:  -- Feature:           acpi
==> spine-1:  -- Feature:           apic
==> spine-1:  -- Feature:           pae
==> spine-1:  -- Memory:            512M
==> spine-1:  -- Management MAC:
==> spine-1:  -- Loader:
==> spine-1:  -- Base box:          CumulusCommunity/cumulus-vx
==> spine-1:  -- Storage pool:      default
==> spine-1:  -- Image:             /var/lib/libvirt/images/cumulus-lab-vagrant_spine-1.img (4G)
==> spine-1:  -- Volume Cache:      default
==> spine-1:  -- Kernel:
==> spine-1:  -- Initrd:
==> spine-1:  -- Graphics Type:     vnc
==> spine-1:  -- Graphics Port:     5900
==> spine-1:  -- Graphics IP:       127.0.0.1
==> spine-1:  -- Graphics Password: Not defined
==> spine-1:  -- Video Type:        cirrus
==> spine-1:  -- Video VRAM:        9216
==> spine-1:  -- Sound Type:
==> spine-1:  -- Keymap:            en-us
==> spine-1:  -- TPM Path:
==> spine-1:  -- INPUT:             type=mouse, bus=ps2
==> spine-1: Creating shared folders metadata...
==> spine-1: Starting domain.
==> spine-1: Waiting for domain to get an IP address...
==> spine-1: Waiting for SSH to become available...
    spine-1:
    spine-1: Vagrant insecure key detected. Vagrant will automatically replace
    spine-1: this with a newly generated keypair for better security.
    spine-1:
    spine-1: Inserting generated public key within guest...
    spine-1: Removing insecure key from the guest if it's present...
    spine-1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> spine-1: Setting hostname...
==> spine-1: Configuring and enabling network interfaces...
....
==> spine-1: #################################
==> spine-1:   Running Switch Post Config (config_vagrant_switch.sh)
==> spine-1: #################################
==> spine-1:  ###Creating SSH keys for cumulus user ###
==> spine-1: #################################
==> spine-1:    Finished
==> spine-1: #################################
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: a0:00:00:00:00:21 --> eth0
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:30 --> swp1
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:04 --> swp2
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:26 --> swp3
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:0a --> swp4
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:22 --> swp51
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:0d --> swp52
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:10 --> swp53
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:23 --> swp54
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: Vagrant interface = eth1
==> spine-1: #### UDEV Rules (/etc/udev/rules.d/70-persistent-net.rules) ####
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="a0:00:00:00:00:21", NAME="eth0", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:30", NAME="swp1", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:04", NAME="swp2", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:26", NAME="swp3", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:0a", NAME="swp4", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:22", NAME="swp51", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:0d", NAME="swp52", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:10", NAME="swp53", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:23", NAME="swp54", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{ifindex}=="2", NAME="eth1", SUBSYSTEMS=="pci"
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1: ### RUNNING CUMULUS EXTRA CONFIG ###
==> spine-1:   INFO: Detected a 3.x Based Release
==> spine-1: ### Disabling default remap on Cumulus VX...
==> spine-1: ### Disabling ZTP service...
==> spine-1: Removed symlink /etc/systemd/system/multi-user.target.wants/ztp.service.
==> spine-1: ### Resetting ZTP to work next boot...
==> spine-1: Created symlink from /etc/systemd/system/multi-user.target.wants/ztp.service to /lib/systemd/system/ztp.service.
==> spine-1:   INFO: Detected Cumulus Linux v3.3.2 Release
==> spine-1: ### Fixing ONIE DHCP to avoid Vagrant Interface ###
==> spine-1:      Note: Installing from ONIE will undo these changes.
==> spine-1: ### Giving Vagrant User Ability to Run NCLU Commands ###
==> spine-1: ### DONE ###
==> spine-1: ### Rebooting Device to Apply Remap...

At the end you are able to connect to the Cumulus switch:

berndonline@lab:~/cumulus-lab-vagrant$ vagrant ssh spine-1

Welcome to Cumulus VX (TM)

Cumulus VX (TM) is a community supported virtual appliance designed for
experiencing, testing and prototyping Cumulus Networks' latest technology.
For any questions or technical support, visit our community site at:
http://community.cumulusnetworks.com

The registered trademark Linux (R) is used pursuant to a sublicense from LMI,
the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide
basis.
vagrant@cumulus:~$

To destroy the Vagrant environment:

berndonline@lab:~/cumulus-lab-vagrant$ vagrant destroy spine-1
==> spine-2: Remove stale volume...
==> spine-2: Domain is not created. Please run `vagrant up` first.
==> spine-1: Removing domain...

My goal is to adopt NetDevOps practices and apply them to networking (NetOps). I am currently working on a Continuous Integration and Delivery (CI/CD) pipeline for Cumulus Linux network environments. The Vagrant lab was one of the prerequisites for simulating changes before deploying them to production, but more about that will follow in my next blog post.

Read my new post about an Ansible Playbook for Cumulus Linux BGP IP-Fabric and Cumulus NetQ Validation.

Cumulus Networks NetQ telemetry-based validation system

I had some time to play around with the new NetQ tool from Cumulus, which checks your Cumulus Linux switch fabric.

I did some testing with my Cumulus Layer 2 Fabric example: Ansible Playbook for Cumulus Linux (Layer 2 Fabric)

You need to download the NetQ VM from Cumulus as a VMware or VirtualBox template: here

It is a great tool to centrally check your Cumulus switches and keep a history of changes in your environment. NetQ can send out notifications about changes in your fabric, which is nice because you always stay up to date on what is going on in your network.

Installing the NetQ agent on a Cumulus Linux switch:

cumulus@spine-1:~$ sudo apt-get update
cumulus@spine-1:~$ sudo apt-get install cumulus-netq -y

Configuring the NetQ Agent on a switch:

cumulus@spine-1:~$ sudo systemctl restart rsyslog
cumulus@spine-1:~$ netq add server 192.168.100.133
cumulus@spine-1:~$ netq agent restart

I will write a small Ansible playbook in the coming days to automate the agent installation and configuration.
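
Until then, a minimal playbook for the steps above could look something like this (the host group name is an assumption, and the NetQ server IP is the one from my lab):

---
- name: install and configure the NetQ agent
  hosts: switches
  become: true
  tasks:
    - name: install the NetQ agent package
      apt:
        name: cumulus-netq
        state: present
        update_cache: yes

    - name: restart rsyslog
      service:
        name: rsyslog
        state: restarted

    - name: point the agent at the NetQ server
      command: netq add server 192.168.100.133

    - name: restart the NetQ agent
      command: netq agent restart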

Connect to the Cumulus NetQ VM and check the agent connectivity:

admin@cumulus:~$ netq-shell

Welcome to Cumulus (R) NetQ Command Line Interface
TIP: Type `netq help` to get started.

netq@dc9163c7044e:/$ netq show agents
Node     Status    Sys Uptime    Agent Uptime
-------  --------  ------------  --------------
leaf-1   Fresh     1h ago        1h ago
leaf-2   Fresh     1h ago        1h ago
spine-1  Fresh     1h ago        1h ago
spine-2  Fresh     1h ago        1h ago
netq@dc9163c7044e:/$

Basic Show Commands:

netq@dc9163c7044e:/$ netq show clag
Matching CLAG session records are:
Node             Peer             SysMac            State Backup #Links #Dual Last Changed
---------------- ---------------- ----------------- ----- ------ ------ ----- --------------
leaf-1           leaf-2(P)        44:38:39:ff:40:93 up    up     1      1     8m ago
leaf-2(P)        leaf-1           44:38:39:ff:40:93 up    up     1      1     8m ago
spine-1(P)       spine-2          44:38:39:ff:40:94 up    up     1      1     8m ago
spine-2          spine-1(P)       44:38:39:ff:40:94 up    up     1      1     9m ago
netq@dc9163c7044e:/$
 
netq@dc9163c7044e:/$ netq show lldp
LLDP peer info for *:*
Node     Interface    LLDP Peer    Peer Int    Last Changed
-------  -----------  -----------  ----------  --------------
leaf-1   eth0         cumulus      eth0        1h ago
leaf-1   eth0         leaf-2       eth0        1h ago
leaf-1   eth0         spine-1      eth0        1h ago
leaf-1   eth0         spine-2      eth0        1h ago
leaf-1   swp1         spine-1      swp1        1h ago
leaf-1   swp11        leaf-2       swp11       9m ago
leaf-1   swp2         spine-2      swp1        1h ago
leaf-2   eth0         cumulus      eth0        1h ago
leaf-2   eth0         leaf-1       eth0        1h ago
leaf-2   eth0         spine-1      eth0        1h ago
leaf-2   eth0         spine-2      eth0        1h ago
leaf-2   swp1         spine-2      swp2        1h ago
leaf-2   swp11        leaf-1       swp11       8m ago
leaf-2   swp2         spine-1      swp2        1h ago
spine-1  eth0         cumulus      eth0        1h ago
spine-1  eth0         leaf-1       eth0        1h ago
spine-1  eth0         leaf-2       eth0        1h ago
spine-1  eth0         spine-2      eth0        1h ago
spine-1  swp1         leaf-1       swp1        1h ago
spine-1  swp11        spine-2      swp11       1h ago
spine-1  swp2         leaf-2       swp2        8m ago
spine-2  eth0         cumulus      eth0        1h ago
spine-2  eth0         leaf-1       eth0        1h ago
spine-2  eth0         leaf-2       eth0        1h ago
spine-2  eth0         spine-1      eth0        1h ago
spine-2  swp1         leaf-1       swp2        1h ago
spine-2  swp11        spine-1      swp11       1h ago
spine-2  swp2         leaf-2       swp1        8m ago
netq@dc9163c7044e:/$
 
netq@dc9163c7044e:/$ netq show interfaces type bond
Matching interface records are:
Node             Interface        Type     State Details                     Last Changed
---------------- ---------------- -------- ----- --------------------------- --------------
leaf-1           bond1            bond     up    Slave: swp1(spine-1:swp1),  10m ago
                                                 Slave: swp2(spine-2:swp1),
                                                 VLANs:  100-199, PVID: 1,
                                                 Master: bridge, MTU: 1500
leaf-1           peerlink         bond     up    Slave: swp11(leaf-2:swp11), 10m ago
                                                 VLANs: , PVID: 0,
                                                 Master: peerlink, MTU: 1500
leaf-2           bond1            bond     up    Slave: swp1(spine-2:swp2),  10m ago
                                                 Slave: swp2(spine-1:swp2),
                                                 VLANs:  100-199, PVID: 1,
                                                 Master: bridge, MTU: 1500
leaf-2           peerlink         bond     up    Slave: swp11(leaf-1:swp11), 10m ago
                                                 VLANs: , PVID: 0,
                                                 Master: peerlink, MTU: 1500
spine-1          bond1            bond     up    Slave: swp1(leaf-1:swp1),   10m ago
                                                 Slave: swp2(leaf-2:swp2),
                                                 VLANs:  100-199, PVID: 1,
                                                 Master: bridge, MTU: 1500
spine-1          peerlink         bond     up    Slave: swp11(spine-2:swp11) 1h ago
                                                 , VLANs:  100-199,
                                                 PVID: 1, Master: bridge,
                                                 MTU: 1500
spine-2          bond1            bond     up    Slave: swp1(leaf-1:swp2),   10m ago
                                                 Slave: swp2(leaf-2:swp1),
                                                 VLANs:  100-199, PVID: 1,
                                                 Master: bridge, MTU: 1500
spine-2          peerlink         bond     up    Slave: swp11(spine-1:swp11) 1h ago
                                                 , VLANs:  100-199,
                                                 PVID: 1, Master: bridge,
                                                 MTU: 1500
netq@dc9163c7044e:/$
 
netq@dc9163c7044e:/$ netq show ip routes
Matching IP route records are:
Origin Table            IP               Node             Nexthops                   Last Changed
------ ---------------- ---------------- ---------------- -------------------------- ----------------
1      default          169.254.1.0/30   leaf-1           peerlink.4093              11m ago
1      default          169.254.1.0/30   leaf-2           peerlink.4093              11m ago
1      default          169.254.1.0/30   spine-1          peerlink.4094              1h ago
1      default          169.254.1.0/30   spine-2          peerlink.4094              1h ago
1      default          169.254.1.1/32   leaf-1           peerlink.4093              11m ago
1      default          169.254.1.1/32   spine-1          peerlink.4094              1h ago
1      default          169.254.1.2/32   leaf-2           peerlink.4093              11m ago
1      default          169.254.1.2/32   spine-2          peerlink.4094              1h ago
1      default          192.168.100.0/24 leaf-1           eth0                       1h ago
1      default          192.168.100.0/24 leaf-2           eth0                       1h ago
1      default          192.168.100.0/24 spine-1          eth0                       1h ago
1      default          192.168.100.0/24 spine-2          eth0                       1h ago
1      default          192.168.100.205/ spine-1          eth0                       1h ago
                        32
1      default          192.168.100.206/ spine-2          eth0                       1h ago
                        32
1      default          192.168.100.207/ leaf-1           eth0                       1h ago
                        32
1      default          192.168.100.208/ leaf-2           eth0                       1h ago
                        32
0      vrf-prod         0.0.0.0/0        spine-1          Blackhole                  1h ago
0      vrf-prod         0.0.0.0/0        spine-2          Blackhole                  1h ago
1      vrf-prod         10.1.0.0/24      spine-1          bridge.100                 1h ago
1      vrf-prod         10.1.0.0/24      spine-2          bridge.100                 1h ago
1      vrf-prod         10.1.0.252/32    spine-1          bridge.100                 1h ago
1      vrf-prod         10.1.0.253/32    spine-2          bridge.100                 1h ago
1      vrf-prod         10.1.0.254/32    spine-1          bridge.100                 1h ago
1      vrf-prod         10.1.0.254/32    spine-2          bridge.100                 1h ago
1      vrf-prod         10.1.1.0/24      spine-1          bridge.101                 1h ago
1      vrf-prod         10.1.1.0/24      spine-2          bridge.101                 1h ago
1      vrf-prod         10.1.1.252/32    spine-1          bridge.101                 1h ago
1      vrf-prod         10.1.1.253/32    spine-2          bridge.101                 1h ago
1      vrf-prod         10.1.1.254/32    spine-1          bridge.101                 1h ago
1      vrf-prod         10.1.1.254/32    spine-2          bridge.101                 1h ago
1      vrf-prod         10.1.2.0/24      spine-1          bridge.102                 1h ago
1      vrf-prod         10.1.2.0/24      spine-2          bridge.102                 1h ago
1      vrf-prod         10.1.2.252/32    spine-1          bridge.102                 1h ago
1      vrf-prod         10.1.2.253/32    spine-2          bridge.102                 1h ago
1      vrf-prod         10.1.2.254/32    spine-1          bridge.102                 1h ago
1      vrf-prod         10.1.2.254/32    spine-2          bridge.102                 1h ago
netq@dc9163c7044e:/$

See Changes in Switch Fabric:

netq@dc9163c7044e:/$ netq leaf-1 show interfaces type bond
Matching interface records are:
Node             Interface        Type     State Details                     Last Changed
---------------- ---------------- -------- ----- --------------------------- --------------
leaf-1           bond1            bond     up    Slave: swp1(spine-1:swp1),  2s ago
                                                 Slave: swp2(spine-2:swp1),
                                                 VLANs:  100-199, PVID: 1,
                                                 Master: bridge, MTU: 1500
leaf-1           peerlink         bond     up    Slave: swp11(leaf-2:swp11), 21m ago
                                                 VLANs: , PVID: 0,
                                                 Master: peerlink, MTU: 1500
netq@dc9163c7044e:/$
 
cumulus@leaf-1:~$ sudo ifdown bond1
cumulus@leaf-1:~$
 
netq@dc9163c7044e:/$ netq leaf-1 show interfaces type bond
Matching interface records are:
Node             Interface        Type     State Details                     Last Changed
---------------- ---------------- -------- ----- --------------------------- --------------
leaf-1           peerlink         bond     up    Slave: swp11(leaf-2:swp11), 22m ago
                                                 VLANs: , PVID: 0,
                                                 Master: peerlink, MTU: 1500
netq@dc9163c7044e:/$
 
netq@dc9163c7044e:/$ netq leaf-1 show interfaces type bond changes
Matching interface records are:
Node             Interface        Type     State Details                     DbState Last Changed
---------------- ---------------- -------- ----- --------------------------- ------- --------------
leaf-1           bond1            bond     down  VLANs: , PVID: 0,           Del     21s ago
                                                 Master: bridge, MTU: 1500
leaf-1           bond1            bond     down  Slave: swp1(),              Add     21s ago
                                                 Slave: swp2(),
                                                 VLANs:  100-199, PVID: 1,
                                                 Master: bridge, MTU: 1500
leaf-1           bond1            bond     up    Slave: swp1(spine-1:swp1),  Add     1m ago
                                                 Slave: swp2(spine-2:swp1),
                                                 VLANs:  100-199, PVID: 1,
                                                 Master: bridge, MTU: 1500 

You can find more information in the Cumulus NetQ documentation: https://docs.cumulusnetworks.com/display/NETQ/NetQ

Ansible Semaphore

I have spent a lot of time working with Ansible over the last weeks to automate the deployment of Cisco routers and Cumulus switches. (I am still waiting for Ansible 2.2 to support Cisco ASA devices.)
Ansible is a great tool, but if you have multiple YAML files and various roles it can get pretty messy, and it would be nice to have a central tool to trigger your tasks and to structure your environment variables and inventories.

I found exactly such a tool in Ansible Semaphore: https://github.com/ansible-semaphore/semaphore

The installation is pretty easy, and Semaphore provides an API to trigger your tasks remotely.

You can create different projects and include your Ansible YAML files.

The source is a Git repository where your files are stored:

Here are your environment variables:

Inventory definition:

Finally, you can execute your Ansible YAML files via the web UI or the API:

Have fun playing around with Semaphore 🙂

Cisco IOS automation with Ansible

It has been a long time since I wrote my last post; I have been pretty busy with work, redesigning the data centres for my employer. We are also implementing a software-defined network (SDN) with VMware NSX, but more about that later.

A few weeks ago Ansible released new core modules which allow you to push configuration directly to Cisco IOS devices. You can find more information here: https://docs.ansible.com/ansible/list_of_network_modules.html

I created a small automation lab in GNS3 to test the deployment of configs via Ansible to the two Cisco routers you see below. I am running VMware Fusion and used the vmnet2 (192.168.100.0/24) network for management, because that is where my CentOS VM runs from which I deploy the configuration.

Don’t forget that you need to pre-configure your Cisco routers so that you can connect via SSH to deploy the configuration.
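
A minimal SSH pre-configuration could look like this (the domain name is just a placeholder; the username, password and enable secret match the variables in group_vars/all.yml further down):

conf t
 hostname rtr01
 ip domain-name lab.local
 username ansible password cisco
 enable secret cisco
 crypto key generate rsa modulus 2048
 ip ssh version 2
 line vty 0 4
  login local
  transport input ssh
 end
write memory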

Here is the folder and file structure of my Ansible example. Under roles you find the different tasks I would like to execute, common and logging, as well as the dependency writecfg, which saves the running-config to the startup-config:

site.yml
hosts
group_vars/all.yml
roles/common/meta/main.yml
roles/common/tasks/main.yml
roles/common/templates/common.j2
roles/logging/meta/main.yml
roles/logging/tasks/main.yml
roles/logging/templates/common.j2
roles/writecfg/handlers/main.yml

The site.yml is the main playbook which I execute with Ansible; it includes the different roles for the common and logging configuration:

- name: Cisco baseline configuration
  connection: local
  hosts: ios 
  gather_facts: false

  roles:
    - role: common
      tags: common
    - role: logging
      tags: logging

In the hosts file, I define the hostnames and IP addresses of my IOS devices:

[ios]
rtr01 device_ip=192.168.100.130
rtr02 device_ip=192.168.100.132

The file group_vars/all.yml defines the variables which are used when the playbook is executed:

---
username: "ansible"
password: "cisco"
secret: "cisco"
logserver: 192.168.100.131

Under roles/../meta/main.yml I set a dependency on the writecfg role, whose handler saves the configuration whenever a task changes anything on the device.
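
As a sketch, such a meta/main.yml only needs a role dependency (not necessarily the exact file from my repo):

---
dependencies:
  - { role: writecfg }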

Under roles/../tasks/main.yml I define the module which I want to execute and the template I would like to deploy.
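
A sketch of such a task using the ios_config module (the connection parameters follow the Ansible 2.x core network modules; my actual file may differ slightly):

---
- name: ensure common configuration exists
  ios_config:
    src: common.j2
    host: "{{ device_ip }}"
    username: "{{ username }}"
    password: "{{ password }}"
    authorize: yes
    auth_pass: "{{ secret }}"
  notify: write config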

Under roles/../templates/.. you find the Jinja2 template files which contain the actual IOS commands.
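
As an illustration, the templates could contain baseline commands like these (not my exact templates; the logging template uses the logserver variable from group_vars/all.yml):

! common.j2
hostname {{ inventory_hostname }}
no ip domain-lookup
service timestamps log datetime msec
! logging template
logging host {{ logserver }}
logging trap informational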

Under roles/writecfg/handlers/main.yml is the handler that the two roles common and logging notify to save the configuration if something has changed on the router.
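
A sketch of the handler itself, using the ios_command module to save the running-config (again with the Ansible 2.x style connection parameters):

---
- name: write config
  ios_command:
    commands:
      - write memory
    host: "{{ device_ip }}"
    username: "{{ username }}"
    password: "{{ password }}"
    authorize: yes
    auth_pass: "{{ secret }}"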

To execute the cisco-baseline playbook, just run the following command and check the result:

[user@desktop cisco-baseline]$ ansible-playbook site.yml -i hosts

PLAY [Ensure basic configuration of switches] **********************************

TASK [common : ensure common configuration exists] *****************************
ok: [rtr02]
ok: [rtr01]

TASK [logging : ensure logging configuration exists] ***************************
changed: [rtr02]
changed: [rtr01]

RUNNING HANDLER [writecfg : write config] **************************************
ok: [rtr01]
ok: [rtr02]

PLAY RECAP *********************************************************************
rtr01                      : ok=3    changed=1    unreachable=0    failed=0
rtr02                      : ok=3    changed=1    unreachable=0    failed=0

[user@desktop cisco-baseline]$

Read my new posts about Ansible Playbook for Cisco ASAv Firewall Topology or Ansible Playbook for Cisco BGP Routing Topology.

Uptime – simple http monitoring utility

I found a very interesting HTTP monitoring tool called Uptime, built on Node.js and MongoDB. I installed Uptime straight away on one of my Linux servers, and at first glance I find it really cool 🙂 Before you start, you need to get Node.js and MongoDB installed on your server; the rest is then very easy.
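
A rough outline of the installation, assuming you install from the project's Git repository (check the README for the exact steps and the default web UI port):

git clone https://github.com/fzaninotto/uptime.git
cd uptime
npm install
node app.js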

Once Uptime is running, you can access the web interface and create your first checks. Here are some screenshots:

Here you create your HTTP checks and define some settings:

Detailed check overview with graphs:

If you are interested, have a look here: http://fzaninotto.github.com/uptime/