Cisco ASAv network simulation using Vagrant

After creating IOSv and IOS XE Vagrant images, I am now doing the same for the Cisco ASAv. Like in my last post, the basic idea is to create a simulated, on-demand network environment for continuous integration testing.

You need to buy the Cisco ASAv to get access to the KVM image on the Cisco website!

The Cisco ASAv is pretty easy because you can get QCOW2 images directly from the Cisco website, but there are a few changes you need to make before you can use the image together with Vagrant.

Boot the ASAv QCOW2 image on KVM and add the configuration below:

conf t
interface Management0/0
 nameif management
 security-level 0
 ip address dhcp
 no shutdown
 exit

hostname asa
domain-name lab.local
username vagrant password vagrant privilege 15
aaa authentication ssh console LOCAL
aaa authorization exec LOCAL auto-enable
ssh version 2
ssh timeout 60
ssh key-exchange group dh-group14-sha1
ssh 0 0 management

username vagrant attributes
  service-type admin
  ssh authentication publickey AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ==

Now the image is ready to use with Vagrant. Create an instance folder under the user's Vagrant directory, copy the QCOW2 image into it, and create a metadata.json file in the same folder:

mkdir -p ~/.vagrant.d/boxes/asav/0/libvirt/
cp ASAv.qcow2 ~/.vagrant.d/boxes/asav/0/libvirt/box.img
printf '{"provider":"libvirt","format":"qcow2","virtual_size":2}' > metadata.json

Create a Vagrantfile with the needed configuration and boot up the VMs; in this case you have to start them one by one.
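
As a rough sketch of what such a Vagrantfile could look like with the vagrant-libvirt provider (the box name matches the asav box registered above; memory, CPU count and the playbook name are assumptions for illustration and not necessarily what the repository linked at the end contains):

Vagrant.configure("2") do |config|
  config.vm.box = "asav"                              # box registered under ~/.vagrant.d/boxes/asav
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.ssh.insert_key = false                       # keep the Vagrant public key baked into the image

  ["asa-1", "asa-2"].each do |name|
    config.vm.define name do |node|
      node.vm.provider :libvirt do |domain|
        domain.memory = 2048
        domain.cpus = 1
      end
      node.vm.provision "ansible" do |ansible|
        ansible.playbook = "provision.yml"            # hypothetical playbook name
      end
    end
  end
end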

berndonline@lab:~/asa-lab-vagrant$ vagrant status
Current machine states:

asa-1                     not created (libvirt)
asa-2                     not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
berndonline@lab:~/asa-lab-vagrant$ vagrant up asa-1
Bringing machine 'asa-1' up with 'libvirt' provider...
==> asa-1: Creating image (snapshot of base box volume).
==> asa-1: Creating domain with the following settings...
==> asa-1:  -- Name:              asa-lab-vagrant_asa-1
==> asa-1:  -- Domain type:       kvm
==> asa-1:  -- Cpus:              1
==> asa-1:  -- Feature:           acpi
==> asa-1:  -- Feature:           apic
==> asa-1:  -- Feature:           pae
==> asa-1:  -- Memory:            2048M
==> asa-1:  -- Management MAC:
==> asa-1:  -- Loader:
==> asa-1:  -- Base box:          asav
==> asa-1:  -- Storage pool:      default
==> asa-1:  -- Image:             /var/lib/libvirt/images/asa-lab-vagrant_asa-1.img (8G)
==> asa-1:  -- Volume Cache:      default
==> asa-1:  -- Kernel:
==> asa-1:  -- Initrd:
==> asa-1:  -- Graphics Type:     vnc
==> asa-1:  -- Graphics Port:     5900
==> asa-1:  -- Graphics IP:       127.0.0.1
==> asa-1:  -- Graphics Password: Not defined
==> asa-1:  -- Video Type:        cirrus
==> asa-1:  -- Video VRAM:        9216
==> asa-1:  -- Sound Type:
==> asa-1:  -- Keymap:            en-us
==> asa-1:  -- TPM Path:
==> asa-1:  -- INPUT:             type=mouse, bus=ps2
==> asa-1: Creating shared folders metadata...
==> asa-1: Starting domain.
==> asa-1: Waiting for domain to get an IP address...
==> asa-1: Waiting for SSH to become available...
==> asa-1: Configuring and enabling network interfaces...
    asa-1: SSH address: 10.255.1.238:22
    asa-1: SSH username: vagrant
    asa-1: SSH auth method: private key
    asa-1: Warning: Connection refused. Retrying...
==> asa-1: Running provisioner: ansible...
    asa-1: Running ansible-playbook...

PLAY [all] *********************************************************************

PLAY RECAP *********************************************************************

berndonline@lab:~/asa-lab-vagrant$ vagrant up asa-2
Bringing machine 'asa-2' up with 'libvirt' provider...
==> asa-2: Creating image (snapshot of base box volume).
==> asa-2: Creating domain with the following settings...
==> asa-2:  -- Name:              asa-lab-vagrant_asa-2
==> asa-2:  -- Domain type:       kvm
==> asa-2:  -- Cpus:              1
==> asa-2:  -- Feature:           acpi
==> asa-2:  -- Feature:           apic
==> asa-2:  -- Feature:           pae
==> asa-2:  -- Memory:            2048M
==> asa-2:  -- Management MAC:
==> asa-2:  -- Loader:
==> asa-2:  -- Base box:          asav
==> asa-2:  -- Storage pool:      default
==> asa-2:  -- Image:             /var/lib/libvirt/images/asa-lab-vagrant_asa-2.img (8G)
==> asa-2:  -- Volume Cache:      default
==> asa-2:  -- Kernel:
==> asa-2:  -- Initrd:
==> asa-2:  -- Graphics Type:     vnc
==> asa-2:  -- Graphics Port:     5900
==> asa-2:  -- Graphics IP:       127.0.0.1
==> asa-2:  -- Graphics Password: Not defined
==> asa-2:  -- Video Type:        cirrus
==> asa-2:  -- Video VRAM:        9216
==> asa-2:  -- Sound Type:
==> asa-2:  -- Keymap:            en-us
==> asa-2:  -- TPM Path:
==> asa-2:  -- INPUT:             type=mouse, bus=ps2
==> asa-2: Creating shared folders metadata...
==> asa-2: Starting domain.
==> asa-2: Waiting for domain to get an IP address...
==> asa-2: Waiting for SSH to become available...
==> asa-2: Configuring and enabling network interfaces...
    asa-2: SSH address: 10.255.1.131:22
    asa-2: SSH username: vagrant
    asa-2: SSH auth method: private key
==> asa-2: Running provisioner: ansible...
    asa-2: Running ansible-playbook...

PLAY [all] *********************************************************************

PLAY RECAP *********************************************************************

berndonline@lab:~/asa-lab-vagrant$ vagrant status
Current machine states:

asa-1                     running (libvirt)
asa-2                     running (libvirt)

berndonline@lab:~/asa-lab-vagrant$

After the VMs are successfully booted you can connect with vagrant ssh:

berndonline@lab:~/asa-lab-vagrant$ vagrant ssh asa-1
Type help or '?' for a list of available commands.
asa# show version

Cisco Adaptive Security Appliance Software Version 9.6(2)
Device Manager Version 7.6(2)

Compiled on Tue 23-Aug-16 18:38 PDT by builders
System image file is "boot:/asa962-smp-k8.bin"
Config file at boot was "startup-config"

asa up 10 mins 31 secs

Hardware:   ASAv, 2048 MB RAM, CPU Xeon E5 series 3600 MHz,
Model Id:   ASAv10
Internal ATA Compact Flash, 8192MB
Slot 1: ATA Compact Flash, 8192MB
BIOS Flash Firmware Hub @ 0x0, 0KB

....

Configuration has not been modified since last system restart.
asa# exit

Logoff

Connection to 10.255.1.238 closed by remote host.
Connection to 10.255.1.238 closed.
berndonline@lab:~/asa-lab-vagrant$ vagrant destroy
==> asa-2: Removing domain...
==> asa-2: Running triggers after destroy...
Removing known host entries
==> asa-1: Removing domain...
==> asa-1: Running triggers after destroy...
Removing known host entries
berndonline@lab:~/asa-lab-vagrant$

Here I have a virtual ASAv environment which I can spin up and tear down as needed for automation testing.

You can find the example Vagrantfile in my GitHub repository:

https://github.com/berndonline/asa-lab-vagrant/blob/master/Vagrantfile

Read my new post about an Ansible Playbook for Cisco ASAv Firewall Topology

Cisco IOSv and XE network simulation using Vagrant

Here are some interesting things I did with on-demand network simulation of Cisco IOSv and IOS XE using Vagrant. Yes, Cisco has its own product for network simulation called Cisco VIRL (Cisco Virtual Internet Routing Lab), but it is not as flexible and on-demand as using Vagrant and KVM. One of the reasons was to do some continuous integration testing, the same as what I did with Cumulus Linux: Continuous Integration and Delivery for Networking with Cumulus Linux

You need to have an active Cisco VIRL subscription to download the VMDK images or buy the Cisco CSR1000V to get access to the ISO on the Cisco website!

IOS XE was the easiest because I found a GitHub repository to convert an existing CSR1000V ISO into a VirtualBox image to use with Vagrant. The only thing I needed to do was to convert the VirtualBox image to KVM (libvirt) using vagrant mutate.
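
The conversion step looks roughly like this (the box file and box name below are placeholders for whatever the ISO-to-vbox build produced; vagrant-mutate needs to be installed as a plugin first):

vagrant plugin install vagrant-mutate
vagrant box add --name iosxe csr1000v.box    # box file built from the CSR1000V ISO
vagrant mutate iosxe libvirt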

berndonline@lab:~/cisco-lab-vagrant$ vagrant status
Current machine states:

rtr-1                     not created (libvirt)
rtr-2                     not created (libvirt)

berndonline@lab:~/cisco-lab-vagrant$ vagrant up
Bringing machine 'rtr-1' up with 'libvirt' provider...
Bringing machine 'rtr-2' up with 'libvirt' provider...
==> rtr-1: Creating image (snapshot of base box volume).
==> rtr-2: Creating image (snapshot of base box volume).
==> rtr-1: Creating domain with the following settings...
==> rtr-1:  -- Name:              cisco-lab-vagrant_rtr-1
==> rtr-2: Creating domain with the following settings...
==> rtr-1:  -- Domain type:       kvm
==> rtr-2:  -- Name:              cisco-lab-vagrant_rtr-2
==> rtr-1:  -- Cpus:              1
==> rtr-2:  -- Domain type:       kvm
==> rtr-1:  -- Feature:           acpi
==> rtr-2:  -- Cpus:              1
==> rtr-2:  -- Feature:           acpi
==> rtr-2:  -- Feature:           apic
==> rtr-1:  -- Feature:           apic
==> rtr-2:  -- Feature:           pae
==> rtr-1:  -- Feature:           pae
==> rtr-2:  -- Memory:            2048M
==> rtr-2:  -- Management MAC:
==> rtr-2:  -- Loader:
==> rtr-1:  -- Memory:            2048M
==> rtr-2:  -- Base box:          iosxe

....

==> rtr-1: Waiting for SSH to become available...
==> rtr-2: Waiting for SSH to become available...
==> rtr-1: Configuring and enabling network interfaces...
==> rtr-2: Configuring and enabling network interfaces...
    rtr-1: SSH address: 10.255.1.84:22
    rtr-1: SSH username: vagrant
    rtr-1: SSH auth method: private key
    rtr-2: SSH address: 10.255.1.208:22
    rtr-2: SSH username: vagrant
    rtr-2: SSH auth method: private key
==> rtr-1: Running provisioner: ansible...
    rtr-1: Running ansible-playbook...

PLAY [all] *********************************************************************

TASK [run show version on remote devices] **************************************
==> rtr-2: Running provisioner: ansible...
    rtr-2: Running ansible-playbook...

PLAY [all] *********************************************************************

TASK [run show version on remote devices] **************************************
ok: [rtr-1]

PLAY RECAP *********************************************************************
rtr-1                      : ok=1    changed=0    unreachable=0    failed=0

ok: [rtr-2]

PLAY RECAP *********************************************************************
rtr-2                      : ok=1    changed=0    unreachable=0    failed=0
berndonline@lab:~/cisco-lab-vagrant$ vagrant status
Current machine states:

rtr-1                     running (libvirt)
rtr-2                     running (libvirt)

berndonline@lab:~/cisco-lab-vagrant$

Afterwards you can connect to your virtual IOS XE VM with vagrant ssh:

berndonline@lab:~/cisco-lab-vagrant$ vagrant ssh rtr-1

csr1kv#show version
Cisco IOS XE Software, Version 03.16.00.S - Extended Support Release
Cisco IOS Software, CSR1000V Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 15.5(3)S, RELEASE SOFTWARE (fc6)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2015 by Cisco Systems, Inc.
Compiled Sun 26-Jul-15 20:16 by mcpre

Cisco IOS-XE software, Copyright (c) 2005-2015 by cisco Systems, Inc.
All rights reserved.  Certain components of Cisco IOS-XE software are
licensed under the GNU General Public License ("GPL") Version 2.0.  The
software code licensed under GPL Version 2.0 is free software that comes
with ABSOLUTELY NO WARRANTY.  You can redistribute and/or modify such
GPL code under the terms of GPL Version 2.0.  For more details, see the
documentation or "License Notice" file accompanying the IOS-XE software,
or the applicable URL provided on the flyer accompanying the IOS-XE
software.

ROM: IOS-XE ROMMON

csr1kv uptime is 9 minutes
Uptime for this control processor is 10 minutes
System returned to ROM by reload
System image file is "bootflash:packages.conf"
Last reload reason: 

....

berndonline@lab:~/cisco-lab-vagrant$ vagrant destroy
==> rtr-2: Removing domain...
==> rtr-1: Removing domain...
berndonline@lab:~/cisco-lab-vagrant$

Running IOSv on KVM wasn't that easy because you only get a VMDK (Virtual Machine Disk) image, which you need to convert to a QCOW2 image. The next step is to boot the QCOW2 image and make some additional configuration changes before you can use it with Vagrant. Give the VM at least 2048 MB of RAM and at least one vCPU.
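
The VMDK to QCOW2 conversion itself can be done with qemu-img; the input filename below is just an example of what the VIRL download might be called:

qemu-img convert -f vmdk -O qcow2 vios-adventerprisek9-m.vmdk IOSv.qcow2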

Once the VM is booted, connect and add the configuration below. You need to create a vagrant user and add the SSH public key from Vagrant; additionally, create an EEM applet that generates the RSA key during boot, otherwise Vagrant cannot connect to the VM. Afterwards save the running-config and turn off the VM:

conf t
ip vrf vrf-mgmt
	rd 1:1
	exit

interface Gig0/0
 description management
 ip vrf forwarding vrf-mgmt
 ip address dhcp
 no shutdown
 exit

ip domain-name lab.local

aaa new-model
aaa authentication login default local
aaa authorization exec default local 

username vagrant privilege 15 secret vagrant

crypto key generate rsa general-keys modulus 2048 

ip ssh version 2
ip ssh authentication-retries 5

ip ssh pubkey-chain
   username vagrant
    key-hash ssh-rsa DD3BB82E850406E9ABFFA80AC0046ED6
    exit
   exit

line vty 0 4
 exec-timeout 0 0
 transport input ssh
 exit

shell processing full

event manager session cli username vagrant
event manager applet EEM_SSH_Keygen authorization bypass

event syslog pattern SYS-5-RESTART
action 0.0 info type routername
action 0.1 set status none
action 1.0 cli command enable
action 2.0 cli command "show ip ssh | include ^SSH"
action 2.1 regexp "([ED][^ ]+)" \$_cli_result result status
action 2.2 syslog priority informational msg "SSH is currently \$status"
action 3.0 if \$status eq Disabled
action 3.1 cli command "configure terminal"
action 3.2 cli command "crypto key generate rsa usage-keys label SSHKEYS modulus 2048"
action 3.3 cli command "end"
action 3.4 cli command "copy run start"
action 3.5 syslog priority informational msg "SSH keys generated by EEM"
action 4.0 end
end

exit
write mem

Now the QCOW2 image is ready to use with Vagrant. Create an instance folder under the user's Vagrant directory, copy the QCOW2 image into it, and create a metadata.json file in the same folder:

mkdir -p ~/.vagrant.d/boxes/iosv/0/libvirt/
cp IOSv.qcow2 ~/.vagrant.d/boxes/iosv/0/libvirt/box.img
printf '{"provider":"libvirt","format":"qcow2","virtual_size":2}' > metadata.json

The IOSv image is now ready to use with Vagrant; just create a Vagrantfile with the needed configuration and boot up the VMs.
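
For the router lab you typically also want a direct link between rtr-1 and rtr-2. With vagrant-libvirt such a link can be modelled as a UDP tunnel between the two domains; the snippet below is only a sketch of that idea (box name, tunnel ports and the disabled auto-configuration are assumptions — the actual Vagrantfiles are linked at the end of this post):

Vagrant.configure("2") do |config|
  config.vm.define "rtr-1" do |node|
    node.vm.box = "iosv"
    node.vm.network :private_network,
      :libvirt__tunnel_type       => "udp",
      :libvirt__tunnel_local_ip   => "127.0.0.1",
      :libvirt__tunnel_local_port => 10001,
      :libvirt__tunnel_ip         => "127.0.0.1",
      :libvirt__tunnel_port       => 10002,
      :auto_config                => false          # leave addressing to the router configuration
  end

  config.vm.define "rtr-2" do |node|
    node.vm.box = "iosv"
    node.vm.network :private_network,
      :libvirt__tunnel_type       => "udp",
      :libvirt__tunnel_local_ip   => "127.0.0.1",
      :libvirt__tunnel_local_port => 10002,
      :libvirt__tunnel_ip         => "127.0.0.1",
      :libvirt__tunnel_port       => 10001,
      :auto_config                => false
  end
end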

berndonline@lab:~/cisco-lab-vagrant$ vagrant status
Current machine states:

rtr-1                     not created (libvirt)
rtr-2                     not created (libvirt)

berndonline@lab:~/cisco-lab-vagrant$ vagrant up
Bringing machine 'rtr-1' up with 'libvirt' provider...
Bringing machine 'rtr-2' up with 'libvirt' provider...
==> rtr-2: Creating image (snapshot of base box volume).
==> rtr-1: Creating image (snapshot of base box volume).
==> rtr-2: Creating domain with the following settings...
==> rtr-1: Creating domain with the following settings...
==> rtr-2:  -- Name:              cisco-lab-vagrant_rtr-2
==> rtr-2:  -- Domain type:       kvm
==> rtr-1:  -- Name:              cisco-lab-vagrant_rtr-1
==> rtr-2:  -- Cpus:              1
==> rtr-1:  -- Domain type:       kvm
==> rtr-2:  -- Feature:           acpi
==> rtr-1:  -- Cpus:              1
==> rtr-2:  -- Feature:           apic
==> rtr-1:  -- Feature:           acpi
==> rtr-2:  -- Feature:           pae
==> rtr-1:  -- Feature:           apic
==> rtr-2:  -- Memory:            2048M
==> rtr-1:  -- Feature:           pae
==> rtr-2:  -- Management MAC:
==> rtr-1:  -- Memory:            2048M
==> rtr-2:  -- Loader:
==> rtr-1:  -- Management MAC:
==> rtr-2:  -- Base box:          iosv
==> rtr-1:  -- Loader:
==> rtr-1:  -- Base box:          iosv

....

==> rtr-2: Waiting for SSH to become available...
==> rtr-1: Waiting for SSH to become available...
==> rtr-2: Configuring and enabling network interfaces...
==> rtr-1: Configuring and enabling network interfaces...
    rtr-2: SSH address: 10.255.1.234:22
    rtr-2: SSH username: vagrant
    rtr-2: SSH auth method: private key
    rtr-1: SSH address: 10.255.1.237:22
    rtr-1: SSH username: vagrant
    rtr-1: SSH auth method: private key
==> rtr-2: Running provisioner: ansible...
    rtr-2: Running ansible-playbook...

PLAY [all] *********************************************************************

TASK [run show version on remote devices] **************************************
Thursday 26 October 2017  18:21:22 +0200 (0:00:00.015)       0:00:00.015 ******
==> rtr-1: Running provisioner: ansible...
    rtr-1: Running ansible-playbook...

PLAY [all] *********************************************************************

TASK [run show version on remote devices] **************************************
Thursday 26 October 2017  18:21:23 +0200 (0:00:00.014)       0:00:00.014 ******
ok: [rtr-2]

PLAY RECAP *********************************************************************
rtr-2                      : ok=1    changed=0    unreachable=0    failed=0

Thursday 26 October 2017  18:21:24 +0200 (0:00:01.373)       0:00:01.388 ******
===============================================================================
run show version on remote devices -------------------------------------- 1.37s
ok: [rtr-1]

PLAY RECAP *********************************************************************
rtr-1                      : ok=1    changed=0    unreachable=0    failed=0

Thursday 26 October 2017  18:21:24 +0200 (0:00:01.380)       0:00:01.395 ******
===============================================================================
run show version on remote devices -------------------------------------- 1.38s
berndonline@lab:~/cisco-lab-vagrant$

After the VMs are successfully booted you can connect again with vagrant ssh:

berndonline@lab:~/cisco-lab-vagrant$ vagrant ssh rtr-1
**************************************************************************
* IOSv is strictly limited to use for evaluation, demonstration and IOS  *
* education. IOSv is provided as-is and is not supported by Cisco's      *
* Technical Advisory Center. Any use or disclosure, in whole or in part, *
* of the IOSv Software or Documentation to any third party for any       *
* purposes is expressly prohibited except as otherwise authorized by     *
* Cisco in writing.                                                      *
**************************************************************************
router#show version
Cisco IOS Software, IOSv Software (VIOS-ADVENTERPRISEK9-M), Version 15.6(2)T, RELEASE SOFTWARE (fc2)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2016 by Cisco Systems, Inc.
Compiled Tue 22-Mar-16 16:19 by prod_rel_team

ROM: Bootstrap program is IOSv

router uptime is 1 minute
System returned to ROM by reload
System image file is "flash0:/vios-adventerprisek9-m"
Last reload reason: Unknown reason

....

berndonline@lab:~/cisco-lab-vagrant$ vagrant destroy
==> rtr-2: Removing domain...
==> rtr-1: Removing domain...
berndonline@lab:~/cisco-lab-vagrant$

Basically that's it: your on-demand IOSv and IOS XE lab using Vagrant, ready for automation and continuous integration testing.

You can find the example Vagrantfiles in my GitHub repository:

https://github.com/berndonline/cisco-lab-vagrant/blob/master/Vagrantfile-IOSXE

https://github.com/berndonline/cisco-lab-vagrant/blob/master/Vagrantfile-IOSv

Cumulus Linux network simulation using Vagrant

I was using GNS3 for quite some time, but it was not very flexible if you quickly wanted to test something, and it got even more complicated if you used a different computer or shared your projects.

I spent some time with Vagrant to build a virtual Cumulus Linux lab environment which can run on basically any computer. Simulating network environments is the future when you want to test and validate your automation scripts.

My lab diagram:

I created different topology.dot files and used the Cumulus topology converter on GitHub to create my lab with VirtualBox or libvirt (KVM). I made some modifications to the initialise scripts for the switches and the management server. You can find everything in my GitHub repo https://github.com/berndonline/cumulus-lab-vagrant.

The topology file basically defines your network and the converter creates the Vagrantfile.

In the management topology file (topology-mgmt.dot) you have all servers (incl. management) as in the network diagram above. The Cumulus switches can only be accessed via the management server.

This one is very similar to topology-mgmt.dot, but here the management server runs Cumulus NetQ, which you first need to import into Vagrant. Here is the link to the Cumulus NetQ demo on GitHub.

In this topology file you find a basic staging lab without servers where you can access the Cumulus switches directly via their Vagrant IP. I mainly use this to quickly test something like updating Cumulus switches or validating Ansible playbooks.

In this topology file you find a basic production lab where you can access the Cumulus switches directly via their Vagrant IP and have Cumulus NetQ as management server.

Basically to convert a topology into a Vagrantfile you just need to run the following command:

python topology_converter.py topology-staging.dot -p libvirt --ansible-hostfile

I use KVM (libvirt) in my example and want Vagrant to create an Ansible inventory file so I can run playbooks directly against the switches.

Check the status of the vagrant environment:

berndonline@lab:~/cumulus-lab-vagrant$ vagrant status
Current machine states:

spine-1                   not created (libvirt)
spine-2                   not created (libvirt)
leaf-1                    not created (libvirt)
leaf-3                    not created (libvirt)
leaf-2                    not created (libvirt)
leaf-4                    not created (libvirt)
mgmt-1                    not created (libvirt)
edge-2                    not created (libvirt)
edge-1                    not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
berndonline@lab:~/cumulus-lab-vagrant$

To start the devices run:

vagrant up

If you use the topology files with a management server, you need to first start the management server and then the management switch before you boot the rest of the switches:

vagrant up mgmt-server
vagrant up mgmt-1
vagrant up

The switches will pull some part of their configuration from the management server.

Output if you start the environment:

berndonline@lab:~/cumulus-lab-vagrant$ vagrant up spine-1
Bringing machine 'spine-1' up with 'libvirt' provider...
==> spine-1: Creating image (snapshot of base box volume).
==> spine-1: Creating domain with the following settings...
==> spine-1:  -- Name:              cumulus-lab-vagrant_spine-1
==> spine-1:  -- Domain type:       kvm
==> spine-1:  -- Cpus:              1
==> spine-1:  -- Feature:           acpi
==> spine-1:  -- Feature:           apic
==> spine-1:  -- Feature:           pae
==> spine-1:  -- Memory:            512M
==> spine-1:  -- Management MAC:
==> spine-1:  -- Loader:
==> spine-1:  -- Base box:          CumulusCommunity/cumulus-vx
==> spine-1:  -- Storage pool:      default
==> spine-1:  -- Image:             /var/lib/libvirt/images/cumulus-lab-vagrant_spine-1.img (4G)
==> spine-1:  -- Volume Cache:      default
==> spine-1:  -- Kernel:
==> spine-1:  -- Initrd:
==> spine-1:  -- Graphics Type:     vnc
==> spine-1:  -- Graphics Port:     5900
==> spine-1:  -- Graphics IP:       127.0.0.1
==> spine-1:  -- Graphics Password: Not defined
==> spine-1:  -- Video Type:        cirrus
==> spine-1:  -- Video VRAM:        9216
==> spine-1:  -- Sound Type:
==> spine-1:  -- Keymap:            en-us
==> spine-1:  -- TPM Path:
==> spine-1:  -- INPUT:             type=mouse, bus=ps2
==> spine-1: Creating shared folders metadata...
==> spine-1: Starting domain.
==> spine-1: Waiting for domain to get an IP address...
==> spine-1: Waiting for SSH to become available...
    spine-1:
    spine-1: Vagrant insecure key detected. Vagrant will automatically replace
    spine-1: this with a newly generated keypair for better security.
    spine-1:
    spine-1: Inserting generated public key within guest...
    spine-1: Removing insecure key from the guest if it's present...
    spine-1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> spine-1: Setting hostname...
==> spine-1: Configuring and enabling network interfaces...
....
==> spine-1: #################################
==> spine-1:   Running Switch Post Config (config_vagrant_switch.sh)
==> spine-1: #################################
==> spine-1:  ###Creating SSH keys for cumulus user ###
==> spine-1: #################################
==> spine-1:    Finished
==> spine-1: #################################
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: a0:00:00:00:00:21 --> eth0
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:30 --> swp1
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:04 --> swp2
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:26 --> swp3
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:0a --> swp4
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:22 --> swp51
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:0d --> swp52
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:10 --> swp53
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: 44:38:39:00:00:23 --> swp54
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1:   INFO: Adding UDEV Rule: Vagrant interface = eth1
==> spine-1: #### UDEV Rules (/etc/udev/rules.d/70-persistent-net.rules) ####
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="a0:00:00:00:00:21", NAME="eth0", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:30", NAME="swp1", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:04", NAME="swp2", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:26", NAME="swp3", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:0a", NAME="swp4", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:22", NAME="swp51", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:0d", NAME="swp52", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:10", NAME="swp53", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:23", NAME="swp54", SUBSYSTEMS=="pci"
==> spine-1: ACTION=="add", SUBSYSTEM=="net", ATTR{ifindex}=="2", NAME="eth1", SUBSYSTEMS=="pci"
==> spine-1: Running provisioner: shell...
    spine-1: Running: inline script
==> spine-1: ### RUNNING CUMULUS EXTRA CONFIG ###
==> spine-1:   INFO: Detected a 3.x Based Release
==> spine-1: ### Disabling default remap on Cumulus VX...
==> spine-1: ### Disabling ZTP service...
==> spine-1: Removed symlink /etc/systemd/system/multi-user.target.wants/ztp.service.
==> spine-1: ### Resetting ZTP to work next boot...
==> spine-1: Created symlink from /etc/systemd/system/multi-user.target.wants/ztp.service to /lib/systemd/system/ztp.service.
==> spine-1:   INFO: Detected Cumulus Linux v3.3.2 Release
==> spine-1: ### Fixing ONIE DHCP to avoid Vagrant Interface ###
==> spine-1:      Note: Installing from ONIE will undo these changes.
==> spine-1: ### Giving Vagrant User Ability to Run NCLU Commands ###
==> spine-1: ### DONE ###
==> spine-1: ### Rebooting Device to Apply Remap...

At the end you are able to connect to the Cumulus switch:

berndonline@lab:~/cumulus-lab-vagrant$ vagrant ssh spine-1

Welcome to Cumulus VX (TM)

Cumulus VX (TM) is a community supported virtual appliance designed for
experiencing, testing and prototyping Cumulus Networks' latest technology.
For any questions or technical support, visit our community site at:
http://community.cumulusnetworks.com

The registered trademark Linux (R) is used pursuant to a sublicense from LMI,
the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide
basis.
vagrant@cumulus:~$

To destroy the Vagrant environment:

berndonline@lab:~/cumulus-lab-vagrant$ vagrant destroy spine-1
==> spine-2: Remove stale volume...
==> spine-2: Domain is not created. Please run `vagrant up` first.
==> spine-1: Removing domain...

My goal is to adopt some NetDevOps practices and apply them to networking (NetOps); I am currently working on a Continuous Integration and Delivery (CI/CD) pipeline for Cumulus Linux network environments. The Vagrant lab was one of the prerequisites to simulate changes before deploying them to production, but more will follow in my next blog post.

Read my new post about an Ansible Playbook for Cumulus Linux BGP IP-Fabric and Cumulus NetQ Validation.

Ansible Playbook for Cumulus Linux (Layer 2 Fabric)

Here is a basic Ansible Playbook for a Cumulus Linux lab which I use for testing. It is a similar spine-and-leaf configuration to the one I used in my recent data centre redesign; the playbook includes one VRF, and all SVIs are joined to the VRF.

I use the Cumulus VX appliance under GNS3 which you can get for free at Cumulus: https://cumulusnetworks.com/products/cumulus-vx/

The first step is to configure the management interface of the Cumulus switches: edit /etc/network/interfaces and afterwards run "ifreload -a" to apply the config changes:

auto eth0
iface eth0
	address 192.168.100.20x/24
	gateway 192.168.100.2

Hosts file:

[spine]
spine-1 
spine-2
[leaf]
leaf-1 
leaf-2 

Before you start, you should push your ssh keys and set the hostname to prepare the switches.
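
A minimal way to do that for one switch, assuming a systemd-based Cumulus release (replace the user, address and hostname with whatever fits your lab; the IP matches the eth0 address shown in the output further down):

ssh-copy-id cumulus@192.168.100.205
ssh cumulus@192.168.100.205 'sudo hostnamectl set-hostname spine-1'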

Now we are ready to deploy the interface configuration for the Layer 2 fabric; below is the interfaces.yml file.

---
- hosts: all
  remote_user: cumulus
  gather_facts: no
  become: yes
  vars:
    ansible_become_pass: "CumulusLinux!"
    spine_interfaces:
      - { clag: bond1, desc: downlink-leaf, clagid: 1, port: swp1 swp2 }
    spine_bridge_ports: "peerlink bond1"
    bridge_vlans: "100-199"
    spine_vrf: "vrf-prod"
    spine_bridge:
      - { desc: web, vlan: 100, address: "{{ vlan100_address }}", address_virtual: "00:00:5e:00:01:00 10.1.0.254/24", vrf: vrf-prod }
      - { desc: app, vlan: 101, address: "{{ vlan101_address }}", address_virtual: "00:00:5e:00:01:01 10.1.1.254/24", vrf: vrf-prod }
      - { desc: db, vlan: 102, address: "{{ vlan102_address }}", address_virtual: "00:00:5e:00:01:02 10.1.2.254/24", vrf: vrf-prod }
    leaf_interfaces:
      - { clag: bond1, desc: uplink-spine, clagid: 1, port: swp1 swp2 }
    leaf_access_interfaces:
      - { desc: web-server, vlan: 100, port: swp3 }
      - { desc: app-server, vlan: 101, port: swp4 }
      - { desc: db-server, vlan: 102, port: swp5 }
    leaf_bridge_ports: "bond1 swp3 swp4 swp5"
  handlers:
    - name: ifreload
      command: ifreload -a
  tasks:
    - name: deploys spine interface configuration
      template: src=templates/spine_interfaces.j2 dest=/etc/network/interfaces
      when: "'spine' in group_names"
      notify: ifreload
    - name: deploys leaf interface configuration
      template: src=templates/leaf_interfaces.j2 dest=/etc/network/interfaces
      when: "'leaf' in group_names"
      notify: ifreload

I use Jinja2 templates for the interfaces configuration.

Here is the output from the Ansible Playbook, which only takes a few seconds to run:

[root@ansible cumulus]$ ansible-playbook interfaces.yml -i hosts

PLAY [all] *********************************************************************

TASK [deploys spine interface configuration] ***********************************
skipping: [leaf-2]
skipping: [leaf-1]
changed: [spine-2]
changed: [spine-1]

TASK [deploys leaf interface configuration] ************************************
skipping: [spine-1]
skipping: [spine-2]
changed: [leaf-1]
changed: [leaf-2]

RUNNING HANDLER [ifreload] *****************************************************
changed: [leaf-2]
changed: [leaf-1]
changed: [spine-2]
changed: [spine-1]

PLAY RECAP *********************************************************************
leaf-1                     : ok=2    changed=2    unreachable=0    failed=0
leaf-2                     : ok=2    changed=2    unreachable=0    failed=0
spine-1                    : ok=2    changed=2    unreachable=0    failed=0
spine-2                    : ok=2    changed=2    unreachable=0    failed=0

[root@ansible cumulus]$

Let's quickly verify the configuration:

cumulus@spine-1:~$ net show int

    Name           Master    Speed      MTU  Mode           Remote Host    Remote Port    Summary
--  -------------  --------  -------  -----  -------------  -------------  -------------  ---------------------------------
UP  lo             None      N/A      65536  Loopback                                     IP: 127.0.0.1/8, ::1/128
UP  eth0           None      1G        1500  Mgmt           cumulus        eth0           IP: 192.168.100.205/24
UP  bond1          bridge    2G        1500  Bond/Trunk                                   Bond Members: swp1(UP), swp2(UP)
UP  bridge         None      N/A       1500  Bridge/L2                                    Untagged Members: bond1, peerlink
UP  bridge-100-v0  vrf-prod  N/A       1500  Interface/L3                                 IP: 10.1.0.254/24
UP  bridge-101-v0  vrf-prod  N/A       1500  Interface/L3                                 IP: 10.1.1.254/24
UP  bridge-102-v0  vrf-prod  N/A       1500  Interface/L3                                 IP: 10.1.2.254/24
UP  bridge.100     vrf-prod  N/A       1500  SVI/L3                                       IP: 10.1.0.252/24
UP  bridge.101     vrf-prod  N/A       1500  SVI/L3                                       IP: 10.1.1.252/24
UP  bridge.102     vrf-prod  N/A       1500  SVI/L3                                       IP: 10.1.2.252/24
UP  peerlink       bridge    1G        1500  Bond/Trunk                                   Bond Members: swp11(UP)
UP  peerlink.4094  None      1G        1500  SubInt/L3                                    IP: 169.254.1.1/30
UP  vrf-prod       None      N/A      65536  NotConfigured

cumulus@spine-1:~$

cumulus@spine-1:~$ net show lldp

LocalPort    Speed    Mode        RemotePort    RemoteHost    Summary
-----------  -------  ----------  ------------  ------------  ----------------------
eth0         1G       Mgmt        eth0          cumulus       IP: 192.168.100.205/24
                                  ====          eth0          spine-2
                                  ====          eth0          leaf-1
                                  ====          eth0          leaf-2
swp1         1G       BondMember  swp1          leaf-1        Master: bond1(UP)
swp2         1G       BondMember  swp2          leaf-2        Master: bond1(UP)
swp11        1G       BondMember  swp11         spine-2       Master: peerlink(UP)
cumulus@spine-1:~$
cumulus@leaf-1:~$ net show int

    Name           Master    Speed      MTU  Mode        Remote Host    Remote Port    Summary
--  -------------  --------  -------  -----  ----------  -------------  -------------  --------------------------------
UP  lo             None      N/A      65536  Loopback                                  IP: 127.0.0.1/8, ::1/128
UP  eth0           None      1G        1500  Mgmt        cumulus        eth0           IP: 192.168.100.207/24
UP  swp3           bridge    1G        1500  Access/L2                                 Untagged VLAN: 100
UP  swp4           bridge    1G        1500  Access/L2                                 Untagged VLAN: 101
UP  swp5           bridge    1G        1500  Access/L2                                 Untagged VLAN: 102
UP  bond1          bridge    2G        1500  Bond/Trunk                                Bond Members: swp1(UP), swp2(UP)
UP  bridge         None      N/A       1500  Bridge/L2                                 Untagged Members: bond1, swp3-5
UP  peerlink       None      1G        1500  Bond                                      Bond Members: swp11(UP)
UP  peerlink.4093  None      1G        1500  SubInt/L3                                 IP: 169.254.1.1/30

cumulus@leaf-1:~$ net show lldp

LocalPort    Speed    Mode        RemotePort    RemoteHost    Summary
-----------  -------  ----------  ------------  ------------  ----------------------
eth0         1G       Mgmt        eth0          cumulus       IP: 192.168.100.207/24
                                  ====          eth0          spine-2
                                  ====          eth0          spine-1
                                  ====          eth0          leaf-2
swp1         1G       BondMember  swp1          spine-1       Master: bond1(UP)
swp2         1G       BondMember  swp1          spine-2       Master: bond1(UP)
swp11        1G       BondMember  swp11         leaf-2        Master: peerlink(UP)
cumulus@leaf-1:~$

As you can see the configuration is correctly deployed and you can start testing.

The configuration for a real data centre fabric is, of course, more complex (multiple VRFs, SVIs and more complex routing), but with Ansible you can quickly deploy and manage hundreds of switches.

In one of the next posts, I will write an Ansible Playbook for a Layer 3 datacentre Fabric configuration using BGP and ECMP routing on Quagga.

Read my new post about an Ansible Playbook for Cumulus Linux BGP IP-Fabric and Cumulus NetQ Validation.

Ansible Playbook for Cisco Lab

From my recent posts, you can see that I use Ansible a lot for automating device configuration deployment. Here is my firewall lab (Cisco routers and a Cisco ASA firewall) which I use to test different things in GNS3:

Before you can start deploying configs via Ansible you need to manually configure your management interfaces and device remote access. I run VMware Fusion Pro and use my VMNET2 network as the management network because I have additional VMs for Ansible and monitoring.

Here is the config to prep your Cisco routers so that you can afterwards deploy the rest of the config via Ansible:

conf t
ip vrf vrf-mgmt
	rd 1:1
	exit

interface Ethernet1/0
 description management
 ip vrf forwarding vrf-mgmt
 ip address 192.168.100.201 255.255.255.0
 no shutdown
 exit

ip domain-name localdomain

aaa new-model
aaa authentication login default local
aaa authorization exec default local 

username ansible privilege 15 secret 5 $1$xAJX$D99QcH02Splr1L3ktrvh41

crypto key generate rsa general-keys modulus 2048 

ip ssh version 2
ip ssh authentication-retries 5

line vty 0 4
 transport input ssh
 exit

exit
write mem

You need to do the same for your Cisco ASA firewall:

conf t
enable password 2KFQnbNIdI.2KYOU encrypted

interface Management0/0
 nameif management
 security-level 0
 ip address 192.168.100.204 255.255.255.0
 
aaa authentication ssh console LOCAL

ssh 0.0.0.0 0.0.0.0 management

username ansible password xsxRJKdxDzf9Ctr8 encrypted privilege 15
exit
write mem

Now you are ready to deploy the basic lab configuration to all the devices, but before we start we need the hosts and vars files and the main Ansible Playbook (YAML) file.

In the hosts file I define all the interface variables; there are different ways of doing it, but this one is the easiest.

./hosts

[router]
inside
dmz
outside
[firewall]
firewall
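
The listing above only shows the group membership; the interface variables the playbook references (device_ip, eth_00_ip and so on) are defined per host in the same file. A sketch of what that could look like, with the addresses taken from the lab output further down (the exact variable layout may differ from my repository):

# hosts file sketch with inline host variables
[router]
inside   device_ip=192.168.100.201 eth_00_ip=10.1.255.2 eth_00_mask=255.255.255.240 eth_01_ip=10.1.0.254 eth_01_mask=255.255.255.0
dmz      device_ip=192.168.100.202 eth_00_ip=10.1.255.34 eth_00_mask=255.255.255.240 eth_01_ip=10.1.1.254 eth_01_mask=255.255.255.0
outside  device_ip=192.168.100.203 eth_00_ip=217.110.110.254 eth_00_mask=255.255.255.0 eth_01_ip=dhcp eth_01_mask=

[firewall]
firewall device_ip=192.168.100.204 eth_00_nameif=inside eth_00_ip=10.1.255.1 eth_00_mask=255.255.255.240 eth_01_nameif=dmz eth_01_ip=10.1.255.33 eth_01_mask=255.255.255.240 eth_02_nameif=outside eth_02_ip=217.110.110.1 eth_02_mask=255.255.255.0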

The group_vars file contains the global variables.

./group_vars/all.yml

---
username: "ansible"
password: "cisco"
secret: "cisco"
default_gw_inside: "10.1.255.1"
default_gw_dmz: "10.1.255.33"
default_gw_firewall: "217.110.110.254"

Here is the Ansible Playbook with the basic device configuration:

./interfaces.yml

- name: Deploy Cisco lab configuration part 1
  connection: local
  hosts: router
  gather_facts: false
  vars:
    cli:
      username: "{{ username }}"
      password: "{{ password }}"
      host: "{{ device_ip }}"
  tasks:
    - name: deploy inside router configuration
      when: ansible_host not in "outside"
      ios_config:
        provider: "{{ cli }}"
        before:
          - "default interface {{ item.interface }}"
        lines:
          - "ip address {{ item.address }}"
        after:
          - no shutdown
        parents: "interface {{ item.interface }}"
        match: strict
      with_items:
        - { interface : Ethernet0/0, address : "{{ eth_00_ip }} {{ eth_00_mask }}" }
        - { interface : Ethernet0/1, address : "{{ eth_01_ip }} {{ eth_01_mask }}" }
    - name: deploy outside router configuration
      when: ansible_host not in "inside,dmz"
      ios_config:
        provider: "{{ cli }}"
        before:
          - "default interface {{ item.interface }}"
        lines:
          - "ip address {{ item.address }}"
        after:
          - no shutdown
        parents: "interface {{ item.interface }}"
        match: strict
      with_items:
        - { interface : Ethernet0/0, address : "{{ eth_00_ip }} {{ eth_00_mask }}" }
        - { interface : Ethernet0/1, address : "{{ eth_01_ip }}" }

- name: Deploy Cisco lab configuration part 2
  connection: local
  hosts: firewall
  gather_facts: false
  vars:
    cli:
      username: "{{ username }}"
      password: "{{ password }}"
      auth_pass: "{{ secret }}"
      authorize: yes
      host: "{{ device_ip }}"
  tasks:
    - name: deploy firewall configuration
      when: ansible_host not in "inside,dmz,outside"
      asa_config:
        provider: "{{ cli }}"
        lines:
          - "nameif {{ item.nameif }}"
          - "ip address {{ item.address }}"
        after:
          - no shutdown
        parents: "interface {{ item.interface }}"
        match: line
      with_items:
        - { interface : GigabitEthernet0/0, nameif : "{{ eth_00_nameif }}", address : "{{ eth_00_ip }} {{ eth_00_mask }}" }
        - { interface : GigabitEthernet0/1, nameif : "{{ eth_01_nameif }}", address : "{{ eth_01_ip }} {{ eth_01_mask }}" }
        - { interface : GigabitEthernet0/2, nameif : "{{ eth_02_nameif }}", address : "{{ eth_02_ip }} {{ eth_02_mask }}" }

In the playbook, I needed to separate the outside router because one interface is configured for DHCP; otherwise I could have used a single task for all three routers.

The 2nd part is for the Cisco ASA firewall configuration because it uses a different Ansible module and variables.

Now let us deploy the config and see the output from Ansible:

[berndonline@ansible firewall]$ ansible-playbook interfaces.yml -i hosts

PLAY [Deploy firewall lab configuration part 1] ********************************

TASK [deploy inside router configuration] **************************************
skipping: [outside] => (item={u'interface': u'Ethernet0/1', u'address': u'dhcp '})
skipping: [outside] => (item={u'interface': u'Ethernet0/0', u'address': u'217.110.110.254 255.255.255.0'})
changed: [dmz] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.34 255.255.255.240'})
changed: [inside] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.2 255.255.255.240'})
changed: [dmz] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.1.254 255.255.255.0'})
changed: [inside] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.0.254 255.255.255.0'})

TASK [deploy outside router configuration] *************************************
skipping: [inside] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.0.254'})
skipping: [inside] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.2 255.255.255.240'})
skipping: [dmz] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.1.254'})
skipping: [dmz] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.34 255.255.255.240'})
changed: [outside] => (item={u'interface': u'Ethernet0/0', u'address': u'217.110.110.254 255.255.255.0'})
changed: [outside] => (item={u'interface': u'Ethernet0/1', u'address': u'dhcp'})

PLAY [Deploy firewall lab configuration part 2] ********************************

TASK [deploy firewall configuration] *******************************************
changed: [firewall] => (item={u'interface': u'GigabitEthernet0/0', u'nameif': u'inside', u'address': u'10.1.255.1 255.255.255.240'})
changed: [firewall] => (item={u'interface': u'GigabitEthernet0/1', u'nameif': u'dmz', u'address': u'10.1.255.33 255.255.255.240'})
changed: [firewall] => (item={u'interface': u'GigabitEthernet0/2', u'nameif': u'outside', u'address': u'217.110.110.1 255.255.255.0'})

PLAY RECAP *********************************************************************
dmz                        : ok=1    changed=1    unreachable=0    failed=0
firewall                   : ok=1    changed=1    unreachable=0    failed=0
inside                     : ok=1    changed=1    unreachable=0    failed=0
outside                    : ok=1    changed=1    unreachable=0    failed=0

[berndonline@ansible firewall]$

Quick check if Ansible deployed the interface configuration:

inside#sh ip int brief
Interface                  IP-Address      OK? Method Status                Protocol
Ethernet0/0                10.1.255.2      YES manual up                    up
Ethernet0/1                10.1.0.254      YES manual up                    up
Ethernet1/0                192.168.100.201 YES NVRAM  up                    up
inside#

dmz#sh ip int brief
Interface                  IP-Address      OK? Method Status                Protocol
Ethernet0/0                10.1.255.34     YES manual up                    up
Ethernet0/1                10.1.1.254      YES manual up                    up
Ethernet1/0                192.168.100.202 YES NVRAM  up                    up
dmz#

outside#sh ip int brief
Interface                  IP-Address      OK? Method Status                Protocol
Ethernet0/0                217.110.110.254 YES manual up                    up
Ethernet0/1                172.16.191.23   YES DHCP   up                    up
Ethernet1/0                192.168.100.203 YES NVRAM  up                    up
outside#

firewall# sho ip address
Current IP Addresses:
Interface                Name                   IP address      Subnet mask     Method
GigabitEthernet0/0       inside                 10.1.255.1      255.255.255.240 manual
GigabitEthernet0/1       dmz                    10.1.255.33     255.255.255.240 manual
GigabitEthernet0/2       outside                217.110.110.1   255.255.255.0   manual
Management0/0            management             192.168.100.204 255.255.255.0   CONFIG
firewall#

As you can see Ansible deployed the interface configuration correctly. If I run Ansible again nothing will be deployed because the configuration is already present:

[berndonline@ansible firewall]$ ansible-playbook interfaces.yml -i hosts

PLAY [Deploy firewall lab configuration part 1] ********************************

TASK [deploy inside router configuration] **************************************
skipping: [outside] => (item={u'interface': u'Ethernet0/1', u'address': u'dhcp '})
skipping: [outside] => (item={u'interface': u'Ethernet0/0', u'address': u'217.110.110.254 255.255.255.0'})
ok: [dmz] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.34 255.255.255.240'})
ok: [dmz] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.1.254 255.255.255.0'})
ok: [inside] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.2 255.255.255.240'})
ok: [inside] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.0.254 255.255.255.0'})

TASK [deploy outside router configuration] *************************************
skipping: [inside] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.0.254'})
skipping: [inside] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.2 255.255.255.240'})
skipping: [dmz] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.1.254'})
skipping: [dmz] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.34 255.255.255.240'})
ok: [outside] => (item={u'interface': u'Ethernet0/0', u'address': u'217.110.110.254 255.255.255.0'})
ok: [outside] => (item={u'interface': u'Ethernet0/1', u'address': u'dhcp'})

PLAY [Deploy firewall lab configuration part 2] ********************************

TASK [deploy firewall configuration] *******************************************
ok: [firewall] => (item={u'interface': u'GigabitEthernet0/0', u'nameif': u'inside', u'address': u'10.1.255.1 255.255.255.240'})
ok: [firewall] => (item={u'interface': u'GigabitEthernet0/1', u'nameif': u'dmz', u'address': u'10.1.255.33 255.255.255.240'})
ok: [firewall] => (item={u'interface': u'GigabitEthernet0/2', u'nameif': u'outside', u'address': u'217.110.110.1 255.255.255.0'})

PLAY RECAP *********************************************************************
dmz                        : ok=1    changed=0    unreachable=0    failed=0
firewall                   : ok=1    changed=0    unreachable=0    failed=0
inside                     : ok=1    changed=0    unreachable=0    failed=0
outside                    : ok=1    changed=0    unreachable=0    failed=0

[berndonline@ansible firewall]$

In my GNS3 labs, I normally do not save the device configuration except the management IPs, because with Ansible I can deploy everything again within seconds and use different Playbooks depending on what I want to test. It gets even cooler if you use Semaphore (see my blog post: Ansible Semaphore) because you just click once on the Playbook you want to deploy.

Comment below if you have questions or problems.

Read my new posts about Ansible Playbook for Cisco ASAv Firewall Topology or Ansible Playbook for Cisco BGP Routing Topology.