Continuous Integration and Delivery for Networking with Cumulus Linux

Continuous Integration / Continuous Delivery (CI/CD) is becoming more and more popular for network automation, but the challenge is how to validate your scripts and stage the configuration, because you don’t want to deploy untested code to a production system. In networking especially, a mistake can be pretty destructive and cause a loss of connectivity.

I spent some days working on a Cumulus Linux lab using Vagrant, which I use to stage configuration. You can find the basic Ansible playbook and the gitlab-ci configuration for the Cumulus lab in my GitHub repo: cumulus-lab-provision

For the continuous integration and delivery (CI/CD) pipeline I am using Gitlab.com and their Gitlab-runner, which is running on my server. I will not go into too much detail about what is needed on the server; basically it runs Vagrant, libvirt (KVM), VirtualBox, Ansible and the gitlab-runner.
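
For orientation only, a rough sketch of preparing such a runner host could look like the following; the package names are assumptions that depend on your distribution, and the registration token comes from your GitLab project's CI/CD settings:

# Hypothetical package selection for a Debian/Ubuntu based runner host.
sudo apt-get install vagrant virtualbox libvirt-daemon-system ansible
# Register the runner against your GitLab project with the shell executor,
# so the pipeline jobs run directly on this host.
sudo gitlab-runner register --url https://gitlab.com/ --registration-token <project-token> --executor shell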

  • You need to register your Gitlab-runner with the Gitlab repository.

  • The next step is to create your .gitlab-ci.yml, which defines your CI pipeline:
---
stages:
    - validate ansible
    - staging
    - production
validate:
    stage: validate ansible
    script:
        - bash ./linter.sh
staging:
    before_script:
        - git clone https://github.com/berndonline/cumulus-lab-vagrant.git
        - cd cumulus-lab-vagrant/
        - python ./topology_converter.py ./topology-staging.dot
          -p libvirt --ansible-hostfile
    stage: staging
    script:
        - bash ../staging.sh
production:
    before_script:
        - git clone https://github.com/berndonline/cumulus-lab-vagrant.git
        - cd cumulus-lab-vagrant/
        - python ./topology_converter.py ./topology-production.dot
          -p libvirt --ansible-hostfile
    stage: production
    when: manual
    script:
        - bash ../production.sh
    only:
        - master
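
The linter.sh script called in the validate stage is not included in this post; a minimal sketch, assuming yamllint and Ansible are installed on the runner (playbook and inventory names are placeholders):

#!/bin/bash
set -e
# Check that all YAML files in the repository are syntactically valid.
yamllint .
# Let Ansible parse the playbook without executing anything (placeholder file names).
ansible-playbook site.yml -i hosts --syntax-check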

In the gitlab-ci you can see that I clone the Cumulus Vagrant lab, which I use to spin up a virtual staging environment and run the Ansible playbook against the virtual lab. In my example the production stage is also a Vagrant environment because I had no physical switches for testing.
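
The staging.sh and production.sh scripts are also not included here; a minimal sketch of what staging.sh could do, assuming the Vagrantfile and Ansible inventory generated by topology_converter.py in the before_script step (playbook and inventory names are placeholders):

#!/bin/bash
set -e
# Boot the virtual staging topology with the libvirt provider.
vagrant up --provider=libvirt
# Run the provision playbook against the generated inventory (placeholder names).
ansible-playbook ../site.yml -i ./ansible-hosts
# Tear the staging lab down again after the run.
vagrant destroy -f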

  • Basically, any commit or merge in the Gitlab repo triggers the pipeline which is defined in the gitlab-ci.

  • You can see the details in the running job. The first stage only validates that the YAML files have the correct syntax.

  • Here are the details of the running staging job; when everything goes well, the job succeeds.

  • The last stage is production, which needs to be triggered manually.

  • After the changes have run through all defined stages, you can see that you successfully validated, staged and deployed your configuration to a Cumulus production system.

This is a completely different way of working for a network engineer, but it is the way things are going in fully automated data centre network environments. It gets very powerful when you combine this with the Cumulus NetQ server to validate the state of your switch fabric after you run changes in production.
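
As a rough idea of what such a validation could look like, and hedged because I have not built this part yet, you could run fabric-wide checks on the NetQ server after a change, for example:

netq check bgp     # verify that all BGP sessions across the fabric are established
netq show agents   # verify that every switch still reports a healthy NetQ agent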

The next topic I am working on is using Cumulus NetQ to validate configuration changes.

Here again are the two repositories I use:

https://github.com/berndonline/cumulus-lab-vagrant

https://github.com/berndonline/cumulus-lab-provision

Read my new posts about an Ansible Playbook for Cumulus Linux BGP IP-Fabric and Cumulus NetQ Validation and BGP EVPN and VXLAN with Cumulus Linux.

Ansible Playbook for Cumulus NetQ Agent Installation

Here is a short Ansible playbook to install the Cumulus NetQ agent on Cumulus Linux switches.

---
- hosts: spine leaf
  remote_user: cumulus
  gather_facts: no
  become: yes
  vars:
    ansible_become_pass: "CumulusLinux!"
  tasks:
    - name: Install cumulus-netq
      apt: name=cumulus-netq update_cache=yes state=present
      register: result

    - name: Restart Syslog service
      service: name=rsyslog state=restarted
      when: result.stdout is defined

    - pause: seconds=5

    - name: Add netq server IP addr
      command: netq config add server 192.168.100.133
      when: result.stdout is defined

    - name: Start netq-agent
      service: name=netq-agent state=restarted
      when: result.stdout is defined

Your NetQ VM needs to be reachable from the switches, otherwise the command “netq config add server …” will fail.
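
To quickly confirm reachability before running the playbook, you could for example ping the NetQ server address from all switches with an ad-hoc Ansible command (server IP and inventory as used above):

ansible spine:leaf -i hosts -u cumulus -m command -a "ping -c 2 192.168.100.133"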

You find more information in the official Cumulus NetQ documentation:  https://docs.cumulusnetworks.com/display/NETQ/Getting+Started+with+NetQ

Ansible Playbook for Cumulus Linux (Layer 3 Fabric)

As promised, here is a basic Ansible Playbook for a Cumulus Linux Layer 3 Fabric running BGP, as you see it in large-scale data centre deployments.

You push the layer 2 network as close as possible to the server and use ECMP (Equal-cost multi-path) routing to distribute your traffic via multiple uplinks.

These kinds of network designs are highly scalable; my example is a 2-tier deployment, but you can easily use 3 tiers where the leaf switches become the distribution layer and you add additional ToR (Top of Rack) switches.

Here is some interesting information about Facebook’s next-generation data centre fabric: Introducing data center fabric, the next-generation Facebook data center network

I use the same hosts file as in my previous blog post Ansible Playbook for Cumulus Linux (Layer 2 Fabric).

Hosts file:

[spine]
spine-1
spine-2
[leaf]
leaf-1
leaf-2

 

Ansible Playbook:

---
- hosts: all
  remote_user: cumulus
  gather_facts: no
  become: yes
  vars:
    ansible_become_pass: "CumulusLinux!"
    spine_interfaces:
      - { port: swp1, desc: leaf-1, address: "{{ swp1_address}}" }
      - { port: swp2, desc: leaf-2, address: "{{ swp2_address}}" }
      - { port: swp6, desc: layer3_peerlink, address: "{{ peer_address}}" }
    leaf_interfaces:
      - { port: swp1, desc: spine-1, address: "{{ swp1_address}}" }
      - { port: swp2, desc: spine-2, address: "{{ swp2_address}}" }      
  handlers:
    - name: ifreload
      command: ifreload -a
    - name: restart quagga
      service: name=quagga state=restarted
  tasks:
    - name: deploys spine interface configuration
      template: src=templates/spine_routing_interfaces.j2 dest=/etc/network/interfaces
      when: "'spine' in group_names"
      notify: ifreload
    - name: deploys leaf interface configuration
      template: src=templates/leaf_routing_interfaces.j2 dest=/etc/network/interfaces
      when: "'leaf' in group_names"
      notify: ifreload
    - name: deploys quagga configuration
      template: src=templates/quagga.conf.j2 dest=/etc/quagga/Quagga.conf
      notify: restart quagga
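
The playbook expects per-switch variables like swp1_address, swp2_address and (on the spines) peer_address, which are not shown in this post. Based on the point-to-point subnets visible in the routing tables below, a host_vars file for leaf-1 could look roughly like this (my assumption, not the original file):

# host_vars/leaf-1.yml (hypothetical example)
---
swp1_address: 10.0.1.2/30   # /30 link towards spine-1 (spine side is 10.0.1.1)
swp2_address: 10.0.1.6/30   # /30 link towards spine-2 (spine side is 10.0.1.5)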

Let’s run the Playbook and see the output:

[[email protected] cumulus]$ ansible-playbook routing.yml -i hosts

PLAY [all] *********************************************************************

TASK [deploys spine interface configuration] ***********************************
skipping: [leaf-2]
skipping: [leaf-1]
changed: [spine-2]
changed: [spine-1]

TASK [deploys leaf interface configuration] ************************************
skipping: [spine-1]
skipping: [spine-2]
changed: [leaf-2]
changed: [leaf-1]

TASK [deploys quagga configuration] ********************************************
changed: [leaf-2]
changed: [spine-2]
changed: [spine-1]
changed: [leaf-1]

RUNNING HANDLER [ifreload] *****************************************************
changed: [leaf-2]
changed: [leaf-1]
changed: [spine-2]
changed: [spine-1]

RUNNING HANDLER [restart quagga] ***********************************************
changed: [leaf-1]
changed: [leaf-2]
changed: [spine-1]
changed: [spine-2]

PLAY RECAP *********************************************************************
leaf-1                     : ok=4    changed=4    unreachable=0    failed=0
leaf-2                     : ok=4    changed=4    unreachable=0    failed=0
spine-1                    : ok=4    changed=4    unreachable=0    failed=0
spine-2                    : ok=4    changed=4    unreachable=0    failed=0

[[email protected] cumulus]$

To verify the configuration let’s look at the BGP routes on the leaf switches:

cumulus@leaf-1:/home/cumulus# net show route bgp
RIB entry for bgp
=================
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, P - PIM, T - Table, v - VNC,
       V - VPN,
       > - selected route, * - FIB route

B>* 10.0.0.0/30 [20/0] via 10.0.1.1, swp1, 00:02:14
  *                    via 10.0.1.5, swp2, 00:02:14
B   10.0.1.0/30 [20/0] via 10.0.1.1 inactive, 00:02:14
                       via 10.0.1.5, swp2, 00:02:14
B   10.0.1.4/30 [20/0] via 10.0.1.5 inactive, 00:02:14
                       via 10.0.1.1, swp1, 00:02:14
B>* 10.0.2.0/30 [20/0] via 10.0.1.5, swp2, 00:02:14
  *                    via 10.0.1.1, swp1, 00:02:14
B>* 10.0.2.4/30 [20/0] via 10.0.1.1, swp1, 00:02:14
  *                    via 10.0.1.5, swp2, 00:02:14
B>* 10.200.0.0/24 [20/0] via 10.0.1.1, swp1, 00:02:14
  *                      via 10.0.1.5, swp2, 00:02:14
cumulus@leaf-1:/home/cumulus#
cumulus@leaf-2:/home/cumulus# net show route bgp
RIB entry for bgp
=================
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, P - PIM, T - Table, v - VNC,
       V - VPN,
       > - selected route, * - FIB route

B>* 10.0.0.0/30 [20/0] via 10.0.2.5, swp1, 00:02:22
  *                    via 10.0.2.1, swp2, 00:02:22
B>* 10.0.1.0/30 [20/0] via 10.0.2.5, swp1, 00:02:22
  *                    via 10.0.2.1, swp2, 00:02:22
B>* 10.0.1.4/30 [20/0] via 10.0.2.1, swp2, 00:02:22
  *                    via 10.0.2.5, swp1, 00:02:22
B   10.0.2.0/30 [20/0] via 10.0.2.1 inactive, 00:02:22
                       via 10.0.2.5, swp1, 00:02:22
B   10.0.2.4/30 [20/0] via 10.0.2.5 inactive, 00:02:22
                       via 10.0.2.1, swp2, 00:02:22
B>* 10.100.0.0/24 [20/0] via 10.0.2.5, swp1, 00:02:22
  *                      via 10.0.2.1, swp2, 00:02:22
cumulus@leaf-2:/home/cumulus#

Have fun!

Read my new post about an Ansible Playbook for Cumulus Linux BGP IP-Fabric and Cumulus NetQ Validation.

Ansible Playbook for Cumulus Linux (Layer 2 Fabric)

Here is a basic Ansible Playbook for a Cumulus Linux lab which I use for testing. It is a similar spine and leaf configuration to the one I used in my recent data centre redesign; the playbook includes one VRF, and all SVIs are joined to that VRF.

I use the Cumulus VX appliance under GNS3 which you can get for free at Cumulus: https://cumulusnetworks.com/products/cumulus-vx/

The first step is to configure the management interface of the Cumulus switches: edit /etc/network/interfaces and afterwards run “ifreload -a” to apply the config changes:

auto eth0
iface eth0
	address 192.168.100.20x/24
	gateway 192.168.100.2

Hosts file:

[spine]
spine-1 
spine-2
[leaf]
leaf-1 
leaf-2 

Before you start, you should push your ssh keys and set the hostname to prepare the switches.
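
For example, something like this from your Ansible host, using the management addressing configured above (the hostname command is an assumption; any equivalent method works):

# Copy your public key to a switch and set its hostname; repeat for every switch.
ssh-copy-id cumulus@192.168.100.205
ssh cumulus@192.168.100.205 "sudo hostnamectl set-hostname spine-1"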

Now we are ready to deploy the interface configuration for the Layer 2 Fabric; below is the interfaces.yml file.

---
- hosts: all
  remote_user: cumulus
  gather_facts: no
  become: yes
  vars:
    ansible_become_pass: "CumulusLinux!"
    spine_interfaces:
      - { clag: bond1, desc: downlink-leaf, clagid: 1, port: swp1 swp2 }
    spine_bridge_ports: "peerlink bond1"
    bridge_vlans: "100-199"
    spine_vrf: "vrf-prod"
    spine_bridge:
      - { desc: web, vlan: 100, address: "{{ vlan100_address }}", address_virtual: "00:00:5e:00:01:00 10.1.0.254/24", vrf: vrf-prod }
      - { desc: app, vlan: 101, address: "{{ vlan101_address }}", address_virtual: "00:00:5e:00:01:01 10.1.1.254/24", vrf: vrf-prod }
      - { desc: db, vlan: 102, address: "{{ vlan102_address }}", address_virtual: "00:00:5e:00:01:02 10.1.2.254/24", vrf: vrf-prod }
    leaf_interfaces:
      - { clag: bond1, desc: uplink-spine, clagid: 1, port: swp1 swp2 }
    leaf_access_interfaces:
      - { desc: web-server, vlan: 100, port: swp3 }
      - { desc: app-server, vlan: 101, port: swp4 }
      - { desc: db-server, vlan: 102, port: swp5 }
    leaf_bridge_ports: "bond1 swp3 swp4 swp5"
  handlers:
    - name: ifreload
      command: ifreload -a
  tasks:
    - name: deploys spine interface configuration
      template: src=templates/spine_interfaces.j2 dest=/etc/network/interfaces
      when: "'spine' in group_names"
      notify: ifreload
    - name: deploys leaf interface configuration
      template: src=templates/leaf_interfaces.j2 dest=/etc/network/interfaces
      when: "'leaf' in group_names"
      notify: ifreload

I use Jinja2 templates for the interfaces configuration.
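
The templates themselves are not included in this post; purely as an illustration, a fragment of a leaf template for the access ports could look something like this (my sketch, not the original template):

{# hypothetical fragment of templates/leaf_interfaces.j2 #}
{% for iface in leaf_access_interfaces %}
auto {{ iface.port }}
iface {{ iface.port }}
    alias {{ iface.desc }}
    bridge-access {{ iface.vlan }}
{% endfor %}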

Here is the output from the Ansible Playbook, which only takes a few seconds to run:

[[email protected] cumulus]$ ansible-playbook interfaces.yml -i hosts

PLAY [all] *********************************************************************

TASK [deploys spine interface configuration] ***********************************
skipping: [leaf-2]
skipping: [leaf-1]
changed: [spine-2]
changed: [spine-1]

TASK [deploys leaf interface configuration] ************************************
skipping: [spine-1]
skipping: [spine-2]
changed: [leaf-1]
changed: [leaf-2]

RUNNING HANDLER [ifreload] *****************************************************
changed: [leaf-2]
changed: [leaf-1]
changed: [spine-2]
changed: [spine-1]

PLAY RECAP *********************************************************************
leaf-1                     : ok=2    changed=2    unreachable=0    failed=0
leaf-2                     : ok=2    changed=2    unreachable=0    failed=0
spine-1                    : ok=2    changed=2    unreachable=0    failed=0
spine-2                    : ok=2    changed=2    unreachable=0    failed=0

[[email protected] cumulus]$

Let’s quickly verify the configuration:

cumulus@spine-1:~$ net show int

    Name           Master    Speed      MTU  Mode           Remote Host    Remote Port    Summary
--  -------------  --------  -------  -----  -------------  -------------  -------------  ---------------------------------
UP  lo             None      N/A      65536  Loopback                                     IP: 127.0.0.1/8, ::1/128
UP  eth0           None      1G        1500  Mgmt           cumulus        eth0           IP: 192.168.100.205/24
UP  bond1          bridge    2G        1500  Bond/Trunk                                   Bond Members: swp1(UP), swp2(UP)
UP  bridge         None      N/A       1500  Bridge/L2                                    Untagged Members: bond1, peerlink
UP  bridge-100-v0  vrf-prod  N/A       1500  Interface/L3                                 IP: 10.1.0.254/24
UP  bridge-101-v0  vrf-prod  N/A       1500  Interface/L3                                 IP: 10.1.1.254/24
UP  bridge-102-v0  vrf-prod  N/A       1500  Interface/L3                                 IP: 10.1.2.254/24
UP  bridge.100     vrf-prod  N/A       1500  SVI/L3                                       IP: 10.1.0.252/24
UP  bridge.101     vrf-prod  N/A       1500  SVI/L3                                       IP: 10.1.1.252/24
UP  bridge.102     vrf-prod  N/A       1500  SVI/L3                                       IP: 10.1.2.252/24
UP  peerlink       bridge    1G        1500  Bond/Trunk                                   Bond Members: swp11(UP)
UP  peerlink.4094  None      1G        1500  SubInt/L3                                    IP: 169.254.1.1/30
UP  vrf-prod       None      N/A      65536  NotConfigured

cumulus@spine-1:~$

cumulus@spine-1:~$ net show lldp

LocalPort    Speed    Mode        RemotePort    RemoteHost    Summary
-----------  -------  ----------  ------------  ------------  ----------------------
eth0         1G       Mgmt        eth0          cumulus       IP: 192.168.100.205/24
                                  ====          eth0          spine-2
                                  ====          eth0          leaf-1
                                  ====          eth0          leaf-2
swp1         1G       BondMember  swp1          leaf-1        Master: bond1(UP)
swp2         1G       BondMember  swp2          leaf-2        Master: bond1(UP)
swp11        1G       BondMember  swp11         spine-2       Master: peerlink(UP)
cumulus@spine-1:~$
cumulus@leaf-1:~$ net show int

    Name           Master    Speed      MTU  Mode        Remote Host    Remote Port    Summary
--  -------------  --------  -------  -----  ----------  -------------  -------------  --------------------------------
UP  lo             None      N/A      65536  Loopback                                  IP: 127.0.0.1/8, ::1/128
UP  eth0           None      1G        1500  Mgmt        cumulus        eth0           IP: 192.168.100.207/24
UP  swp3           bridge    1G        1500  Access/L2                                 Untagged VLAN: 100
UP  swp4           bridge    1G        1500  Access/L2                                 Untagged VLAN: 101
UP  swp5           bridge    1G        1500  Access/L2                                 Untagged VLAN: 102
UP  bond1          bridge    2G        1500  Bond/Trunk                                Bond Members: swp1(UP), swp2(UP)
UP  bridge         None      N/A       1500  Bridge/L2                                 Untagged Members: bond1, swp3-5
UP  peerlink       None      1G        1500  Bond                                      Bond Members: swp11(UP)
UP  peerlink.4093  None      1G        1500  SubInt/L3                                 IP: 169.254.1.1/30

cumulus@leaf-1:~$ net show lldp

LocalPort    Speed    Mode        RemotePort    RemoteHost    Summary
-----------  -------  ----------  ------------  ------------  ----------------------
eth0         1G       Mgmt        eth0          cumulus       IP: 192.168.100.207/24
                                  ====          eth0          spine-2
                                  ====          eth0          spine-1
                                  ====          eth0          leaf-2
swp1         1G       BondMember  swp1          spine-1       Master: bond1(UP)
swp2         1G       BondMember  swp1          spine-2       Master: bond1(UP)
swp11        1G       BondMember  swp11         leaf-2        Master: peerlink(UP)
cumulus@leaf-1:~$

As you can see, the configuration is correctly deployed and you can start testing.

The configuration for a real data centre fabric is, of course, more complex (multiple VRFs, SVIs and complex routing), but with Ansible you can quickly deploy and manage hundreds of switches.

In one of the next posts, I will write an Ansible Playbook for a Layer 3 datacentre Fabric configuration using BGP and ECMP routing on Quagga.

Read my new post about an Ansible Playbook for Cumulus Linux BGP IP-Fabric and Cumulus NetQ Validation.

Ansible Playbook for Cisco Lab

From my recent posts, you can see that I use Ansible a lot for automating device configuration deployment. Here is my firewall lab (Cisco routers and a Cisco ASA firewall) which I use to test different things in GNS3:

Before you can start deploying configs via Ansible you need to manually configure your management interfaces and device remote access. I run VMware Fusion Pro and use my VMNET2 network as the management network because I have additional VMs for Ansible and monitoring.

Here is the config to prep your Cisco routers so that you can afterwards deploy the rest of the config via Ansible:

conf t
ip vrf vrf-mgmt
	rd 1:1
	exit

interface Ethernet1/0
 description management
 ip vrf forwarding vrf-mgmt
 ip address 192.168.100.201 255.255.255.0
 no shutdown
 exit

ip domain-name localdomain

aaa new-model
aaa authentication login default local
aaa authorization exec default local 

username ansible privilege 15 secret 5 $1$xAJX$D99QcH02Splr1L3ktrvh41

crypto key generate rsa general-keys modulus 2048 

ip ssh version 2
ip ssh authentication-retries 5

line vty 0 4
 transport input ssh
 exit

exit
write mem

You need to do the same for your Cisco ASA firewall:

conf t
enable password 2KFQnbNIdI.2KYOU encrypted

interface Management0/0
 nameif management
 security-level 0
 ip address 192.168.100.204 255.255.255.0
 
aaa authentication ssh console LOCAL

ssh 0.0.0.0 0.0.0.0 management

username ansible password xsxRJKdxDzf9Ctr8 encrypted privilege 15
exit
write mem

Now you are ready to deploy the basic lab configuration to all the devices, but before we start we need the hosts and vars files and the main Ansible Playbook (YAML) file.

In the hosts file I define all the interface variables; there are different ways of doing it, but this one is the easiest.

./hosts

[router]
inside
dmz
outside
[firewall]
firewall
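
The hosts file above only lists the group membership; the per-device variables the playbook references (device_ip, eth_00_ip, eth_00_mask and so on) can be defined inline per host. A sketch reconstructed from the addresses that appear in the output further below (my reconstruction, not the original file):

[router]
inside   device_ip=192.168.100.201 eth_00_ip=10.1.255.2 eth_00_mask=255.255.255.240 eth_01_ip=10.1.0.254 eth_01_mask=255.255.255.0
dmz      device_ip=192.168.100.202 eth_00_ip=10.1.255.34 eth_00_mask=255.255.255.240 eth_01_ip=10.1.1.254 eth_01_mask=255.255.255.0
outside  device_ip=192.168.100.203 eth_00_ip=217.110.110.254 eth_00_mask=255.255.255.0 eth_01_ip=dhcp
[firewall]
firewall device_ip=192.168.100.204 eth_00_nameif=inside eth_00_ip=10.1.255.1 eth_00_mask=255.255.255.240 eth_01_nameif=dmz eth_01_ip=10.1.255.33 eth_01_mask=255.255.255.240 eth_02_nameif=outside eth_02_ip=217.110.110.1 eth_02_mask=255.255.255.0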

The group_vars file contains the global variables.

./group_vars/all.yml

---
username: "ansible"
password: "cisco"
secret: "cisco"
default_gw_inside: "10.1.255.1"
default_gw_dmz: "10.1.255.33"
default_gw_firewall: "217.110.110.254"

Here is the Ansible Playbook with the basic device configuration:

./interfaces.yml

- name: Deploy Cisco lab configuration part 1
  connection: local
  hosts: router
  gather_facts: false
  vars:
    cli:
      username: "{{ username }}"
      password: "{{ password }}"
      host: "{{ device_ip }}"
  tasks:
    - name: deploy inside router configuration
      when: ansible_host not in "outside"
      ios_config:
        provider: "{{ cli }}"
        before:
          - "default interface {{ item.interface }}"
        lines:
          - "ip address {{ item.address }}"
        after:
          - no shutdown
        parents: "interface {{ item.interface }}"
        match: strict
      with_items:
        - { interface : Ethernet0/0, address : "{{ eth_00_ip }} {{ eth_00_mask }}" }
        - { interface : Ethernet0/1, address : "{{ eth_01_ip }} {{ eth_01_mask }}" }
    - name: deploy outside router configuration
      when: ansible_host not in "inside,dmz"
      ios_config:
        provider: "{{ cli }}"
        before:
          - "default interface {{ item.interface }}"
        lines:
          - "ip address {{ item.address }}"
        after:
          - no shutdown
        parents: "interface {{ item.interface }}"
        match: strict
      with_items:
        - { interface : Ethernet0/0, address : "{{ eth_00_ip }} {{ eth_00_mask }}" }
        - { interface : Ethernet0/1, address : "{{ eth_01_ip }}" }

- name: Deploy Cisco lab configuration part 2
  connection: local
  hosts: firewall
  gather_facts: false
  vars:
      cli:
       username: "{{ username }}"
       password: "{{ password }}"
       auth_pass: "{{ secret }}"
       authorize: yes
       host: "{{ device_ip }}"
  tasks:
    - name: deploy firewall configuration
      when: ansible_host not in "inside,dmz,outside"
      asa_config:
        provider: "{{ cli }}"
        lines:
          - "nameif {{ item.nameif }}"
          - "ip address {{ item.address }}"
        after:
          - no shutdown
        parents: "interface {{ item.interface }}"
        match: line
      with_items:
        - { interface : GigabitEthernet0/0, nameif : "{{ eth_00_nameif }}", address : "{{ eth_00_ip }} {{ eth_00_mask }}" }
        - { interface : GigabitEthernet0/1, nameif : "{{ eth_01_nameif }}", address : "{{ eth_01_ip }} {{ eth_01_mask }}" }
        - { interface : GigabitEthernet0/2, nameif : "{{ eth_02_nameif }}", address : "{{ eth_02_ip }} {{ eth_02_mask }}" }

In the playbook, I needed to separate the outside router because one interface is configured for DHCP; otherwise I could have used only one task for all three routers.

The second part is for the Cisco ASA firewall configuration because the ASA uses a different Ansible module and variables.

Now let us deploy the config and see the output from Ansible:

[[email protected] firewall]$ ansible-playbook interfaces.yml -i hosts

PLAY [Deploy firewall lab configuration part 1] ********************************

TASK [deploy inside router configuration] **************************************
skipping: [outside] => (item={u'interface': u'Ethernet0/1', u'address': u'dhcp '})
skipping: [outside] => (item={u'interface': u'Ethernet0/0', u'address': u'217.110.110.254 255.255.255.0'})
changed: [dmz] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.34 255.255.255.240'})
changed: [inside] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.2 255.255.255.240'})
changed: [dmz] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.1.254 255.255.255.0'})
changed: [inside] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.0.254 255.255.255.0'})

TASK [deploy outside router configuration] *************************************
skipping: [inside] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.0.254'})
skipping: [inside] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.2 255.255.255.240'})
skipping: [dmz] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.1.254'})
skipping: [dmz] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.34 255.255.255.240'})
changed: [outside] => (item={u'interface': u'Ethernet0/0', u'address': u'217.110.110.254 255.255.255.0'})
changed: [outside] => (item={u'interface': u'Ethernet0/1', u'address': u'dhcp'})

PLAY [Deploy firewall lab configuration part 2] ********************************

TASK [deploy firewall configuration] *******************************************
changed: [firewall] => (item={u'interface': u'GigabitEthernet0/0', u'nameif': u'inside', u'address': u'10.1.255.1 255.255.255.240'})
changed: [firewall] => (item={u'interface': u'GigabitEthernet0/1', u'nameif': u'dmz', u'address': u'10.1.255.33 255.255.255.240'})
changed: [firewall] => (item={u'interface': u'GigabitEthernet0/2', u'nameif': u'outside', u'address': u'217.110.110.1 255.255.255.0'})

PLAY RECAP *********************************************************************
dmz                        : ok=1    changed=1    unreachable=0    failed=0
firewall                   : ok=1    changed=1    unreachable=0    failed=0
inside                     : ok=1    changed=1    unreachable=0    failed=0
outside                    : ok=1    changed=1    unreachable=0    failed=0

[[email protected] firewall]$

Quick check if Ansible deployed the interface configuration:

inside#sh ip int brief
Interface                  IP-Address      OK? Method Status                Protocol
Ethernet0/0                10.1.255.2      YES manual up                    up
Ethernet0/1                10.1.0.254      YES manual up                    up
Ethernet1/0                192.168.100.201 YES NVRAM  up                    up
inside#

dmz#sh ip int brief
Interface                  IP-Address      OK? Method Status                Protocol
Ethernet0/0                10.1.255.34     YES manual up                    up
Ethernet0/1                10.1.1.254      YES manual up                    up
Ethernet1/0                192.168.100.202 YES NVRAM  up                    up
dmz#

outside#sh ip int brief
Interface                  IP-Address      OK? Method Status                Protocol
Ethernet0/0                217.110.110.254 YES manual up                    up
Ethernet0/1                172.16.191.23   YES DHCP   up                    up
Ethernet1/0                192.168.100.203 YES NVRAM  up                    up
outside#

firewall# sho ip address
Current IP Addresses:
Interface                Name                   IP address      Subnet mask     Method
GigabitEthernet0/0       inside                 10.1.255.1      255.255.255.240 manual
GigabitEthernet0/1       dmz                    10.1.255.33     255.255.255.240 manual
GigabitEthernet0/2       outside                217.110.110.1   255.255.255.0   manual
Management0/0            management             192.168.100.204 255.255.255.0   CONFIG
firewall#

As you can see, Ansible deployed the interface configuration correctly. If I run Ansible again, nothing will be deployed because the configuration is already present:

[[email protected] firewall]$ ansible-playbook interfaces.yml -i hosts

PLAY [Deploy firewall lab configuration part 1] ********************************

TASK [deploy inside router configuration] **************************************
skipping: [outside] => (item={u'interface': u'Ethernet0/1', u'address': u'dhcp '})
skipping: [outside] => (item={u'interface': u'Ethernet0/0', u'address': u'217.110.110.254 255.255.255.0'})
ok: [dmz] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.34 255.255.255.240'})
ok: [dmz] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.1.254 255.255.255.0'})
ok: [inside] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.2 255.255.255.240'})
ok: [inside] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.0.254 255.255.255.0'})

TASK [deploy outside router configuration] *************************************
skipping: [inside] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.0.254'})
skipping: [inside] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.2 255.255.255.240'})
skipping: [dmz] => (item={u'interface': u'Ethernet0/1', u'address': u'10.1.1.254'})
skipping: [dmz] => (item={u'interface': u'Ethernet0/0', u'address': u'10.1.255.34 255.255.255.240'})
ok: [outside] => (item={u'interface': u'Ethernet0/0', u'address': u'217.110.110.254 255.255.255.0'})
ok: [outside] => (item={u'interface': u'Ethernet0/1', u'address': u'dhcp'})

PLAY [Deploy firewall lab configuration part 2] ********************************

TASK [deploy firewall configuration] *******************************************
ok: [firewall] => (item={u'interface': u'GigabitEthernet0/0', u'nameif': u'inside', u'address': u'10.1.255.1 255.255.255.240'})
ok: [firewall] => (item={u'interface': u'GigabitEthernet0/1', u'nameif': u'dmz', u'address': u'10.1.255.33 255.255.255.240'})
ok: [firewall] => (item={u'interface': u'GigabitEthernet0/2', u'nameif': u'outside', u'address': u'217.110.110.1 255.255.255.0'})

PLAY RECAP *********************************************************************
dmz                        : ok=1    changed=0    unreachable=0    failed=0
firewall                   : ok=1    changed=0    unreachable=0    failed=0
inside                     : ok=1    changed=0    unreachable=0    failed=0
outside                    : ok=1    changed=0    unreachable=0    failed=0

[[email protected] firewall]$

In my GNS3 labs, I normally do not save the device configuration except the management IPs, because with Ansible I can deploy everything again within seconds and use different Playbooks depending on what I want to test. It gets even cooler if you use Semaphore (see my blog post: Ansible Semaphore) because you just click once on the Playbook you want to deploy.

Comment below if you have questions or problems.

Read my new posts about Ansible Playbook for Cisco ASAv Firewall Topology or Ansible Playbook for Cisco BGP Routing Topology.