Ansible Automation with Cisco ASA Multi-Context Mode

I thought I’d share my experience using Ansible with Cisco ASA firewalls in multi-context mode. Right from the beginning I had a few issues deploying the configuration, and switching between the different security contexts didn’t work reliably. I got the error below when I tried to run a playbook. Other times the changeto context didn’t work and applied the configuration to the wrong context:

berndonline@lab:~$ ansible-playbook -i inventory site.yml --ask-vault-pass
Vault password:

PLAY [all] ***************************************************************************************************************************************************************************

TASK [hostname : set dns and hostname] ***********************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: error: [Errno 61] Connection refused
fatal: [fwcontext01]: FAILED! => {"changed": false, "err": "[Errno 61] Connection refused", "msg": "unable to connect to socket"}
ok: [fwcontext02]

TASK [interfaces : write interfaces config] ******************************************************************************************************************************************
ok: [fwcontext02]

....

After a bit of troubleshooting I found a workaround: limit the number of parallel processes Ansible uses by setting forks to one in the ansible.cfg. The default is five forks if the value is not defined, as far as I remember.

[defaults]
inventory = ./inventory
host_key_checking=False
jinja2_extensions=jinja2.ext.do
forks = 1

In the example inventory file, the “inventory_hostname” variable represents the security context and, as you can see, “ansible_ssh_host” is set to the IP address of the admin context:

fwcontext01 ansible_ssh_host=192.168.0.1 ansible_ssh_port=22 ansible_ssh_user='ansible' ansible_ssh_pass='cisco'
fwcontext02 ansible_ssh_host=192.168.0.1 ansible_ssh_port=22 ansible_ssh_user='ansible' ansible_ssh_pass='cisco'

When you run the playbook again, you can see that it completes successfully but deploys the changes one by one to each firewall security context; the disadvantage is that the playbook takes much longer to run:

berndonline@lab:~$ ansible-playbook site.yml

PLAY [all] ***************************************************************************************************************************************************************************

TASK [hostname : set dns and hostname] ***********************************************************************************************************************************************
ok: [fwcontext01]
ok: [fwcontext02]

TASK [interfaces : write interfaces config] ******************************************************************************************************************************************
ok: [fwcontext01]
ok: [fwcontext02]

Example site.yml

---

- hosts: all
  connection: local
  gather_facts: 'no'

  vars:
    cli:
      username: "{{ ansible_ssh_user }}"
      password: "{{ ansible_ssh_pass }}"
      host: "{{ ansible_ssh_host }}"

  roles:
    - interfaces

In the example interfaces role you can see that the context is set to the “inventory_hostname” variable:

---

- name: write interfaces config
  asa_config:
    src: "templates/interfaces.j2"
    provider: "{{ cli }}"
    context: "{{ inventory_hostname }}"
  register: result

- name: enable interfaces
  asa_config:
    parents: "interface {{ item.0 }}"
    lines: "no shutdown"
    match: none
    provider: "{{ cli }}"
    context: "{{ inventory_hostname }}"
  when: result.changed
  with_items:
    - "{{ interfaces.items() }}"

After modifying the forks setting, the Ansible playbook runs reliably against the Cisco ASA in multi-context mode. As mentioned before, deploying the configuration is a bit slow if I compare it to Cumulus Linux or any other Linux system.

Please share your feedback.

Continuous Integration and Delivery for Networking with Cumulus Linux

Continuous Integration / Continuous Delivery (CI/CD) is becoming more and more popular for network automation, but the problem is how to validate your scripts and stage the configuration, because you don’t want to deploy untested code to a production system. Especially in networking, a mistake could be pretty destructive and cause a loss of connectivity.

I spent some days working on a Cumulus Linux lab using Vagrant, which I use to stage configuration. You can find the basic Ansible playbook and the gitlab-ci configuration for the Cumulus lab in my Github repo: cumulus-lab-provision

For the continuous integration and delivery (CI/CD) pipeline I am using Gitlab.com and their Gitlab-runner, which is running on my server. I will not go into too much detail about what is needed on the server; basically it runs Vagrant, libvirt (KVM), VirtualBox, Ansible and the Gitlab-runner.

  • You need to register your Gitlab-runner with the Gitlab repository.

  • The next step is to create your .gitlab-ci.yml, which defines your CI pipeline.
---
stages:
    - validate ansible
    - staging
    - production
validate:
    stage: validate ansible
    script:
        - bash ./linter.sh
staging:
    before_script:
        - git clone https://github.com/berndonline/cumulus-lab-vagrant.git
        - cd cumulus-lab-vagrant/
        - python ./topology_converter.py ./topology-staging.dot
          -p libvirt --ansible-hostfile
    stage: staging
    script:
        - bash ../staging.sh
production:
    before_script:
        - git clone https://github.com/berndonline/cumulus-lab-vagrant.git
        - cd cumulus-lab-vagrant/
        - python ./topology_converter.py ./topology-production.dot
          -p libvirt --ansible-hostfile
    stage: production
    when: manual
    script:
        - bash ../production.sh
    only:
        - master
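
The validate stage calls a small lint script. The original linter.sh is not shown here, but a minimal sketch, assuming yamllint is installed on the runner, could look like this:

#!/bin/bash
# Hypothetical linter.sh - fail the validate stage if any
# YAML file in the repository has syntax errors.
set -e
yamllint .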

In the gitlab-ci you can see that I clone the Cumulus Vagrant lab, which I use to spin up a virtual staging environment and run the Ansible playbook against the virtual lab. In my example the production stage is also a Vagrant environment, because I had no physical switches for testing.
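
The staging.sh script itself is not shown above; as a rough sketch, assuming the generated inventory and playbook file names, it boils down to booting the lab, running the playbook and tearing the lab down again:

#!/bin/bash
# Hypothetical staging.sh - the inventory and playbook names below
# are assumptions, not the files from my repository.
set -e
vagrant up                                        # boot the virtual Cumulus switches
ansible-playbook -i ./ansible-hosts ../site.yml   # provision the staging lab
vagrant destroy -f                                # clean up after a successful run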

  • Basically any commit or merge in the Gitlab repo triggers the pipeline, which is defined in the gitlab-ci.

  • You can see the details in the running job. The first stage is only to validate that the YAML files have the correct syntax.

  • Here are the details of the running staging job; when everything goes well, the job succeeds.

  • The last stage is production which needs to be triggered manually.

  • After the changes have run through all defined stages, you have successfully validated, staged and deployed your configuration to a Cumulus production system.

This is a completely different way of working for a network engineer, but it is the way things are going in fully automated datacenter network environments. It gets very powerful when you combine this with the Cumulus NetQ server to validate the state of your switch fabric after you run changes in production.
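
A NetQ validation step can be as simple as a single command on the NetQ server, for example checking the health of all BGP sessions across the fabric after a change:

netq check bgp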

The next topic I am working on is using Cumulus NetQ to validate configuration changes.

Here again are the two repositories I use:

https://github.com/berndonline/cumulus-lab-vagrant

https://github.com/berndonline/cumulus-lab-provision

Read my new posts about an Ansible Playbook for Cumulus Linux BGP IP-Fabric and Cumulus NetQ Validation and BGP EVPN and VXLAN with Cumulus Linux.

Cisco ASA Virtual Context Mode

A single Cisco ASA or a cluster of two ASAs can be partitioned into multiple virtual firewalls known as security contexts. Each context is an independent firewall, with its own security policy, interfaces, and administrators. These contexts are similar to having multiple standalone ASA devices. In combination with failover groups you can run an ASA cluster in active/active mode and utilize both devices. Don’t forget that when a failover happens, both failover groups need to run on a single device; keep enough resources free on both devices and do not oversubscribe too much.

You have to check which features are supported in context mode, because there are limitations. In version 8, dynamic routing protocols, VPN, Threat Detection and Quality of Service are unsupported. Version 9 brings some changes: dynamic routing protocols (except RIP and OSPFv3) and site-to-site IPsec VPNs are now supported.

Here is a configuration example of how to set up a Cisco ASA 5580 with 10 Gigabit Ethernet interfaces.

Enabling the context mode

mode multiple noconfirm

Physical interface configuration

interface GigabitEthernet4/2 
  description Failover 
  no shutdown 
  exit 

interface GigabitEthernet4/3 
  description Stateful 
  no shutdown 
  exit 

interface TenGigabitEthernet5/0 
  description TeTrunk-1st 
  no shutdown 
  exit 

interface TenGigabitEthernet5/1 
  description TeTrunk-2nd 
  no shutdown 
  exit

Redundant interface configuration

interface Redundant 1
  description Redundant-Trunk
  member-interface TenGigabitEthernet5/0
  member-interface TenGigabitEthernet5/1
  exit

interface Redundant 1.800
  vlan 800
  description Link-Outside1
  exit

interface Redundant 1.801
  vlan 801
  description Link-Outside2
  exit

interface Redundant 1.100
  vlan 100
  description Link-Inside1
  exit

interface Redundant 1.101
  vlan 101
  description Link-Inside2
  exit

interface Redundant 1.500
  vlan 500
  description Link-Management
  exit

Now you need to configure the ASA failover settings. As you can see in the failover group configuration, I assign group 1 to the primary device and group 2 to the secondary device for an active/active set-up. When I create the virtual security contexts, I join them to the different failover groups.

failover group 1
  primary
  polltime interface 1 holdtime 5
  exit

failover group 2
  secondary
  polltime interface 1 holdtime 5
  exit

failover

failover lan unit primary
failover lan interface failover GigabitEthernet4/2
failover interface ip failover 169.254.0.1 255.255.255.0 standby 169.254.0.2

failover link stateful GigabitEthernet4/3
failover interface ip stateful 169.254.1.1 255.255.255.0 standby 169.254.1.2

failover polltime unit 2 holdtime 6
failover polltime interface 1 holdtime 5
failover timeout 0:00:00

failover active

Failover configuration on the secondary device

interface GigabitEthernet4/2
  description Failover
  no shutdown
  exit

failover lan unit secondary
failover lan interface failover GigabitEthernet4/2
failover interface ip failover 169.254.0.1 255.255.255.0 standby 169.254.0.2

failover

copy running-config startup-config

Now you can set up the virtual contexts and allocate the interfaces configured before:

admin-context admin-asa-01

context admin-asa-01
  allocate-interface Redundant1.500 Link-Management
  config-url disk0:/admin-asa-01.conf
  join-failover-group 1
  exit

context virtual-asa-02
  allocate-interface Redundant1.800 Link-Outside1
  allocate-interface Redundant1.100 Link-Inside1
  config-url disk0:/virtual-asa-02.conf
  join-failover-group 1
  exit

context virtual-asa-03
  allocate-interface Redundant1.801 Link-Outside2
  allocate-interface Redundant1.101 Link-Inside2
  config-url disk0:/virtual-asa-03.conf
  join-failover-group 2
  exit

In the end, save the configuration:

write memory all

Afterwards you can change to the configured contexts with the command

changeto context virtual-asa-02

and start configuring your virtual firewalls.
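
Inside a context the allocated interfaces appear under the mapped names from the allocate-interface commands. A minimal first step could look like this; the addressing is just an example:

interface Link-Outside1
  nameif outside
  security-level 0
  ip address 192.0.2.1 255.255.255.0 standby 192.0.2.2
  no shutdown
  exit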

GNS3 Network Simulator

Found something really cool today 🙂

GNS3 is a graphical network simulator where you can set up complex virtual networks and run Cisco and Juniper routers or switches. The best part is that you can also integrate Qemu and VirtualBox into your virtual lab environment, which I really love. You can easily test new configurations without having to set everything up in hardware.

The only little problem is that you need quite a powerful system to do all of that. That said, I tested GNS3 on a 3-year-old laptop with an Intel Core2Duo and 4 GB RAM and ran up to 6 Cisco routers without any big problems, which is enough for me at the moment.

Ah, I forgot: you can of course also use Wireshark to capture packets on a link between two devices.

Here is the link to the website: www.gns3.net