Ansible Playbook for Arista vEOS BGP IP-Fabric

Over the Christmas holidays, I was working just for fun on an Arista vEOS Vagrant topology and Ansible Playbook. I reused the Playbook from my previous post, Ansible Playbook for Cumulus Linux BGP IP-Fabric and Cumulus NetQ Validation.

Arista only provides a VirtualBox vEOS image, and an additional ISO image is needed to boot the virtual appliance. I don't understand why they have done it this way; I prefer the way Cumulus provides its VX images for testing, which work with both VirtualBox and KVM.

I found an interesting blog post on how to run vEOS images with KVM (libvirt). I tried it and could run vEOS in KVM, but unfortunately it wasn't stable enough for more complex virtual network topologies, so I had to switch back to VirtualBox. I will give it another try in a few months because I prefer KVM over VirtualBox.

Anyway, you'll find more information elsewhere about how to use vEOS with VirtualBox and Vagrant.

My Virtualbox Vagrantfile can be found in my Github repository: https://github.com/berndonline/arista-lab-vagrant

Network overview:

Ansible Playbook:

As mentioned before, I tried to stay as close as possible to my Cumulus Linux Ansible Playbook and keep the variables and roles the same. There are of course differences in the Jinja2 templates and tasks, but the overall structure is similar.

Here you’ll find the repository with the Ansible Playbook: https://github.com/berndonline/arista-lab-provision

Because Arista didn't prepare the image very well and only created a vagrant user without adding the SSH key for authentication, I needed to use a CLI provider with username and password. This is only a minor issue; otherwise it works the same. See the site.yml below:

---

- hosts: network

  connection: local
  gather_facts: 'False'

  vars:
    cli:
      username: vagrant
      password: vagrant

  roles:
    - leafgroups
    - hostname
    - interfaces
    - routing
    - ntp

In the roles, I have used the Arista EOS Ansible modules eos_config and eos_system.
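
To give an idea of how the roles use these modules, here is a minimal sketch of what the hostname and interface tasks could look like with the CLI provider from the site.yml; the variable and template names are placeholders, the real tasks are in the roles in my repository:

# illustrative sketch only, variable and template names are assumptions
- name: write hostname and domain name
  eos_system:
    hostname: "{{ hostname }}"
    domain_name: "{{ domain_name }}"
    provider: "{{ cli }}"

- name: write interface configuration
  eos_config:
    src: interfaces.j2
    provider: "{{ cli }}"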

Boot up the Vagrant environment and then run the Playbook:

PLAY [network] *****************************************************************

TASK [leafgroups : create leaf groups based on clag_pairs] *********************
ok: [leaf-1] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2']))
skipping: [leaf-1] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 
skipping: [leaf-3] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
ok: [leaf-3] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4']))
skipping: [leaf-4] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
ok: [leaf-2] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2']))
skipping: [leaf-2] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 
ok: [leaf-4] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4']))
skipping: [spine-1] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
skipping: [spine-1] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 
skipping: [spine-2] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
skipping: [spine-2] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 

TASK [leafgroups : include leaf group variables] *******************************
ok: [leaf-1] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2']))
skipping: [leaf-3] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
skipping: [leaf-1] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 
skipping: [leaf-4] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
skipping: [spine-1] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
skipping: [spine-1] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 
ok: [leaf-3] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4']))
ok: [leaf-2] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2']))
skipping: [leaf-2] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 
ok: [leaf-4] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4']))
skipping: [spine-2] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
skipping: [spine-2] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 

TASK [hostname : write hostname and domain name] *******************************
changed: [leaf-4]
changed: [spine-1]
changed: [leaf-1]
changed: [leaf-3]
changed: [leaf-2]
changed: [spine-2]

TASK [interfaces : write interface configuration] ******************************
changed: [spine-1]
changed: [leaf-2]
changed: [leaf-4]
changed: [leaf-3]
changed: [leaf-1]
changed: [spine-2]

TASK [routing : write routing configuration] ***********************************
changed: [leaf-1]
changed: [leaf-4]
changed: [spine-1]
changed: [leaf-2]
changed: [leaf-3]
changed: [spine-2]

TASK [ntp : write ntp configuration] *******************************************
changed: [leaf-2] => (item=216.239.35.8)
changed: [leaf-1] => (item=216.239.35.8)
changed: [leaf-3] => (item=216.239.35.8)
changed: [spine-1] => (item=216.239.35.8)
changed: [leaf-4] => (item=216.239.35.8)
changed: [spine-2] => (item=216.239.35.8)

PLAY RECAP *********************************************************************
leaf-1                     : ok=6    changed=4    unreachable=0    failed=0   
leaf-2                     : ok=6    changed=4    unreachable=0    failed=0   
leaf-3                     : ok=6    changed=4    unreachable=0    failed=0   
leaf-4                     : ok=6    changed=4    unreachable=0    failed=0   
spine-1                    : ok=4    changed=4    unreachable=0    failed=0   
spine-2                    : ok=4    changed=4    unreachable=0    failed=0   

I didn't actually use the leafgroups variables further in my Playbook, but I left the role in place just in case.

Because Arista has nothing similar to Cumulus NetQ to validate the configuration, I created a simple arista_check_icmp.yml playbook that uses ping from the leaf switches to test whether the configuration was successfully deployed.
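
To give an idea of what such a check looks like, here is a minimal sketch of a validation task using the eos_command module; the string match on the ping output is an assumption and the address list is shortened, the real playbook is in the repository:

- name: validate connection from leaf-1
  eos_command:
    commands:
      - "ping {{ item }}"
    provider: "{{ cli }}"
  register: ping_result
  # matching on "0% packet loss" is an assumption about the EOS ping output
  failed_when: "'0% packet loss' not in ping_result.stdout[0]"
  when: inventory_hostname == "leaf-1"
  with_items:
    - 10.255.0.4
    - 10.255.0.5
    - 10.255.0.6

Running the check playbook against the leaf switches produces the following output: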

PLAY [leaf] ********************************************************************

TASK [validate connection from leaf-1] *****************************************
skipping: [leaf-3] => (item=10.255.0.4) 
skipping: [leaf-3] => (item=10.255.0.5) 
skipping: [leaf-3] => (item=10.255.0.6) 
skipping: [leaf-2] => (item=10.255.0.4) 
skipping: [leaf-2] => (item=10.255.0.5) 
skipping: [leaf-2] => (item=10.255.0.6) 
skipping: [leaf-3] => (item=10.0.102.252) 
skipping: [leaf-4] => (item=10.255.0.4) 
skipping: [leaf-3] => (item=10.0.102.253) 
skipping: [leaf-3] => (item=10.0.102.254) 
skipping: [leaf-4] => (item=10.255.0.5) 
skipping: [leaf-2] => (item=10.0.102.252) 
skipping: [leaf-4] => (item=10.255.0.6) 
skipping: [leaf-2] => (item=10.0.102.253) 
skipping: [leaf-2] => (item=10.0.102.254) 
skipping: [leaf-4] => (item=10.0.102.252) 
skipping: [leaf-4] => (item=10.0.102.253) 
skipping: [leaf-4] => (item=10.0.102.254) 
ok: [leaf-1] => (item=10.255.0.4)
ok: [leaf-1] => (item=10.255.0.5)
ok: [leaf-1] => (item=10.255.0.6)
ok: [leaf-1] => (item=10.0.102.252)
ok: [leaf-1] => (item=10.0.102.253)
ok: [leaf-1] => (item=10.0.102.254)

TASK [validate connection from leaf-2] *****************************************
skipping: [leaf-1] => (item=10.255.0.3) 
skipping: [leaf-3] => (item=10.255.0.3) 
skipping: [leaf-1] => (item=10.255.0.5) 
skipping: [leaf-3] => (item=10.255.0.5) 
skipping: [leaf-1] => (item=10.255.0.6) 
skipping: [leaf-3] => (item=10.255.0.6) 
skipping: [leaf-1] => (item=10.0.102.252) 
skipping: [leaf-1] => (item=10.0.102.253) 
skipping: [leaf-4] => (item=10.255.0.3) 
skipping: [leaf-3] => (item=10.0.102.252) 
skipping: [leaf-1] => (item=10.0.102.254) 
skipping: [leaf-3] => (item=10.0.102.253) 
skipping: [leaf-3] => (item=10.0.102.254) 
skipping: [leaf-4] => (item=10.255.0.5) 
skipping: [leaf-4] => (item=10.255.0.6) 
skipping: [leaf-4] => (item=10.0.102.252) 
skipping: [leaf-4] => (item=10.0.102.253) 
skipping: [leaf-4] => (item=10.0.102.254) 
ok: [leaf-2] => (item=10.255.0.3)
ok: [leaf-2] => (item=10.255.0.5)
ok: [leaf-2] => (item=10.255.0.6)
ok: [leaf-2] => (item=10.0.102.252)
ok: [leaf-2] => (item=10.0.102.253)
ok: [leaf-2] => (item=10.0.102.254)

TASK [validate connection from leaf-3] *****************************************
skipping: [leaf-1] => (item=10.255.0.3) 
skipping: [leaf-1] => (item=10.255.0.4) 
skipping: [leaf-2] => (item=10.255.0.3) 
skipping: [leaf-1] => (item=10.255.0.6) 
skipping: [leaf-1] => (item=10.0.101.252) 
skipping: [leaf-2] => (item=10.255.0.4) 
skipping: [leaf-2] => (item=10.255.0.6) 
skipping: [leaf-1] => (item=10.0.101.253) 
skipping: [leaf-4] => (item=10.255.0.3) 
skipping: [leaf-2] => (item=10.0.101.252) 
skipping: [leaf-4] => (item=10.255.0.4) 
skipping: [leaf-1] => (item=10.0.101.254) 
skipping: [leaf-4] => (item=10.255.0.6) 
skipping: [leaf-2] => (item=10.0.101.253) 
skipping: [leaf-4] => (item=10.0.101.252) 
skipping: [leaf-2] => (item=10.0.101.254) 
skipping: [leaf-4] => (item=10.0.101.253) 
skipping: [leaf-4] => (item=10.0.101.254) 
ok: [leaf-3] => (item=10.255.0.3)
ok: [leaf-3] => (item=10.255.0.4)
ok: [leaf-3] => (item=10.255.0.6)
ok: [leaf-3] => (item=10.0.101.252)
ok: [leaf-3] => (item=10.0.101.253)
ok: [leaf-3] => (item=10.0.101.254)

TASK [validate connection from leaf-4] *****************************************
skipping: [leaf-1] => (item=10.255.0.3) 
skipping: [leaf-3] => (item=10.255.0.3) 
skipping: [leaf-1] => (item=10.255.0.4) 
skipping: [leaf-3] => (item=10.255.0.4) 
skipping: [leaf-1] => (item=10.255.0.5) 
skipping: [leaf-2] => (item=10.255.0.3) 
skipping: [leaf-3] => (item=10.255.0.5) 
skipping: [leaf-3] => (item=10.0.101.252) 
skipping: [leaf-2] => (item=10.255.0.4) 
skipping: [leaf-1] => (item=10.0.101.252) 
skipping: [leaf-2] => (item=10.255.0.5) 
skipping: [leaf-2] => (item=10.0.101.252) 
skipping: [leaf-3] => (item=10.0.101.253) 
skipping: [leaf-1] => (item=10.0.101.253) 
skipping: [leaf-1] => (item=10.0.101.254) 
skipping: [leaf-3] => (item=10.0.101.254) 
skipping: [leaf-2] => (item=10.0.101.253) 
skipping: [leaf-2] => (item=10.0.101.254) 
ok: [leaf-4] => (item=10.255.0.3)
ok: [leaf-4] => (item=10.255.0.4)
ok: [leaf-4] => (item=10.255.0.5)
ok: [leaf-4] => (item=10.0.101.252)
ok: [leaf-4] => (item=10.0.101.253)
ok: [leaf-4] => (item=10.0.101.254)

PLAY RECAP *********************************************************************
leaf-1                     : ok=1    changed=0    unreachable=0    failed=0   
leaf-2                     : ok=1    changed=0    unreachable=0    failed=0   
leaf-3                     : ok=1    changed=0    unreachable=0    failed=0   
leaf-4                     : ok=1    changed=0    unreachable=0    failed=0   

I don't usually work with Arista devices; this was an attempt to use a different switch vendor while keeping the same type of Ansible Playbook.

Please tell me if you like it and share your feedback.


Ansible Playbook for Cisco ASAv Firewall Topology

More about Ansible network automation, this time with Cisco ASAv and continuous integration testing using Vagrant and Gitlab-CI, like in my previous posts.

Network overview:

Here’s my Github repository where you can find the complete Ansible Playbook: https://github.com/berndonline/asa-lab-provision

Automating firewall configuration is not that easy and can get very complicated because you have different objects, access-lists and service policies to configure, which together make the playbook complex rather than simple.

What you won't find in my playbook is how to automate the cluster deployment, because this wasn't possible in my scenario using ASAv and Vagrant. I didn't have a physical Cisco ASA firewall on hand to do this, but I might add it in the coming months.

Let's look at the different variable files I created, starting with the host_vars file asa-1.yml, which is very similar to that of a Cisco router:

---

hostname: asa-1
domain_name: lab.local

interfaces:
  0/0:
    alias: connection rtr-1 inside
    nameif: inside
    security_level: 100
    address: 10.0.255.1
    mask: 255.255.255.0

  0/1:
    alias: connection rtr-2 outside
    nameif: outside
    security_level: 0
    address: 217.100.100.1
    mask: 255.255.255.0

routes:
  - route outside 0.0.0.0 0.0.0.0 217.100.100.254 1

I then use multiple files in group_vars (objects.yml, object-groups.yml, access-lists.yml and nat.yml) to configure the specific firewall settings.
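
To illustrate the structure, a group_vars file like objects.yml could look roughly like the snippet below; the object names, addresses and keys are made up for this example and are not taken from my repository:

---

# hypothetical example structure, not the actual group_vars content
objects:
  - name: obj-web-server
    type: host
    host: 10.0.255.10
  - name: obj-inside-net
    type: subnet
    subnet: 10.0.255.0
    mask: 255.255.255.0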

Roles:

  • Hostname: The task in main.yml uses the Ansible module asa_config and configures hostname and domain name.
  • Interfaces: This role uses the Ansible module asa_config to deploy the template interfaces.j2 and configure the interfaces. The main.yml contains a second task to enable the interfaces after the template has applied the configuration.
  • Routing: Similar to the interfaces role; it also uses the asa_config module to deploy the template routing.j2 for the static routes.
  • Objects: The first task in main.yml loads the objects.yml from group_vars, and the second task deploys the template objects.j2.
  • Object-Groups: Uses the same tasks in main.yml and the template object-groups.j2 like the objects role, but the commands are slightly different.
  • Access-Lists: One of the more complicated roles I needed to work on. The main.yml contains multiple tasks to load variables like in the previous roles, then a task that clears the access-lists if the variable "override_acl" in the access-lists.yml group_vars is set to "true", otherwise the following tasks are skipped. When the variable is set to true and the access-lists are cleared, the role writes new access-lists using the Ansible module asa_acl and finishes with a task that assigns the newly created access-lists to the interfaces. A simplified sketch of this follows after the list.
  • NAT: This role is again similar to the objects role; main.yml loads the variable file and deploys the template nat.j2. The NAT role uses object NAT and only works if the object has been created before in the objects group_vars.
  • Policy-Framework: Multiple tasks in main.yml first clear the global policy and policy maps and afterwards recreate them, following a similar approach to the access-lists role to keep it consistent.
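
To make the access-lists part a bit more tangible, here is a simplified sketch of what the asa_acl task and the interface assignment could look like; the ACL name and entries are placeholders and the connection details are left out, the real tasks are in the repository:

- name: write access-list outside-in
  asa_acl:
    lines:
      # placeholder entries, all lines must belong to the same access-list
      - access-list outside-in extended permit tcp any object obj-web-server eq 443
      - access-list outside-in extended deny ip any any log

- name: assign access-list to the outside interface
  asa_config:
    lines:
      - access-group outside-in in interface outside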

Main Ansible Playbook site.yml

---

- hosts: asa-1

  connection: local
  user: vagrant
  gather_facts: 'no'

  roles:
    - hostname
    - interfaces
    - routing
    - objects
    - object-groups
    - access-lists
    - nat
    - policy-framework

When a change triggers the gitlab-ci pipeline, it spins up the Vagrant instances and executes the Ansible Playbooks. After the Vagrant instances are booted, the two routers rtr-1 and rtr-2 are first configured with cisco_router_config.yml, and afterwards the main site.yml is run.
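
The order of the playbook runs in the pipeline job looks roughly like the sketch below; the job name and inventory file name are assumptions, and the real definition is in the .gitlab-ci.yml in the repository:

# illustrative gitlab-ci job sketch, names are assumptions
deploy_and_test:
  script:
    - vagrant up
    - ansible-playbook -i hosts cisco_router_config.yml
    - ansible-playbook -i hosts site.yml
    - ansible-playbook -i hosts asa_check_icmp.yml
  after_script:
    - vagrant destroy -f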

Once the main playbook for the Cisco ASA finishes, a last connectivity check is executed using the playbook asa_check_icmp.yml, just a simple ping to see if the base configuration was applied correctly.

If everything goes well, like in this example, the job is successful:

I will continue to improve the Playbook and the CICD pipeline so come back later to check it out.


Ansible Playbook for Cisco BGP Routing Topology

This is my Ansible Playbook for a simple Cisco BGP routing topology and using a CICD pipeline for integration testing. The virtual network environment is created on-demand by using Vagrant, see my post about Cisco IOSv and XE network simulation using Vagrant.

Network overview:

Here’s my Github repository where you can find the complete Ansible Playbook: https://github.com/berndonline/cisco-lab-provision

You can find all the variables for the interface and routing configuration under host_vars. Below is an example for router rtr-1:

---

hostname: rtr-1
domain_name: lab.local

loopback:
  address: 10.255.0.1
  mask: 255.255.255.255

interfaces:
  0/1:
    alias: connection rtr-2
    address: 10.0.255.1
    mask: 255.255.255.252

  0/2:
    alias: connection rtr-3
    address: 10.0.255.5
    mask: 255.255.255.252

bgp:
  asn: 65001
  neighbor:
    - {address: 10.0.255.2, remote_as: 65000}
    - {address: 10.0.255.6, remote_as: 65000}
  networks:
    - {network: 10.0.255.0, mask: 255.255.255.252}
    - {network: 10.0.255.4, mask: 255.255.255.252}
    - {network: 10.255.0.1, mask: 255.255.255.255}
  maxpath: 2

Roles:

  • Hostname: The task in main.yml uses the Ansible module ios_system, configures the hostname and domain name, and disables DNS lookups.
  • Interfaces: This role uses the Ansible module ios_config to deploy the template interfaces.j2 and configure the interfaces. The main.yml contains a second task to enable the interfaces after the template has applied the configuration.
  • Routing: Very similar to the interfaces role; it also uses the ios_config module to deploy the template routing.j2 for the BGP routing configuration. A simplified sketch of these tasks follows after the list.
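
As a rough illustration (not the exact tasks from the repository), the interface and routing tasks could look like the sketch below, assuming IOSv-style GigabitEthernet interface names:

- name: write interface configuration
  ios_config:
    src: interfaces.j2

- name: enable interfaces
  ios_config:
    lines:
      - no shutdown
    # interface naming is an assumption based on IOSv
    parents: "interface GigabitEthernet{{ item.key }}"
  with_dict: "{{ interfaces }}"

- name: write routing configuration
  ios_config:
    src: routing.j2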

Main Ansible Playbook site.yml:

---

- hosts: all

  connection: local
  user: vagrant
  gather_facts: 'no'

  roles:
    - hostname
    - interfaces
    - routing

When a change triggers the gitlab-ci pipeline, it spins up the Vagrant instances and executes the main Ansible Playbook:

After the main site.yml has run, a second Playbook, cisco_check_icmp.yml, is executed for basic connectivity testing. This uses the Ansible module ios_ping and is useful in my case to validate whether the configuration was correctly applied:
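
A reduced version of such a check could look like the sketch below; the two addresses are the BGP neighbours from the rtr-1 host_vars above, while the real playbook in the repository may differ:

---

- hosts: rtr-1

  connection: local
  gather_facts: 'no'

  tasks:
    - name: validate connection from rtr-1
      ios_ping:
        dest: "{{ item }}"
      with_items:
        - 10.0.255.2
        - 10.0.255.6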

If everything goes well, like in this example, the job is successful:

I will continue to improve the Playbook and the CICD pipeline so come back later to check it out.


Getting started with Ansible AWX (Open Source Tower version)

Ansible released AWX a few weeks ago, an open-source (community-supported) version of their commercial Ansible Tower product. It is a web-based graphical interface to manage Ansible playbooks and inventories, and to schedule jobs that run playbooks.

You can find the GitHub repository here: https://github.com/ansible/awx

Let's start with the installation of Ansible AWX, which is very easy because everything is dockerized; see the install guide for more information.

Modify the inventory file under the installer folder and change the Postgres data folder, which is otherwise located under /tmp; also change the Postgres DB username and password if needed. I would recommend binding AWX to localhost and putting an Nginx reverse proxy with SSL encryption in front of it.

Changes in the inventory file:

postgres_data_dir=/var/lib/postgresql/data/
host_port=127.0.0.1:8052

Start the build of the Docker container:

ansible-playbook -i inventory install.yml

After the Ansible Playbook run completes, you will see the following Docker containers:

berndonline@lab:~/awx/installer$ docker ps
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                NAMES
26a73c91cb04        ansible/awx_task:latest   "/tini -- /bin/sh ..."   2 days ago          Up 24 hours         8052/tcp                             awx_task
07774696a7f2        ansible/awx_web:latest    "/tini -- /bin/sh ..."   2 days ago          Up 24 hours         127.0.0.1:8052->8052/tcp             awx_web
981f4f02c759        memcached:alpine          "docker-entrypoint..."   2 days ago          Up 24 hours         11211/tcp                            memcached
4f4a3141b54d        rabbitmq:3                "docker-entrypoint..."   2 days ago          Up 24 hours         4369/tcp, 5671-5672/tcp, 25672/tcp   rabbitmq
faf07f7b4682        postgres:9.6              "docker-entrypoint..."   2 days ago          Up 24 hours         5432/tcp                             postgres
berndonline@lab:~/awx/installer$

Install Nginx:

sudo apt-get update
sudo apt-get install nginx
sudo rm /etc/nginx/sites-enabled/default

Create Nginx vhosts configuration:

sudo vi /etc/nginx/sites-available/awx

server {
    listen 443 ssl;
    server_name awx.domain.com;

    ssl on;
    ssl_certificate /etc/nginx/ssl/awx.domain.com-cert.pem;
    ssl_certificate_key /etc/nginx/ssl/awx.domain.com-key.pem;

    location / {
        proxy_pass http://127.0.0.1:8052;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Create a symlink in sites-enabled to point to the awx config:

sudo ln -s /etc/nginx/sites-available/awx /etc/nginx/sites-enabled/awx

Reload Nginx to apply configuration:

sudo systemctl reload nginx

Afterwards you are able to log in with username "admin" and password "password":

I created a simple job for testing with AWX. You first create a project, credentials and inventories; the project points to your Git repository:

Under the job you configure which project, credentials and inventories to use:

Once saved, you can manually trigger the job; it first pulls the latest playbook from your version control repository and afterwards executes the configured Ansible playbook:

The job details look very similar to running a playbook on the CLI:

Ansible AWX is a very useful tool if you need to manage different Ansible playbooks and do job scheduling, and you are not already using other tools like Jenkins or Gitlab-CI. But even then, AWX is a good addition for running ad-hoc playbooks.

Check out my new articles about Automate Ansible AWX configuration using Tower-CLI and Build Ansible Tower Container.


Ansible Playbook for Cumulus Linux BGP IP-Fabric and Cumulus NetQ Validation

This is my Ansible Playbook for a Cumulus Linux BGP IP-Fabric using BGP unnumbered and Cumulus NetQ to validate the configuration in a CICD pipeline. I use the same CICD pipeline from my previous post about Continuous Integration and Delivery for Networking with Cumulus Linux but added the Cumulus NetQ validation in the production stage to check BGP and CLAG configuration.

Network overview:

Here's my Github repository where you can find the complete Ansible Playbook: https://github.com/berndonline/cumulus-lab-provision

The variables are split between group_vars and host_vars. I still need to see if I can find a better way to organise them, because the interface settings for spine and edge switches are in group_vars, while for the leaf switches the interface configuration is per host in host_vars. That's not ideal at the moment; it should be the same for all devices.

Roles:

  • Hostname: This task changes the hostname.
  • Interfaces: This creates the interfaces and bridge (only leafs and edges) configuration. The task uses the templates interfaces.j2 and interfaces_config.j2 to create the configuration files under /etc/network/… A simplified sketch of this follows after the list.
  • Routing: The template frr.j2 creates the FRR (Free Range Routing) configuration file. FRR replaces Quagga since Cumulus Linux version 3.4.x.
  • PTM: Also uses a template, topology.j2, to generate the topology file for the Prescriptive Topology Manager (PTM).
  • NTP: NTP and timezone settings.
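
To show how a role ties into the handlers of the site.yml below, here is a simplified sketch of what the interfaces tasks could look like; the destination file names are assumptions and the exact task layout in the repository may differ:

- name: write interfaces configuration
  template:
    src: interfaces.j2
    dest: /etc/network/interfaces
  notify: reload networking

- name: write additional interface configuration
  template:
    src: interfaces_config.j2
    # destination file name is an assumption
    dest: /etc/network/interfaces.d/config
  notify: reload networking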

In most cases I use Jinja2 templates to generate the configuration files. The site.yml itself is very simple: it executes the different roles and triggers the handlers if a role makes a change.

---

- hosts: network
  strategy: free

  user: cumulus
  become: 'True'
  gather_facts: 'False'

  handlers:
    - name: reload networking
      command: "{{item}}"
      with_items:
        - ifreload -a
        - sleep 10

    - name: reload frr
      service: name=frr state=reloaded

    - name: apply hostname
      command: hostname -F /etc/hostname

    - name: restart netq agent
      command: netq config agent restart

    - name: reload ptmd
      service: name=ptmd state=reloaded

    - name: apply timezone
      command: /usr/sbin/dpkg-reconfigure --frontend noninteractive tzdata

    - name: restart ntp
      service: name=ntp state=restarted

  roles:
    - hostname
    - interfaces
    - routing
    - ptm
    - ntp

As mentioned in previous posts, I use Gitlab-CI for my Continuous Integration / Continuous Delivery (CICD) pipeline to simulate changes against a virtual Cumulus Linux network using Vagrant. You can find more information about the pipeline configuration in the .gitlab-ci.yml.
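
The two stages roughly follow the pattern below; this is only an illustrative sketch with assumed branch names and inventory file name, the real definition is in the .gitlab-ci.yml:

stages:
  - staging
  - production

staging:
  stage: staging
  script:
    - vagrant up
    - ansible-playbook -i hosts site.yml
  after_script:
    - vagrant destroy -f
  only:
    - staging

production:
  stage: production
  script:
    - vagrant up
    - ansible-playbook -i hosts site.yml
    - ansible-playbook -i hosts netq_check_bgp.yml
    - ansible-playbook -i hosts netq_check_clag.yml
  after_script:
    - vagrant destroy -f
  only:
    # branch name is an assumption
    - master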

Changes in the staging branch spin up the Vagrant environment but only execute the Ansible Playbook:

Cumulus NetQ configuration validation in production:

The production stage in the pipeline spins up the Vagrant environment and executes the Ansible Playbook, then continues with the two NetQ checks netq_check_bgp.yml and netq_check_clag.yml to validate the BGP and CLAG configuration:
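
Such a check can be kept very simple; the sketch below just runs the NetQ CLI on one switch and fails when the summary doesn't report zero failed nodes. The matched string is an assumption about the netq output, and the real netq_check_bgp.yml may work differently:

---

- hosts: leaf-1

  user: cumulus
  become: 'True'
  gather_facts: 'False'

  tasks:
    - name: check bgp sessions with netq
      command: netq check bgp
      register: netq_bgp
      # the matched phrase is an assumption about the netq summary output
      failed_when: "'Failed Nodes: 0' not in netq_bgp.stdout"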

The result will look like this when all stages finish successfully:

I will continue to improve the Playbook and the CICD pipeline so come back later to check it out.

In my repository there are some other useful Playbooks for config backup and restore, as well as for collecting and removing cl-support:

  • config_backup.yml
  • config_restore.yml
  • cl-support_get.yml
  • cl-support_remove.yml

Please tell me if you like it and share your feedback.

See my new post about BGP EVPN and VXLAN with Cumulus Linux
