Deploy OpenShift 3.7 Origin Cluster with Ansible

Something completely different from my more network-related posts: this time it is about Platform as a Service with OpenShift Origin. There is a big push from development teams towards containerized platform services.

I was testing the official OpenShift Origin Ansible Playbook to install a small 5 node cluster and created an OpenShift Vagrant environment for this.

Cluster overview:

I recommend having a look at the official Red Hat OpenShift documentation to understand the architecture, because it is quite a complex platform.

As a prerequisite, you need to install the vagrant-hostmanager plugin because OpenShift needs to resolve hostnames and I don't want to install a separate DNS server. You can find more information here: https://github.com/devopsgroup-io/vagrant-hostmanager

vagrant plugin install vagrant-hostmanager

sudo bash -c 'cat << EOF > /etc/sudoers.d/vagrant_hostmanager2
Cmnd_Alias VAGRANT_HOSTMANAGER_UPDATE = /bin/cp <your-home-folder>/.vagrant.d/tmp/hosts.local /etc/hosts
%sudo ALL=(root) NOPASSWD: VAGRANT_HOSTMANAGER_UPDATE
EOF'

Next, clone my Vagrant repository and the official openshift-ansible repository:

git clone git@github.com:berndonline/openshift-origin-vagrant.git
git clone git@github.com:openshift/openshift-ansible.git

Let’s start first by booting the OpenShift vagrant environment:

cd openshift-origin-vagrant/
./vagrant_up.sh

The vagrant-hostmanager plugin dynamically updates the /etc/hosts file on both the guest and the host machine:

...
## vagrant-hostmanager-start id: 55ed9acf-25e9-4b19-bfab-e0812a292dc0
10.255.1.81	origin-master
10.255.1.231	origin-etcd
10.255.1.182	origin-infra
10.255.1.72	origin-node-1
10.255.1.145	origin-node-2
## vagrant-hostmanager-end
...

Let’s have a quick look at the OpenShift inventory file. It contains settings for the different node types as well as custom OpenShift and Vagrant variables. You need to modify a few things, such as the public hostname and the default subdomain:

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=vagrant
ansible_become=yes

deployment_type=origin
openshift_release=v3.7.0
containerized=true
openshift_install_examples=true
enable_excluders=false
openshift_check_min_host_memory_gb=4
openshift_disable_check=docker_image_availability,docker_storage,disk_availability

# use htpasswd authentication with demo/demo
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'demo': '$apr1$.MaA77kd$Rlnn6RXq9kCjnEfh5I3w/.'}

# put the router on dedicated infra node
openshift_hosted_router_selector='region=infra'
openshift_master_default_subdomain=origin.paas.domain.com

# put the image registry on dedicated infra node
openshift_hosted_registry_selector='region=infra'

# project pods should be placed on primary nodes
osm_default_node_selector='region=primary'

# Vagrant variables
ansible_port='22' 
ansible_user='vagrant'
ansible_ssh_private_key_file='/home/berndonline/.vagrant.d/insecure_private_key'

[masters]
origin-master  openshift_public_hostname="console.paas.domain.com"

[etcd]
origin-etcd

[nodes]
# the master needs to be included in the node group so it gets configured in the SDN
origin-master
origin-infra openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
origin-node-[1:2] openshift_node_labels="{'region': 'primary', 'zone': 'default'}"

Now that we are ready, we need to check out the latest release and execute the Ansible Playbook:

cd openshift-ansible/
git checkout release-3.7
ansible-playbook ./playbooks/byo/config.yml -i ../openshift-origin-vagrant/inventory

The playbook takes forever to run, so do something else for the next 10 to 15 mins.

...

PLAY RECAP **********************************************************************************************************************************************************
localhost                  : ok=13   changed=0    unreachable=0    failed=0
origin-etcd                : ok=147  changed=47   unreachable=0    failed=0
origin-infra               : ok=202  changed=61   unreachable=0    failed=0
origin-master              : ok=561  changed=224  unreachable=0    failed=0
origin-node                : ok=201  changed=61   unreachable=0    failed=0


INSTALLER STATUS ****************************************************************************************************************************************************
Initialization             : Complete
Health Check               : Complete
etcd Install               : Complete
Master Install             : Complete
Master Additional Install  : Complete
Node Install               : Complete
Hosted Install             : Complete
Service Catalog Install    : Complete

Sunday 21 January 2018  20:55:16 +0100 (0:00:00.011)       0:11:56.549 ********
===============================================================================
etcd : Pull etcd container ---------------------------------------------------------------------------------------------------------------------------------- 79.51s
openshift_hosted : Ensure OpenShift pod correctly rolls out (best-effort today) ----------------------------------------------------------------------------- 31.54s
openshift_node : Pre-pull node image when containerized ----------------------------------------------------------------------------------------------------- 31.28s
template_service_broker : Verify that TSB is running -------------------------------------------------------------------------------------------------------- 30.87s
docker : Install Docker ------------------------------------------------------------------------------------------------------------------------------------- 30.41s
docker : Install Docker ------------------------------------------------------------------------------------------------------------------------------------- 26.32s
openshift_cli : Pull CLI Image ------------------------------------------------------------------------------------------------------------------------------ 23.03s
openshift_service_catalog : wait for api server to be ready ------------------------------------------------------------------------------------------------- 21.32s
openshift_hosted : Ensure OpenShift pod correctly rolls out (best-effort today) ----------------------------------------------------------------------------- 16.27s
restart master api ------------------------------------------------------------------------------------------------------------------------------------------ 10.69s
restart master controllers ---------------------------------------------------------------------------------------------------------------------------------- 10.62s
openshift_node : Start and enable node ---------------------------------------------------------------------------------------------------------------------- 10.42s
openshift_node : Start and enable node ---------------------------------------------------------------------------------------------------------------------- 10.30s
openshift_master : Start and enable master api on first master ---------------------------------------------------------------------------------------------- 10.21s
openshift_master : Start and enable master controller service ----------------------------------------------------------------------------------------------- 10.19s
os_firewall : Install iptables packages --------------------------------------------------------------------------------------------------------------------- 10.15s
os_firewall : Wait 10 seconds after disabling firewalld ----------------------------------------------------------------------------------------------------- 10.07s
os_firewall : need to pause here, otherwise the iptables service starting can sometimes cause ssh to fail --------------------------------------------------- 10.05s
openshift_node : Pre-pull node image when containerized ------------------------------------------------------------------------------------------------------ 7.85s
openshift_service_catalog : oc_process ----------------------------------------------------------------------------------------------------------------------- 7.44s

To publish both the openshift_public_hostname and the openshift_master_default_subdomain, I have an Nginx reverse proxy running that publishes port 8443 from the origin-master and ports 80 and 443 from the origin-infra node.

Here is an Nginx example:

server {
  listen 8443 ssl;
  listen [::]:8443 ssl;
  server_name console.paas.domain.com;

  ssl on;
  ssl_certificate /etc/nginx/ssl/paas.domain.com-cert.pem;
  ssl_certificate_key /etc/nginx/ssl/paas.domain.com-key.pem;

  access_log  /var/log/nginx/openshift-console_access.log;
  error_log   /var/log/nginx/openshift-console_error.log;

  location / {
    proxy_pass https://10.255.1.81:8443;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }
}

I will try to write more about OpenShift and Platform as a Service and how to deploy small applications like WordPress.

Have fun testing OpenShift and please share your feedback.

Ansible Playbook for VyOS and BGP Routing

I am currently looking into different possibilities for Open Source alternatives to commercial routers from Cisco or Juniper to use in Amazon AWS Transit VPCs. One option is to build the software router completely myself with Debian Linux, FRR (Free Range Routing) and StrongSwan; read my post about the self-built software router: Open Source Routing GRE over IPSec with StrongSwan and Cisco IOS-XE

A few years back I was working with Juniper JunOS routers, so I thought I’d give VyOS a try because its command line is very similar.

I replicated the same Vagrant topology from my Ansible Playbook for Cisco BGP Routing Topology but used VyOS instead of Cisco.

Network overview:

Here are the repositories for the Vagrant topology https://github.com/berndonline/vyos-lab-vagrant and the Ansible Playbook https://github.com/berndonline/vyos-lab-provision.

The Ansible Playbook site.yml is very simple, using the Ansible module vyos_system to change the hostname and the module vyos_config for the interface and routing configuration:

---

- hosts: all

  connection: local
  user: '{{ ansible_ssh_user }}'
  gather_facts: 'no'

  roles:
    - hostname
    - interfaces
    - routing
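
A sketch of what the role tasks might look like (illustrative only, not copied from the repository; depending on your Ansible version you may also need a provider dictionary or the network_cli connection):

# roles/hostname/tasks/main.yml (illustrative)
- name: write hostname and domain-name
  vyos_system:
    host_name: '{{ hostname }}'
    domain_name: '{{ domain_name }}'

# roles/interfaces/tasks/main.yml (illustrative)
- name: write interfaces config
  vyos_config:
    src: interfaces.j2
    save: true

# roles/routing/tasks/main.yml (illustrative)
- name: write routing config
  vyos_config:
    src: routing.j2
    save: true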

Here is an example from host_vars rtr-1.yml:

---

hostname: rtr-1
domain_name: lab.local

loopback:
  dum0:
    alias: dummy loopback0
    address: 10.255.0.1
    mask: /32

interfaces:
  eth1:
    alias: connection rtr-2
    address: 10.0.255.1
    mask: /30

  eth2:
    alias: connection rtr-3
    address: 10.0.255.5
    mask: /30

bgp:
  asn: 65001
  neighbor:
    - {address: 10.0.255.2, remote_as: 65000}
    - {address: 10.0.255.6, remote_as: 65000}
  networks:
    - {network: 10.0.255.0, mask: /30}
    - {network: 10.0.255.4, mask: /30}
    - {network: 10.255.0.1, mask: /32}
  maxpath: 2

The template interfaces.j2 for the interface configuration:

{% if loopback is defined %}
{% for port, value in loopback.items() %}
set interfaces dummy {{ port }} address '{{ value.address }}{{ value.mask }}'
set interfaces dummy {{ port }} description '{{ value.alias }}'
{% endfor %}
{% endif %}

{% if interfaces is defined %}
{% for port, value in interfaces.items() %}
set interfaces ethernet {{ port }} address '{{ value.address }}{{ value.mask }}'
set interfaces ethernet {{ port }} description '{{ value.alias }}'
{% endfor %}
{% endif %}

This is the template routing.j2 for the routing configuration:

{% if bgp is defined %}
{% if bgp.maxpath is defined %}
set protocols bgp {{ bgp.asn }} maximum-paths ebgp '{{ bgp.maxpath }}'
{% endif %}
{% for item in bgp.neighbor %}
set protocols bgp {{ bgp.asn }} neighbor {{ item.address }} ebgp-multihop '2'
set protocols bgp {{ bgp.asn }} neighbor {{ item.address }} remote-as '{{ item.remote_as }}'
{% endfor %}
{% for item in bgp.networks %}
set protocols bgp {{ bgp.asn }} network '{{ item.network }}{{ item.mask }}'
{% endfor %}
set protocols bgp {{ bgp.asn }} parameters router-id '{{ loopback.dum0.address }}'
{% endif %}

The output of the running Ansible Playbook:

PLAY [all] *********************************************************************

TASK [hostname : write hostname and domain-name] *******************************
changed: [rtr-3]
changed: [rtr-2]
changed: [rtr-4]
changed: [rtr-1]

TASK [interfaces : write interfaces config] ************************************
changed: [rtr-4]
changed: [rtr-1]
changed: [rtr-3]
changed: [rtr-2]

TASK [routing : write routing config] ******************************************
changed: [rtr-2]
changed: [rtr-4]
changed: [rtr-3]
changed: [rtr-1]

PLAY RECAP *********************************************************************
rtr-1                      : ok=3    changed=3    unreachable=0    failed=0   
rtr-2                      : ok=3    changed=3    unreachable=0    failed=0   
rtr-3                      : ok=3    changed=3    unreachable=0    failed=0   
rtr-4                      : ok=3    changed=3    unreachable=0    failed=0   

As in all my other Ansible Playbooks, I use some kind of validation: a simple ping check, vyos_check_icmp.yml, to see if the configuration is correctly deployed:

---

- hosts: all

  connection: local
  user: '{{ ansible_ssh_user }}'
  gather_facts: 'no'

  tasks:
    - name: validate connection from rtr-1
      vyos_command:
        commands: 'ping {{ item }} count 4'
      when: "'rtr-1' in inventory_hostname"
      with_items:
        - '10.0.255.2'
        - '10.0.255.6'

    - name: validate connection from rtr-2
      vyos_command:
        commands: 'ping {{ item }} count 4'
      when: "'rtr-2' in inventory_hostname"
      with_items:
        - '10.0.255.1'
        - '10.0.254.1'
        - '10.0.253.2'
...

The output of the icmp validation Playbook:

PLAY [all] *********************************************************************

TASK [validate connection from rtr-1] ******************************************
skipping: [rtr-3] => (item=10.0.255.2) 
skipping: [rtr-3] => (item=10.0.255.6) 
skipping: [rtr-2] => (item=10.0.255.2) 
skipping: [rtr-2] => (item=10.0.255.6) 
skipping: [rtr-4] => (item=10.0.255.2) 
skipping: [rtr-4] => (item=10.0.255.6) 
ok: [rtr-1] => (item=10.0.255.2)
ok: [rtr-1] => (item=10.0.255.6)

TASK [validate connection from rtr-2] ******************************************
skipping: [rtr-3] => (item=10.0.255.1) 
skipping: [rtr-3] => (item=10.0.254.1) 
skipping: [rtr-1] => (item=10.0.255.1) 
skipping: [rtr-3] => (item=10.0.253.2) 
skipping: [rtr-1] => (item=10.0.254.1) 
skipping: [rtr-1] => (item=10.0.253.2) 
skipping: [rtr-4] => (item=10.0.255.1) 
skipping: [rtr-4] => (item=10.0.254.1) 
skipping: [rtr-4] => (item=10.0.253.2) 
ok: [rtr-2] => (item=10.0.255.1)
ok: [rtr-2] => (item=10.0.254.1)
ok: [rtr-2] => (item=10.0.253.2)

TASK [validate connection from rtr-3] ******************************************
skipping: [rtr-1] => (item=10.0.255.5) 
skipping: [rtr-1] => (item=10.0.254.5) 
skipping: [rtr-2] => (item=10.0.255.5) 
skipping: [rtr-1] => (item=10.0.253.1) 
skipping: [rtr-2] => (item=10.0.254.5) 
skipping: [rtr-2] => (item=10.0.253.1) 
skipping: [rtr-4] => (item=10.0.255.5) 
skipping: [rtr-4] => (item=10.0.254.5) 
skipping: [rtr-4] => (item=10.0.253.1) 
ok: [rtr-3] => (item=10.0.255.5)
ok: [rtr-3] => (item=10.0.254.5)
ok: [rtr-3] => (item=10.0.253.1)

TASK [validate connection from rtr-4] ******************************************
skipping: [rtr-3] => (item=10.0.254.2) 
skipping: [rtr-3] => (item=10.0.254.6) 
skipping: [rtr-1] => (item=10.0.254.2) 
skipping: [rtr-1] => (item=10.0.254.6) 
skipping: [rtr-2] => (item=10.0.254.2) 
skipping: [rtr-2] => (item=10.0.254.6) 
ok: [rtr-4] => (item=10.0.254.2)
ok: [rtr-4] => (item=10.0.254.6)

TASK [validate bgp connection from rtr-1] **************************************
skipping: [rtr-3] => (item=10.255.0.4) 
skipping: [rtr-2] => (item=10.255.0.4) 
skipping: [rtr-4] => (item=10.255.0.4) 
ok: [rtr-1] => (item=10.255.0.4)

TASK [validate bgp connection from rtr-4] **************************************
skipping: [rtr-3] => (item=10.255.0.1) 
skipping: [rtr-1] => (item=10.255.0.1) 
skipping: [rtr-2] => (item=10.255.0.1) 
ok: [rtr-4] => (item=10.255.0.1)

PLAY RECAP *********************************************************************
rtr-1                      : ok=2    changed=0    unreachable=0    failed=0   
rtr-2                      : ok=1    changed=0    unreachable=0    failed=0   
rtr-3                      : ok=1    changed=0    unreachable=0    failed=0   
rtr-4                      : ok=2    changed=0    unreachable=0    failed=0   

As you can see, the configuration is successfully deployed and there is BGP connectivity between the nodes.

BGP EVPN and VXLAN with Cumulus Linux

I did some updates to my Cumulus Linux Vagrant topology and added new functions to the Ansible Playbook from my post about the Cumulus Linux BGP IP-Fabric.

To the Vagrant topology I added six servers; per clag pair, one server is connected to a VLAN and the second server is connected to a VXLAN.

Here are the links to the repositories where you can find the Ansible Playbook (https://github.com/berndonline/cumulus-lab-provision) and the Vagrantfile (https://github.com/berndonline/cumulus-lab-vagrant).

In the Ansible Playbook, I added BGP EVPN and one VXLAN which spans all leaf and edge switches. VXLAN routing happens on the edge switches into the rest of the virtual data centre network.

Here is an example of the additional variables I added to edge-1 for BGP EVPN and VXLAN:

group_vars/edge.yml:

clagd_vxlan_anycast_ip: 10.255.100.1

The VXLAN anycast IP is needed in BGP for EVPN, and the same IP is shared between edge-1 and edge-2. The same applies to the leaf switches: each clag pair shares one anycast IP address.
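
For illustration, a hypothetical group_vars file for a leaf pair would carry its own shared anycast address in the same way (the file name and IP address are assumptions, not values from the repository):

# group_vars/leaf01.yml (illustrative)
clagd_vxlan_anycast_ip: 10.255.100.2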

host_vars/edge-1.yml:

---

loopback: 10.255.0.3/32

bgp_fabric:
  asn: 65001
  router_id: 10.255.0.3
  neighbor:
    - swp51
    - swp52
  networks:
    - 10.0.4.0/24
    - 10.255.0.3/32
    - 10.255.100.1/32
    - 10.0.255.0/28
  evpn: true
  advertise_vni: true

peerlink:
  bond_slaves: swp53 swp54
  mtu: 9216
  vlan: 4094
  address: 169.254.1.1/30
  clagd_peer_ip: 169.254.1.2
  clagd_backup_ip: 192.168.100.4
  clagd_sys_mac: 44:38:39:FF:40:94
  clagd_priority: 4096

bridge:
  ports: peerlink vxlan10201
  vids: 901 201

vlans:
  901:
    alias: edge-transit-901
    vipv4: 10.0.255.14/28
    vmac: 00:00:5e:00:09:01
    pipv4: 10.0.255.12/28
  201:
    alias: prod-server-10201
    vipv4: 10.0.4.254/24
    vmac: 00:00:00:00:02:01
    pipv4: 10.0.4.252/24
    vlan_id: 201
    vlan_raw_device: bridge

vxlans:
  10201:
    alias: prod-server-10201
    vxlan_local_tunnelip: 10.255.0.3
    bridge_access: 201
    bridge_learning: 'off'
    bridge_arp_nd_suppress: 'on'

On the edge switches, because of VXLAN routing, you find a mapping from VXLAN 10201 to VLAN 201, which has VRR running.

I had to make some modifications to the interfaces template interfaces_config.j2:

{% if loopback is defined %}
auto lo
iface lo inet loopback
    address {{ loopback }}
{% if clagd_vxlan_anycast_ip is defined %}
    clagd-vxlan-anycast-ip {{ clagd_vxlan_anycast_ip }}
{% endif %}
{% endif %}
...
{% if bridge is defined %}
{% for vxlan_id, value in vxlans.items() %}
auto vxlan{{ vxlan_id }}
iface vxlan{{ vxlan_id }}
    alias {{ value.alias }}
    vxlan-id {{ vxlan_id }}
    vxlan-local-tunnelip {{ value.vxlan_local_tunnelip }}
    bridge-access {{ value.bridge_access }}
    bridge-learning {{ value.bridge_learning }}
    bridge-arp-nd-suppress {{ value.bridge_arp_nd_suppress }}
    mstpctl-bpduguard yes
    mstpctl-portbpdufilter yes

{% endfor %}
{% endif %}

There were also some modifications needed to the FRR template frr.j2 to add EVPN to the BGP configuration:

...
{% if bgp_fabric.evpn is defined %}
 address-family ipv6 unicast
  neighbor fabric activate
 exit-address-family
 !
 address-family l2vpn evpn
  neighbor fabric activate
{% if bgp_fabric.advertise_vni is defined %}
  advertise-all-vni
{% endif %}
 exit-address-family
{% endif %}
{% endif %}
...

For more detailed information about EVPN and VXLAN routing on Cumulus Linux, I recommend reading the documentation Ethernet Virtual Private Network – EVPN and VXLAN Routing.

Have fun testing the new features in my Ansible Playbook and please share your feedback.

Ansible Playbook for Arista vEOS BGP IP-Fabric

Over the Christmas holidays, I was working just for fun on an Arista vEOS Vagrant topology and Ansible Playbook. I reused the Ansible Playbook from my previous post about the Cumulus Linux BGP IP-Fabric and Cumulus NetQ validation.

Arista only provides a Virtualbox vEOS image, plus an ISO image to boot the virtual appliance. I don’t understand why they have done it this way; I prefer the way Cumulus provides its VX images for testing, which can be used with Virtualbox or KVM.

I found an interesting blog post on how to run vEOS images with KVM (libvirt). I tried it and could run vEOS in KVM, but unfortunately it wasn’t stable enough to run more complex virtual network topologies, so I had to switch back to Virtualbox. I will give it another try in a few months because I prefer KVM over Virtualbox.

Anyway, you’ll find more information about how to use vEOS with Virtualbox and Vagrant.

My Virtualbox Vagrantfile can be found in my Github repository: https://github.com/berndonline/arista-lab-vagrant

Network overview:

Ansible Playbook:

As I mentioned before, I tried to stay as close as possible to my Cumulus Linux Ansible Playbook and kept the variables and roles the same. There are differences, of course, in the Jinja2 templates and tasks, but the overall structure is similar.

Here you’ll find the repository with the Ansible Playbook: https://github.com/berndonline/arista-lab-provision

Because Arista didn’t prepare the images very well and only created a vagrant user without adding the SSH key for authentication, I needed to use a CLI provider with a username and password. This is only a minor issue; otherwise it works the same. See the site.yml below:

---

- hosts: network

  connection: local
  gather_facts: 'False'

  vars:
    cli:
      username: vagrant
      password: vagrant

  roles:
    - leafgroups
    - hostname
    - interfaces
    - routing
    - ntp

In the roles, I have used the Arista EOS Ansible modules eos_config and eos_system.
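
As a rough sketch (not the exact tasks from the repository, and assuming host_vars similar to the Cumulus Playbook), the hostname and interfaces roles could use these modules together with the cli provider variable from site.yml like this:

# roles/hostname/tasks/main.yml (illustrative)
- name: write hostname and domain name
  eos_system:
    hostname: '{{ hostname }}'
    domain_name: '{{ domain_name }}'
    provider: '{{ cli }}'

# roles/interfaces/tasks/main.yml (illustrative)
- name: write interface configuration
  eos_config:
    src: interfaces.j2
    provider: '{{ cli }}'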

Boot up the Vagrant environment and then run the Playbook:

PLAY [network] *****************************************************************

TASK [leafgroups : create leaf groups based on clag_pairs] *********************
ok: [leaf-1] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2']))
skipping: [leaf-1] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 
skipping: [leaf-3] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
ok: [leaf-3] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4']))
skipping: [leaf-4] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
ok: [leaf-2] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2']))
skipping: [leaf-2] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 
ok: [leaf-4] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4']))
skipping: [spine-1] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
skipping: [spine-1] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 
skipping: [spine-2] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
skipping: [spine-2] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 

TASK [leafgroups : include leaf group variables] *******************************
ok: [leaf-1] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2']))
skipping: [leaf-3] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
skipping: [leaf-1] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 
skipping: [leaf-4] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
skipping: [spine-1] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
skipping: [spine-1] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 
ok: [leaf-3] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4']))
ok: [leaf-2] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2']))
skipping: [leaf-2] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 
ok: [leaf-4] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4']))
skipping: [spine-2] => (item=(u'leafgroup1', [u'leaf-1', u'leaf-2'])) 
skipping: [spine-2] => (item=(u'leafgroup2', [u'leaf-3', u'leaf-4'])) 

TASK [hostname : write hostname and domain name] *******************************
changed: [leaf-4]
changed: [spine-1]
changed: [leaf-1]
changed: [leaf-3]
changed: [leaf-2]
changed: [spine-2]

TASK [interfaces : write interface configuration] ******************************
changed: [spine-1]
changed: [leaf-2]
changed: [leaf-4]
changed: [leaf-3]
changed: [leaf-1]
changed: [spine-2]

TASK [routing : write routing configuration] ***********************************
changed: [leaf-1]
changed: [leaf-4]
changed: [spine-1]
changed: [leaf-2]
changed: [leaf-3]
changed: [spine-2]

TASK [ntp : write ntp configuration] *******************************************
changed: [leaf-2] => (item=216.239.35.8)
changed: [leaf-1] => (item=216.239.35.8)
changed: [leaf-3] => (item=216.239.35.8)
changed: [spine-1] => (item=216.239.35.8)
changed: [leaf-4] => (item=216.239.35.8)
changed: [spine-2] => (item=216.239.35.8)

PLAY RECAP *********************************************************************
leaf-1                     : ok=6    changed=4    unreachable=0    failed=0   
leaf-2                     : ok=6    changed=4    unreachable=0    failed=0   
leaf-3                     : ok=6    changed=4    unreachable=0    failed=0   
leaf-4                     : ok=6    changed=4    unreachable=0    failed=0   
spine-1                    : ok=4    changed=4    unreachable=0    failed=0   
spine-2                    : ok=4    changed=4    unreachable=0    failed=0   

I didn’t use the leafgroups role for variables in my Playbook but I left it just in case.

Because Arista has nothing similar to Cumulus NetQ to validate the configuration, I created a simple arista_check_icmp.yml playbook that uses ping from the leaf switches to test whether the configuration is successfully deployed.
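
A sketch of what arista_check_icmp.yml might look like, following the same pattern as the VyOS validation playbook above (only the leaf-1 task and a subset of the targets are shown; the full playbook is in the repository):

---

- hosts: leaf

  connection: local
  gather_facts: 'False'

  vars:
    cli:
      username: vagrant
      password: vagrant

  tasks:
    - name: validate connection from leaf-1
      eos_command:
        commands: 'ping {{ item }}'
        provider: '{{ cli }}'
      when: "'leaf-1' in inventory_hostname"
      with_items:
        - '10.255.0.4'
        - '10.255.0.5'
        - '10.255.0.6'

The output of the validation playbook: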

PLAY [leaf] ********************************************************************

TASK [validate connection from leaf-1] *****************************************
skipping: [leaf-3] => (item=10.255.0.4) 
skipping: [leaf-3] => (item=10.255.0.5) 
skipping: [leaf-3] => (item=10.255.0.6) 
skipping: [leaf-2] => (item=10.255.0.4) 
skipping: [leaf-2] => (item=10.255.0.5) 
skipping: [leaf-2] => (item=10.255.0.6) 
skipping: [leaf-3] => (item=10.0.102.252) 
skipping: [leaf-4] => (item=10.255.0.4) 
skipping: [leaf-3] => (item=10.0.102.253) 
skipping: [leaf-3] => (item=10.0.102.254) 
skipping: [leaf-4] => (item=10.255.0.5) 
skipping: [leaf-2] => (item=10.0.102.252) 
skipping: [leaf-4] => (item=10.255.0.6) 
skipping: [leaf-2] => (item=10.0.102.253) 
skipping: [leaf-2] => (item=10.0.102.254) 
skipping: [leaf-4] => (item=10.0.102.252) 
skipping: [leaf-4] => (item=10.0.102.253) 
skipping: [leaf-4] => (item=10.0.102.254) 
ok: [leaf-1] => (item=10.255.0.4)
ok: [leaf-1] => (item=10.255.0.5)
ok: [leaf-1] => (item=10.255.0.6)
ok: [leaf-1] => (item=10.0.102.252)
ok: [leaf-1] => (item=10.0.102.253)
ok: [leaf-1] => (item=10.0.102.254)

TASK [validate connection from leaf-2] *****************************************
skipping: [leaf-1] => (item=10.255.0.3) 
skipping: [leaf-3] => (item=10.255.0.3) 
skipping: [leaf-1] => (item=10.255.0.5) 
skipping: [leaf-3] => (item=10.255.0.5) 
skipping: [leaf-1] => (item=10.255.0.6) 
skipping: [leaf-3] => (item=10.255.0.6) 
skipping: [leaf-1] => (item=10.0.102.252) 
skipping: [leaf-1] => (item=10.0.102.253) 
skipping: [leaf-4] => (item=10.255.0.3) 
skipping: [leaf-3] => (item=10.0.102.252) 
skipping: [leaf-1] => (item=10.0.102.254) 
skipping: [leaf-3] => (item=10.0.102.253) 
skipping: [leaf-3] => (item=10.0.102.254) 
skipping: [leaf-4] => (item=10.255.0.5) 
skipping: [leaf-4] => (item=10.255.0.6) 
skipping: [leaf-4] => (item=10.0.102.252) 
skipping: [leaf-4] => (item=10.0.102.253) 
skipping: [leaf-4] => (item=10.0.102.254) 
ok: [leaf-2] => (item=10.255.0.3)
ok: [leaf-2] => (item=10.255.0.5)
ok: [leaf-2] => (item=10.255.0.6)
ok: [leaf-2] => (item=10.0.102.252)
ok: [leaf-2] => (item=10.0.102.253)
ok: [leaf-2] => (item=10.0.102.254)

TASK [validate connection from leaf-3] *****************************************
skipping: [leaf-1] => (item=10.255.0.3) 
skipping: [leaf-1] => (item=10.255.0.4) 
skipping: [leaf-2] => (item=10.255.0.3) 
skipping: [leaf-1] => (item=10.255.0.6) 
skipping: [leaf-1] => (item=10.0.101.252) 
skipping: [leaf-2] => (item=10.255.0.4) 
skipping: [leaf-2] => (item=10.255.0.6) 
skipping: [leaf-1] => (item=10.0.101.253) 
skipping: [leaf-4] => (item=10.255.0.3) 
skipping: [leaf-2] => (item=10.0.101.252) 
skipping: [leaf-4] => (item=10.255.0.4) 
skipping: [leaf-1] => (item=10.0.101.254) 
skipping: [leaf-4] => (item=10.255.0.6) 
skipping: [leaf-2] => (item=10.0.101.253) 
skipping: [leaf-4] => (item=10.0.101.252) 
skipping: [leaf-2] => (item=10.0.101.254) 
skipping: [leaf-4] => (item=10.0.101.253) 
skipping: [leaf-4] => (item=10.0.101.254) 
ok: [leaf-3] => (item=10.255.0.3)
ok: [leaf-3] => (item=10.255.0.4)
ok: [leaf-3] => (item=10.255.0.6)
ok: [leaf-3] => (item=10.0.101.252)
ok: [leaf-3] => (item=10.0.101.253)
ok: [leaf-3] => (item=10.0.101.254)

TASK [validate connection from leaf-4] *****************************************
skipping: [leaf-1] => (item=10.255.0.3) 
skipping: [leaf-3] => (item=10.255.0.3) 
skipping: [leaf-1] => (item=10.255.0.4) 
skipping: [leaf-3] => (item=10.255.0.4) 
skipping: [leaf-1] => (item=10.255.0.5) 
skipping: [leaf-2] => (item=10.255.0.3) 
skipping: [leaf-3] => (item=10.255.0.5) 
skipping: [leaf-3] => (item=10.0.101.252) 
skipping: [leaf-2] => (item=10.255.0.4) 
skipping: [leaf-1] => (item=10.0.101.252) 
skipping: [leaf-2] => (item=10.255.0.5) 
skipping: [leaf-2] => (item=10.0.101.252) 
skipping: [leaf-3] => (item=10.0.101.253) 
skipping: [leaf-1] => (item=10.0.101.253) 
skipping: [leaf-1] => (item=10.0.101.254) 
skipping: [leaf-3] => (item=10.0.101.254) 
skipping: [leaf-2] => (item=10.0.101.253) 
skipping: [leaf-2] => (item=10.0.101.254) 
ok: [leaf-4] => (item=10.255.0.3)
ok: [leaf-4] => (item=10.255.0.4)
ok: [leaf-4] => (item=10.255.0.5)
ok: [leaf-4] => (item=10.0.101.252)
ok: [leaf-4] => (item=10.0.101.253)
ok: [leaf-4] => (item=10.0.101.254)

PLAY RECAP *********************************************************************
leaf-1                     : ok=1    changed=0    unreachable=0    failed=0   
leaf-2                     : ok=1    changed=0    unreachable=0    failed=0   
leaf-3                     : ok=1    changed=0    unreachable=0    failed=0   
leaf-4                     : ok=1    changed=0    unreachable=0    failed=0   

I don’t usually work with Arista devices; this was an attempt to use a different switch vendor while keeping the same type of Ansible Playbook.

Please tell me if you like it and share your feedback.

Ansible Playbook for Cisco BGP Routing Topology

This is my Ansible Playbook for a simple Cisco BGP routing topology, using a CI/CD pipeline for integration testing. The virtual network environment is created on demand with Vagrant; see my post about Cisco IOSv and XE network simulation using Vagrant.

Network overview:

Here’s my Github repository where you can find the complete Ansible Playbook: https://github.com/berndonline/cisco-lab-provision

You can find all the variables for the interface and routing configuration under host_vars. Below is an example for router rtr-1:

---

hostname: rtr-1
domain_name: lab.local

loopback:
  address: 10.255.0.1
  mask: 255.255.255.255

interfaces:
  0/1:
    alias: connection rtr-2
    address: 10.0.255.1
    mask: 255.255.255.252

  0/2:
    alias: connection rtr-3
    address: 10.0.255.5
    mask: 255.255.255.252

bgp:
  asn: 65001
  neighbor:
    - {address: 10.0.255.2, remote_as: 65000}
    - {address: 10.0.255.6, remote_as: 65000}
  networks:
    - {network: 10.0.255.0, mask: 255.255.255.252}
    - {network: 10.0.255.4, mask: 255.255.255.252}
    - {network: 10.255.0.1, mask: 255.255.255.255}
  maxpath: 2

Roles:

  • Hostname: The task in main.yml uses the Ansible module ios_system to configure the hostname and domain name and to disable DNS lookups.
  • Interfaces: This role uses the Ansible module ios_config to deploy the template interfaces.j2 and configure the interfaces. The main.yml contains a second task that enables the interfaces once the template has applied the configuration (see the sketch after this list).
  • Routing: Very similar to the interfaces role; it also uses the ios_config module to deploy the template routing.j2 for the BGP routing configuration.
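
A rough sketch of what the interfaces role tasks could look like, assuming GigabitEthernet interface naming on IOSv (the interface keys such as 0/1 come from the host_vars above; the real tasks are in the repository):

# roles/interfaces/tasks/main.yml (illustrative)
- name: write interfaces config
  ios_config:
    src: interfaces.j2

- name: enable interfaces
  ios_config:
    lines:
      - no shutdown
    parents: 'interface GigabitEthernet{{ item.key }}'
  with_dict: '{{ interfaces }}'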

Main Ansible Playbook site.yml:

---

- hosts: all

  connection: local
  user: vagrant
  gather_facts: 'no'

  roles:
    - hostname
    - interfaces
    - routing

When a change triggers the GitLab CI pipeline, it spins up the Vagrant instances and executes the main Ansible Playbook:
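
As an illustration only, a minimal .gitlab-ci.yml for this kind of flow could look roughly like the following (the stage names, script lines and inventory path are placeholders, not taken from the repository):

stages:
  - deploy
  - validate

deploy_configuration:
  stage: deploy
  script:
    - vagrant up                            # boot the virtual routers
    - ansible-playbook -i hosts site.yml    # apply the configuration

validate_connectivity:
  stage: validate
  script:
    - ansible-playbook -i hosts cisco_check_icmp.yml    # basic ping validation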

After the main site.yml has run, a second Playbook, cisco_check_icmp.yml, is executed for basic connectivity testing. It uses the Ansible module ios_ping and is useful in my case to validate whether the configuration was correctly applied:
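
A sketch of what cisco_check_icmp.yml might look like, mirroring the VyOS validation playbook earlier in this post (the target addresses are the rtr-1 BGP neighbours from the host_vars above; the full playbook is in the repository):

---

- hosts: all

  connection: local
  user: vagrant
  gather_facts: 'no'

  tasks:
    - name: validate connection from rtr-1
      ios_ping:
        dest: '{{ item }}'
      when: "'rtr-1' in inventory_hostname"
      with_items:
        - '10.0.255.2'
        - '10.0.255.6'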

If everything goes well, like in this example, the job is successful:

I will continue to improve the Playbook and the CI/CD pipeline, so come back later to check it out.
