Network Monitoring with Prometheus and Cumulus Linux

As promised in my previous article Install Prometheus and Grafana, this post is about how to monitor Cumulus Linux switches with Prometheus.

Let’s start directly by installing the Prometheus Node Exporter on the switches:

sudo useradd --no-create-home --shell /bin/false node_exporter

cd ~
curl -LO https://github.com/prometheus/node_exporter/releases/download/v0.15.1/node_exporter-0.15.1.linux-amd64.tar.gz
tar xvf node_exporter-0.15.1.linux-amd64.tar.gz
sudo cp node_exporter-0.15.1.linux-amd64/node_exporter /usr/local/bin
sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter

sudo bash -c 'cat << EOF > /etc/systemd/system/node_exporter.service
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
EOF'

sudo systemctl daemon-reload
sudo systemctl start node_exporter
sudo systemctl status node_exporter
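
To make sure the Node Exporter also comes back after a reboot, enable the service as well:

sudo systemctl enable node_exporter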

Check that the Node Exporter service is running correctly and listening on TCP port 9100, so that the Prometheus server can collect the metrics from the switches:

cumulus@spine-2:~$ sudo systemctl status node_exporter
● node_exporter.service - Node Exporter
   Loaded: loaded (/etc/systemd/system/node_exporter.service; disabled)
   Active: active (running) since Thu 2018-03-22 13:41:26 UTC; 958ms ago
 Main PID: 5620 (node_exporter)
   CGroup: /system.slice/node_exporter.service
           └─5620 /usr/local/bin/node_exporter

Mar 22 13:41:26 spine-2 node_exporter[5620]: time="2018-03-22T13:41:26Z" level=info msg=" - sockstat" source="node_exporter.go:52"
Mar 22 13:41:26 spine-2 node_exporter[5620]: time="2018-03-22T13:41:26Z" level=info msg=" - bcache" source="node_exporter.go:52"
Mar 22 13:41:26 spine-2 node_exporter[5620]: time="2018-03-22T13:41:26Z" level=info msg=" - hwmon" source="node_exporter.go:52"
Mar 22 13:41:26 spine-2 node_exporter[5620]: time="2018-03-22T13:41:26Z" level=info msg=" - cpu" source="node_exporter.go:52"
Mar 22 13:41:26 spine-2 node_exporter[5620]: time="2018-03-22T13:41:26Z" level=info msg=" - stat" source="node_exporter.go:52"
Mar 22 13:41:26 spine-2 node_exporter[5620]: time="2018-03-22T13:41:26Z" level=info msg=" - timex" source="node_exporter.go:52"
Mar 22 13:41:26 spine-2 node_exporter[5620]: time="2018-03-22T13:41:26Z" level=info msg=" - textfile" source="node_exporter.go:52"
Mar 22 13:41:26 spine-2 node_exporter[5620]: time="2018-03-22T13:41:26Z" level=info msg=" - conntrack" source="node_exporter.go:52"
Mar 22 13:41:26 spine-2 node_exporter[5620]: time="2018-03-22T13:41:26Z" level=info msg=" - edac" source="node_exporter.go:52"
Mar 22 13:41:26 spine-2 node_exporter[5620]: time="2018-03-22T13:41:26Z" level=info msg="Listening on :9100" source="node_exporter.go:76"
cumulus@spine-2:~$
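
You can also quickly verify that the exporter serves metrics over HTTP:

curl -s http://localhost:9100/metrics | head

On the Prometheus server the switches then need to be added as scrape targets under scrape_configs in /etc/prometheus/prometheus.yml (see the previous article for the base configuration). A minimal sketch with hypothetical switch hostnames, adjust these to your own fabric:

  - job_name: 'cumulus-switches'
    scrape_interval: 15s
    static_configs:
      - targets: ['spine-1:9100', 'spine-2:9100', 'leaf-01-c:9100', 'leaf-02-c:9100']

After restarting Prometheus (sudo systemctl restart prometheus) the switches show up under “Status -> Targets”.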

I created a simple Grafana dashboard for the switches running Cumulus Linux, where you can find important metrics like network interface throughput, CPU load, and memory and disk related information:

In the top right corner you can select the switch you want to see metrics from:

You can also have a central monitoring dashboard where all performance metrics are shown:

Here are detailed views with information about all interfaces from the different switch groups:
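
For reference, the interface throughput panels in these dashboards are based on simple rate() queries over the node_exporter network counters, for example (metric names are from node_exporter 0.15.x; newer versions add a _total suffix):

# received bytes per second on all switch ports
rate(node_network_receive_bytes{device=~"swp.*"}[5m])
# transmitted bytes per second on all switch ports
rate(node_network_transmit_bytes{device=~"swp.*"}[5m])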

This is a very simple solution to monitor your Cumulus Linux switches and, in combination with Cumulus NetQ, it is enough to monitor your switch fabric.

FYI, I have used the virtual topology from my article BGP EVPN and VXLAN with Cumulus Linux.

Please share your feedback and leave a comment.

Install Prometheus and Grafana

Moving away from Cisco and using Open Networking whitebox switches with Cumulus Linux made me think about performance monitoring. In the past I was a fan of SolarWinds NPM, but traditional SNMP-based monitoring is pretty outdated and no longer the standard when using Linux-based operating systems. I explored various other options and came across Prometheus and Grafana.

This post is about how to install Prometheus and Grafana on a central monitoring server; the next post will be about how to integrate Cumulus Linux switches, report their metrics to Prometheus and then visualise them with Grafana.

Let’s start installing Prometheus base packages:

sudo useradd --no-create-home --shell /bin/false prometheus
sudo mkdir /etc/prometheus
sudo mkdir /var/lib/prometheus
sudo chown prometheus:prometheus /etc/prometheus
sudo chown prometheus:prometheus /var/lib/prometheus

cd ~
curl -LO https://github.com/prometheus/prometheus/releases/download/v2.0.0/prometheus-2.0.0.linux-amd64.tar.gz
tar xvf prometheus-2.0.0.linux-amd64.tar.gz
sudo cp prometheus-2.0.0.linux-amd64/prometheus /usr/local/bin/
sudo cp prometheus-2.0.0.linux-amd64/promtool /usr/local/bin/
sudo chown prometheus:prometheus /usr/local/bin/prometheus
sudo chown prometheus:prometheus /usr/local/bin/promtool
sudo cp -r prometheus-2.0.0.linux-amd64/consoles /etc/prometheus
sudo cp -r prometheus-2.0.0.linux-amd64/console_libraries /etc/prometheus
sudo chown -R prometheus:prometheus /etc/prometheus/consoles
sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries
rm -rf prometheus-2.0.0.linux-amd64.tar.gz prometheus-2.0.0.linux-amd64

sudo touch /etc/prometheus/prometheus.yml 
sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml

sudo bash -c 'cat << EOF > /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target
EOF'

We have now installed the Prometheus base package, but to collect metrics you also need to install the Prometheus Node Exporter:

sudo useradd --no-create-home --shell /bin/false node_exporter

cd ~
curl -LO https://github.com/prometheus/node_exporter/releases/download/v0.15.1/node_exporter-0.15.1.linux-amd64.tar.gz
tar xvf node_exporter-0.15.1.linux-amd64.tar.gz
sudo cp node_exporter-0.15.1.linux-amd64/node_exporter /usr/local/bin
sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter
rm -rf node_exporter-0.15.1.linux-amd64.tar.gz node_exporter-0.15.1.linux-amd64

sudo bash -c 'cat << EOF > /etc/systemd/system/node_exporter.service
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
EOF'

Configure Prometheus and define node_exporter targets:

sudo bash -c "cat << 'EOF' > /etc/prometheus/prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9100']
EOF"
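
You can validate the configuration syntax with promtool before starting the service:

promtool check config /etc/prometheus/prometheus.yml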

Start services and access the web console:

sudo systemctl daemon-reload
sudo systemctl start prometheus
sudo systemctl start node_exporter
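
To have Prometheus and the Node Exporter come back automatically after a reboot, enable both services as well:

sudo systemctl enable prometheus
sudo systemctl enable node_exporter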

Access the Prometheus web console via http://localhost:9090:

Under “Status -> Targets” you can check whether the targets are up:
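
The same check can be done from the command line via the Prometheus HTTP API, which returns the up metric for every configured target:

curl -s 'http://localhost:9090/api/v1/query?query=up'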

Let’s continue by installing Grafana:

curl https://packagecloud.io/gpg.key | sudo apt-key add -
sudo add-apt-repository "deb https://packagecloud.io/grafana/stable/debian/ stretch main"
sudo apt-get update
sudo apt-get install grafana
sudo systemctl start grafana-server
sudo systemctl status grafana-server
sudo systemctl enable grafana-server

Now you can access Grafana via http://localhost:3000/. I would recommend putting an Nginx reverse proxy in front for SSL encryption.
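
A minimal sketch of such an Nginx reverse proxy configuration, assuming a hypothetical hostname and existing certificates (server name and certificate paths are placeholders):

server {
    listen 443 ssl;
    server_name grafana.example.com;                   # placeholder hostname
    ssl_certificate     /etc/ssl/certs/grafana.crt;    # assumed certificate location
    ssl_certificate_key /etc/ssl/private/grafana.key;  # assumed key location

    location / {
        proxy_pass http://localhost:3000;              # Grafana default port
        proxy_set_header Host $host;
    }
}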

In the web console we need to configure the data source and point it to Prometheus. To do that, go to “settings” and select “data source”:
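
If you prefer the command line, the same data source can also be created through the Grafana HTTP API (a sketch assuming the default admin:admin credentials and Prometheus running locally):

curl -X POST http://admin:admin@localhost:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name": "Prometheus", "type": "prometheus", "url": "http://localhost:9090", "access": "proxy"}'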

You should import the following Prometheus dashboard for Grafana, otherwise you would need to configure your dashboards manually:

For the installation of Prometheus and the Node Exporter I will write two Ansible roles, which I will share later. Read my new post about Network Monitoring with Prometheus and Cumulus Linux!

Please share your feedback and leave a comment.

Cumulus Linux Ethernet link-state monitoring using ifplugd

This blog post is about link-state monitoring under Cumulus Linux. Cumulus has no built-in tool of its own for this and recommends using ifplugd. The tool has some similarities to Cisco’s IP SLA, which can track the state of interfaces.

The main reason to use ifplugd is for split-brain scenarios, when you lose the peerlink between a Cumulus Linux CLAG pair. If the peerlink goes down, the CLAG primary switch stays the active member and the secondary automatically disables all CLAG bonds, forcing the connected servers to fail over to the CLAG primary switch and keeping the network operational.

Very important: you need to configure clagd-backup-ip, because this is needed for the two switches to still be able to communicate with each other if they lose the peerlink.
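
As a reference, clagd-backup-ip is set on the peerlink VLAN interface in /etc/network/interfaces. A minimal sketch with placeholder values, where clagd-backup-ip points to the management IP of the CLAG peer and the addresses and system MAC need to be adjusted to your environment:

auto peerlink.4094
iface peerlink.4094
    address 169.254.1.1/30
    clagd-peer-ip 169.254.1.2
    clagd-backup-ip 192.168.100.12
    clagd-sys-mac 44:38:39:ff:40:94
    clagd-priority 4096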

ifplugd is important for all connected servers which are not using CLAG bonds, basically servers using normal active/standby teaming, which doesn’t require a CLAG bond configuration. These ports are configured as normal access ports, so a peerlink failure would normally leave them up if you don’t configure ifplugd.

ifplugd needs to be installed and configured on both switches running CLAG; follow the steps below.

Install ifplugd service:

sudo apt-get update
sudo apt-get install ifplugd

Edit the file /etc/default/ifplugd and add the lines below.

The down delay is set to -d10, a moderate 10 seconds, because of the combination with CLAG; I will watch this and may lower the value over time.

INTERFACES="peerlink"
HOTPLUG_INTERFACES=""
ARGS="-q -f -u0 -d10 -w -I"
SUSPEND_ACTION="stop"

Edit the file /etc/ifplugd/action.d/ifupdown.

The variable $SWITCHPORTS defines which ports ifplugd should shut down if the peerlink goes down. We decided to use a custom variable instead of shutting down all ports, because CLAG already takes care of the configured bonds.

#!/bin/sh

# The peerlink bond interface
PEERLINK=peerlink

# The switchports to bring down on peerlink failure
#
# enclosures 01/02: swp5..swp8
SWITCHPORTS=$(seq -f swp%g 5 8)
# storage system 01/02: swp19..swp22
SWITCHPORTS="$SWITCHPORTS $(seq -f swp%g 19 22)"
# server1/server2: swp27..swp28
SWITCHPORTS="$SWITCHPORTS $(seq -f swp%g 27 28)"
# VMware cluster: swp35..swp38
SWITCHPORTS="$SWITCHPORTS $(seq -f swp%g 35 38)"

case "$1" in

    "$PEERLINK")
        # determine the CLAG role of this switch (primary or secondary)
        clagrole=$(clagctl | grep "Our Priority" | awk '{print $8}')
        case "$2" in
            up | down)
                action=$2
                # only the CLAG secondary brings the ports down or up
                if [ "$clagrole" = "secondary" ]; then
                    for interface in $SWITCHPORTS; do
                        echo "bringing $action : $interface"
                        ip link set "$interface" "$action"
                    done
                fi
                ;;
        esac
        ;;

esac

Start the ifplugd service:

sudo systemctl restart ifplugd.service
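
Check that the daemon is running and watching the peerlink interface:

sudo systemctl status ifplugd.service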

Impact of a simulated peerlink failure from the server perspective:

root@leaf-01-c:/home/cumulus# grep ifplugd /var/log/syslog
[...]
2017-09-19T11:43:15.665057+00:00 leaf-01-c ifplugd(peerlink)[5292]: Link beat lost.
2017-09-19T11:43:25.775585+00:00 leaf-01-c ifplugd(peerlink)[5292]: Executing '/etc/ifplugd/ifplugd.action peerlink down'.
2017-09-19T11:43:25.902637+00:00 leaf-01-c ifplugd(peerlink)[5292]: Program executed successfully.
root@leaf-01-c:/home/cumulus#

root@leaf-02-c:/home/cumulus# grep ifplugd /var/log/syslog
[...]
2017-09-19T11:43:15.780727+00:00 leaf-02-c ifplugd(peerlink)[12600]: Link beat lost.
2017-09-19T11:43:25.891584+00:00 leaf-02-c ifplugd(peerlink)[12600]: Executing '/etc/ifplugd/ifplugd.action peerlink down'.
2017-09-19T11:43:26.107140+00:00 leaf-02-c ifplugd(peerlink)[12600]: client: bringing down : swp5
2017-09-19T11:43:26.146421+00:00 leaf-02-c ifplugd(peerlink)[12600]: client: bringing down : swp6
2017-09-19T11:43:26.171454+00:00 leaf-02-c ifplugd(peerlink)[12600]: client: bringing down : swp7
2017-09-19T11:43:26.193387+00:00 leaf-02-c ifplugd(peerlink)[12600]: client: bringing down : swp8
64 bytes from 8.8.8.8: icmp_seq=1623 ttl=59 time=0.524 ms
64 bytes from 8.8.8.8: icmp_seq=1624 ttl=59 time=0.782 ms
64 bytes from 8.8.8.8: icmp_seq=1625 ttl=59 time=0.847 ms
Request timeout for icmp_seq 1626
Request timeout for icmp_seq 1627
Request timeout for icmp_seq 1628
Request timeout for icmp_seq 1629
Request timeout for icmp_seq 1630
Request timeout for icmp_seq 1631
Request timeout for icmp_seq 1632
Request timeout for icmp_seq 1633
Request timeout for icmp_seq 1634
Request timeout for icmp_seq 1635
Request timeout for icmp_seq 1636
Request timeout for icmp_seq 1637
Request timeout for icmp_seq 1638
64 bytes from 8.8.8.8: icmp_seq=1639 ttl=59 time=0.701 ms
64 bytes from 8.8.8.8: icmp_seq=1640 ttl=59 time=0.708 ms
64 bytes from 8.8.8.8: icmp_seq=1641 ttl=59 time=0.780 ms
64 bytes from 8.8.8.8: icmp_seq=1642 ttl=59 time=0.781 ms

Impact of reconnecting the peerlink from the server perspective:

root@leaf-01-c:/home/cumulus# grep ifplugd /var/log/syslog
[...]
2017-09-19T11:48:22.190187+00:00 leaf-01-c ifplugd(peerlink)[5292]: Link beat detected.
2017-09-19T11:48:22.290481+00:00 leaf-01-c ifplugd(peerlink)[5292]: Executing '/etc/ifplugd/ifplugd.action peerlink up'.
2017-09-19T11:48:22.524673+00:00 leaf-01-c ifplugd(peerlink)[5292]: Program executed successfully.

root@leaf-02-c:/home/cumulus# grep ifplugd /var/log/syslog
[...]
2017-09-19T11:48:22.084477+00:00 leaf-02-c ifplugd(peerlink)[12600]: Link beat detected.
2017-09-19T11:48:22.232192+00:00 leaf-02-c ifplugd(peerlink)[12600]: Executing '/etc/ifplugd/ifplugd.action peerlink up'.
2017-09-19T11:48:22.812771+00:00 leaf-02-c ifplugd(peerlink)[12600]: client: bringing up : swp5
2017-09-19T11:48:22.816175+00:00 leaf-02-c ifplugd(peerlink)[12600]: client: bringing up : swp6
2017-09-19T11:48:22.831487+00:00 leaf-02-c ifplugd(peerlink)[12600]: client: bringing up : swp7
2017-09-19T11:48:22.836617+00:00 leaf-02-c ifplugd(peerlink)[12600]: client: bringing up : swp8
64 bytes from 8.8.8.8: icmp_seq=24 ttl=59 time=0.614 ms
64 bytes from 8.8.8.8: icmp_seq=25 ttl=59 time=0.680 ms
64 bytes from 8.8.8.8: icmp_seq=26 ttl=59 time=8.932 ms
64 bytes from 8.8.8.8: icmp_seq=27 ttl=59 time=1.126 ms
64 bytes from 8.8.8.8: icmp_seq=28 ttl=59 time=2.424 ms
Request timeout for icmp_seq 29
Request timeout for icmp_seq 30
Request timeout for icmp_seq 31
Request timeout for icmp_seq 32
Request timeout for icmp_seq 33
Request timeout for icmp_seq 34
Request timeout for icmp_seq 35
64 bytes from 8.8.8.8: icmp_seq=36 ttl=59 time=6.491 ms
64 bytes from 8.8.8.8: icmp_seq=37 ttl=59 time=1.045 ms
64 bytes from 8.8.8.8: icmp_seq=38 ttl=59 time=1.244 ms

Yes, it takes a few seconds for your servers to fail over if you have a peerlink failure, but this is very important to keep the datacenter network operational.

For more information have a look at the Cumulus Linux documentation: https://docs.cumulusnetworks.com/display/DOCS/ifplugd

Uptime – simple http monitoring utility

I found a very interesting HTTP monitoring tool called Uptime, built on Node.js and MongoDB. I directly installed Uptime on one of my Linux servers and from the first look I find it really cool 🙂 Before you start you need to get Node.js and MongoDB installed on your server; the rest is then very easy.
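
For reference, the basic installation roughly looks like this once Node.js and MongoDB are in place (taken from the project README at the time; details may have changed):

git clone git://github.com/fzaninotto/uptime.git
cd uptime
npm install
node app.js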

Once Uptime is running you can access the web interface and create the first checks; here are some screenshots:

Here you create your http checks and define some settings:

Detailed check overview with graphs:

If you are interested then have a look here: http://fzaninotto.github.com/uptime/