VMware NSX-T 2.0 First Impression

Over the past two days I have spent some time with VMware NSX-T 2.0, which has multi-hypervisor (KVM and ESXi) support and is aimed at containerised platform environments like Kubernetes and Red Hat OpenShift. VMware also has an NSX-T cloud version which can run in Amazon AWS and Google cloud services.

The first big change is the new HTML5 web client, which looks nice and clean; the menu structure is different from NSX-V for vSphere, which takes some getting used to. From what I have heard, NSX-V will also get the new HTML5 web client soon:

VMware made quite a few changes in NSX-T: they moved over to Geneve and replaced the VXLAN encapsulation which is currently used in NSX-V. Because of the different overlay technologies, it is currently impossible to connect NSX-V and NSX-T.

Routing works differently from the previous NSX for vSphere version, with Tier 0 (edge/aggregation) and Tier 1 (tenant) routers. Previously in NSX-V you used Edge appliances as tenant routers, which are now replaced with Tier 1 distributed routing. On the Tier 1 tenant router you don't need to configure BGP anymore; you just specify that connected routes should be advertised, and the connection between Tier 1 and Tier 0 also pushes down the default gateway.
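
As a side note, this route advertisement can also be driven through the NSX-T REST API. Below is a minimal, hypothetical Python sketch; the manager address, credentials and router ID are placeholders, and the /api/v1/logical-routers advertisement endpoint should be verified against the NSX-T API guide for your version:

import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder manager address
TIER1_ID = "<tier1-router-uuid>"                 # placeholder Tier 1 router ID

session = requests.Session()
session.auth = ("admin", "password")             # placeholder credentials
session.verify = False                           # lab only

# Read the current advertisement config first; the PUT requires the
# _revision field returned by the GET
url = f"{NSX_MANAGER}/api/v1/logical-routers/{TIER1_ID}/routing/advertisement"
adv = session.get(url).json()

# Advertise connected routes towards Tier 0 - no BGP needed on Tier 1
adv["enabled"] = True
adv["advertise_nsx_connected_routes"] = True
session.put(url, json=adv).raise_for_status()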

The Edge appliance can be deployed as a virtual machine or on bare-metal servers, which makes the transport zoning different from NSX-V because Edge appliances need to be part of transport zones to connect to the overlay and the physical VLAN:
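
As an illustration, here is a hypothetical sketch of creating one overlay and one VLAN transport zone through the NSX-T API; all names are made up, and the /api/v1/transport-zones endpoint should be checked against the API guide for your release:

import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder manager address

session = requests.Session()
session.auth = ("admin", "password")             # placeholder credentials
session.verify = False                           # lab only

# The Edge node joins both transport zones so it can reach the Geneve
# overlay as well as the physical VLAN
for name, tz_type in [("tz-overlay", "OVERLAY"), ("tz-vlan", "VLAN")]:
    body = {
        "display_name": name,
        "transport_type": tz_type,
        "host_switch_name": "nvds-1",            # placeholder host switch name
    }
    session.post(f"{NSX_MANAGER}/api/v1/transport-zones",
                 json=body).raise_for_status()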

On the Edge itself you have two functions: Distributed Routing (DR) for stateless forwarding and Service Routing (SR) for stateful forwarding like NAT:
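
To make the SR function concrete, here is a hypothetical sketch of adding an SNAT rule through the NSX-T API; all addresses and IDs are placeholders:

import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder manager address
ROUTER_ID = "<logical-router-uuid>"              # placeholder router ID

# SNAT is stateful, so it is handled by the Service Router (SR) component
rule = {
    "action": "SNAT",
    "match_source_network": "172.16.10.0/24",    # placeholder internal subnet
    "translated_network": "192.0.2.10",          # placeholder external address
    "enabled": True,
}
requests.post(
    f"{NSX_MANAGER}/api/v1/logical-routers/{ROUTER_ID}/nat/rules",
    json=rule,
    auth=("admin", "password"),                  # placeholder credentials
    verify=False,                                # lab only
).raise_for_status()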

Load balancing is currently missing in the Edge appliance, but this is coming in one of the next releases of NSX-T.

Here is a network design with Tier 0 and Tier 1 routing in NSX-T:

I will write another post in the coming weeks about the detailed routing configuration in NSX-T. I am also curious to integrate Kubernetes with NSX-T and try out the integration for containerised platform environments.

VMworld Europe 2017 – Speaking at Customer Panel on VMware NSX [NET3081PE] – Update

VMware invited me to be a speaker at a customer panel about VMware NSX at VMworld Europe 2017. I am excited to get the opportunity to share my experience, together with other customers, in deploying and running VMware NSX in modern data centre environments.

Here is the session information and schedule:

“In this session, a number of VMware NSX customers will discuss how NSX has been used in their environments, specifically focusing on how their organisation has benefited from security, automation, and application continuity-related projects.”

Customer Panel on VMware NSX [NET3081PE]

Schedule: Tuesday, Sep 12, 2:00 p.m. – 3:00 p.m. | 41


Update: it was a very interesting and successful panel discussion with other customers and VMworld attendees.

Here is the recorded audio of the customer panel:

VMware NSX Network Design Example

In my previous post about VMware NSX Edge Routing, I explained how the Edge Service Gateways are connected to the physical network.

Now I want to show an example of what the network design could look like if you want to use NSX:

Of course this really depends on your requirements and how complex your network is; I could easily replace the tenant Edge Service Gateways (ESG) with a Distributed Logical Router (DLR) if your tenant network is more complex. The advantage of the ESGs is that I can easily enable Load Balancing as a Service to balance traffic between servers in my tier-3 networks.

The ESGs using Load Balancing as a Service can also be deployed on-a-stick (one-armed), but for this you need to use SNAT and X-Forwarded-For:
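
On the API side this maps to an HTTP application profile with X-Forwarded-For insertion enabled. The snippet below is a hypothetical sketch; the edge ID and profile name are placeholders and the exact XML schema should be taken from the NSX-V API guide:

import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder manager address
EDGE_ID = "edge-1"                               # placeholder ESG ID

# HTTP application profile that inserts the X-Forwarded-For header, needed
# because SNAT hides the original client IP in a one-armed deployment
profile = """<applicationProfile>
  <name>http-xff</name>
  <template>HTTP</template>
  <insertXForwardedFor>true</insertXForwardedFor>
</applicationProfile>"""

requests.post(
    f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/loadbalancer/config/applicationprofiles",
    data=profile,
    headers={"Content-Type": "application/xml"},
    auth=("admin", "password"),                  # placeholder credentials
    verify=False,                                # lab only
).raise_for_status()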

It gets very interesting when you start using the Distributed Firewall to filter traffic between servers in the same network: micro-segmentation of your virtual machines within the same subnet. In combination with Security Tags, this can be a very powerful way of securing your networks.
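
For example, a security tag can be created once and then matched in Distributed Firewall rules, so the policy follows the virtual machine rather than its IP address. A hypothetical sketch with placeholder names; check the NSX-V API guide for the exact schema:

import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder manager address

# Security tag that DFW rules can match; VMs tagged with it inherit the
# policy regardless of their subnet or IP address
tag = """<securityTag>
  <objectTypeName>SecurityTag</objectTypeName>
  <name>ST-Web-Servers</name>
  <description>Web tier virtual machines</description>
</securityTag>"""

requests.post(
    f"{NSX_MANAGER}/api/2.0/services/securitytags/tag",
    data=tag,
    headers={"Content-Type": "application/xml"},
    auth=("admin", "password"),                  # placeholder credentials
    verify=False,                                # lab only
).raise_for_status()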

Regarding what VMware NSX can do, I can only recommend reading the VMware NSX reference design guide, where you will find a lot of useful information on how to configure NSX.

Comment below if you have questions.

VMware NSX Edge Routing

I recently deployed VMware NSX (software-defined networking) in our data centre.

For the NSX Edge cluster there are some specific requirements when it comes to physical connectivity. You can find all this information in the VMware NSX reference design guide as well.

On the Cumulus Linux side I am using BGP in Quagga, and the traffic is distributed via ECMP (equal-cost multi-path) over multiple Edge nodes within NSX.

See the overview below:

It is very important to have a dedicated VLAN per core switch to the Edge nodes. In my tests it didn't work with a shared VLAN via the Cumulus core: the BGP neighbor relationships were correctly established, but there was a problem with packet forwarding via the Peerlink.

Here is the example Quagga BGP config from spine-1, with the neighbor addresses and prefixes replaced by placeholders:

router bgp 65001 vrf vrf-nsx
 ! eBGP sessions to the four NSX Edge nodes
 neighbor <edge1-ip> remote-as 65002
 neighbor <edge1-ip> password verystrongpassword!!
 neighbor <edge1-ip> timers 1 3
 neighbor <edge2-ip> remote-as 65002
 neighbor <edge2-ip> password verystrongpassword!!
 neighbor <edge2-ip> timers 1 3
 neighbor <edge3-ip> remote-as 65002
 neighbor <edge3-ip> password verystrongpassword!!
 neighbor <edge3-ip> timers 1 3
 neighbor <edge4-ip> remote-as 65002
 neighbor <edge4-ip> password verystrongpassword!!
 neighbor <edge4-ip> timers 1 3
 ! iBGP session to spine-2
 neighbor <spine2-ip> remote-as 65001
 neighbor <spine2-ip> password verystrongpassword!!

 address-family ipv4 unicast
  neighbor <edge1-ip> route-map bgp-in in
  neighbor <edge2-ip> route-map bgp-in in
  neighbor <edge3-ip> route-map bgp-in in
  neighbor <edge4-ip> route-map bgp-in in
  neighbor <spine2-ip> next-hop-self
  neighbor <spine2-ip> route-map bgp-in in

ip route <prefix> <next-hop> vrf vrf-nsx

access-list bgp-in permit <nsx-prefix>

route-map bgp-in permit 10
 match ip address bgp-in

The second core switch, spine-2, looks exactly the same; only different IP addresses are used.
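
To verify that all sessions are up and that ECMP is installing multiple paths towards the Edge nodes, a couple of show commands in vtysh are helpful; routes learned from the Edge nodes should appear with several next hops (the exact syntax may differ between Quagga releases):

cumulus@spine-1:~$ sudo vtysh

spine-1# show ip bgp vrf vrf-nsx summary
spine-1# show ip route vrf vrf-nsx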

More about my experience with VMware NSX will follow soon.

Data centre network redesign

Over the last month I was busy working on a data centre redesign for my company, which I finished this weekend in one of the three data centres.

The old network design was very outdated, with a bad choice of network equipment: a Cisco Catalyst 6500 core switch is total overkill for a small data centre environment with 8 racks, and the two firewall clusters, Juniper ISG2000 and Cisco ASA 5550, were badly integrated and their configuration was a mess.

For the new network I followed a more converged approach: a small and compact network that is as flexible as possible while downsizing the overall footprint and removing complexity. We adopted parts of DevOps (I like to call it NetOps) and used Ansible to automate the configuration deployment; the whole network stack is deployed within 90 seconds.
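
To give an idea, here is a stripped-down, hypothetical sketch of such a playbook; the real playbooks are larger and all file and template names are placeholders:

---
- hosts: cumulus_switches
  become: true
  tasks:
    - name: Deploy interfaces configuration
      template:
        src: templates/interfaces.j2          # placeholder template
        dest: /etc/network/interfaces
      notify: reload networking

    - name: Deploy Quagga BGP configuration
      template:
        src: templates/Quagga.conf.j2         # placeholder template
        dest: /etc/quagga/Quagga.conf
      notify: restart quagga

  handlers:
    - name: reload networking
      command: ifreload -a

    - name: restart quagga
      service:
        name: quagga
        state: restarted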

Used equipment:

  1. The top two switches are Dell S3048-ON running Cumulus Networks OS, used for internet and leased lines.
  2. Under the two Dell WAN switches are two Cisco ASR 1001-X routers for internet and wide area network (OSPF) routing.
  3. Under the Cisco routers are two Dell S4048-ON core switches running Cumulus Networks OS, connecting the existing HP BladeCenters and HP DL servers. The new Tintri storage for the VMware vSphere clusters was also connected directly to the core switches.
  4. Under the Dell core switches are two Cisco ASA 5545-X in multi-context mode running the Production, Corporate and S2S VPN firewalls.
  5. At the bottom of the network stack are the existing serial console server and a Cisco Catalyst switch for the management network.

Now I will start with the deployment of VMware NSX SDN (software-defined networking) in this data centre. Once VMware NSX is finished and handed over to the Systems Engineers, I will do the same exercise for the second data centre in the UK.

I will publish more information and my experience with Cumulus Linux and VMware NSX SDN in the coming months.