VMware NSX-T 2.0 First Impression

Over the past two days I spent some time with VMware NSX-T 2.0, which has multi-hypervisor support (KVM and ESXi) and is aimed at containerised platform environments like Kubernetes and Red Hat OpenShift. VMware also has a cloud version of NSX-T which can run in Amazon AWS and Google Cloud.

The first big change is the new HTML5 web client, which looks nice and clean. The menu structure is different from NSX-V for vSphere, which takes some getting used to. I have heard that NSX-V will also get the new HTML5 web client soon:

VMware made quite a few changes in NSX-T: they moved over to Geneve, replacing the VXLAN encapsulation that is currently used in NSX-V. That makes it impossible at the moment to connect NSX-V and NSX-T because of the different overlay technologies.
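
On the wire the two encapsulations are easy to tell apart, because Geneve uses UDP port 6081 while VXLAN uses UDP port 4789. A quick way to check which overlay you are looking at on a transport node (eth0 is just a placeholder for the uplink interface):

# Geneve (NSX-T) tunnel traffic, IANA-assigned UDP port 6081
tcpdump -n -i eth0 udp port 6081

# VXLAN (NSX-V) tunnel traffic, IANA-assigned UDP port 4789
tcpdump -n -i eth0 udp port 4789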

Routing works differently from the previous NSX for vSphere version, with Tier 0 (edge/aggregation) and Tier 1 (tenant) routers. Previously in NSX-V you used Edge appliances as tenant routers, which are now replaced by Tier 1 distributed routing. On the Tier 1 tenant router you don't need to configure BGP anymore; you just specify that connected routes should be advertised, and the connection between Tier 1 and Tier 0 also pushes down the default gateway.
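
Route advertisement on a Tier 1 router can also be checked over the NSX-T Manager REST API. A minimal sketch, assuming the Manager API's /routing/advertisement endpoint in NSX-T 2.0; nsx-manager and tier1-router-id are hypothetical placeholders:

# read the route advertisement config of a Tier 1 logical router
curl -k -u admin -X GET \
  https://nsx-manager/api/v1/logical-routers/tier1-router-id/routing/advertisement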

The Edge appliance can be deployed as a virtual machine or on bare-metal servers, which makes the transport zoning different from NSX-V, because Edge appliances need to be part of transport zones to connect to the overlay and the physical VLAN:

On the Edge itself you have two functions: Distributed Routing (DR) for stateless forwarding and Service Routing (SR) for stateful forwarding like NAT:

Load balancing is currently missing in the Edge appliance, but this is coming in one of the next NSX-T releases.

Here is a network design with Tier 0 and Tier 1 routing in NSX-T:

I will write another post in the coming weeks about the detailed routing configuration in NSX-T. I am also curious to integrate Kubernetes with NSX-T to try out the integration for containerised platform environments.

VMware NSX Network Design Example

In my previous post about VMware NSX Edge Routing, I explained how the Edge Service Gateways are connected to the physical network.

Now I want to show an example of how the network design could look if you want to use NSX:

Of course this really depends on your requirements and how complex your network is; I could easily replace the tenant Edge Service Gateways (ESGs) with Distributed Logical Routers (DLRs) if your tenant network were more complex. The advantage of the ESGs is that I can easily enable load balancing as a service to balance traffic between servers in my tier-3 networks.

The ESGs using load balancing as a service can also be deployed one-armed (on-a-stick), but for this you need to use SNAT and X-Forwarded-For:
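
With SNAT the backend servers only see the ESG as the source of every connection, so the original client IP has to be recovered from the X-Forwarded-For header that the load balancer inserts. A minimal sketch for a backend running nginx, assuming a hypothetical one-armed ESG address of 10.0.10.10:

# /etc/nginx/conf.d/realip.conf
# trust X-Forwarded-For only when the request comes from the ESG
set_real_ip_from 10.0.10.10;
real_ip_header   X-Forwarded-For;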

It gets very interesting when you start using the Distributed Firewall to filter traffic between servers in the same network: micro-segmentation of your virtual machines within the same subnet. In combination with Security Tags this can be a very powerful way of securing your networks.
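
As a simple illustration with hypothetical security tags (ST-Web, ST-App, ST-DB), a tag-based micro-segmentation rule set could look like this:

Name        Source    Destination    Service    Action
Web-to-App  ST-Web    ST-App         HTTP/S     Allow
App-to-DB   ST-App    ST-DB          MySQL      Allow
Web-to-Web  ST-Web    ST-Web         Any        Block

Because the rules match on tags instead of IP addresses, a virtual machine is covered as soon as the tag is applied, even if it later moves to another subnet.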

To learn more about what VMware NSX can do, I can only recommend reading the VMware NSX reference design guide; you will find a lot of useful information on how to configure NSX.

Comment below if you have questions.

VMware NSX Edge Routing

I recently deployed VMware NSX (software-defined networking) in our datacentre.

There are some specific requirements for the NSX Edge cluster when it comes to physical connectivity. You can find all the information in the VMware NSX reference design guide as well.

On the Cumulus Linux side I am using BGP in Quagga, and the traffic is distributed via ECMP (equal-cost multi-path) over multiple Edge nodes within NSX.
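
For ECMP to actually spread the traffic over the Edge nodes, bgpd needs multipath enabled as well. A minimal sketch, assuming Quagga's maximum-paths command (not shown in my config below):

router bgp 65001 vrf vrf-nsx
 address-family ipv4 unicast
  maximum-paths 4
 exit-address-family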

See the overview below:

It is very important to have a dedicated VLAN per core switch towards the Edge nodes. In my tests it didn't work with a shared VLAN across the Cumulus core: the BGP neighbour relationships were correctly established, but there was a problem with packet forwarding via the peerlink.

Here is the example Quagga BGP config from spine-1:

router bgp 65001 vrf vrf-nsx
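 ! eBGP peerings to the four NSX Edge nodes (AS 65002), with aggressive 1s/3s keepalive/hold timers for fast failover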
 neighbor 10.100.254.1 remote-as 65002
 neighbor 10.100.254.1 password verystrongpassword!!
 neighbor 10.100.254.1 timers 1 3
 neighbor 10.100.254.2 remote-as 65002
 neighbor 10.100.254.2 password verystrongpassword!!
 neighbor 10.100.254.2 timers 1 3
 neighbor 10.100.254.3 remote-as 65002
 neighbor 10.100.254.3 password verystrongpassword!!
 neighbor 10.100.254.3 timers 1 3
 neighbor 10.100.254.4 remote-as 65002
 neighbor 10.100.254.4 password verystrongpassword!!
 neighbor 10.100.254.4 timers 1 3
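 ! iBGP peering to spine-2 within AS 65001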
 neighbor 10.100.255.2 remote-as 65001
 neighbor 10.100.255.2 password verystrongpassword!!

 address-family ipv4 unicast
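  ! advertise a default route to the Edge nodes; inbound routes are filtered by the bgp-in route-map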
  network 0.0.0.0/0
  neighbor 10.100.254.1 route-map bgp-in in
  neighbor 10.100.254.2 route-map bgp-in in
  neighbor 10.100.254.3 route-map bgp-in in
  neighbor 10.100.254.4 route-map bgp-in in
  neighbor 10.100.255.2 next-hop-self
  neighbor 10.100.255.2 route-map bgp-in in
 exit-address-family

ip route 0.0.0.0/0 10.100.255.14 vrf vrf-nsx
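! the static default route provides the route that the network 0.0.0.0/0 statement above advertises to the Edge nodes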

access-list bgp-in permit 10.100.0.0/17
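! the bgp-in route-map matches this access-list, so only prefixes within 10.100.0.0/17 are accepted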

route-map bgp-in permit 10
 match ip address bgp-in

The second core switch, spine-2, looks exactly the same; only different IP addresses are used.

More about my experience with VMware NSX will follow soon.