In my previous post about VMware NSX Edge routing, I explained how the Edge Service Gateways are connected to the physical network.
Now I want to show an example of what a network design using NSX could look like:
Of course this really depends on your requirements and on how complex your network is; I could easily replace the tenant Edge Service Gateways (ESGs) with a Distributed Logical Router (DLR) if the tenant networks were more complex. The advantage of ESGs is that I can easily enable load balancing as a service to balance traffic between servers in my tier-3 networks.
ESGs providing load balancing as a service can also be deployed one-armed (on-a-stick), but for this you need to use SNAT and the X-Forwarded-For header:
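With a one-armed load balancer doing SNAT, the backend server sees the load balancer's address as the source of every connection, so the original client IP has to be recovered from the X-Forwarded-For header. A minimal sketch of how a backend could do that (the header is the standard one; the helper function and addresses are my own illustration):

```python
def client_ip(headers, peer_addr):
    """Return the original client IP behind a one-armed (SNAT) load balancer.

    With SNAT, peer_addr is the load balancer's address; the real client is
    the first entry in X-Forwarded-For (each proxy hop appends its peer).
    """
    xff = headers.get("X-Forwarded-For")
    if xff:
        return xff.split(",")[0].strip()
    return peer_addr  # no proxy in the path, peer is the real client

# Example: the connection arrives from the ESG's SNAT address, but the
# header still carries the real client address.
print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.100.3.10"}, "10.100.3.10"))
# → 203.0.113.7
```

Note that only the first X-Forwarded-For entry is client-supplied and therefore spoofable; trust it only when the request demonstrably came through your own load balancer.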
It gets really interesting when you start using the Distributed Firewall to filter traffic between servers in the same network: micro-segmentation of your virtual machines within the same subnet. In combination with Security Tags, this can be a very powerful way of securing your networks.
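To make the idea concrete: micro-segmentation means the default between VMs in the same subnet becomes deny, and only tag-based allow rules punch holes in that. The sketch below only models that rule logic; the tag names and the Rule structure are hypothetical illustrations, not the NSX Distributed Firewall API (DFW rules are managed through the vSphere Web Client or the NSX REST API):

```python
# Illustrative sketch of security-tag-based micro-segmentation rule logic.
# Tag names (st-web, st-db) and the Rule type are hypothetical, not NSX API.
from dataclasses import dataclass

@dataclass
class Rule:
    src: str      # security tag or "any"
    dst: str      # security tag or "any"
    service: str  # e.g. "HTTPS/443"
    action: str   # "allow" or "deny"

def microseg_policy(tag_services):
    """Build one allow rule per security tag/service pair, then a final
    default-deny so VMs in the same subnet cannot talk otherwise."""
    rules = [Rule(src="any", dst=tag, service=svc, action="allow")
             for tag, svc in tag_services.items()]
    rules.append(Rule(src="any", dst="any", service="any", action="deny"))
    return rules

policy = microseg_policy({"st-web": "HTTPS/443", "st-db": "MySQL/3306"})
for rule in policy:
    print(rule)
```

The key property is rule order: the specific allows come first and the catch-all deny last, which is exactly how the DFW evaluates its rule table top-down.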
To learn what VMware NSX can do, I can only recommend reading the VMware NSX reference design guide; you will find a lot of useful information there on how to configure NSX.
Comment below if you have questions.
I recently deployed VMware NSX (software-defined networking) in our data centre.
There are some specific requirements for the NSX Edge cluster when it comes to physical connectivity. You can find all of this information in the VMware NSX reference design guide as well.
On the Cumulus Linux side I am using BGP in Quagga, and traffic is distributed via ECMP (equal-cost multi-path) over multiple Edge nodes within NSX.
See the overview below:
It is very important to have a dedicated VLAN per core switch towards the Edge nodes. In my tests it did not work with a shared VLAN across the Cumulus core: the BGP neighbor relationships were correctly established, but there was a problem with packet forwarding via the peerlink.
Here is the example Quagga BGP config from spine-1:
router bgp 65001 vrf vrf-nsx
 neighbor 10.100.254.1 remote-as 65002
 neighbor 10.100.254.1 password verystrongpassword!!
 neighbor 10.100.254.1 timers 1 3
 neighbor 10.100.254.2 remote-as 65002
 neighbor 10.100.254.2 password verystrongpassword!!
 neighbor 10.100.254.2 timers 1 3
 neighbor 10.100.254.3 remote-as 65002
 neighbor 10.100.254.3 password verystrongpassword!!
 neighbor 10.100.254.3 timers 1 3
 neighbor 10.100.254.4 remote-as 65002
 neighbor 10.100.254.4 password verystrongpassword!!
 neighbor 10.100.254.4 timers 1 3
 neighbor 10.100.255.2 remote-as 65001
 neighbor 10.100.255.2 password verystrongpassword!!
 address-family ipv4 unicast
  neighbor 10.100.254.1 route-map bgp-in in
  neighbor 10.100.254.2 route-map bgp-in in
  neighbor 10.100.254.3 route-map bgp-in in
  neighbor 10.100.254.4 route-map bgp-in in
  neighbor 10.100.255.2 next-hop-self
  neighbor 10.100.255.2 route-map bgp-in in
 exit-address-family
!
ip route 0.0.0.0/0 10.100.255.14 vrf vrf-nsx
!
access-list bgp-in permit 10.100.0.0/17
!
route-map bgp-in permit 10
 match ip address bgp-in
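With the four eBGP sessions above, the spine learns the same NSX prefixes from four Edge nodes and can install all four as equal-cost paths (note that Quagga typically needs maximum-paths configured under the BGP instance before it will install more than one path; that line is not shown in the config above). ECMP then hashes each flow's 5-tuple so a given flow always takes the same next hop. A small Python sketch of that idea, using the peer addresses from the config; the hash itself is purely illustrative, not what the switch ASIC actually computes:

```python
import hashlib

# The four NSX Edge node peer addresses from the spine-1 config above.
NEXT_HOPS = ["10.100.254.1", "10.100.254.2", "10.100.254.3", "10.100.254.4"]

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port):
    """Pick a next hop by hashing the flow 5-tuple, as ECMP does.
    Same flow -> same hash -> same path, so packets of one flow
    are never reordered across Edge nodes."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return NEXT_HOPS[int.from_bytes(digest[:4], "big") % len(NEXT_HOPS)]

# A given flow always maps to the same Edge node...
assert ecmp_next_hop("10.100.1.5", "8.8.8.8", "tcp", 40000, 443) == \
       ecmp_next_hop("10.100.1.5", "8.8.8.8", "tcp", 40000, 443)

# ...while many distinct flows spread across the four paths.
hops = {ecmp_next_hop("10.100.1.5", "8.8.8.8", "tcp", port, 443)
        for port in range(40000, 40100)}
print(sorted(hops))
```

This per-flow stickiness is also why the aggressive timers 1 3 matter: when an Edge node dies, only BGP hold-time expiry removes its path from the ECMP set, so short timers mean the flows hashed to that node recover quickly.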
The second core switch, spine-2, looks exactly the same; only the IP addresses differ.
More about my experience with VMware NSX will follow soon.
Over the last month I have been busy working on a data centre redesign for my company, which I finished this weekend in one of our three data centres.
The old network design was very outdated and used a bad choice of network equipment: a Cisco Catalyst 6500 core switch is total overkill for a small data centre environment with 8 racks, and the two firewall clusters (Juniper ISG2000 and Cisco ASA 5550) were badly integrated, with a configuration that was a mess.
For the new network I followed a more converged approach: a small and compact network that is as flexible as possible while downsizing the overall footprint and removing complexity. We adopted parts of DevOps (I like to call it NetOps) and used Ansible to automate the configuration deployment; the whole network stack is deployed within 90 seconds.
- The top two switches are Dell S3048-ON running Cumulus Linux, used for internet and leased lines.
- Below the two Dell WAN switches are two Cisco ASR 1001-X routers for internet and wide area network (OSPF) routing.
- Below the Cisco routers, two Dell S4048-ON core switches running Cumulus Linux connect the existing HP Blade Centers and HP DL servers. The new Tintri storage for the VMware vSphere clusters is also connected directly to the core switches.
- Below the Dell core switches are two Cisco ASA 5545-X in multi-context mode running the production, corporate and site-to-site VPN firewalls.
- At the bottom of the network stack are the existing serial console server and a Cisco Catalyst switch for the management network.
Now I will start with the deployment of VMware NSX (software-defined networking) in this data centre. Once VMware NSX is finished and handed over to the systems engineers, I will do the same exercise for the second data centre in the UK.
More information about Cumulus Linux and VMware NSX, and my experience with them, will follow in the coming months.