VMware NSX: a short introduction and installation HOWTO

NSX is VMware's SDN (Software-Defined Networking) solution. It is available as:

  • NSX for vSphere (NSX-V)
  • NSX Multi-Hypervisor (NSX-MH)

NSX-MH is NSX for multiple hypervisors (ESXi, KVM, Xen, Hyper-V). This post focuses on NSX-V.

Acronyms

  • DFW: Distributed FireWall
  • DLR: Distributed Logical Router
  • LIF: Logical InterFace
  • UWA: User World Agent
  • VDS: vSphere Distributed Switch
  • VIB: vSphere Installation Bundle
  • VNID: VXLAN Network IDentifier
  • VTEP: VXLAN Tunnel Endpoint
  • VXLAN: Virtual eXtensible Local Area Network

Architecture

In short: NSX is a way to move network complexity from physical hardware to software. The main benefit of NSX is a centralized point of management for the virtual host, storage and network environments.

NSX allows interconnecting ESXi hosts separated at Layer 3, so they can share the same L2 domains and support multiple tenants. Other benefits include a better security approach for the virtual world and a few distributed services: routing, firewall, load balancer…

NSX is an additional component for the VMware vSphere suite, and it's currently licensed on a per-VM basis. NSX installs a few components on the ESXi hosts, a VM (the NSX Manager) and a plugin for vCenter. Other VMs can be deployed during the setup phase.

Basically, vCenter contacts the NSX Manager, which in turn contacts the NSX Controllers. As long as at least one NSX Controller is alive, the networking provided by NSX keeps working.

Components

  • VMware vSphere: NSX-V provides network functions within a VMware vSphere infrastructure.
  • VXLAN: VLANs are limited to 4094 IDs and cannot span Layer 3 boundaries. VXLAN overcomes both (and other) limits by encapsulating Layer 2 frames into UDP packets and using a 24-bit VNID (about 16 million segments). In other (simple) words, VXLAN can be used to connect discontiguous networks. A larger MTU is needed on the transport network (1600 bytes suggested; see the worked example after this list). VXLAN tunnels are established between ESXi hosts configured for NSX.
  • NSX Manager (management plane): the NSX Manager is a VM that joins the NSX and vSphere worlds. The NSX Manager also provides a REST API. A 1:1 relation exists between vCenter and NSX Manager.
  • NSX Controllers (control plane): an odd number of controllers is deployed as a cluster for high availability and scale. A failure of controller nodes does not impact data plane traffic. Each controller is a VM deployed from the NSX Manager. NSX Controllers are required for VXLAN in unicast mode; VXLAN in multicast mode does not require them.
  • NSX vSwitch (data plane): the vSwitch in NSX is based on the VDS. Some kernel modules (VIBs) are installed within ESXi and provide additional services such as distributed routing, distributed firewalling, load balancing…
  • UWA: the User World Agent is a user-space agent installed on ESXi hosts; it provides the communication channel between the hosts and the NSX Controllers.
  • Transport Zone: defines which logical switches are available on which vSphere clusters (the span of the logical network).
  • Logical Switch: an NSX virtual switch, configured on top of a VDS, with a VNI and a Transport Zone assigned.
  • Edge Services Router (ESR): the VM version of a router configured for NSX. It can run DHCP, firewall, VPN, NAT, routing and load-balancing services.
  • Distributed Logical Router (DLR): the distributed version of a router configured for NSX. Each ESXi host knows how to route packets between logical switches.
  • Distributed Firewall (DFW): the distributed version of a firewall configured for NSX. Each ESXi host knows how to filter packets.
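
To make the MTU requirement concrete, here is a minimal worked example of the VXLAN encapsulation overhead (header sizes from RFC 7348; Python is used just as a calculator):

    # VXLAN encapsulation sizes (per RFC 7348), in bytes
    INNER_PAYLOAD  = 1500  # standard MTU presented to the VMs
    INNER_ETHERNET = 14    # inner Ethernet header (FCS excluded)
    VXLAN_HEADER   = 8     # flags + 24-bit VNID + reserved fields
    OUTER_UDP      = 8
    OUTER_IPV4     = 20

    # The transport MTU counts the outer IP packet (outer Ethernet excluded):
    outer_ip = OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER + INNER_ETHERNET + INNER_PAYLOAD
    print(outer_ip)        # 1550 -> the suggested 1600 leaves some headroom

    # 24-bit VNID vs 12-bit VLAN ID
    print(2 ** 24)         # 16777216 VXLAN segments
    print(2 ** 12)         # 4096 VLAN IDs (4094 usable)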

Installation

Deploy the VMware NSX Manager OVA to the vSphere infrastructure. During the deploy phase the system will warn about three extra options:

  • vshield.vmtype
  • vshield.vmversion
  • vshield.vmbuild

VMware NSX relies on NSX Edge, which was derived from vCNS Edge (formerly vShield Edge); likewise, vShield Manager is now NSX Manager. This is why the extra options still carry the vshield prefix.

nsx_installation_1

Accept the extra options, accept the license, and configure the network parameters:

  • admin user password;
  • privileged (enable) password;
  • IPv4 and IPv6 address, netmask and default gateway;
  • DNS and NTP servers;
  • SSH access.

Power on the VM and, after a few minutes, connect to its IP address using a Web browser (port 80/443). Go to “Manage vCenter Registration” and set both the Lookup (SSO) and vCenter servers. In my lab the SSO service is installed together with the vCenter service, so:

  • Lookup Service IP: vcenter1.example.com
  • Lookup Service Port: 7444
  • SSO Administrator User Name: administrator@vsphere.local
  • vCenter Server: vcenter1.example.com
  • vCenter User Name: EXAMPLE\Administrator
  • Modify plugin script download location: unchecked, because we’re not modifying the NSX Manager IP address.

nsx_installation_2
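
The registration can also be checked from the NSX Manager REST API mentioned earlier. A minimal sketch with Python and requests, assuming the NSX-V vcconfig endpoint (check your NSX API guide) and hypothetical lab credentials:

    import requests
    requests.packages.urllib3.disable_warnings()  # NSX Manager ships a self-signed cert

    NSX_MANAGER = "https://nsxmanager.example.com"  # hypothetical NSX Manager address
    AUTH = ("admin", "password")                    # credentials set during the deploy

    # GET /api/2.0/services/vcconfig returns the current vCenter registration (XML)
    r = requests.get(NSX_MANAGER + "/api/2.0/services/vcconfig", auth=AUTH, verify=False)
    r.raise_for_status()
    print(r.text)  # should reference vcenter1.example.com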

Before continuing, log out and log back in on any active Web Client session.

If the NSX service is not ready yet, the configuration is blocked with a warning:

Component NSX is not yet in RUNNING phase, current phase STARTING. Please retry after some time.

Just wait a few more minutes and try again.
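
Instead of refreshing the page, the manager can be polled from a script until the API answers. A minimal sketch (the endpoint choice is arbitrary; any authenticated call works):

    import time
    import requests
    requests.packages.urllib3.disable_warnings()

    NSX_MANAGER = "https://nsxmanager.example.com"  # hypothetical NSX Manager address
    AUTH = ("admin", "password")

    for attempt in range(30):
        try:
            r = requests.get(NSX_MANAGER + "/api/2.0/services/vcconfig",
                             auth=AUTH, verify=False, timeout=10)
            if r.status_code == 200:
                print("NSX Manager is answering, configuration can proceed")
                break
        except requests.RequestException:
            pass                # service still starting, not reachable yet
        time.sleep(30)          # the STARTING phase can take several minutes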

Configuring ESXi hosts

Now the ESXi hosts must be prepared for NSX. The vCenter Server and the NSX Manager must be located on an ESXi host/cluster not configured for NSX.

After logging in to the vSphere Web Client, a new menu item is available under Home: Networking & Security.

Go to “Networking & Security -> Installation -> Host Preparation” and install NSX components on clusters where they are needed:

nsx_installation_3

ESXi hosts are now configured.
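
The preparation status can be double-checked via the REST API. A minimal sketch, assuming the NSX-V nwfabric status call and a hypothetical cluster MoRef (find yours in the vCenter MOB):

    import requests
    requests.packages.urllib3.disable_warnings()

    NSX_MANAGER = "https://nsxmanager.example.com"  # hypothetical NSX Manager address
    AUTH = ("admin", "password")
    CLUSTER = "domain-c7"                           # hypothetical cluster MoRef

    # GET /api/2.0/nwfabric/status?resource=<moref> reports per-feature status (XML)
    r = requests.get(NSX_MANAGER + "/api/2.0/nwfabric/status",
                     params={"resource": CLUSTER}, auth=AUTH, verify=False)
    r.raise_for_status()
    print(r.text)  # look for a GREEN status on each feature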

VXLAN in unicast mode will be used, so NSX Controllers must be deployed under “Networking & Security -> Installation -> Management”:

nsx_installation_4

More controllers (an odd number, typically three) should be deployed for high availability.
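
Controller state can also be verified over the API; a minimal sketch, assuming the NSX-V controller endpoint:

    import requests
    requests.packages.urllib3.disable_warnings()

    NSX_MANAGER = "https://nsxmanager.example.com"  # hypothetical NSX Manager address
    AUTH = ("admin", "password")

    # GET /api/2.0/vdn/controller lists deployed controllers and their status (XML)
    r = requests.get(NSX_MANAGER + "/api/2.0/vdn/controller", auth=AUTH, verify=False)
    r.raise_for_status()
    print(r.text)  # each controller entry should report a RUNNING status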

Under “Networking & Security -> Installation -> Logical Network Preparation -> Segment ID”, specify a range of VNIs (each logical switch will use a VNI from this range):

nsx_installation_5

The Segment ID pool is used when building logical network segments. In other words, each Segment ID will be mapped to a logical switch (think of a Segment ID as a VLAN ID).
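
The same pool can be created programmatically. A minimal sketch, assuming the NSX-V segments endpoint; the range values are just an example:

    import requests
    requests.packages.urllib3.disable_warnings()

    NSX_MANAGER = "https://nsxmanager.example.com"  # hypothetical NSX Manager address
    AUTH = ("admin", "password")

    # POST /api/2.0/vdn/config/segments creates a Segment ID (VNI) pool
    body = """<segmentRange>
      <name>lab-segment-pool</name>
      <begin>5000</begin>
      <end>5999</end>
    </segmentRange>"""
    r = requests.post(NSX_MANAGER + "/api/2.0/vdn/config/segments", data=body,
                      headers={"Content-Type": "application/xml"},
                      auth=AUTH, verify=False)
    r.raise_for_status()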

Go to “Networking & Security -> Installation -> Host Preparation” and configure the cluster for VXLAN:

nsx_installation_6

A new VMkernel interface will be configured on each ESXi host; it will be used for VXLAN traffic (this is the VTEP interface):

nsx_installation_7

If the above step fails, check the network connectivity between the ESXi hosts and remember that the MTU must be at least 1600 bytes along the whole transport path. From the ESXi shell, vmkping ++netstack=vxlan -d -s 1572 <destination VTEP IP> (don't fragment, 1572-byte payload) is the usual test.
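
A quick sanity check can also be run from a Linux box on the transport network, sending a DF-flagged UDP datagram sized so the IP packet reaches 1600 bytes. A minimal sketch (Linux-only socket options; the VTEP address is hypothetical):

    import socket

    VTEP_IP = "10.0.100.11"   # hypothetical VTEP address on the transport network

    # 20 (IP) + 8 (UDP) + 1572 (payload) = 1600 bytes, don't-fragment set.
    # This only validates the MTU known to the local stack; the definitive
    # test runs between the ESXi VTEPs themselves (see above).
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER, socket.IP_PMTUDISC_DO)
    try:
        s.sendto(b"\x00" * 1572, (VTEP_IP, 4789))  # 4789 = IANA VXLAN UDP port
        print("1600-byte packet sent without fragmentation")
    except OSError as exc:
        print(f"MTU too small: {exc}")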

Add a Transport Zone going to “Networking & Security -> Installation -> Logical Network Preparation -> Transport Zones”:

nsx_installation_10

A Transport Zone is a set of networks (Logical Switches) replicated to one or more clusters. Here a “Global Transport Zone” has been created to contain all logical switches and replicate them to all NSX-enabled clusters.
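
Transport Zones can also be created via the API. A minimal sketch, assuming the NSX-V scopes endpoint and a hypothetical cluster MoRef; the nested <cluster> elements are how the NSX-V schema expects the cluster list:

    import requests
    requests.packages.urllib3.disable_warnings()

    NSX_MANAGER = "https://nsxmanager.example.com"  # hypothetical NSX Manager address
    AUTH = ("admin", "password")

    # POST /api/2.0/vdn/scopes creates a Transport Zone (unicast mode here,
    # since NSX Controllers have been deployed)
    body = """<vdnScope>
      <name>Global Transport Zone</name>
      <clusters>
        <cluster><cluster><objectId>domain-c7</objectId></cluster></cluster>
      </clusters>
      <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
    </vdnScope>"""
    r = requests.post(NSX_MANAGER + "/api/2.0/vdn/scopes", data=body,
                      headers={"Content-Type": "application/xml"},
                      auth=AUTH, verify=False)
    r.raise_for_status()
    print(r.text)  # returns the new scope ID, e.g. vdnscope-1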

Now logical switches can be created under “Networking & Security -> Logical Switches”:

nsx_installation_8
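
And the same goes for logical switches; a minimal sketch, assuming the NSX-V virtualwires endpoint and the Transport Zone (scope) ID returned above:

    import requests
    requests.packages.urllib3.disable_warnings()

    NSX_MANAGER = "https://nsxmanager.example.com"  # hypothetical NSX Manager address
    AUTH = ("admin", "password")
    SCOPE = "vdnscope-1"                            # Transport Zone (scope) ID

    # POST /api/2.0/vdn/scopes/<scope>/virtualwires creates a logical switch;
    # a VNI from the Segment ID pool is assigned automatically
    body = """<virtualWireCreateSpec>
      <name>web-tier</name>
      <tenantId>lab</tenantId>
    </virtualWireCreateSpec>"""
    r = requests.post(NSX_MANAGER + "/api/2.0/vdn/scopes/" + SCOPE + "/virtualwires",
                      data=body, headers={"Content-Type": "application/xml"},
                      auth=AUTH, verify=False)
    r.raise_for_status()
    print(r.text)  # returns the new virtualwire ID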

Now VMs can be attached to NSX switches:

nsx_installation_9

VMs attached to the same logical switch can reach each other, even if they are not located on the same ESXi host. VMs attached to different logical switches cannot reach each other (of course).

For now the logical switches are isolated: no gateway is configured to link them together or to the physical network. The next posts will show how to configure an Edge Services Router or a Distributed Logical Router to link logical switches together and connect them to physical networks.

See the next post for the distributed routing configuration.

Posted on 12 Jan 2015 by Andrea.