VMware has made various enhancements in NSX-T version 3.0.
Let's talk about the architecture changes in NSX-T 3.0:
- NSX Manager and its cluster communicate with their transport nodes through the APH (Appliance Proxy Hub) server.
- NSX Manager communicates with NSX-Proxy over port 1234.
- The CCP (Central Control Plane) communicates with NSX-Proxy over port 1235.
- RabbitMQ messaging is replaced with NSX-RPC between the management plane and the CCP.
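A quick way to sanity-check that a transport node can reach the manager's APH ports is a plain TCP connectivity test. The sketch below is illustrative only; the manager FQDN is a placeholder, and the ports 1234 and 1235 come from the list above:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "nsx-mgr.lab.local" is a placeholder manager FQDN, not a real host.
for port in (1234, 1235):
    print(port, port_reachable("nsx-mgr.lab.local", port))
```

Note this only confirms the TCP path is open (no firewall in the way); it says nothing about whether NSX-Proxy is actually registered with the manager.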
Alarms and Events
NSX-T 3.0 introduces alarms and events, which help with active monitoring of the different components in the environment.
Network Topology UI
NSX-T 3.0 adds a network topology view that diagrams each NSX-T component. The view shows the number of VMs connected to each segment, the number of segments, the Tier-1 and Tier-0 gateways, and the number of uplinks connected to the Tier-0 gateways.
NSX-T on VDS
In NSX-T 3.0, we can now leverage the vCenter VDS as well as the N-VDS.
ESXi hosts that are managed by a vCenter Server can now be configured with a VDS during transport node preparation.
For standalone ESXi host environments, NSX Manager installs the NSX-T virtual distributed switch (N-VDS) on the transport nodes.
Regular distributed port groups and NSX distributed port groups can coexist on the same VDS.
Running NSX-T on a VDS requires vCenter 7 and ESXi 7 hosts, and the switch must be a version 7 VDS. The MTU of the VDS should be at least 1600.
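The version and MTU constraints above can be captured in a small pre-flight validation helper. This is just a sketch; the input values are illustrative, not pulled from a live environment:

```python
def vds_ready_for_nsx(vcenter_major: int, esxi_major: int,
                      vds_version: int, mtu: int) -> list[str]:
    """Return a list of requirement violations (empty list = ready for NSX-T)."""
    problems = []
    if vcenter_major < 7:
        problems.append("vCenter must be version 7 or later")
    if esxi_major < 7:
        problems.append("ESXi hosts must be version 7 or later")
    if vds_version < 7:
        problems.append("VDS must be version 7")
    if mtu < 1600:
        problems.append("VDS MTU must be at least 1600 for overlay traffic")
    return problems

# Example: everything on version 7 but the MTU left at the 1500 default.
print(vds_ready_for_nsx(7, 7, 7, 1500))
```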
VRF Lite
New in NSX-T 3.0 is VRF Lite, which allows multiple routing instances to be configured without deploying additional Tier-0 gateways and edge nodes.
Unlike traditional VRF implementations, VRF Lite does not use the MPLS/MP-BGP protocols.
VRF Lite provides isolation of logical routing instances and extends them to peers that are compatible with VRF Lite technology.
Requirements for VRF Lite:
- A default Tier-0 gateway with external connectivity to a Layer 3 peer.
- A peer device that supports the 802.1Q protocol for VLAN tagging.
Limitations of VRF Lite:
- It is not compatible with VPN or Load Balancer services.
Ethernet VPN (EVPN) is an IETF standard with the following characteristics:
- Provides L2 VPN and L3 VPN services.
- Provides control plane and data plane separation.
- Supports several types of encapsulation, such as VXLAN and MPLS (Multiprotocol Label Switching).
- Uses Multiprotocol BGP (MP-BGP) for the control plane.
NSX Edge and Routing Enhancements
The following enhancements have been made to NSX Edge in 3.0:
- A new extra-large form factor with 16 vCPUs and 64 GB of RAM.
- NSX Edge node settings can be changed after deployment.
- A nice feature where an Edge VM can be configured to automatically power on in a vSphere cluster where high availability is disabled.
QoS (Quality of Service) Profiles
QoS profiles are only supported on Tier-1 gateways and are applied to the uplink ports.
Characteristics of QoS profiles:
- Profiles for different Tier-1 gateways on the same NSX Edge are isolated from each other.
- Separate profiles can be configured for ingress and egress traffic.
- A profile can be configured with a single rate limit.
- Rate limiting is applied to all traffic (unicast, BUM, IPv4/IPv6).
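Conceptually, single-rate limiting of this kind behaves like a classic token bucket: tokens refill at the configured rate up to a burst size, and traffic is allowed only while tokens remain. The sketch below is a generic illustration of that algorithm, not NSX code:

```python
import time

class TokenBucket:
    """Generic single-rate limiter: tokens refill at `rate` units per second,
    capped at `burst`; a packet of `size` units passes only if enough tokens
    are available."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.burst = burst
        self.tokens = burst              # start with a full bucket
        self.last = time.monotonic()

    def allow(self, size: float) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never exceeding the burst.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

bucket = TokenBucket(rate=1000.0, burst=1500.0)  # e.g. bytes/sec, burst in bytes
print(bucket.allow(1500))  # True: fits within the initial burst
print(bucket.allow(1500))  # False: tokens exhausted, must wait for refill
```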
Time-Based Firewall Rules:
One can use time-based firewall rules to configure security rules that are valid for a specific period.
- They are available for both distributed and gateway firewalls.
- They are configured at the firewall policy level.
- Both recurring and one-time firewall rules can be configured.
- They are only supported on ESXi hosts and NSX Edge nodes.
- For gateway firewalls, they are only configured on the Tier-1 gateway.
Use cases for time-based firewall rules:
- Allow users to access the internet during a specific time slot.
- Allow users to access specific services only during the maintenance window.
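The "internet during a specific time slot" use case boils down to attaching a recurring time window to a rule. The helper below only builds the JSON body for such a window; the field names are illustrative assumptions, not the verified schema, so consult the NSX-T REST API reference before using anything like this against a real manager:

```python
import json

def build_schedule(display_name: str, days: list[str],
                   start_time: str, end_time: str,
                   recurring: bool = True) -> dict:
    """Build a hypothetical time-window payload for a time-based firewall rule.
    All field names here are assumptions for illustration, not the real API schema."""
    return {
        "display_name": display_name,
        "days": days,               # e.g. ["MONDAY", ..., "FRIDAY"]
        "start_time": start_time,   # "HH:MM", 24-hour (assumed format)
        "end_time": end_time,
        "recurring": recurring,     # False would model a one-time window
    }

# A recurring weekday lunch-hour internet window.
payload = build_schedule(
    "internet-lunch-window",
    ["MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY"],
    "12:00", "13:00")
print(json.dumps(payload, indent=2))
```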
Requirements for time-based firewall rules:
- The NTP service should be running on all participating transport nodes.
- Validate the NTP setting on ESXi transport nodes using /etc/init.d/ntpd status.
- On Edge nodes, validate the service using "get service ntp".
- Validate that the NTP client can successfully communicate with the configured NTP server using ntpq -p.
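In `ntpq -p` output, the peer the daemon is currently synchronized to is prefixed with `*`, so a small helper can confirm sync state from the captured command output. A minimal sketch (the sample output below is abbreviated and illustrative):

```python
def ntp_synced(ntpq_output: str) -> bool:
    """Return True if any peer line in `ntpq -p` output starts with '*',
    meaning the node is synchronized to that NTP server."""
    return any(line.startswith("*") for line in ntpq_output.splitlines())

# Abbreviated example of `ntpq -p` output; 10.0.0.10 is a placeholder server.
sample = """\
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*10.0.0.10       .GPS.            1 u   33   64  377    0.412   -0.051   0.112
+10.0.0.11       10.0.0.10        2 u   45   64  377    0.680    0.023   0.090
"""
print(ntp_synced(sample))  # True: the node is synced to 10.0.0.10
```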