Posts

NSX-T Data Center 2.4 Management and Control Plane agents

As illustrated in the previous article, in NSX-T DC 2.4 the management plane and the central control plane are converged into a single NSX Manager node.





MPA (Management Plane Agent): This agent runs on each transport node and communicates with the NSX Manager.

NETCPA: It provides communication between the central control plane and the hypervisor.

The management plane and the central control plane (CCP) run on the same virtual appliance, but they perform different functions; their technical aspects are covered below.

The NSX cluster can scale to a maximum of three NSX Manager nodes, each running the management plane and the CCP.



Communication process


The nsx-mpa agent on the transport node communicates with the NSX Manager over a RabbitMQ channel on port 5671.

The CCP communicates with the transport node through nsx-proxy on port 1235.

The task of the NSX Manager is to push the configuration to the CCP. The CCP then configures the data plane through nsx-proxy, which is one of the components of the LCP (Local Control Plane).
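As a quick sanity check of these two channels from a transport node, the sketch below probes TCP reachability of the manager on ports 5671 (RabbitMQ/MPA) and 1235 (nsx-proxy/CCP). The hostname is a placeholder, not from this post.

```python
# Sketch: probe TCP reachability of the MPA (5671) and nsx-proxy (1235)
# channels. The hostname below is a placeholder.
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

manager = "nsx-manager.example.local"  # hypothetical NSX Manager FQDN
print("MPA channel (5671):", port_open(manager, 5671))
print("CCP channel (1235):", port_open(manager, 1235))
```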

NSX-T Control Plane Components






In NSX-T Data Center, the control plane is split into two components: the Central Control Plane (CCP) and the Local Control Plane (LCP).
Let's discuss the Central Control Plane first.
Central Control Plane (CCP)
The CCP computes and disseminates ephemeral runtime state based on the configuration from the management plane and the topology reported by the data plane elements.


Local Control Plane (LCP)
The LCP runs on compute endpoints such as transport nodes (ESXi, KVM, bare metal). It computes the local ephemeral runtime state for its endpoint based on updates from the CCP and local data plane information.
The LCP pushes stateless configuration to the forwarding engines in the data plane and reports information back to the CCP.
This offloads work from the CCP and enables the platform to scale to thousands of endpoints of different types (hypervisors, containers, hosts, bare metal, or public clouds), as sketched below.
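To make the division of labor concrete, here is a toy model of the CCP-to-LCP flow. This is purely illustrative Python, not NSX code; the class and method names are invented for the sketch.

```python
# Toy model: the CCP computes runtime state and fans it out to every
# registered LCP; each LCP programs only its local forwarding engine.
class LocalControlPlane:
    def __init__(self, node):
        self.node = node
        self.forwarding_table = {}  # stands in for the local data plane

    def receive_update(self, vni, entries):
        # Compute local state and push stateless config to the data plane
        self.forwarding_table[vni] = dict(entries)
        return f"{self.node}: programmed {len(entries)} entries for VNI {vni}"

class CentralControlPlane:
    def __init__(self):
        self.lcps = []

    def register(self, lcp):
        self.lcps.append(lcp)

    def disseminate(self, vni, entries):
        return [lcp.receive_update(vni, entries) for lcp in self.lcps]

ccp = CentralControlPlane()
ccp.register(LocalControlPlane("esxi-01"))
ccp.register(LocalControlPlane("kvm-01"))
print(ccp.disseminate(5001, {"00:50:56:aa:bb:01": "192.168.10.11"}))
```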

Architecture layout of NSX-T Data Center






As we all know, NSX is one of VMware's key products in networking and security. It runs on any device, any cloud, and any application.


At present, it can run on and connect to most public clouds, such as Alibaba Cloud, IBM Cloud, AWS, or Azure.


Let's talk about the all-rounder of the NSX family, NSX Transformer (NSX-T), which can communicate with various endpoints like ESXi, KVM, containers, OpenStack, and many more.


To continue the conversation about NSX-T Data Center, let's discuss its major elements.


There are three main elements of NSX-T Data Center:


1) Management Plane
2) Control Plane
3) Data Plane


In NSX-T Data Center version 2.4, the Management and Control Planes are converged, meaning they are now available on a single VM, or in other words, in one OVF.


1) Management Plane:


It is designed with advanced clustering technology, which allows the platform to process large-scale concurrent API requests. The NSX Manager of NSX-T DC provides a REST API and a web-based user interface.
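As a small illustration of that REST API, the hedged sketch below queries the manager's cluster status endpoint with HTTP basic auth. The hostname and credentials are placeholders, and verify=False is only appropriate for lab setups with self-signed certificates.

```python
# Minimal sketch: call the NSX-T Manager REST API.
# Hostname and credentials are placeholders, not from the original post.
import requests

NSX_MANAGER = "nsx-manager.example.local"  # hypothetical FQDN
url = f"https://{NSX_MANAGER}/api/v1/cluster/status"

# verify=False only for labs with self-signed certificates
resp = requests.get(url, auth=("admin", "password"), verify=False)
resp.raise_for_status()
print(resp.json())
```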

CDO Mode in NSX Controller

CDO (Controller Disconnected Operation) Mode in NSX Controller.






CDO mode ensures data plane connectivity in a multisite environment when the primary site loses connectivity. You can enable CDO mode on the secondary site to avoid any temporary data plane connectivity issues.

When the primary site is down or unreachable, the CDO logical switch is used only for control plane purposes, and therefore it is not visible under the Logical Switches tab.

About NSX VTEP Reports






The NSX Controller provides VXLAN directory services. There are basically three types of tables involved:



1) MAC Table



2) ARP Table



3) VTEP Table







MAC Table: 



The MAC table includes the VNI, the MAC address, and the VTEP IP that reported it. If an unknown unicast frame is received by a VTEP, the VTEP sends a MAC table request to the NSX Controller for the destination MAC address.




If the NSX Controller has the MAC address in the MAC table, it replies to the VTEP with information on where to forward the frame.




If the NSX Controller does not have the MAC address in the MAC table, the VTEP floods the frame to the other VTEPs. A minimal sketch of this lookup logic follows.
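Conceptually, the controller-assisted lookup and the flooding fallback can be modeled as below. The table contents, addresses, and function names are illustrative, not NSX internals.

```python
# Toy model of the MAC table lookup described above.
mac_table = {
    # (vni, mac) -> VTEP IP that reported the MAC
    (5001, "00:50:56:aa:bb:01"): "192.168.10.11",
    (5001, "00:50:56:aa:bb:02"): "192.168.10.12",
}

def forward_unknown_unicast(vni, dst_mac, all_vteps, local_vtep):
    """Return the list of VTEP IPs the frame should be sent to."""
    vtep = mac_table.get((vni, dst_mac))
    if vtep is not None:
        return [vtep]  # controller knows where the MAC lives
    # Controller miss: flood the frame to every other VTEP in the VNI
    return [v for v in all_vteps if v != local_vtep]

vteps = ["192.168.10.10", "192.168.10.11", "192.168.10.12"]
print(forward_unknown_unicast(5001, "00:50:56:aa:bb:02", vteps, "192.168.10.10"))
print(forward_unknown_unicast(5001, "00:50:56:aa:bb:99", vteps, "192.168.10.10"))
```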






ARP Table:

The ARP table is used to suppress broadcast traffic.




IP reports generate the ARP table: the VTEPs send a copy of each MAC address to IP mapping that they have. This report is called the IP report.




The NSX Controller creates an ARP table from the information in the IP reports.




The ARP table includes the MAC-to-IP address mapping and the VTEP IP that reported it.
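A toy sketch of the ARP suppression flow, with the same caveat that the data and names are illustrative only:

```python
# Toy model of ARP suppression: the VTEP intercepts an ARP request and
# answers locally when the controller's ARP table has a mapping.
arp_table = {
    # (vni, ip) -> MAC, learned from the VTEPs' IP reports
    (5001, "10.0.0.11"): "00:50:56:aa:bb:01",
}

def intercept_arp_request(vni, target_ip):
    mac = arp_table.get((vni, target_ip))
    if mac is not None:
        return f"local ARP reply: {target_ip} is at {mac}"
    return "table miss: broadcast the ARP request as usual"

print(intercept_arp_request(5001, "10.0.0.11"))
print(intercept_arp_request(5001, "10.0.0.99"))
```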




The VTEP intercepts all ARP re…

NSX VXLAN Logical Switch Replication mode



The NSX Controller is the central control point for all logical switches within a network and maintains information about all virtual machines, hosts, logical switches, and VXLANs.


The controller supports two new logical switch control plane modes:

1) Unicast
2) Hybrid

The replication mode tells NSX how to handle BUM (broadcast, unknown unicast, and multicast) traffic sent from virtual machines.


Multicast Mode

Control plane operation is based on multicast flooding and learning. BUM traffic replication is based on L2 and L3 multicast. It requires IGMP and multicast routing.

Unicast Mode

Control plane operation is based on the NSX Controller cluster. BUM traffic replication is based on unicast, one destination at a time. Hosts depend on a UTEP (unicast tunnel endpoint) for replicating traffic to remote segments.

Hybrid Mode

BUM traffic replication is based on unicast and L2 multicast. Local replication is offloaded to the physical network; remote replication is based on unicast.
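The difference between the three modes can be summarized in a short decision sketch (illustrative only; "local" here means VTEPs on the source VTEP's subnet):

```python
# Illustrative sketch of how each mode replicates one BUM frame.
from enum import Enum

class Mode(Enum):
    MULTICAST = 1
    UNICAST = 2
    HYBRID = 3

def replicate_bum(mode, local_vteps, remote_uteps):
    if mode is Mode.MULTICAST:
        # Physical network replicates via the multicast group
        return ["one copy to the multicast group"]
    if mode is Mode.UNICAST:
        # One unicast copy per local VTEP plus one per remote-segment UTEP
        return [f"unicast to {v}" for v in local_vteps + remote_uteps]
    # HYBRID: L2 multicast locally, unicast to remote UTEPs
    return ["L2 multicast on the local segment"] + [
        f"unicast to {u}" for u in remote_uteps
    ]

print(replicate_bum(Mode.HYBRID, ["192.168.10.11"], ["192.168.20.10"]))
```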





Overview of VXLAN

VXLAN (Virtual Extensible Local Area Network)


A logical switch reproduces switching functionality (unicast, multicast, or broadcast) in a virtual environment that is completely decoupled from the underlying hardware.

Logical switches are similar to VLANs in that they provide network connections to which you can attach virtual machines. The VMs can communicate with each other over VXLAN if they are connected to the same logical switch.
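To make the encapsulation concrete, here is a minimal sketch that packs the 8-byte VXLAN header defined in RFC 7348 (flags, 24 reserved bits, a 24-bit VNI, 8 reserved bits). This illustrates the wire format only, not NSX code.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348."""
    flags = 0x08  # I flag set: the VNI field is valid
    # Layout: flags (1B) + reserved (3B), then VNI (3B) + reserved (1B)
    return struct.pack("!B3x", flags) + struct.pack("!I", vni << 8)

hdr = vxlan_header(5001)
assert len(hdr) == 8
print(hdr.hex())  # 0800000000138900 (0x001389 == 5001)
```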


About VXLAN LIF

The DLR supports logical switches that are backed by VXLAN.


First-hop routing is handled on the host, and the traffic is switched to the appropriate logical switch. If the destination is on another host, the Ethernet frame is placed in a VXLAN frame and forwarded.
Only one VXLAN LIF can connect to a logical switch. The next hop can be an NSX Edge services gateway.
A VXLAN LIF can span all distributed switches in the transport zone.

VMware vForum 2019 - Online

The new year is well under way, and with 2019 in full swing, tech conference chatter will start to heat up. While VMworld is just over six months away, there are still opportunities to get your dose of VMware updates well before then. One of those opportunities is VMware’s vForum Online, which will take place on April 24th, 2019.


Register here:
https://secure.vmware.com/vFORUMOnline_REG

"Unknown" status showing in host compliance status

It is a common practice to use host profiles in an environment to maintain compliance across all the ESXi hosts in a cluster.

There are mainly three compliance statuses for a host profile attached to an ESXi host:

1) Compliant
2) Not-compliant
3) Unknown

As you know, the Compliant status appears only when all the features and settings of the host profile and the ESXi host match perfectly.
The Not-compliant status appears when the host does not fully meet the host profile's requirements and some features are missing.
The Unknown status is the suspect one: it can appear even when the ESXi host should be Compliant, or sometimes Not-compliant.

There are several identified causes for this.

Most of the time, everything looks fine from the ESXi UI and the host profile, with all parameters met successfully, and even then the host profile status shows as "Unknown".

In my case, I found one glitch where the DVS configuration was not synced comple…

RAM Disk Full Due To Inodes In ESXi Host

Last week I ran into an issue where one of my ESXi hosts was throwing an error while creating a virtual machine.

I checked the tasks and events of the ESXi host and found the following error generated while creating the virtual machine: "A general system error occured: Failed to open "/var/log/vmware/journel/ for write: There is no space". Digging further, I identified from the Tasks and Events section in vCenter which ESXi host was generating this error.
I tried to SSH to the ESXi host in question, but it was inaccessible via PuTTY, even though the host itself was up and running fine. The last option left was to access the ESXi host from its management console, which is iLO, since ESXi was installed on an HP server.

You can also try SSH via another ESXi host or a Linux machine using the command ssh -T servername. This will not give you a prompt, but you can still type commands and get their output. That was not giving any luck at the time either, so we used iLO for further troubleshooting.
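Since the root cause here is inode exhaustion rather than raw disk space, one quick check is to compare used versus total inodes on the affected mount. The sketch below uses Python's os.statvfs with an example path; on an ESXi host the built-in tooling serves the same purpose, so treat this as an illustration of the check, not the exact procedure we used.

```python
# Sketch: report inode usage on a mount point (the path is an example).
import os

def inode_usage(path="/var/log"):
    st = os.statvfs(path)
    total, free = st.f_files, st.f_ffree
    used_pct = 100.0 * (total - free) / total if total else 0.0
    return total, free, used_pct

total, free, pct = inode_usage()
print(f"inodes: total={total} free={free} used={pct:.1f}%")
```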
Took th…