
Architecture layout of NSX-T Data Center





As we all know, NSX is one of VMware's key products for network and security virtualization. It runs on any device, any cloud, and any application.



At present it can run and provide connectivity on most of the public clouds, such as Alibaba Cloud, IBM Cloud, AWS, and Azure.



Let's talk about the all-rounder of the NSX family, NSX Transformer (NSX-T), which can work with various hypervisors and platforms such as ESXi, KVM, containers, OpenStack, and many more.



To continue the conversation about NSX-T Data Center, let's discuss its major elements.



There are three main elements of NSX-T Data Center:



1) Management Plane
2) Control Plane
3) Data Plane



In NSX-T Data Center version 2.4, the Management Plane and Control Plane are converged, which means they are now available on a single VM, or in other words, in one OVF.



1) Management Plane:  



It is designed with advanced clustering technology, which allows the platform to process large-scale concurrent API requests. The NSX Manager of NSX-T Data Center provides the REST API and a web-based UI as the entry points for all user configuration.
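For example, here is a minimal sketch of reading the manager node details over that REST API with Python. The hostname and admin password are placeholders, and certificate validation is skipped only because a lab manager typically runs with a self-signed certificate:

# Sketch: query the NSX Manager REST API for basic node details.
# NSX_MANAGER and AUTH are placeholders for your own manager and credentials.
import requests

NSX_MANAGER = "nsx-mgr.lab.local"        # placeholder NSX Manager FQDN/IP
AUTH = ("admin", "VMware1!VMware1!")     # placeholder admin credentials

resp = requests.get(
    f"https://{NSX_MANAGER}/api/v1/node",
    auth=AUTH,
    verify=False,   # lab only: self-signed certificate
)
resp.raise_for_status()
node = resp.json()
print(node.get("hostname"), node.get("node_version"))

The same endpoint structure is what the web UI uses under the hood, which is why any configuration you can do in the UI can also be automated against the manager.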





2) Control Plane: 




The Control Plane consists of a 3-node cluster that is responsible for computing and distributing the runtime virtual networking and security state of the NSX-T Data Center environment.



The Control Plane is further separated into two components: the Central Control Plane (CCP) and the Local Control Plane (LCP).



This separation simplifies the work of the CCP and enables the platform to extend and scale the NSX-T environment.
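As a quick illustration, here is a hedged sketch of checking the converged cluster state through the manager REST API. The /api/v1/cluster/status endpoint and its field names are taken from NSX-T 2.4-era documentation, so verify them against your version; the manager hostname and credentials are placeholders as before:

# Sketch: read the management/control cluster status from the NSX Manager API.
# Field names may differ between NSX-T versions, hence the defensive .get() calls.
import requests

NSX_MANAGER = "nsx-mgr.lab.local"        # placeholder
AUTH = ("admin", "VMware1!VMware1!")     # placeholder

resp = requests.get(
    f"https://{NSX_MANAGER}/api/v1/cluster/status",
    auth=AUTH,
    verify=False,   # lab only
)
resp.raise_for_status()
status = resp.json()
print("Management cluster:", status.get("mgmt_cluster_status", {}).get("status"))
print("Control cluster:", status.get("control_cluster_status", {}).get("status"))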





3) Data Plane:




The Data Plane includes a group of ESXi or KVM hosts along with NSX Edge nodes. The servers and Edge nodes prepared for NSX-T are called transport nodes.



The transport nodes are responsible for distributing and forwarding network traffic. In NSX-T, vCenter is not a basic requirement, since the switches are no longer the distributed virtual switches created through vCenter.



NSX-T has its own switch, the NSX-managed virtual distributed switch (N-VDS), which decouples the data plane from the compute manager (vCenter).
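For instance, here is a minimal sketch of listing the transport nodes that make up the data plane through the manager REST API, using the same placeholder manager hostname and credentials as in the earlier examples:

# Sketch: enumerate the transport nodes (prepared ESXi/KVM hosts and Edge nodes)
# registered with the NSX Manager. Placeholders as in the earlier examples.
import requests

NSX_MANAGER = "nsx-mgr.lab.local"        # placeholder
AUTH = ("admin", "VMware1!VMware1!")     # placeholder

resp = requests.get(
    f"https://{NSX_MANAGER}/api/v1/transport-nodes",
    auth=AUTH,
    verify=False,   # lab only
)
resp.raise_for_status()
for tn in resp.json().get("results", []):
    print(tn.get("id"), tn.get("display_name"))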

