
Posts

Showing posts from October, 2019

NSX-T Data Center 2.4 Management and Control Plane agents

In the previous article I illustrated the NSX-T DC 2.4 management plane and central control plane, which are now converged into a single NSX Manager node. MPA (Management Plane Agent): this agent runs on each transport node and communicates with the NSX Manager. NETCPA: it provides communication between the central control plane and the hypervisor. The management plane and the central control plane (CCP) run on the same virtual appliance, but they perform different functions; their technical aspects are covered below. The NSX cluster can scale to a maximum of 3 NSX Manager nodes, each running the management plane and the CCP. Communication process: the nsx-mpa agent on the transport node communicates with the NSX Manager over a RabbitMQ channel on port 5671, while the CCP communicates with the transport node through nsx-proxy on port 1235. The task of the NSX Manager is to push the configuration to the CCP, and the CCP configures the data pla…
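As a quick way to confirm these channels on an ESXi transport node, here is a minimal PowerCLI sketch; it assumes an existing Connect-VIServer session and a hypothetical host name esx01.lab.local, and simply lists established connections toward ports 5671 (RabbitMQ to the NSX Manager) and 1235 (nsx-proxy to the CCP).
=======================================================================
# Hedged sketch: verify MPA (5671) and CCP/nsx-proxy (1235) connections on a transport node.
# Assumes Connect-VIServer has already been run; "esx01.lab.local" is a hypothetical host name.
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01.lab.local") -V2
$esxcli.network.ip.connection.list.Invoke() |
    Where-Object { $_.ForeignAddress -match ':(5671|1235)$' } |
    Select-Object Proto, LocalAddress, ForeignAddress, State
=======================================================================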

NSX-T Control Plane Components

NSX-T Control Plane Components In NSX-T Data Center the control plane is split into two components: the Central Control Plane (CCP) and the Local Control Plane (LCP). Let's discuss the Central Control Plane first. Central Control Plane (CCP): the CCP computes and disseminates the ephemeral runtime state based on the configuration from the management plane and the topology reported by the data plane elements. Local Control Plane (LCP): the LCP runs at the compute endpoint, i.e. on the transport node (ESXi, KVM, bare metal). It computes the local ephemeral runtime state for that endpoint based on updates from the CCP and local information. The LCP pushes stateless configuration to the forwarding engines in the data plane and reports information back to the CCP. This offloads work from the CCP and enables the platform to scale to thousands of endpoints of different types (hypervisors, containers, hosts, bare metal or public cloud).
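To see how the converged manager reports both planes, here is a small, hedged sketch; it is only an illustration that assumes the NSX-T REST API on a placeholder manager nsxmgr.lab.local with basic authentication, using GET /api/v1/cluster/status to read the management and control cluster status.
=======================================================================
# Hedged sketch: query management and control cluster status from the NSX-T API.
# "nsxmgr.lab.local" and the credentials are placeholders; on PowerShell 7 add
# -SkipCertificateCheck if the manager still uses a self-signed certificate.
$pair   = "admin:VMware1!VMware1!"
$basic  = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$status = Invoke-RestMethod -Method Get -Uri "https://nsxmgr.lab.local/api/v1/cluster/status" `
              -Headers @{ Authorization = "Basic $basic" }
$status.mgmt_cluster_status.status          # e.g. STABLE
$status.control_cluster_status.status       # e.g. STABLE
=======================================================================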

Architecture layout of NSX-T Data Center

Architecture layout of NSX-T Data Center As we all know, NSX is one of VMware's key products for network and security. It runs on any device, any cloud and any application. At present it can run and provide connectivity on most public clouds, such as Alibaba, IBM Cloud, AWS or Azure. Let's talk about the all-rounder of NSX, NSX Transformer (NSX-T), which can communicate with various hypervisors and platforms such as ESXi, KVM, containers, OpenStack and many more. To continue the conversation about NSX-T Data Center, let's discuss its major elements. There are 3 main elements of NSX-T Data Center: 1) Management Plane 2) Control Plane 3) Data Plane. In NSX-T Data Center version 2.4 the Management and Control Planes are converged, which means they are now available on a single VM, or you could say in one OVF. 1) Management Plane: it is designed with advanced clustering technology, which allows the platform to process l…
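As a rough illustration of how the management plane exposes the data plane endpoints registered with it, the hedged sketch below lists transport nodes through the NSX-T REST API (GET /api/v1/transport-nodes); the manager FQDN and credentials are placeholders.
=======================================================================
# Hedged sketch: list transport nodes (data plane endpoints) known to the management plane.
# Placeholder manager FQDN and credentials; add -SkipCertificateCheck on PowerShell 7 for self-signed certs.
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("admin:VMware1!VMware1!"))
$nodes = Invoke-RestMethod -Method Get -Uri "https://nsxmgr.lab.local/api/v1/transport-nodes" `
             -Headers @{ Authorization = "Basic $basic" }
$nodes.results | Select-Object display_name, node_id
=======================================================================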

CDO Mode in NSX Controller

CDO (Controller Disconnected Operation) Mode in NSX Controller. CDO mode ensures data plane connectivity in a multisite environment when the primary site loses connectivity. You can enable CDO mode on the secondary site to avoid any temporary data plane connectivity issues. When the primary site is down or not reachable, the CDO logical switch is used only for control plane purposes, and it is therefore not visible under the Logical Switches tab.

About NSX VTEP Reports

NSX VTEP Reports The NSX Controller provides the VXLAN directory services. There are basically 3 types of tables built from VTEP reports: 1) MAC Table 2) ARP Table 3) VTEP Table. MAC Table: the MAC table includes the VNI, the MAC address and the VTEP ID that reported it. If an unknown unicast frame is received by a VTEP, the VTEP sends a MAC table request to the NSX Controller for the destination MAC address. If the NSX Controller has the MAC address in its MAC table, it replies to the VTEP with information on where to forward the frame. If the NSX Controller does not have the MAC address in the MAC table, the VTEP floods the frame to the other VTEPs. ARP Table: the ARP table is used to suppress broadcast traffic. IP reports generate the ARP table: the VTEPs send a copy of each MAC address and IP mapping that they have, and this report is called the IP report. The NSX Controller creates an ARP ta…
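Purely as a conceptual illustration of the lookup-then-flood behaviour described above (not NSX code), here is a small PowerShell sketch with an invented controller MAC table keyed by VNI and MAC address.
=======================================================================
# Conceptual illustration only: controller MAC table lookup with flood fallback.
# The table below is invented sample data, not real NSX output.
$macTable = @{
    "5001|00:50:56:aa:bb:01" = "VTEP-10.10.10.1"
    "5001|00:50:56:aa:bb:02" = "VTEP-10.10.10.2"
}
function Resolve-DestinationVtep {
    param([int]$Vni, [string]$Mac)
    $key = "$Vni|$Mac"
    if ($macTable.ContainsKey($key)) {
        "Forward frame to $($macTable[$key])"          # controller knows the MAC
    } else {
        "MAC unknown to controller - flood frame to the other VTEPs on VNI $Vni"
    }
}
Resolve-DestinationVtep -Vni 5001 -Mac "00:50:56:aa:bb:02"   # known -> forward
Resolve-DestinationVtep -Vni 5001 -Mac "00:50:56:ff:ff:ff"   # unknown -> flood
=======================================================================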

Popular posts from this blog

Changing the FQDN of the vCenter appliance (VCSA)

This article explains how to change the system name, or FQDN, of the vCenter appliance (VCSA) 6.x. You will not find any way to change the FQDN from the vCenter GUI, either from the VAMI page or from the Web Client, as the option to change the hostname is always greyed out. The remaining option is the command line of the VCSA appliance. The steps below make it possible to change the FQDN of the VCSA from the command line. Access the VCSA from the console or from a PuTTY session. Log in with root permissions. Run the following command at the VCSA command prompt: /opt/vmware/share/vami/vami_config_net. Opt for option 3 (Hostname). Change the hostname to the new name. Reboot the VCSA appliance. After the reboot you will have successfully changed the FQDN of the VCSA. Note: the above procedure is unsupported by VMware and may impact your SSL certificate, causing problems while logging in to the vSphere Web Client. If you are using a self-signed certificate, you can regenerate the certificate with the…
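If helpful, a small hedged sketch to verify the new name from a Windows admin workstation after the reboot is shown below; "vcsa-new.lab.local" is a placeholder FQDN and the cmdlets assume the built-in DnsClient and NetTCPIP modules.
=======================================================================
# Hedged sketch: confirm the new VCSA FQDN resolves and the HTTPS/Web Client port answers.
# "vcsa-new.lab.local" is a placeholder; run from a Windows PowerShell session.
Resolve-DnsName -Name "vcsa-new.lab.local"                        # forward lookup of the new name
Test-NetConnection -ComputerName "vcsa-new.lab.local" -Port 443   # vSphere Web Client / HTTPS
=======================================================================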

Unable to poweron the VM. (Failed to lock the file)

I have encountered many issues where, after an upgrade or migration, we were unable to power on a VM. Figure 1: An error was received from the ESX host while powering on VM HSSVSQL01. Failed to start the virtual machine. Cannot open the disk '/vmfs/volumes/578d835c-18b2c97a-9b0d-0025b5f13920/SAMPLE1_cloud/000000.vmdk' or one of the snapshot disks it depends on. Failed to lock the file. In Figure 1 above, an error is prompted while powering on the VM. There are several reasons why a VM may fail to power on, and you can find many articles on this. In this article we will discuss how to resolve this issue. Please use the steps below to resolve the disk lock issue. Check whether the VM is running on a snapshot if it is showing the error "VM Consolidation required". Check Snapshot Manager to see whether it shows any snapshot; if yes, try to delete the snapshot. Verify the same from the ESXi cl…
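Before digging into the host, a quick hedged PowerCLI sketch to spot VMs that report "Consolidation needed" is shown below; it only reads the vSphere runtime property and assumes an existing Connect-VIServer session.
=======================================================================
# Hedged sketch: list VMs whose disks need consolidation (often seen with leftover snapshot locks).
# Assumes Connect-VIServer has already been run.
Get-VM |
    Where-Object { $_.ExtensionData.Runtime.ConsolidationNeeded } |
    Select-Object Name, PowerState
=======================================================================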

VM Creation Date & Time from Powercli

Most of the time we have several requirements when we talk about an IT environment, such as design, deployment, compliance checks or security auditing of the environment. During a security audit we have to provide various pieces of information to the security team to pass the audit. One of them is auditing the creation date and time of virtual machines. In this post we will explore how to get the creation date and time of virtual machines hosted in vCenter or ESXi. To get the details we will use VMware PowerCLI. By default there is no built-in PowerCLI cmdlet for this, so here we will add a function for the VM creation date. Below is the function that needs to be copied and pasted into PowerCLI. ======================================================================= function Get-VMCreationTime {     $vms = get-vm     $vmevts = @()     $vmevt = new-object PSObject     for…
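The excerpt above is cut off, so as a rough, hedged reconstruction (not necessarily the author's exact function), the sketch below pulls each VM's creation time from vCenter events with Get-VIEvent; it assumes an existing Connect-VIServer session.
=======================================================================
# Hedged reconstruction of a VM-creation-time function using vCenter events.
# Event database retention limits how far back creation events can be found.
function Get-VMCreationTime {
    Get-VM | ForEach-Object {
        $vm  = $_
        $evt = Get-VIEvent -Entity $vm -MaxSamples ([int]::MaxValue) |
               Where-Object {
                   $_ -is [VMware.Vim.VmCreatedEvent]  -or
                   $_ -is [VMware.Vim.VmClonedEvent]   -or
                   $_ -is [VMware.Vim.VmDeployedEvent] -or
                   $_ -is [VMware.Vim.VmRegisteredEvent]
               } |
               Sort-Object -Property CreatedTime |
               Select-Object -First 1
        [pscustomobject]@{
            VMName      = $vm.Name
            CreatedTime = if ($evt) { $evt.CreatedTime } else { $null }
            CreatedBy   = if ($evt) { $evt.UserName }    else { $null }
        }
    }
}
# Usage: list all VMs sorted by creation time.
Get-VMCreationTime | Sort-Object CreatedTime | Format-Table -AutoSize
=======================================================================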