
Posts

Showing posts with the label NSX-T

Quick view on NSX Multi-Tenancy

NSX-T brings an evolution to the SDN space, whether it's networking, security, or even monitoring of the environment. Over its long journey, from VMware's acquisition of the product from Nicira Networks to date, we have seen several enhancements evolve in this product. From NSX-V to NSX-T, and now rebranded to NSX starting from version 4.x, the product is set to meet customer expectations, whether for a startup or a multi-billion-dollar Fortune 500 organization. In this article, we will discuss one of the new offerings in NSX 4.1: NSX Projects, or multi-tenancy. Before starting, let's draft a hypothetical or fictitious scenario... In an organization called Virtualvmx, there are three tenants: Alpha, Beta, and Gama. All three tenants have compliance guidelines stating that one tenant should not expose its networking components inside NSX to the other tenants, including Layer 2 networking (segments), security policies, and Tier-1 routers…
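As a quick illustration of how a tenant such as Alpha could be carved out, here is a minimal sketch using the NSX Policy API. The manager address, credentials, project ID, and Tier-0 path are hypothetical examples, and the exact payload fields should be verified against the NSX 4.1 API guide.

    # Hypothetical sketch: create an NSX Project named "alpha" via the Policy API.
    # Replace nsx-mgr.virtualvmx.local and the credentials with real values.
    curl -k -u admin:'<password>' \
      -X PATCH \
      -H 'Content-Type: application/json' \
      https://nsx-mgr.virtualvmx.local/policy/api/v1/orgs/default/projects/alpha \
      -d '{
            "id": "alpha",
            "display_name": "Alpha",
            "tier_0s": ["/infra/tier-0s/t0-gateway-01"]
          }'

Segments, Tier-1 gateways, and security policies created under a project are then scoped to that project, which is the isolation the Alpha, Beta, and Gama tenants need from each other.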

Tunnel Endpoints

Tunnel endpoints are essential in VMware NSX-T for managing network connectivity across different environments. They handle the encapsulation and decapsulation of network traffic as it moves between overlay and underlay networks, and they are used for both east-west and north-south traffic. Here are the key aspects of tunnel endpoints in NSX-T.
Geneve Tunneling Protocol: NSX-T uses the Geneve tunneling protocol for encapsulating overlay traffic. Geneve offers a flexible and extensible framework, ensuring efficient and secure communication among virtual machines (VMs) and NSX-T logical networks.
Tunnel Endpoint (TEP) IP Addresses: Each hypervisor host or NSX-T Edge node is assigned a unique TEP IP address as its tunnel endpoint. These addresses are used for encapsulating and decapsulating overlay traffic between different endpoints.
Overlay Transport Zone (OTZ): An Overlay Transport Zone defines the scope of network communication within an overlay infrastructure. TEP IP addresses…
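To see the TEP interfaces on an ESXi transport node and confirm that overlay traffic can actually pass between TEPs at the required MTU, a quick console check like the sketch below can help. The vmkernel interface numbers and the remote TEP address are examples and will differ in your environment.

    # List vmkernel interfaces; TEP interfaces typically appear as vmk10, vmk11, ...
    esxcfg-vmknic -l

    # Test TEP-to-TEP reachability over the NSX TEP netstack with a large,
    # non-fragmentable packet to validate the overlay MTU (example remote TEP shown).
    vmkping ++netstack=vxlan -d -s 1572 192.168.130.12

Geneve itself rides on UDP port 6081, so that port must also be allowed between TEPs in the underlay.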

Future of NSX

NSX-T is VMware's network virtualization and security platform, which enables the creation of virtual networks and security policies that are decoupled from physical network hardware. VMware has been investing heavily in NSX-T in recent years, and it is considered a critical component of VMware's broader cloud management and automation portfolio. The future of NSX-T looks promising, as it continues to evolve and expand its capabilities to support modern cloud and application architectures; some of the key trends likely to shape that future are covered below. NSX-T is an essential component of VMware's vision for software-defined networking (SDN) and network virtualization, which aims to make it easier for organizations to build and manage complex network environments. Some of the key features and capabilities of NSX-T include: Network virtualization: NSX-T enables the creation of virtual networks that are decoupled from physical network hardware. This allows organizations…

About Bidirectional Forwarding Detection (BFD)

Bidirectional Forwarding Detection (BFD) is a protocol designed for fast detection of forwarding-path failures across various media types, encapsulations, topologies, and routing protocols. BFD helps provide a consistent failure-detection method. In an NSX-T environment, the Edge nodes in an edge cluster exchange BFD keep-alives on their management and tunnel (TEP/overlay) interfaces to maintain proper communication among the Edge and host transport nodes.
Fig:1 (Credit: vmware.com)
For example, when the standby Edge node of a Tier-0 gateway fails to receive keep-alives on both the management and tunnel interfaces, it is not going to become active, since it is already in standby state; what it has lost is its interface communication on either the management or the overlay side.
Some features of BFD: High availability uses BFD to detect forwarding-path failures. BFD provides low-overhead fault detection even on physical media…
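To check the BFD sessions an Edge node has formed (for example, toward its peer Edge node and toward the ToR), the Edge CLI can be queried directly; this is a sketch, and the exact output columns vary by NSX-T version.

    # On the NSX-T Edge node CLI (admin user):
    get bfd-sessions
    # Each session should list its local/remote addresses and report a state of "up".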

NSX-T Data Center Firewalls

NSX-T Data Center includes two types of firewalls:
Distributed Firewall (for east-west traffic)
Gateway Firewall (for north-south traffic)
Fig:1 (credit: vmware.com)
The distributed firewall is a hypervisor kernel-embedded stateful firewall: it resides in the kernel of the hypervisor, outside the guest OS of the VM, and it controls the I/O path to and from the vNIC.
The gateway firewall is used for north-south traffic between the NSX-T gateways and the physical network: it is also called the perimeter firewall, protecting traffic to and from the physical environment. It applies to Tier-0 and Tier-1 gateway uplinks and service interfaces, and it supports both Tier-0 and Tier-1 gateways. If it is applied to a Tier-0 or Tier-1 gateway, the HA mode of that gateway must be active-standby. It is a centralized stateful service enforced on the NSX Edge node.
Let's discuss both of the above firewall types in detail: Distributed Firewall (DFW)…
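As an illustration of where DFW rules actually land, you can inspect the filters attached to a VM's vNIC directly on the ESXi host; the VM name and filter name below are examples only.

    # On the ESXi host: list dvfilters and find the filter attached to the VM's vNIC
    summarize-dvfilter | grep -A 10 web-vm-01

    # Show the DFW rules programmed on that vNIC filter (example filter name)
    vsipioctl getrules -f nic-12345-eth0-vmware-sfw.2

This is what "kernel-embedded, outside the guest OS" means in practice: the rules are enforced on the vNIC's dvfilter slot, not inside the VM.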

Collecting Logs from NSX-T Edge nodes using CLI

This article explains how to extract logs from NSX-T Edge nodes using the CLI. Let's view the steps involved:
1) Log in to the NSX-T Edge node CLI with admin credentials.
2) Use "get support-bundle" for log extraction. The get support-bundle command will collect the complete logs from the NSX-T Manager/Edge node.
nsx-manager-1> get support-bundle file support-bundle.tgz
3) The last step is to use the "copy file support-bundle.tgz url" command. copy file will forward the collected logs from the NSX-T manager to the destination (URL) host, from where you can download them.
copy file support-bundle.tgz url scp://root@192.168.11.15/tmp
Here, the URL points to the ESXi host (192.168.11.15), and the logs are copied to its /tmp directory, from where you can retrieve them for further review. Happy Learning. :)
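Putting the steps together, a complete console session looks roughly like the sketch below; the prompt name, IP address, and credentials are the example values from above and should be replaced with your own.

    nsx-edge-01> get support-bundle file support-bundle.tgz
    nsx-edge-01> copy file support-bundle.tgz url scp://root@192.168.11.15/tmp
    # You are prompted for the root password of 192.168.11.15;
    # the bundle then lands in /tmp on that host for download and review.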

NSX-T BGP Neighbor validation

BGP is one of the most popular options for establishing routing adjacencies between NSX and existing networks. It can be configured on the Tier-0 Logical Router. This article demonstrates various ways to validate the BGP neighbor status from the T0 to its associated ToR switches in the rack. Let's get started. The methods for validating BGP status are:
Using the NSX-T Manager UI
From the NSX-T Edge CLI
First things first, let's discuss the NSX-T Manager UI method:
Log in to the NSX-T Manager UI
Click on MANAGER mode
Click on Networking
Select the desired T0 Gateway > Actions > Generate BGP Summary
This will show the BGP connection status. If the connection status shows "ESTABLISHED", the T0 router has successfully established peering with the ToR switch.
The second method for validating the BGP connection status is from the NSX-T Edge nodes. Steps involved:
Log in to the NSX-T Edge node using SSH
Get into the logical-router…
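On the Edge node itself, the checks typically look like the sketch below; the VRF ID and the prompt names are examples, and the service router you need may have a different ID in your environment.

    nsx-edge-01> get logical-routers            # note the VRF ID of the Tier-0 SR
    nsx-edge-01> vrf 1                          # enter that VRF (example ID)
    nsx-edge-01(tier0_sr)> get bgp neighbor summary
    # Neighbors shown in the Established state are successfully peered with the ToR switches.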

What's new in NSX-T 3.0

Various enhancements were made in NSX-T version 3.0 by VMware. Let's talk about the architecture changes in NSX-T version 3.0. Some of the changes below were made concerning the internal communication mechanism within the NSX-T components. They are:
Architecture ramp-up:
NSX Manager and its cluster communicate with their transport nodes through the APH Server (Appliance Proxy Hub).
NSX Manager communicates with nsx-proxy through port 1234.
The CCP (Central Control Plane) communicates with nsx-proxy through port 1235.
RabbitMQ messaging is replaced with NSX-RPC between the management plane and the CCP.
Alarms and Events: In NSX-T version 3.0, Alarms and Events are introduced, which help in active monitoring of the different components of the environment.
Network Topology UI: NSX-T 3.0 adds a network topology view that gives a diagram of each NSX-T component. This view shows the number of VMs connected to segments, the number of segments, Tier-1…
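A quick way to confirm these channels from an ESXi transport node is to look for established TCP sessions toward the Manager/CCP on ports 1234 and 1235; this is a generic check, and the remote IP addresses in the output will be your own Manager cluster members.

    # On the ESXi transport node shell:
    esxcli network ip connection list | grep -E ':1234|:1235'
    # Established sessions from the nsx-proxy process indicate working
    # APH (port 1234) and CCP (port 1235) channels.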

NSX-T Manager Node Recovery

In an NSX-T environment, there are scenarios where you need to take a Manager node instance out of the cluster for abnormal reasons, for example when something went wrong during the upgrade of a Manager node instance, or other circumstances where the node cannot be recovered from the NSX-T Manager UI. To recover/replace the node in the Manager cluster, a manual process is required. Let's discuss the manual path to recover/replace a Manager node in the cluster.
1) Log in to the NSX-T Manager using the CLI.
2) Use the command 'get cluster status'. This command will list all the NSX-T Manager/controller nodes in the cluster. Find the UUIDs of the existing node and the cluster to identify the node that requires recovery/replacement.
3) Now that we have identified the Manager node ID from the above command, it's time to detach the node from the cluster. Using the detach node "node-id" command will remove the node from the clus…
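In console form, the sequence above looks roughly like this; the UUID shown is a placeholder, and detaching a node is disruptive, so run it from a healthy Manager only against the node you have confirmed is broken.

    nsx-manager-1> get cluster status
    # Note the UUID of the unhealthy Manager node from the output.
    nsx-manager-1> detach node 5f1a2b3c-0000-0000-0000-000000000000   # placeholder UUID
    # The detached node can then be redeployed and joined back to the cluster.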

Removing NSX-T manager extension from vCenter

In NSX-T, starting from version 2.4, the NSX-T appliance is decoupled from vCenter, so it is no longer mandatory to run NSX-T on the vCenter platform only. NSX-T can now work with standalone ESXi hosts, KVM, or container platforms. In version 2.4 there is still an option available to connect vCenter to NSX-T using a Compute Manager. In this blog we will learn how to unregister and register the NSX-T extension from vCenter in case of any sync or vCenter connectivity issue with NSX-T. Let's get started.
1) Log in to the NSX-T UI and go to System > Compute Manager. Here, vCenter is shown in Down status with the state "Not Registered".
2) When we click on the "Not Registered" option, it shows the error below.
3) When we try to click on the Resolve option, it shows the message below.
At this stage, if the Resolve option doesn't work, the NSX-T extension must be removed from vCenter. To remove the NSX-T e…
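For reference, extension cleanup is usually done through the vCenter Managed Object Browser (MOB); the URL pattern and extension key below are the commonly used ones, but confirm the exact key listed in your own vCenter before unregistering anything.

    # In a browser, open the ExtensionManager in the vCenter MOB:
    https://<vCenter-FQDN>/mob/?moid=ExtensionManager
    # Locate the NSX-T extension key (commonly com.vmware.nsx.management.nsxt),
    # then invoke the UnregisterExtension method with that key.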

NSX-T Data Center 2.4 Management and Control Plane agents

In the previous article, I illustrated the NSX-T DC 2.4 management plane and Central Control Plane, which are now converged into one NSX Manager node.
MPA (Management Plane Agent): this agent is located on each transport node and communicates with the NSX Manager.
NETCPA: it provides communication between the Central Control Plane and the hypervisor.
The management plane and the Central Control Plane (CCP) run on the same virtual appliance, but they perform different functions; their technical aspects are covered below. The NSX cluster can scale to a maximum of 3 NSX Manager nodes running the management plane and CCP.
Communication process
The nsx-mpa agent on the transport node communicates with the NSX Manager over a RabbitMQ channel on port 5671.
The CCP communicates with the transport node through nsx-proxy on port 1235.
The task of the NSX Manager is to push the configuration to the CCP. The CCP configures the datapla…
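From a transport node, the state of these agents and their channels can be checked with the host's nsxcli; the hostnames are examples, the output format varies slightly between versions, and the port filter is just a generic TCP-session check.

    # On the ESXi transport node, enter nsxcli and check both planes:
    [root@esxi-01:~] nsxcli
    esxi-01> get managers        # management-plane (MPA/RabbitMQ, port 5671) connectivity
    esxi-01> get controllers     # CCP (nsx-proxy, port 1235) session status

    # Alternatively, list the TCP sessions directly:
    [root@esxi-01:~] esxcli network ip connection list | grep -E ':5671|:1235'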

NSX-T Control Plane Components

In NSX-T Data Center, the control plane is split into two components: the Central Control Plane (CCP) and the Local Control Plane (LCP). Let's discuss the Central Control Plane first.
Central Control Plane (CCP): the CCP computes and disseminates the ephemeral runtime state based on the configuration from the management plane and the topology reported by the data plane elements.
Local Control Plane (LCP): it runs at the compute endpoints, i.e., on the transport nodes (ESXi, KVM, bare metal). It computes the local ephemeral runtime state for the endpoint based on updates from the CCP and local information. The LCP pushes stateless configuration to the forwarding engines in the data plane and reports information back to the CCP. This eases the task of the CCP and enables the platform to scale to thousands of different types of endpoints (hypervisors, containers, hosts, bare metal, or public cloud).