


According to VMware, “Micro-segmentation enables organizations to logically divide its data center into distinct security segments down to the individual workload level, and then define security controls and deliver services for each unique segment” (Lawrence Miller, CISSP and Joshua Soto, 2015, p. 21). The benefit of micro-segmentation is that it denies an attacker the opportunity to pivot laterally within the internal network, even after the perimeter has been breached. VMware NSX-T supports micro-segmentation by providing a centrally managed, yet distributed, firewall that attaches directly to workloads within an organization’s network. Distributing the firewall to apply security policy at the individual workload is effective because rules can be tailored to the specific requirements of each workload. The additional value of NSX-T is that its capabilities are not limited to homogeneous vSphere environments; it supports heterogeneous environments as well.
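In practice, per-workload security policy is usually driven through the NSX-T Policy API. The sketch below only assembles the REST path and JSON body for a single distributed-firewall rule; the policy, rule, group, and service paths are hypothetical placeholders for illustration, not values from this article.

```python
import json

NSX_POLICY_BASE = "/policy/api/v1"  # NSX-T Policy API root (hypothetical deployment)

def build_dfw_rule(policy_id, rule_id, source_group, dest_group,
                   service_path, action="ALLOW"):
    """Build the URL path and JSON body for one distributed-firewall rule
    under the default domain. All group/service paths are placeholders."""
    url = (f"{NSX_POLICY_BASE}/infra/domains/default/"
           f"security-policies/{policy_id}/rules/{rule_id}")
    body = {
        "action": action,
        "source_groups": [source_group],
        "destination_groups": [dest_group],
        "services": [service_path],
        "scope": ["ANY"],        # rule is enforced at the workloads themselves
        "direction": "IN_OUT",
    }
    return url, json.dumps(body)

url, body = build_dfw_rule(
    "web-tier-policy", "allow-https",
    "/infra/domains/default/groups/web-vms",
    "/infra/domains/default/groups/app-vms",
    "/infra/services/HTTPS",
)
```

The payload would then be sent with an HTTP PATCH to the manager; the point of the sketch is that one small, workload-specific rule is all a segment needs.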

About Bidirectional Forwarding Detection (BFD)

Bidirectional Forwarding Detection (BFD) is a protocol designed for fast detection of forwarding-path failures across various media types, encapsulations, topologies, and routing protocols. BFD provides a consistent failure detection method. In an NSX-T environment, each Edge node in an edge cluster exchanges BFD keep-alives on both its management and tunnel (TEP/overlay) interfaces to maintain proper communication among the Edge and host transport nodes (see Fig. 1).

For example: when the standby Edge node of a Tier-0 gateway stops receiving keep-alives on both the management and tunnel interfaces, it does not become active, because it is already in standby state; what it has lost is interface communication, whether on the management or the overlay side.

Some features of BFD: high availability uses BFD to detect forwarding-path failures, and BFD provides low-overhead fault detection even on physical media.
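How quickly BFD declares a path down is governed by the negotiated keep-alive interval and the detect multiplier. A minimal sketch of that arithmetic, simplified from RFC 5880 semantics (the 300 ms interval and ×3 multiplier below are illustrative numbers, not NSX-T defaults):

```python
def bfd_detection_time_ms(local_min_rx_ms, remote_min_tx_ms, remote_detect_mult):
    """Time without keep-alives after which the local node declares the
    session down: the peer's detect multiplier times the slower of the
    two negotiated intervals (simplified RFC 5880 semantics)."""
    return remote_detect_mult * max(local_min_rx_ms, remote_min_tx_ms)

# e.g. a 300 ms interval with a multiplier of 3 means a failure is
# declared after 900 ms of silence
print(bfd_detection_time_ms(300, 300, 3))
```

Tightening the interval shortens failover time at the cost of more keep-alive traffic per interface.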

NSX-T Data Center Firewalls

NSX-T Data Center includes two types of firewalls:

- Distributed Firewall (for east-west traffic)
- Gateway Firewall (for north-south traffic)

(See Fig. 1.)

The distributed firewall is a stateful firewall embedded in the hypervisor kernel: it resides in the kernel of the hypervisor, outside the guest OS of the VM, and it controls the I/O path to and from the vNIC.

The gateway firewall handles north-south traffic between the NSX-T gateways and the physical network:

- It is also called the perimeter firewall, protecting traffic to and from the physical environment.
- It applies to Tier-0 and Tier-1 gateway uplinks and service interfaces.
- It supports both Tier-0 and Tier-1 gateways; when applied to a gateway, the HA mode of that gateway should be active-standby.
- It is a centralized stateful service enforced on the NSX Edge node.

Let's discuss both of the above firewall types in detail, starting with the Distributed Firewall (DFW).
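As a toy illustration of the east-west vs. north-south split, the sketch below classifies a flow by whether both endpoints fall inside assumed internal data-center prefixes (the 10.0.0.0/8 and 172.16.0.0/12 ranges here are placeholders, not anything NSX itself consults):

```python
import ipaddress

# Internal data-center prefixes: assumed for illustration only
DC_PREFIXES = [ipaddress.ip_network("10.0.0.0/8"),
               ipaddress.ip_network("172.16.0.0/12")]

def is_internal(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DC_PREFIXES)

def traffic_direction(src_ip, dst_ip):
    """East-west traffic (both ends internal) is handled by the DFW at the
    vNIC; anything crossing the perimeter is north-south, handled by the
    gateway firewall on the Edge node."""
    if is_internal(src_ip) and is_internal(dst_ip):
        return "east-west (Distributed Firewall)"
    return "north-south (Gateway Firewall)"
```

For example, a flow from 10.1.1.5 to 10.2.0.9 never leaves the data center, so only the DFW sees it; a flow from 10.1.1.5 to an Internet address crosses the Tier-0 uplink and the gateway firewall.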

Collecting Logs from NSX-T Edge nodes using CLI

This article explains how to extract logs from NSX-T Edge nodes using the CLI. Let's view the steps involved:

1) Log in to the NSX-T Edge node CLI using the admin credentials.

2) Use the "get support-bundle" command for log extraction. This command collects the complete logs from the NSX-T Manager/Edge node:

nsx-manager-1> get support-bundle file support-bundle.tgz

3) The last step is to use the "copy file support-bundle.tgz url" command. It forwards the collected logs from the NSX-T Manager to the destination (URL) host, from where you can download them:

copy file support-bundle.tgz url scp://root@

Here, the URL specified is the ESXi host (under the /tmp partition), where the logs will be copied and from where you can extract them for further review. Happy learning! :)
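When collecting bundles from several nodes, it can help to script the same two CLI steps. This sketch only assembles the command strings quoted above; the scp target is a placeholder you would replace with your actual ESXi host and path.

```python
def support_bundle_commands(bundle_name="support-bundle.tgz",
                            scp_target="scp://root@<esxi-host>/tmp"):
    """Return the NSX CLI commands for collecting a support bundle on a
    Manager/Edge node and copying it off-box. scp_target is a placeholder."""
    return [
        f"get support-bundle file {bundle_name}",
        f"copy file {bundle_name} url {scp_target}",
    ]

for cmd in support_bundle_commands():
    print(cmd)
```

You could feed these strings to an SSH session per node instead of typing them by hand.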

NSX-T BGP Neighbor validation

BGP is one of the most popular options for establishing routing adjacencies between NSX and existing networks. It can be configured on the Tier-0 Logical Router. This article demonstrates the ways you can validate the BGP neighbor status from the T0 to its associated ToR switches in the rack. Let's get started.

You can validate the BGP status in two ways:

- Using the NSX-T Manager UI
- From the NSX-T Edge CLI

First things first, let's discuss the NSX-T Manager UI method:

1. Log in to the NSX-T Manager UI.
2. Click on MANAGER mode.
3. Click on Network.
4. Select the desired T0 Gateway > Action > Generate BGP Summary.

This shows the BGP connection status. If the connection status shows "ESTABLISHED", the T0 router has successfully peered with the ToR switch.

The second method for validating the BGP connection status is from the NSX-T Edge nodes. Steps involved:

1. Log in to the NSX-T Edge node using SSH.
2. Get into the logical router context.
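If you capture the neighbor summary text from the Edge CLI, a small parser can flag any session that is not Established. The column layout in the sample below is a simplified illustration, not the exact NSX-T output format:

```python
def bgp_sessions_established(summary_text):
    """Parse a (simplified, illustrative) BGP neighbor summary and return
    {neighbor_ip: bool} indicating whether each session is Established."""
    status = {}
    for line in summary_text.splitlines():
        parts = line.split()
        # crude IPv4 check so the header row is skipped
        if len(parts) >= 2 and parts[0].count(".") == 3:
            status[parts[0]] = parts[-1].lower().startswith("estab")
    return status

sample = """Neighbor      AS     State
192.168.10.1  65001  Established
192.168.20.1  65001  Active
"""
```

Running the parser over `sample` would flag 192.168.20.1 as a session worth investigating before any traffic cutover.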

VSAN VM Storage Policy failed to retrieve data from the server

Last week I ran into an issue in my lab environment where some of the VMs in a vSAN 7.0 cluster were unable to migrate from one ESXi host to another. While vMotioning a VM between hosts, I was getting an error that the storage profile was missing.

On validating the VM storage profiles from vCenter (version 7.0), I found that none of the VM storage policies were visible; the UI was flashing the error "Failed to retrieve data from the server". The vCenter storage providers view was also blank, as if it were out of sync (Fig. 1).

Investigating further through the vCenter logs, I identified the error below. According to the logs, the vSAN VP services failed to register. The vSAN health services were up and running on the vCenter Server, but the service log file had grown to a significant size:

-rw-r--r--. 1 vsan-health users 8.3G Oct 13 10:14 vmware-vsa-health-service.log

As the vSAN health logs were occupying a large amount of space on the partition, they needed to be cleaned up.
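Oversized service logs like this are usually truncated in place rather than deleted, so any process holding the file open keeps a valid handle. A minimal sketch, run here against a throwaway directory with a sparse 2 GiB dummy file standing in for the real vCenter log path:

```python
import os
import tempfile

def truncate_oversized_logs(log_dir, threshold_bytes=1 << 30):
    """Truncate (not delete) any file in log_dir larger than threshold_bytes.
    The demo directory below stands in for the real vCenter log location."""
    truncated = []
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if os.path.isfile(path) and os.path.getsize(path) > threshold_bytes:
            with open(path, "w"):   # opening with "w" truncates in place
                pass
            truncated.append(name)
    return truncated

# Demo setup: one sparse 2 GiB dummy log and one small log
demo = tempfile.mkdtemp()
big = os.path.join(demo, "vmware-vsan-health-service.log")
with open(big, "wb") as f:
    f.truncate(2 << 30)            # sparse file, no real disk usage
with open(os.path.join(demo, "small.log"), "w") as f:
    f.write("ok\n")

print(truncate_oversized_logs(demo))
```

On a production appliance you would prefer the platform's own log rotation where available; blind truncation loses history you may still need.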

What's new in NSX-T 3.0

VMware made various enhancements in NSX-T version 3.0. Let's talk about the architecture changes in NSX-T 3.0. Some of the changes below concern the internal communication mechanism within the NSX-T components. They are:

Architecture ramp-up:

- The NSX Manager cluster communicates with its transport nodes through the APH Server (Appliance Proxy Hub).
- NSX Manager communicates with NSX-Proxy through port 1234.
- The CCP (Central Control Plane) communicates with NSX-Proxy through port 1235.
- RabbitMQ messaging is replaced with NSX-RPC between the management plane and the CCP.

Alarms and Events: NSX-T 3.0 introduces Alarms and Events, which help in actively monitoring the different components of the environment.

Network Topology UI: NSX-T 3.0 provides a network topology view that diagrams each component of NSX-T. This view shows the number of VMs connected to segments, the number of segments, Tier-1 gateways, and so on.
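To check whether a transport node can actually reach the manager cluster on those ports, a quick TCP probe is often all you need. This is a generic reachability sketch, not an NSX tool:

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the
    timeout. Useful for verifying a transport node can reach the manager
    side on 1234 (NSX-Proxy <-> manager) and 1235 (NSX-Proxy <-> CCP)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In practice you would run this from the transport node against each manager IP on ports 1234 and 1235; a False result points at a firewall or routing problem between the planes.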

Reason's for instability of NSX-T Cluster

Some time back I had an issue where my NSX-T lab environment was showing an unstable status. The environment consists of three NSX-T Manager nodes fronted by a VIP address. I was unable to access the NSX-T console through the VIP address, nor through the other NSX-T nodes. It was quite intermittent: I was sometimes able to access the UI console from one of the manager nodes using the admin account, but I was unable to log in to the manager nodes over SSH with either the admin or root account. As shown in Figure 1 below, one to two manager nodes were showing as unavailable.

Figure 1

On checking "VIEW DETAILS", it was clear that the /var/log partition was 100% full (Figure 2).

Now the main objective was to either compress or delete the old logs from the /var/log partition to bring the manager nodes back. To accomplish this, I booted each NSX-T node VM sequentially, mounting the Ubuntu image in rescue mode.
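A full /var/log partition is easy to catch programmatically before the cluster destabilizes. A minimal sketch using only the Python standard library (the 80% threshold is an arbitrary illustration):

```python
import shutil

def log_partition_usage_percent(path="/var/log"):
    """Percent of the filesystem containing `path` that is in use, i.e.
    the figure the NSX-T 'VIEW DETAILS' pane was reporting as 100%."""
    total, used, _free = shutil.disk_usage(path)
    return round(100 * used / total, 1)

# e.g. alert when usage crosses 80%
if log_partition_usage_percent("/") >= 80:
    print("WARNING: log partition filling up")
```

Wiring a check like this into monitoring would have flagged the problem long before SSH logins started failing.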