
Posts

Collecting Logs from NSX-T Edge nodes using CLI

  This article explains how to extract logs from NSX-T Edge nodes via the CLI. The steps involved are: 1) Log in to the NSX-T Edge node CLI using admin credentials. 2) Use the "get support-bundle" command for log extraction; it collects the complete logs from an NSX-T Manager or Edge node. nsx-manager-1> get support-bundle file support-bundle.tgz 3) The last step is to use the "copy file support-bundle.tgz url" command. copy file forwards the collected logs from the NSX-T node to the destination (URL) host, from where you can download them. copy file support-bundle.tgz url scp://root@192.168.11.15/tmp Here, the URL points to the ESXi host (192.168.11.15) under the /tmp directory, where the logs are copied and from where they can be retrieved for further review. Happy Learning.  :)
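For quick reference, a minimal sketch of the whole session is shown below; the prompt, bundle file name, and the SCP destination 192.168.11.15 are taken from the example above, so substitute your own Edge node and target host:

nsx-edge-1> get support-bundle file support-bundle.tgz
nsx-edge-1> copy file support-bundle.tgz url scp://root@192.168.11.15/tmp

The copy command typically prompts for the destination host's password, after which the bundle lands in /tmp on that host.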

NSX-T BGP Neighbor validation

NSX-T BGP Neighbor validation. BGP is one of the most popular options for establishing routing adjacencies between NSX and existing networks, and it is configured on the Tier-0 Logical Router. This article demonstrates the various ways to validate the BGP neighbor status from the T0 to its associated ToR switches in the rack. Let's get started. The methods for validating BGP status are: using the NSX-T Manager UI, and from the NSX-T Edge CLI. First things first, let's discuss the NSX-T Manager UI method. Log in to the NSX-T Manager UI, click on MANAGER mode, click on Network, select the desired T0 Gateway > Action > Generate BGP Summary. This shows the BGP connection status. If the connection status shows "ESTABLISHED", the T0 router has successfully peered with the ToR switch. The second method for validating the BGP connection status is from the NSX-T Edge nodes. Steps involved: Log in to the NSX-T Edge node using SSH. Get into the lo…
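For the Edge CLI path, a minimal session sketch is shown below (assuming a recent NSX-T release; the VRF ID returned by get logical-routers will differ per environment):

nsx-edge-1> get logical-routers
nsx-edge-1> vrf 1
nsx-edge-1(tier0_sr)> get bgp neighbor summary

Note the VRF ID of the Tier-0 service router from the first command's output, enter that VRF, and then check that each neighbor is reported in the Established state.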

vSAN VM Storage Policy failed to retrieve data from the server

  Last week I ran into an issue in my lab environment where some of my VMs under a vSAN 7.0 cluster were unable to migrate from one ESXi host to another. While vMotioning a VM between hosts, I received an error that the storage profile was missing. Credit: yellow-bricks.com On validating the VM storage profiles from vCenter (version 7.0), I found that none of the VM storage policies were visible and the UI kept flashing the error "Failed to retrieve data from the server". The vCenter storage providers were also showing blank, as if they were out of sync. Fig-1 Investigating further through the vCenter logs, I identified the error below: per the logs, vCenter was failing to register the vSAN VP services. The vSAN health service was up and running on the vCenter Server, but its log file had grown to a significant size: -rw-r--r--. 1 vsan-health users 8.3G Oct 13 10:14 vmware-vsan-health-service.log As the vSAN health logs were occupying a…
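If the oversized health-service log turns out to be the culprit, a hedged cleanup sketch from the VCSA shell follows; the service name and log path are assumptions to verify on your own appliance (service-control --list and the find command will confirm them):

find / -name "vmware-vsan-health-service.log" -size +1G 2>/dev/null
service-control --stop vsan-health
truncate -s 0 /var/log/vmware/vsan-health/vmware-vsan-health-service.log
service-control --start vsan-health

Truncating the file in place (rather than deleting it) avoids leaving the service with a dangling file handle.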

What's new in NSX-T 3.0

Various enhancements were made in NSX-T version 3.0 by VMware. Let's talk about the architecture changes in NSX-T 3.0. Some of the changes below concern the internal communication mechanism within the NSX-T components. They are: Architecture ramp-up: the NSX Manager and its cluster communicate with their transport nodes through the APH Server (Appliance Proxy Hub). NSX Manager communicates with NSX-Proxy over port 1234. The CCP (Central Control Plane) communicates with NSX-Proxy over port 1235. RabbitMQ messaging is replaced with NSX-RPC between the management plane and the CCP. Alarms and Events: NSX-T 3.0 introduces Alarms and Events, which help in active monitoring of the different components of the environment. Network Topology UI: NSX-T 3.0 adds a network topology view that gives a diagram of each NSX-T component. This view shows the number of VMs connected to segments, the number of segments, T1,…

Reasons for instability of NSX-T Cluster

  Some time back I had an issue where my NSX-T lab environment was showing an unstable status. My environment consists of 3 NSX-T Manager nodes fronted by a VIP IP address. The issue was that I could not access my NSX-T console through the VIP IP address, nor through the other NSX-T nodes. It was quite intermittent: I was able to access the UI console from one of the manager nodes using the admin account, but I was unable to log in to the manager nodes over SSH with the admin or root account. As I said, it was quite intermittent whether I managed to reach the manager UI console. Figure:1 below shows that 1-2 manager nodes were unavailable. Figure:1 On checking "VIEW DETAILS", it clearly showed that the /var/log partition was 100% full. Figure:2 Now the main objective is to either compress or delete the old logs from the /var/log partition to bring the manager nodes back. To accomplish this I booted each NSX-T node VM sequentially, mounting the Ubuntu image using resc…
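Once the root filesystem is reachable again (whether from the rescue environment or over SSH), a generic cleanup sketch along the following lines frees the partition; the paths and retention figures are illustrative only:

du -sh /var/log/* | sort -h | tail            # find the largest offenders
find /var/log -name "*.gz" -mtime +7 -delete  # remove old rotated archives
journalctl --vacuum-size=500M                 # shrink the systemd journal, if present

After the partition drops back below the alarm threshold, the manager services should recover on their own or after a reboot of the node.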

IDS/IPS (Intrusion Detection System) & (Intrusion Prevention System)

 IDS (Intrusion Detection System): as its name suggests, it is designed to detect malicious or suspicious activity in the network by scanning data packets and monitoring network traffic. It inspects forwarded packets to determine whether each one is good or bad, where a bad packet indicates a malicious threat or some other kind of risk, and it generates logs to identify suspicious activity. It cannot prevent malicious threats or attacks from inside or outside the environment; the aim behind the design of an IDS is to warn system administrators or security/network admins of suspicious or malicious activity or threats. It continuously monitors and analyzes incidents, violations, and threats that may be breaching network security. Credit: pngio.com IPS (Intrusion Prevention System): it is designed to prevent the malicious or suspicious threats and activities detected in the network. It is designed to block suspicious and malicious activities and threats before it develops a…

NSX-T Manager Node Recovery

In an NSX-T environment, there are scenarios where it is necessary to take a Manager node instance out of the cluster for abnormal reasons, for example when something goes wrong during an upgrade of the manager node instance, or in other circumstances where the node cannot be recovered from the NSX-T Manager UI. To recover/replace such a node in the manager cluster, a manual process is required. Let's discuss the manual path to recover/replace a Manager node in the cluster. 1) Log in to the NSX-T Manager using the CLI. 2) Use the command 'get cluster status'. This command lists all of the NSX-T manager/controller nodes in the cluster. Find the UUID of the existing node and of the cluster to identify the node that requires recovery/replacement. 3) Now that we have identified the manager node ID from the command above, it is time to detach the node from the cluster. Using the detach node "node id" command will remove the node from the clus…
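A condensed sketch of that CLI session is shown below; the UUID is a placeholder for the value reported by get cluster status:

nsx-manager-1> get cluster status
nsx-manager-1> detach node 8a1b2c3d-1234-5678-9abc-def012345678

Run the detach from one of the healthy manager nodes, then redeploy or re-join a replacement node once the cluster reports a stable state again.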

Popular posts from this blog

Changing the FQDN of the vCenter appliance (VCSA)

This article describes how to change the system name or FQDN of the vCenter appliance 6.x. You will not find any way to change the FQDN from the vCenter GUI, either from the VAMI page or from the Web Client, as the option to change the hostname is always greyed out. The remaining option is the command line of the VCSA appliance. The steps below make it possible to change the FQDN of the VCSA from the command line. Access the VCSA from the console or from a Putty session. Log in with root permissions. Run the following command at the VCSA command prompt: /opt/vmware/share/vami/vami_config_net Opt for option 3 (Hostname). Change the hostname to the new name. Reboot the VCSA appliance.   After the reboot you will have successfully changed the FQDN of the VCSA. Note: the above procedure is unsupported by VMware and may impact your SSL certificate, causing problems while logging in to the vSphere Web Client. If you are using a self-signed certificate, you can regenerate the certificate with the…
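Condensed, the console session looks roughly like this (the menu numbering may differ slightly between 6.x builds):

# from the VCSA shell, logged in as root
/opt/vmware/share/vami/vami_config_net
# choose option 3) Hostname, enter the new FQDN, then exit the menu
reboot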

Unable to power on the VM (Failed to lock the file)

I have encountered many issues where, after an upgrade or migration, we were unable to power on a VM. Figure 1 An error was received from the ESX host while powering on VM HSSVSQL01. Failed to start the virtual machine. Cannot open the disk '/vmfs/volumes/578d835c-18b2c97a-9b0d-0025b5f13920/SAMPLE1_cloud/000000.vmdk' or one of the snapshot disks it depends on. Failed to lock the file. In Figure:1 above, this error is prompted while powering on the VM. Well, there are several reasons why a VM may fail to power on, and you can find many articles on this; in this article we will discuss how to resolve this particular issue. Please use the steps below to resolve the disk lock issue. Check whether the VM is running on a snapshot and whether it reports the error "VM Consolidation required". Check Snapshot Manager to see whether it shows any snapshots; if yes, try to delete the snapshot. Verify the same from the ESXi cl…
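When the snapshot chain looks clean but the lock persists, one way to find which host holds the lock is to query the disk descriptor from an ESXi shell. The path below is the one from the error message, and reading the lock owner out of the output is an interpretation to confirm against vmkernel.log:

vmkfstools -D /vmfs/volumes/578d835c-18b2c97a-9b0d-0025b5f13920/SAMPLE1_cloud/000000.vmdk
# the "owner" field in the output ends with the MAC address of the host holding the lock;
# restart the management agents on that host (or reboot it) to release the lock
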

VM Creation Date & Time from PowerCLI

Most of the time we have several requirements when we talk about an IT environment, such as design, deployment, compliance checks, or security auditing of the environment. During a security audit we are required to provide various pieces of information to the security team for a successful audit. One of them is auditing a virtual machine's creation date and time. In this post we will explore how to get the creation date and time of a virtual machine hosted in vCenter or on ESXi. To get the details we will use VMware PowerCLI. By default there is no cmdlet in PowerCLI that returns these details, so we will add a function for the VM creation date. Below is the function, which needs to be copied and pasted into PowerCLI. ======================================================================= function Get-VMCreationTime { $vms = get-vm $vmevts = @() $vmevt = new-object PSObject for…
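The excerpt above is cut off, so as a hedged alternative (not the author's original function), here is a minimal sketch that pulls the creation time straight from the vCenter event log with Get-VIEvent; the event types filtered are the standard creation-related ones, and older VMs may return nothing if those events have aged out of vCenter's retention window:

=======================================================================
function Get-VMCreationTime {
    # Assumes an existing Connect-VIServer session
    foreach ($vm in Get-VM) {
        # Pull this VM's events and keep the oldest creation-related one
        $evt = Get-VIEvent -Entity $vm -MaxSamples ([int]::MaxValue) |
            Where-Object { $_ -is [VMware.Vim.VmCreatedEvent] -or
                           $_ -is [VMware.Vim.VmClonedEvent]  -or
                           $_ -is [VMware.Vim.VmDeployedEvent] -or
                           $_ -is [VMware.Vim.VmRegisteredEvent] } |
            Sort-Object CreatedTime | Select-Object -First 1
        [PSCustomObject]@{
            Name        = $vm.Name
            CreatedTime = $evt.CreatedTime
            CreatedBy   = $evt.UserName
        }
    }
}
=======================================================================

Calling Get-VMCreationTime after Connect-VIServer returns one row per VM with its name, creation time, and the account that created it.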