
NSX-T Manager Node Recovery

In an NSX-T environment, there are scenarios where a Manager node must be taken out of the cluster for abnormal reasons.

For example, a node may fail during an upgrade, or end up in a state from which it cannot be recovered through the NSX-T Manager UI.

In those cases, recovering or replacing the node in the Manager cluster requires a manual process.

Let's walk through the manual steps to recover/replace a Manager node in the cluster.

1) Log in to the NSX-T Manager using the CLI.

2) Run the command 'get cluster status'.

This command lists all the NSX-T Manager/Controller nodes in the cluster.

Note the UUID of the cluster and the UUID of the node that needs to be recovered/replaced.
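For reference, the output looks roughly like the following (the exact layout varies between NSX-T versions, and the UUIDs and IPs here are placeholders):

```
nsx-manager-1> get cluster status
Cluster Id: 53a47d36-0a12-4b6e-9a92-aabbccddeeff
Overall Status: STABLE

Group Type: DATASTORE
Group Status: STABLE
Members:
    UUID                                  FQDN           IP           STATUS
    1d8b1c0e-5f4a-4f7b-8c3d-112233445566  nsx-manager-1  10.10.10.11  UP
    7e2f9a3b-6c5d-4e8f-9a0b-665544332211  nsx-manager-2  10.10.10.12  DOWN
```

Here the cluster ID appears at the top, and each member node is listed with its UUID, so the failed node (shown as DOWN) can be identified.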

3) Now that we have identified the Manager node ID from the above command, it's time to detach the node from the cluster.

Running the detach node "node-id" command removes the node from the cluster.

This process deletes that specific node completely from the cluster and the NSX-T environment.

Once you deploy a new NSX-T Manager node, it needs to be added into the cluster.
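As a sketch, using the node UUID found in the previous step (the UUID below is a placeholder), the detach is run from a healthy node that remains in the cluster:

```
nsx-manager-1> detach node 7e2f9a3b-6c5d-4e8f-9a0b-665544332211
```

The operation can take several minutes while the cluster reconfigures itself, so allow it to complete before proceeding.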

4) To add the node manually, you need the API certificate thumbprint of the cluster in order to associate the node with the cluster.

Running get certificate api thumbprint returns this thumbprint.
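The command is run on an existing Manager node in the cluster; the value returned below is a placeholder:

```
nsx-manager-1> get certificate api thumbprint
<sha-256-thumbprint>
```

Copy this thumbprint, as it is required for the join step that follows.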

5) Once we have the API certificate thumbprint, we can join the new node to the cluster using the cluster ID together with the thumbprint.
This successfully adds the new node into the NSX-T cluster.
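The join command is run on the newly deployed Manager node, pointing at an existing cluster member. A sketch, with placeholder IP, cluster ID, and credentials:

```
nsx-manager-new> join 10.10.10.11 cluster-id 53a47d36-0a12-4b6e-9a92-aabbccddeeff thumbprint <sha-256-thumbprint> username admin password <admin-password>
```

After the join completes, re-run 'get cluster status' to confirm the new node shows as UP and the overall cluster status is STABLE.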

6) Finally, we need to identify which Manager node is the orchestrator node within the cluster.
The orchestrator is a self-contained web application that orchestrates the upgrade process of hosts, the NSX Controller cluster, and the Management plane.


You can check which node is the orchestrator node by running the CLI command "get service install-upgrade". The IP of the orchestrator node is shown in the "Enabled on" field of the output.
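Example output (fields may differ slightly between versions; the IP is a placeholder):

```
nsx-manager-1> get service install-upgrade
Service name: install-upgrade
Service state: running
Enabled on: 10.10.10.11
```

If "Enabled on" points at the node you just detached, the orchestrator role needs to be moved, as described next.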

The "set repository-ip" command makes a Manager node the orchestrator node. This is needed if the node on which the install-upgrade service is enabled (the orchestrator node) is the one being detached from the Management Plane cluster.
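As I recall the syntax, the command is run on the Manager node that should take over the orchestrator role (check the CLI reference for your NSX-T version to confirm):

```
nsx-manager-2> set repository-ip
```

Afterwards, "get service install-upgrade" should show the new node's IP in the "Enabled on" field.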


Note: Changing the IP address of a Manager node requires following the same procedure.

This concludes the process of adding an NSX-T Manager/Controller node to the cluster using the manual method.

If you liked the contents of this article, please share it further on your social platforms. :)



