
VSAN VM Storage Policy failed to retrieve data from the server

 

Last week I ran into an issue in my lab environment where some of the VMs in my vSAN 7.0 cluster were unable to migrate from one ESXi host to another.

While vMotioning a VM from one host to another, I got an error saying the storage profile was missing.

Credit: yellow-bricks.com


On checking the VM storage policies in vCenter (7.0), I found that none of the VM storage policies were visible; the page kept flashing the error "Failed to retrieve data from the server".

The vCenter Storage Providers view was also blank, suggesting the providers were out of sync.

Fig-1
Investigating further in the vCenter logs, I identified the error below:



As per the logs, vCenter was failing to register the vSAN VP (vendor provider) service.
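To find this kind of error yourself, you can search the SPS (Storage Policy Service) log on the appliance. This is a sketch, assuming the default VCSA log path `/var/log/vmware/vmware-sps/sps.log`; the exact message text may differ in your environment:

```shell
# Sketch: search the SPS log for vSAN VP registration failures.
# The path below is the default SPS log location on a VCSA appliance (assumption).
SPS_LOG=/var/log/vmware/vmware-sps/sps.log
if [ -f "$SPS_LOG" ]; then
    # Show the most recent registration failures, if any
    grep -i "failed to register" "$SPS_LOG" | tail -5
else
    echo "SPS log not found; run this from the VCSA appliance shell."
fi
```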

The vSAN health service was up and running on the vCenter Server, but I found its log file had grown to a significant size:

-rw-r--r--. 1 vsan-health users 8.3G Oct 13 10:14 vmware-vsan-health-service.log

Because the vSAN health log was consuming 8.3 GB of the partition, the health service had run out of space and could no longer write new entries under /var/log/vmware/vsan-health.
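You can confirm the space pressure with standard tools from the appliance shell; nothing below is VCSA-specific, and the `/var/log/vmware` path matches the listing above:

```shell
# Check free space on the partition that holds the logs
df -h /var/log

# Find the largest files under the vCenter log tree
# (sort -h sorts human-readable sizes like 8.3G correctly)
du -ah /var/log/vmware 2>/dev/null | sort -rh | head -5
```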

To resolve the issue, I took the following steps:

1) Stopped the vSAN health service on the vCenter Server.

2) Moved the vSAN health service log file to another partition for later reference (you can delete it if you don't need it).

3) Started the vSAN health service and validated that the log file was recreated successfully.

4) Restarted the SPS service on the vCenter Server.


After applying these steps, all storage providers were available again in vCenter Server.

After this, all VM storage policies were visible again at the vSAN and VM level, and I was able to successfully vMotion VMs from one host to another.



Thanks for reading, and keep sharing!







