
Unable to take snapshot & consolidate the VM



Recently I ran into an issue where a customer was unable to take a snapshot of a VM and also unable to consolidate its existing snapshots.

When verifying the snapshot status of the VM in Snapshot Manager, it showed no snapshots available. However, when browsing the datastore I found 30 snapshot (delta) files on each of the two virtual disks attached to the VM.

I tried to consolidate the VM from Snapshot Manager and from the ESXi CLI using the commands below, where VMID is the VM's ID:

vim-cmd vmsvc/getallvms   (to list all VMs and get the VM ID)
vim-cmd vmsvc/snapshot.removeall VMID   (to consolidate/remove all snapshots)

However, the operation failed again with a failed status.
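To avoid copying the ID by hand, a small helper can pull the numeric VM ID out of `vim-cmd vmsvc/getallvms`-style output by VM name. This is only a sketch: the VM name "app01" and the sample output line are invented for illustration, not taken from the customer's environment.

```shell
#!/bin/sh
# Extract the VM ID (first column) for a given VM name (second column)
# from `vim-cmd vmsvc/getallvms`-style output read on stdin.
vmid_for() {
    awk -v name="$1" '$2 == name {print $1}'
}

# On a real ESXi host you would run (VM name "app01" is an assumption):
#   vim-cmd vmsvc/getallvms | vmid_for app01
# Sample output line, for illustration only:
sample='12     app01     [datastore1] app01/app01.vmx     windows9Server64Guest     vmx-14'
echo "$sample" | vmid_for app01   # prints 12
```

The header line of the real command's output is skipped automatically, since its second column ("Name") will not match the VM name.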

Next, I tried to check the CID and parentCID values of both virtual disks connected to the virtual machine. You can use the command below from the virtual machine's directory (e.g. /vmfs/volumes/<datastore>/<vmname>):

for i in `ls *.vmdk | grep -v delta | grep -v ctk | grep -v flat`; do echo "=== $i ==="; grep -E "^CID|^parentCID" "$i"; done
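To make the check above concrete, here is a minimal, self-contained sketch of a CID/parentCID comparison. It fabricates two tiny descriptor files (the file names and CID values are made up for the example) and flags the delta whose parentCID does not point back at its parent's CID.

```shell
#!/bin/sh
# Build a fake two-link snapshot chain in a temp directory (sample data only).
dir=$(mktemp -d)
printf 'CID=fffffffe\nparentCID=ffffffff\n' > "$dir/vm.vmdk"          # base disk
printf 'CID=00000000\nparentCID=00000000\n' > "$dir/vm-000025.vmdk"   # zeroed-out delta

# Compare the delta's parentCID against the CID of the disk it should point to.
base_cid=$(sed -n 's/^CID=//p' "$dir/vm.vmdk")
delta_parent=$(sed -n 's/^parentCID=//p' "$dir/vm-000025.vmdk")

if [ "$delta_parent" != "$base_cid" ]; then
    result="MISMATCH"
else
    result="consistent"
fi
echo "vm-000025.vmdk vs base: $result ($delta_parent vs $base_cid)"
rm -rf "$dir"
```

A healthy chain has every delta's parentCID equal to its parent descriptor's CID; the all-zero values seen here mirror the kind of breakage described below.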

As mentioned, 30 delta files existed for each of the two disks. On the first disk I found a CID/parentCID mismatch: deltas 25-30 had CID and parentCID values of 00000000, while the remaining descriptors in the chain were consistent.

You can check a disk's metadata consistency with the command below:
vmkfstools -D /vmfs/volumes/<datastore>/<vmname>/disk.vmdk

Checking the disk this way, I found it to be inconsistent. Now, what next?

Next, I decided to check the consistency of every VMDK in the chain, from delta 30 down to delta 1. I found that the chain was consistent at delta 24. I mapped vmdk 24 as the first HDD of the VM and tried to power it on, but the VM would not power on.
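The walk from the newest delta down to the last consistent disk can be modelled with a toy chain. Everything below (file names, CID values, a deliberately broken third delta) is sample data, not the real VM's descriptors; the point is only to show how following the CID/parentCID links locates the last intact disk.

```shell
#!/bin/sh
# Toy model: walk a snapshot chain from the base upward and stop at the
# first delta whose parentCID no longer matches its parent's CID.
dir=$(mktemp -d)
printf 'CID=aaaa0001\nparentCID=ffffffff\n' > "$dir/vm.vmdk"          # base
printf 'CID=aaaa0002\nparentCID=aaaa0001\n' > "$dir/vm-000001.vmdk"   # delta 1: ok
printf 'CID=aaaa0003\nparentCID=aaaa0002\n' > "$dir/vm-000002.vmdk"   # delta 2: ok
printf 'CID=00000000\nparentCID=00000000\n' > "$dir/vm-000003.vmdk"   # delta 3: broken

cid()  { sed -n 's/^CID=//p' "$1"; }
pcid() { sed -n 's/^parentCID=//p' "$1"; }

last_good="vm.vmdk"
for n in 1 2 3; do
    f="$dir/vm-00000$n.vmdk"
    parent_cid=$(cid "$dir/$last_good")
    [ "$(pcid "$f")" = "$parent_cid" ] || break
    last_good="vm-00000$n.vmdk"
done
echo "last consistent disk: $last_good"   # prints vm-000002.vmdk
rm -rf "$dir"
```

In the real case the same idea, applied descriptor by descriptor, pointed at delta 24 as the last usable disk in the chain.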

So, finally, I decided to clone delta 24 of the VM's first HDD (the last consistent disk in the chain) so that all of the delta metadata would be merged and consolidated into a single base disk.

You can clone a disk with the command:
vmkfstools -i <source disk> <destination disk>
After cloning the disk, I attached the cloned HDD to the VM and tried to power it on. Fortunately, it worked: the VM powered on, and I was once again able to take snapshots and consolidate them.

Happy Sharing… :)

