
LUN VMFS partition table corrupted




Last week I worked on a customer issue where one of the shared LUNs disappeared from all the ESXi hosts.

While troubleshooting, I found the following errors in vmkernel.log:

2017-11-30T06:47:09.272Z cpu78:12160496)WARNING: Partition: 1224: Partition 1 is active, failure to update partition table for naa.600666666666666666666
2017-11-30T06:47:09.422Z cpu78:12160496)WARNING: Partition: 1224: Partition 1 is active, failure to update partition table for naa.600666666666666666666

First, verify that the LUN is still attached to the ESXi host using either of the commands below. In my case the paths were present and connected.
# esxcli storage core path list
# esxcfg-mpath -l
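For example, filtering the path list on the affected device (the NAA ID below is just a placeholder) should return output along these lines, abridged here to the relevant fields:

# esxcli storage core path list -d naa.600666666666666666666
fc.xxxx:xxxx-fc.xxxx:xxxx-naa.600666666666666666666
   Runtime Name: vmhba1:C0:T0:L1
   Device: naa.600666666666666666666
   Adapter: vmhba1
   Plugin: NMP
   State: active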

Next, check whether the LUN status is online, degraded, or offline:
# esxcli storage core device list -d NAA_ID
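In the esxcli output this surfaces as a Status field, which should read on for a reachable device. A sample, abridged and with a placeholder NAA ID:

# esxcli storage core device list -d naa.600666666666666666666
naa.600666666666666666666
   Display Name: Fibre Channel Disk (naa.600666666666666666666)
   Devfs Path: /vmfs/devices/disks/naa.600666666666666666666
   Status: on
   Is Local: false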

In my case the LUN status was online, yet the LUN still did not appear in the Devices view.

The "failure to update partition table" warning in vmkernel.log quoted above indicates that the partition table is corrupted and needs to be recreated.

We can confirm that the device has lost its partition table information by querying it:
# partedUtil getptbl /vmfs/devices/disks/naa.000000000000000

On a healthy device, the output looks like this:
gpt
31121 255 63 499974144
1 2048 499974110 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
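For reference, the fields in that partition line break down as follows (AA31E02A400F11DB9590000C2911D1B8 is the standard GUID for a VMFS partition):

1 2048 499974110 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
= partition number, start sector, end sector, partition type GUID, type label, attribute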

If your output appears similar to the following, it indicates the partition is missing:

gpt
31121 255 63 499974144


To find the beginning of the VMFS partition, run the script below on the ESXi host. It dumps each disk at the two common partition start offsets (128 and 2048 sectors) and greps for the VMFS signature (d00d):

# offset="128 2048"; for dev in `esxcfg-scsidevs -l | grep "Console Device:" | awk '{print $3}'`; do disk=$dev; echo $disk; partedUtil getptbl $disk; { for i in `echo $offset`; do echo "Checking offset found at $i:"; hexdump -n4 -s $((0x100000+(512*$i))) $disk; hexdump -n4 -s $((0x1300000+(512*$i))) $disk; hexdump -C -n 128 -s $((0x130001d + (512*$i))) $disk; done; } | grep -B 1 -A 5 d00d; echo "---------------------"; done
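When the signature is found, the filtered output looks roughly like the following (the device, geometry, and datastore label here are illustrative, and intermediate hexdump lines are elided; d00d c001 is the VMFS signature the grep keys on):

naa.000000000000000
gpt
31121 255 63 499974144
Checking offset found at 2048:
0200000 d00d c001
0200004
...
0140001d  64 61 74 61 73 74 6f 72  65 31 ...              |datastore1...|
---------------------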

If the script prints readable output for the device, including a legible datastore label, the partition table can be recreated. If the output is garbled or the script errors out, the chances of successfully recreating the partition table are slim.

Option 1 (the script output is readable)
The script gives you the first block of the VMFS partition, which is typically 128 for a VMFS3 volume created on ESX/ESXi 4.x or earlier, and 2048 for a VMFS5 volume created on ESXi 5.x or later.

To get the end block for the partition, run this command:

# partedUtil getUsableSectors /vmfs/devices/disks/naa.000000000000
The output looks like this; the second value is the last usable sector, which becomes the end block of the new partition:
34 499974110

If the command instead fails with an error such as "Partition table invalid, unable to satisfy all constraints on the partition" (or something similar), try the command below. It writes a temporary partition table (the end sector 4123456 is just a placeholder), after which the disk information can be read:

# partedUtil setptbl /vmfs/devices/disks/naa.0000000000000 gpt "1 2048 4123456 AA31E02A400F11DB9590000C2911D1B8 0"

Now run setptbl again, this time with the correct first and last block values:
# partedUtil setptbl /vmfs/devices/disks/naa.6006016045502500c20a2b3ccecfe011 gpt "1 2048 499974110 AA31E02A400F11DB9590000C2911D1B8 0"
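As a sanity check, you can query the device again; with the numbers from this example it should now report the recreated partition:

# partedUtil getptbl /vmfs/devices/disks/naa.6006016045502500c20a2b3ccecfe011
gpt
31121 255 63 499974144
1 2048 499974110 AA31E02A400F11DB9590000C2911D1B8 vmfs 0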


Finally, run this command to attempt to mount the VMFS datastore:

# vmkfstools -V
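vmkfstools -V itself prints nothing; it just rescans and refreshes the VMFS volumes. To confirm that the datastore is mounted again, you can list the host's file systems (output abridged; the volume name and UUID are placeholders):

# esxcli storage filesystem list
Mount Point                          Volume Name  UUID                 Mounted  Type
-----------------------------------  -----------  -------------------  -------  ------
/vmfs/volumes/xxxxxxxx-xxxxxxxx      datastore1   xxxxxxxx-xxxxxxxx    true     VMFS-5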


If the script output was unreadable, or none of the above works, please log a case with VMware Support for assistance.

Please comment if you have any further queries.


Happy Sharing … :)