
Assigning a Static MAC Address to Another VM after Migration (P2V)



The static MAC address is not accepted and is reported as conflicting, and as a result the virtual machine does not power on in the destination vCenter Server.


Cause:

This issue occurs when the virtual machine has been configured with a static MAC address in the 00:50:56:xx:xx:xx range. VMware vCenter Server 5.1 and later detects this as a protected range and refuses to power on the virtual machine.

In vSphere 5.1 and later, new policies have been implemented: statically assigned MAC addresses can only be in the range 00:50:56:[00-3F]:XX:XX or use other non-VMware OUI addresses.

Prefix- and range-based MAC address allocation is supported only in vCenter Server 5.1 or later.

This implies that if you add pre-5.1 hosts to vCenter Server 5.1 and use anything other than VMware OUI prefix- or range-based MAC address allocation, virtual machines assigned MAC addresses that are not VMware OUI prefixed fail to power on on those pre-5.1 hosts.
Now, from what we’ve seen, the restriction is a little more detailed than what is described above. Static MAC addresses in vSphere 5.1 must be in the range 00:50:56:[00-3F]:XX:XX. If the MAC address is prefixed with 00:50:56 but is outside of the [00-3F]:XX:XX range, it is still considered invalid and the VM will fail to power on.
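
To illustrate the restriction, the sketch below checks whether a given static MAC address falls inside the allowed 00:50:56:[00-3F]:XX:XX range. This is only an example bash check, not a VMware tool, and the MAC value used is an assumption for illustration.

# Example check (bash): is this static MAC in the range vSphere 5.1+ accepts?
MAC="00:50:56:9a:12:34"                  # example value, replace with your VM's MAC
prefix=$(echo "$MAC" | cut -d: -f1-3)    # first three octets (OUI)
fourth=$(echo "$MAC" | cut -d: -f4)      # fourth octet, must be 00-3F
if [ "$prefix" = "00:50:56" ] && [ $((16#$fourth)) -le $((16#3F)) ]; then
    echo "$MAC is a valid static MAC for vSphere 5.1 and later"
else
    echo "$MAC is outside the allowed static range and the VM will refuse to power on"
fi

Running this with the example value above reports the address as invalid, since 9A falls outside the 00-3F fourth-octet range.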


Resolution:

We made a small modification to the virtual machine's VMX file. With the change below, the VM no longer validates the static MAC address that was carried over from the source network, so the address is not treated as a conflict with any other VM and the virtual machine can power on.

We added the following line to the ethernet section of the VMX file, then unregistered and re-registered the VM on the ESXi host (a sketch of these steps follows the config line below):

ethernet0.checkMACAddress = "false"
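
For reference, below is a minimal sketch of the unregister/re-register steps run from the ESXi shell (Tech Support Mode or SSH). The datastore and VM names are placeholders and should be replaced with your own paths:

# List registered VMs and note the target VM's ID
vim-cmd vmsvc/getallvms

# Unregister the VM (replace <vmid> with the ID from the previous command)
vim-cmd vmsvc/unregister <vmid>

# Append the setting to the VMX file (path is a placeholder)
echo 'ethernet0.checkMACAddress = "false"' >> /vmfs/volumes/<datastore>/<vmname>/<vmname>.vmx

# Register the VM again so that the updated VMX file is read
vim-cmd solo/registervm /vmfs/volumes/<datastore>/<vmname>/<vmname>.vmx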

After this change, the issue was resolved and the VM powered on gracefully.

Reference: VMware KB article 2007042
