
Posts

Managing a VMware vCenter Server Running on a VM

Just wanted to share a couple of pointers that came up during a vSphere design review for a customer. During the discussions there were arguments that tracking the vCenter virtual machine in a big environment, and getting onto it for troubleshooting when the vCenter Server service or the VM is down, can be time consuming. For that reason, some organizations prefer a physical vCenter so they have more control and a single point to look at and troubleshoot when issues arise. I would say this has more to do with the comfort and mindset of the admin: the application managing the virtual environment is itself not virtual, and is isolated from the virtual infrastructure. These points are not invalid, since no one wants to hunt for their vCenter VM during a vCenter outage. If you have not planned the initial placement of the vCenter VM, you might end up logging on to each ESXi server directly via the vSphere Client…
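If you do land in that situation, scripting the search beats clicking through hosts one at a time. Below is a minimal sketch using pyvmomi that connects to each ESXi host directly (bypassing the unavailable vCenter) and prints any registered VM whose name contains "vcenter"; the host names, user, and password are placeholder assumptions for illustration.

# Minimal sketch (pyvmomi): locate the vCenter VM by querying ESXi hosts
# directly. Host names and credentials below are placeholder assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

HOSTS = ["esxi01.example.com", "esxi02.example.com"]  # your ESXi hosts
ctx = ssl._create_unverified_context()  # lab use; validate certificates in production

for host in HOSTS:
    si = SmartConnect(host=host, user="root", pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if "vcenter" in vm.name.lower():
                print(f"{host}: {vm.name} ({vm.runtime.powerState})")
    finally:
        Disconnect(si)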

Uninstalling VMware Tools Manually from a VM

Ever had the problem where you either have a corrupt copy of VMware Tools that can't be updated, or you perform an update and the install fails part way through, leaving remnants behind? These remnants then stop you from reinstalling VMware Tools and you are presented with error messages. Even an uninstall doesn't always work, so unfortunately you'll have to resort to manually removing the items from the registry and file system that are stopping a new install from taking place. The process to resolve this is simple, and the following has been taken from VMware's Knowledge Base article here. I found that performing these steps resolved the problem for me. Open the Windows Registry Editor: click Start > Run, type regedit and press Enter. Browse to HKLM\Software\Microsoft\Windows\CurrentVersion\Uninstall. Search for the branch containing a value named DisplayName with the data VMware Tools, and delete the branch associated with that entry.
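If you would rather script the registry portion than click through regedit, the following is a minimal Python sketch of the same steps using the standard-library winreg module. It deletes only the uninstall branch whose DisplayName value reads "VMware Tools"; back up the registry first and run it from an elevated (Administrator) prompt.

# Minimal Python sketch of the KB registry steps, using stdlib winreg.
# Deletes only the uninstall branch whose DisplayName is "VMware Tools".
import winreg

UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

def delete_key_tree(root, subkey):
    # DeleteKey cannot remove keys that have children, so recurse first.
    with winreg.OpenKey(root, subkey) as key:
        while True:
            try:
                child = winreg.EnumKey(key, 0)
            except OSError:
                break  # no subkeys left
            delete_key_tree(root, subkey + "\\" + child)
    winreg.DeleteKey(root, subkey)

def find_vmware_tools_branch():
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as key:
        for i in range(winreg.QueryInfoKey(key)[0]):  # number of subkeys
            name = winreg.EnumKey(key, i)
            try:
                with winreg.OpenKey(key, name) as sub:
                    value, _ = winreg.QueryValueEx(sub, "DisplayName")
            except OSError:
                continue  # branch has no DisplayName value
            if value == "VMware Tools":
                return UNINSTALL + "\\" + name
    return None

branch = find_vmware_tools_branch()
if branch:
    delete_key_tree(winreg.HKEY_LOCAL_MACHINE, branch)
    print("Deleted HKLM\\" + branch)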

Assigning a Static MAC Address to a VM After Migration (P2V)

The MAC address is not accepted and conflicts, with the result that the VM does not power on at the destination vCenter. Cause: This issue occurs when the virtual machine has been configured with a static MAC address in the 00:50:56:xx:xx:xx range. VMware vCenter Server 5.1 and later detects this as a protected range and refuses to power on the virtual machine. In vSphere 5.1 and later, new policies have been implemented where statically assigned MAC addresses can only be in the range 00:50:56:[00-3F]:XX:XX or use other non-VMware OUI addresses. Prefix- and range-based MAC address allocation is supported only in vCenter Server 5.1 or later. This implies that if you add pre-5.1 hosts to vCenter Server 5.1 and use anything other than VMware OUI prefix- or range-based MAC address allocation, virtual machines assigned MAC addresses that are not VMware OUI prefixed fail to power on on their pre-5.1 hosts. Now, from what we've seen, the restriction is a little more detailed…
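If you want to move the adapter onto an allowed address without editing the .vmx file by hand, a pyvmomi sketch along these lines can reconfigure the NIC with a static MAC inside the permitted 00:50:56:[00-3F]:XX:XX range. The vCenter address, credentials, VM name, and MAC value are placeholders for illustration.

# Minimal sketch (pyvmomi): set a static MAC inside the allowed
# 00:50:56:[00-3F]:XX:XX range. Connection details and names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "migrated-vm")  # placeholder name

    # First virtual NIC on the VM
    nic = next(d for d in vm.config.hardware.device
               if isinstance(d, vim.vm.device.VirtualEthernetCard))
    nic.addressType = "manual"            # static assignment
    nic.macAddress = "00:50:56:3f:12:34"  # inside the permitted static range

    spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)])
    vm.ReconfigVM_Task(spec=spec)
finally:
    Disconnect(si)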

Raw Device Mapping (RDM)

A Raw Device Mapping (RDM) can be used to present a LUN from a SAN directly to a virtual machine, rather than creating a virtual disk (VMDK) on a LUN that is generally shared with other VMs and virtual disks. The reasons for doing this should purely be functional and management reasons, NOT performance. There is a misunderstanding that RDMs offer greater performance compared to VMDKs on a VMFS datastore. I've seen lots of vSphere environments that have gone overkill on RDMs for SQL servers and the like for "performance reasons", and it's difficult to manage! If you're looking for improved storage performance, look into the VMware Paravirtual SCSI (PVSCSI) adaptor. The main reasons for using an RDM should be as follows: to utilize native SAN tools and commands, or if using Microsoft Cluster Services (MSCS), Failover Clusters or another clustering solution. There are two RDM modes to be aware of: virtual compatibi…
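For illustration, here is a minimal pyvmomi sketch of attaching a LUN to a VM as an RDM in either compatibility mode. The LUN device path, SCSI unit number, and sizing are placeholder assumptions; in practice you would take the naa path from your storage tooling and pick a free slot on the VM's SCSI controller.

# Minimal sketch (pyvmomi): attach a LUN to a VM as an RDM.
# lun_path and the unit number are placeholder assumptions.
from pyVmomi import vim

def add_rdm(vm, lun_path, physical_mode=False):
    # Backing that maps the raw LUN, e.g. "/vmfs/devices/disks/naa.<id>"
    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
        deviceName=lun_path,
        compatibilityMode="physicalMode" if physical_mode else "virtualMode",
        diskMode="persistent",  # virtual mode supports snapshots; physical does not
        fileName="")            # mapping file is created alongside the VM's files

    # Reuse the VM's existing SCSI controller
    controller = next(d for d in vm.config.hardware.device
                      if isinstance(d, vim.vm.device.VirtualSCSIController))

    disk = vim.vm.device.VirtualDisk(
        backing=backing,
        controllerKey=controller.key,
        unitNumber=1,    # assumed free slot; unit 7 is reserved for the controller
        capacityInKB=0)  # sized by the raw LUN; set explicitly if your build requires it

    spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)])
    return vm.ReconfigVM_Task(spec=spec)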