
Virtual Volumes



Virtual Volumes (VVols) is one of the latest introductions in the vSphere 6.0 environment, and it significantly changes the storage architecture.

Traditionally in a vSphere environment, storage is presented to ESXi as LUNs, and the VMware administrator provisions virtual machines onto that piece of storage (the LUN). Every VM deployed there depends on that LUN: if the LUN runs into a problem, for example all paths go down or it loses connectivity between ESXi and the storage, then all the VMs on it are affected. VVols removes that dependency.


VVols introduce a model where the virtual machine is connected directly to the SAN (a raw area of storage), rather than to a LUN.

Architecture of VVol



With Virtual Volumes, storage changes altogether compared to the old vSphere architecture. VVols completely change the way storage is presented, managed, and consumed, and certainly for the better. Most storage vendors are on board, since their software needs to support VVols, and they have been champing at the bit for VVols to be released.

To support VVols, the storage array (SAN) must be VASA capable, because the VASA API (vSphere APIs for Storage Awareness) is used for communication between the storage and the vSphere environment.
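If you want to confirm this from an ESXi host, the VASA providers the host knows about can be listed from its command line. A minimal sketch (run from the ESXi shell; the provider name, URL, and status in the output depend entirely on your array):

# List the VASA providers registered with this ESXi host
esxcli storage vvol vasaprovider list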

VVols also change the concept of the LUN. On the storage side, the storage administrator creates a storage container, which is a logical abstraction onto which the virtual volumes are mapped. A single storage container can hold multiple virtual volumes, depending on the capability of the storage array.
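Once the container has been created on the array and the VASA provider is reachable, the container should be visible from the host. An illustrative check from the ESXi shell (the output fields vary by storage vendor):

# List the VVol storage containers visible to this host
esxcli storage vvol storagecontainer list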

ESXi has no direct I/O path to the storage container or the storage array. Protocol Endpoints (PEs), created by the storage administrator, are the access points from the host to the storage system, and all paths and path policies are administered through them. Protocol Endpoints work with both iSCSI and NFS, and they are intended to replace the concept of LUNs and mount points.
To the host, a PE looks like a LUN or a mount point: it can be discovered and mounted by multiple hosts in a cluster, which enables activities such as vMotion and DRS.
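The PEs a host has discovered can be inspected the same way. A quick sketch from the ESXi shell (which PEs appear depends on the array and on whether you use block or NFS access):

# List the protocol endpoints (block LUNs or NFS mount points) discovered by this host
esxcli storage vvol protocolendpoint list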

On the vSphere side, a virtual datastore is created on top of the storage container, because a datastore object is still required by many vSphere features, such as vMotion, Storage vMotion, and Storage DRS.
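After the VVol datastore has been created and mounted, it shows up on the host alongside VMFS and NFS datastores. An illustrative check from the ESXi shell:

# VVol datastores appear with type VVOL in the host's filesystem list
esxcli storage filesystem list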


Happy Sharing..




