
Integrated Deployment of Nutanix AOS and AHV


The Nutanix platform employs a unified installation approach where AOS and AHV are deployed simultaneously through a single Foundation installation process. This integrated architecture eliminates the need for separate installation procedures, as both components are designed to function as interdependent elements of a cohesive hyperconverged infrastructure system.



Nutanix AOS and AHV integrated architecture and deployment process

The Controller Virtual Machine (CVM) is the primary mechanism through which AOS operates within the AHV environment. In AHV deployments, the CVM runs as a specialized virtual machine with direct hardware access through PCI passthrough, allowing AOS to maintain direct control over the storage devices while operating within the virtualized environment provided by AHV.
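As a quick illustration, the sketch below lists the virtual machines defined on an AHV host and picks out the CVM by its usual NTNX-*-CVM naming convention. It assumes SSH access to the host and relies on libvirt's virsh, which AHV uses under the hood; the host address is a placeholder.

```python
import subprocess

def find_cvm_names(host: str = "root@ahv-host") -> list[str]:
    """Return VM names on an AHV host that look like a Controller VM.

    Assumes SSH key-based access to the AHV host and that the CVM follows
    the common NTNX-<block-serial>-<position>-CVM naming scheme.
    """
    # 'virsh list --all --name' prints one VM name per line (AHV is libvirt/KVM based).
    result = subprocess.run(
        ["ssh", host, "virsh", "list", "--all", "--name"],
        capture_output=True, text=True, check=True,
    )
    names = [line.strip() for line in result.stdout.splitlines() if line.strip()]
    return [n for n in names if n.startswith("NTNX-") and n.endswith("-CVM")]

if __name__ == "__main__":
    print(find_cvm_names())
```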


Detailed Installation Flow Process

Nutanix AOS and AHV Installation Flow Diagram - Complete Deployment Process

The deployment process consists of four distinct stages that demonstrate the integrated nature of the Nutanix platform:

Stage 1: Foundation Tool Initialization

Foundation is Nutanix's official deployment tool; it can configure pre-imaged nodes or image nodes with both the hypervisor and AOS together. The Foundation process begins with node discovery and hardware validation, followed by IP address assignment for the IPMI, hypervisor, and CVM interfaces. This stage lays the groundwork for the unified installation that deploys both AHV and AOS simultaneously.
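To make the addressing collected in this stage concrete, the sketch below models a three-node layout with IPMI, hypervisor, and CVM addresses and checks for typos and duplicates. The field names and addresses are illustrative only and do not mirror the actual Foundation API schema.

```python
import ipaddress

# Hypothetical node layout for a three-node cluster. The field names are
# illustrative, not the real Foundation schema; they simply capture the three
# addresses Foundation asks for per node (IPMI, hypervisor, CVM).
NODES = [
    {"ipmi_ip": "10.0.0.11", "hypervisor_ip": "10.0.1.11", "cvm_ip": "10.0.2.11"},
    {"ipmi_ip": "10.0.0.12", "hypervisor_ip": "10.0.1.12", "cvm_ip": "10.0.2.12"},
    {"ipmi_ip": "10.0.0.13", "hypervisor_ip": "10.0.1.13", "cvm_ip": "10.0.2.13"},
]

def validate_layout(nodes):
    """Check that every address parses and that no address is reused."""
    seen = set()
    for node in nodes:
        for role, ip in node.items():
            addr = ipaddress.ip_address(ip)   # raises ValueError on a malformed address
            if addr in seen:
                raise ValueError(f"{ip} assigned more than once ({role})")
            seen.add(addr)
    return True

if __name__ == "__main__":
    validate_layout(NODES)
    print(f"Layout OK: {len(NODES)} nodes, {len(NODES) * 3} unique addresses")
```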

Stage 2: Installation Phase

During this critical phase, Foundation installs AHV as the bare-metal hypervisor while automatically provisioning the Controller Virtual Machine, which runs AOS. The installation configures AHV as the hypervisor layer and simultaneously deploys the CVM with full AOS functionality. On Nutanix nodes running AHV, the SCSI controller that manages the SSD and HDD devices is passed directly to the CVM through PCI passthrough, enabling AOS to handle all storage management functions.
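One way to observe this passthrough from the AHV side is to check which kernel driver owns the storage controller: a device reserved for the CVM is typically bound to a passthrough stub rather than a native HBA driver. The sketch below parses lspci -k over SSH; the host address and the listed driver names are assumptions and may differ by platform and AOS/AHV release.

```python
import subprocess

# Drivers commonly seen on KVM platforms when a device is reserved for
# passthrough rather than driven by the host kernel. The exact driver on a
# given AHV release may differ.
PASSTHROUGH_DRIVERS = {"vfio-pci", "pci-stub"}

def scsi_controller_drivers(host: str = "root@ahv-host") -> dict[str, str]:
    """Map each SCSI/SAS/RAID controller on the AHV host to its kernel driver.

    Assumes SSH access to the host; parses 'lspci -k' output, where each
    device block lists 'Kernel driver in use: <driver>'.
    """
    out = subprocess.run(
        ["ssh", host, "lspci", "-k"],
        capture_output=True, text=True, check=True,
    ).stdout

    drivers, current = {}, None
    for line in out.splitlines():
        if not line.startswith(("\t", " ")):   # new device line, e.g. "03:00.0 Serial Attached SCSI controller: ..."
            current = line if ("SCSI" in line or "SAS" in line or "RAID" in line) else None
        elif current and "Kernel driver in use:" in line:
            drivers[current.split(" ", 1)[0]] = line.split(":", 1)[1].strip()
    return drivers

if __name__ == "__main__":
    for slot, driver in scsi_controller_drivers().items():
        status = "passed through" if driver in PASSTHROUGH_DRIVERS else "host-owned"
        print(f"{slot}: {driver} ({status})")
```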

Stage 3: Bootstrap Process

When AHV boots, it automatically starts the CVM, which then initializes AOS and begins managing the storage pool and cluster services. This automatic startup sequence demonstrates the tight integration between the hypervisor and operating system layers, ensuring that storage services are available immediately upon system initialization. The bootstrap process includes cluster formation and storage pool creation, establishing the distributed storage fabric that characterizes Nutanix architecture.
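A simple way to confirm the bootstrap completed is to run cluster status on a CVM and check that no service is reported down. The sketch below does this over SSH; the CVM address and credentials are placeholders, and the output check assumes the human-readable format that flags unhealthy services with the word DOWN, which you should verify against your AOS version.

```python
import subprocess

def cluster_services_up(cvm: str = "nutanix@10.0.2.11") -> bool:
    """Return True if 'cluster status' on the CVM reports no DOWN services.

    Assumes SSH access to the CVM as the 'nutanix' user. Non-interactive SSH
    sessions may need the login environment loaded for the 'cluster' command
    to be on PATH.
    """
    result = subprocess.run(
        ["ssh", cvm, "cluster status"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(result.stderr.strip())
        return False
    return "DOWN" not in result.stdout

if __name__ == "__main__":
    print("All services up" if cluster_services_up() else "One or more services down")
```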

Stage 4: Integration and Validation

The final stage establishes communication between AHV and the CVM, activates the Prism management interface, and validates all storage services. This integration phase ensures that both hypervisor and storage capabilities are accessible through the unified Prism management interface, eliminating the complexity typically associated with managing separate hypervisor and storage management systems.
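Once Prism is active, a quick REST call verifies that the management plane is reachable and reporting cluster details. The sketch below assumes the Prism v2.0 endpoint /PrismGateway/services/rest/v2.0/cluster on port 9440 and placeholder credentials; confirm the exact path and response fields against the API reference for your AOS release.

```python
import requests

PRISM_VIP = "10.0.3.10"               # placeholder Prism / cluster virtual IP
USER, PASSWORD = "admin", "changeme"  # placeholder credentials

def prism_cluster_info() -> dict:
    """Fetch basic cluster details from Prism.

    Assumes the Prism v2.0 REST endpoint on port 9440. verify=False is only
    for a freshly imaged cluster still using its self-signed certificate.
    """
    resp = requests.get(
        f"https://{PRISM_VIP}:9440/PrismGateway/services/rest/v2.0/cluster",
        auth=(USER, PASSWORD),
        verify=False,
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    info = prism_cluster_info()
    print(info.get("name"), info.get("version"))
```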

Key Integration Points

The flow diagram illustrates several critical integration points that demonstrate why AOS does not require separate installation on AHV:

  • Unified Management: AHV is native to the Nutanix stack of advanced technologies and requires no additional installation or integration, featuring full-stack management from a single, intuitive UI. The integrated architecture consolidates hypervisor and storage management functions into a single management plane.
  • Storage Path Optimization: Unlike conventional hypervisors that manage storage through traditional file systems, AHV does not use a traditional storage stack; all disks are passed to VMs as raw SCSI block devices. This approach keeps the I/O path lightweight while allowing AOS to manage the distributed storage fabric through the CVM.
  • Version Compatibility: The relationship between AOS and AHV versions follows a structured compatibility model where each AOS release includes support for specific AHV versions. The upgrade process through Lifecycle Manager demonstrates the platform's integrated nature by handling both AOS and AHV upgrades together (a minimal version-check sketch follows this list).
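To make the compatibility point concrete, the sketch below checks an AOS/AHV pair against a small lookup table. The version strings are placeholders rather than a real compatibility matrix; Lifecycle Manager and the official Nutanix compatibility matrix remain the source of truth.

```python
# Hypothetical compatibility map: the AOS releases and AHV builds below are
# placeholders, not real version pairs; consult the Nutanix compatibility
# matrix / LCM for the authoritative pairing.
SUPPORTED_AHV = {
    "6.5.x": {"20220304.xxx"},
    "6.8.x": {"20230302.xxx"},
}

def is_supported(aos_release: str, ahv_build: str) -> bool:
    """Return True if the AHV build is listed for the given AOS release."""
    return ahv_build in SUPPORTED_AHV.get(aos_release, set())

if __name__ == "__main__":
    print(is_supported("6.5.x", "20220304.xxx"))   # True with the placeholder map
```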

Conclusion

The flow diagram clearly demonstrates that AOS and AHV function as a cohesive platform in which the storage operating system and hypervisor are deployed together as complementary components of a single hyperconverged infrastructure solution. The Foundation installation process ensures both components are properly configured to work together from initial setup, eliminating compatibility issues that might arise from separate installations and providing the basis for advanced features that require close coordination between the hypervisor and storage layers.


