Project Pacific


Project Pacific is a re-architecture of vSphere with Kubernetes as its control plane. To a developer, Project Pacific looks like a Kubernetes cluster where they can use Kubernetes declarative syntax to manage cloud resources like virtual machines, disks and networks. To the IT admin, Project Pacific looks like vSphere – but with the new ability to manage a whole application instead of always dealing with the individual VMs that make it up.
Project Pacific will enable enterprises to accelerate development and operation of modern apps on VMware vSphere while continuing to take advantage of existing investments in technology, tools, and skillsets. By leveraging Kubernetes as the control plane of vSphere, Project Pacific will enable developers and IT operators to build and manage apps composed of containers and/or virtual machines. This approach will allow enterprises to leverage a single platform to operate existing and modern apps side by side.
The introduction of Project Pacific anchors the announcement of VMware Tanzu, a portfolio of products and services that transform how the enterprise builds software on Kubernetes.


Kubernetes as a platform platform

The key insight we had at VMware was that Kubernetes could be much more than just a container platform: it could be the platform for ALL workloads. When Joe Beda, co-creator of Kubernetes, talks about Kubernetes, he describes it as a "platform platform" — that is, a platform for building new platforms. Yes, Kubernetes is a container orchestration platform, but at its core, Kubernetes is capable of orchestrating anything!
What if we used this "platform platform" aspect of Kubernetes to reinvent vSphere? What if, when developers wanted to create a virtual machine, a container, or a Kubernetes cluster, they could just write a Kubernetes YAML file and deploy it with kubectl, like they do with any other Kubernetes object?
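As a concrete sketch of that workflow, consider how an ordinary Pod is already declared today — the manifest below is standard Kubernetes, nothing Project Pacific-specific:

```yaml
# A standard Kubernetes Pod manifest. Under Project Pacific, the same
# declarative workflow extends to VMs and even whole Kubernetes clusters.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.21
```

Deployed with `kubectl apply -f pod.yaml`, the object is reconciled by the control plane. The idea behind Project Pacific is that a virtual machine or a cluster could be declared and applied in exactly the same way.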


Using Kubernetes as the vSphere API



A Kubernetes-native vSphere platform

Project Pacific transforms vSphere into a Kubernetes-native platform. We integrated a Kubernetes control plane directly into ESXi and vCenter, making it the control plane for ESXi and exposing capabilities like app-focused management through vCenter.





This is a pretty powerful concept: it brings the great Kubernetes developer experience to the rest of our datacenter. Developers can get the benefits of Kubernetes not just for their cloud native applications, but for ALL of their applications, and it becomes easy for them to deploy and manage modern applications that span multiple technology stacks.


Supervisor clusters

The supervisor is a special kind of Kubernetes cluster that uses ESXi hosts as its worker nodes instead of Linux. This is achieved by integrating a Kubelet (our implementation is called the Spherelet) directly into ESXi. The Spherelet doesn't run in a VM; it runs directly on ESXi.

The supervisor cluster is a Kubernetes cluster of ESXi instead of Linux.

ESXi Native Pods

Workloads deployed on the Supervisor, including Pods, each run in their own isolated VM on the hypervisor. To accomplish this, we have added a new container runtime to ESXi called the CRX. The CRX is like a virtual machine that includes a Linux kernel and a minimal container runtime inside the guest. But because this Linux kernel is closely coupled with the hypervisor, we're able to make a number of optimizations that effectively paravirtualize the container.
Despite the perception of virtualization as slow, ESXi can launch Native Pods in hundreds of milliseconds and supports over 1,000 Pods on a single ESXi host (the same limits as for VMs on ESXi). Are Pods in a VM slow? In our internal testing, ESXi Native Pods achieved 30% higher throughput on a standard Java benchmark than regular Pods running in a virtual machine, and 8% higher than Pods on bare-metal Linux.

Virtual Machines

The supervisor includes a Virtual Machine operator that allows Kubernetes users to manage VMs on the Supervisor. You can write deployment specifications in YAML that mix container and VM workloads in a single deployment, sharing the same compute, network, and storage resources.
The VM operator is just an integration with vSphere's existing virtual machine lifecycle service, which means that you can use all of the features of vSphere with Kubernetes-managed VM instances. Features like RLS settings, Storage Policy, and Compute Policy are supported.
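As an illustrative sketch, a Kubernetes-managed VM might be declared like this. The field names are an assumption modeled on the open-source VM Operator CRDs (`vmoperator.vmware.com`) and may differ from any given release:

```yaml
# Illustrative VirtualMachine manifest; the exact schema is an assumption
# based on the VM Operator CRDs, not a definitive Project Pacific API.
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: my-db-vm
  namespace: team-a
spec:
  imageName: ubuntu-20.04-server   # a Machine Image published by the VI admin
  className: small                 # a Machine Class defining the CPU/memory shape
  powerState: poweredOn
  storageClass: gold-policy
```

Applied with kubectl like any other object, the VM then shares the namespace's compute, network, and storage resources with the Pods deployed alongside it.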

In addition to VM management, the operator provides APIs for Machine Class and Machine Image management. To the VI admin, Machine Images are just Content Libraries.
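For illustration, a Machine Class might be defined as a small CRD object of its own. Again, the schema shown here is an assumption based on the VM Operator CRDs, not an exact API:

```yaml
# Illustrative VirtualMachineClass; the schema is an assumption.
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineClass
metadata:
  name: small
spec:
  hardware:
    cpus: 2
    memory: 4Gi
```

A developer deploying a VM would then reference this class by name, while the VI admin controls which classes and images are published into each namespace.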

