
Posts

Project Pacific VMware

Project Pacific Project Pacific is a re-architecture of vSphere with Kubernetes as its control plane. To a developer, Project Pacific looks like a Kubernetes cluster where they can use Kubernetes declarative syntax to manage cloud resources such as virtual machines, disks and networks. To the IT admin, Project Pacific looks like vSphere – but with the new ability to manage a whole application instead of always dealing with the individual VMs that make it up. Project Pacific will enable enterprises to accelerate development and operation of modern apps on VMware vSphere while continuing to take advantage of existing investments in technology, tools and skill sets. By leveraging Kubernetes as the control plane of vSphere, Project Pacific will enable developers and IT operators to build and manage apps composed of containers and/or virtual machines. This approach will allow enterprises to leverage a single platform to operate existing and modern apps side by side. The…

vMotion

vMotion VMware vMotion enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity. It is transparent to users. vMotion advantages: automatically optimize and allocate entire pools of resources for maximum hardware utilization and availability; perform hardware maintenance without any scheduled downtime; proactively migrate virtual machines away from failing or underperforming servers. A virtual machine and its host must meet resource and configuration requirements for the virtual machine files and disks to be migrated with vMotion in the absence of shared storage. vMotion in an environment without shared storage is subject to the following requirements and limitations: the hosts must be licensed for vMotion; the hosts must be running ESXi 5.1 or later; the hosts must meet the networking requirements for vMotion. See vSphere vMotion Net…

Enhanced vMotion

Enhanced vMotion Compatibility (EVC) vSphere Enhanced vMotion Compatibility is a feature through which workloads can be live migrated from one ESXi host to another ESXi host running a different CPU generation from the same CPU vendor. (The related "Enhanced vMotion" capability introduced in vSphere 5.1 combines vMotion and Storage vMotion to migrate VMs without shared storage.) EVC can be enabled at the vSphere ESXi cluster level and on individual VMs. VMware EVC mode works by masking CPU features that newer processor generations of the same vendor expose, presenting a homogeneous processor feature set to all the VMs in a cluster. The benefit of EVC is that you can add ESXi hosts with the latest processors to an existing cluster without incurring any downtime. The VMware Compatibility Guide is the best way to determine which EVC modes are compatible with the processors used in your cluster. Figure 1 below demonstrates how to determine which EVC mode to use given three types of Intel processors. https://www.vmware.com/resources/compatibility/s…
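As a rough illustration of the masking idea (not VMware's actual implementation), the EVC baseline can be thought of as the intersection of the CPU feature sets of all hosts in the cluster. The host names and feature lists in this sketch are purely hypothetical:

```python
# Toy illustration of the EVC masking concept: the cluster baseline exposes
# only the CPU features common to every host, so newer features on newer
# hosts are hidden from the VMs. Feature names below are illustrative only.

host_features = {
    "esxi-01 (older generation)": {"sse4.2", "aes", "avx"},
    "esxi-02 (newer generation)": {"sse4.2", "aes", "avx", "avx2"},
    "esxi-03 (newest generation)": {"sse4.2", "aes", "avx", "avx2", "avx512f"},
}

# The EVC "baseline" is effectively the common subset of features.
baseline = set.intersection(*host_features.values())
print("Features presented to all VMs:", sorted(baseline))
# -> ['aes', 'avx', 'sse4.2']  (avx2/avx512f stay masked until every host supports them)
```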

Rolling Updates and Rollbacks in Kubernetes

Rolling Updates and Rollbacks in K8s In our environments everyone has several applications deployed and running successfully. Each application has a version, and from time to time the application vendor releases a new version that contains new features and fixes for earlier bugs. It then becomes a necessary task to update our applications to leverage the new features. So how do we build a strategy to upgrade our applications in a production environment? It is quite difficult to update all application instances at once, as that would hamper the stability of the environment. Kubernetes has a default deployment strategy called rolling update: instead of destroying all instances at once, we bring down instances of the older version and bring up the new version of the application one by one. By doing this the application never goes down and the upgrade is seamless. The upgrade strategy needs to be specified in the Deployment. If there is no suc…
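A minimal sketch of what this excerpt describes, using the official Kubernetes Python client; the deployment name, labels and images are placeholders, and a reachable cluster with a valid kubeconfig is assumed:

```python
# Sketch: create a Deployment with an explicit RollingUpdate strategy, then
# trigger a rolling update by patching the container image.
from kubernetes import client, config

config.load_kube_config()        # assumes a valid kubeconfig
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
        # RollingUpdate replaces pods gradually instead of all at once.
        strategy=client.V1DeploymentStrategy(
            type="RollingUpdate",
            rolling_update=client.V1RollingUpdateDeployment(
                max_unavailable=1,  # at most one old pod down at a time
                max_surge=1,        # at most one extra new pod during the update
            ),
        ),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Changing the pod template (for example the image) starts a rolling update;
# `kubectl rollout undo deployment/demo-app` would roll it back afterwards.
apps.patch_namespaced_deployment(
    name="demo-app",
    namespace="default",
    body={"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "nginx:1.26"}
    ]}}}},
)
```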

Monitoring & Logging in Kubernetes

Monitoring cluster components of Kubernetes (K8s) There are various types of monitoring we can perform at the cluster, node and pod level. At the cluster level we can monitor things like the number of nodes running, how many are healthy, performance status, network usage, etc. At the pod level we can monitor disk, CPU and memory utilisation, and performance metrics for each pod's resources. For basic monitoring of a Kubernetes cluster we can use the Metrics Server; there is one Metrics Server per cluster. It retrieves information about nodes and pods, aggregates it and stores it in memory. The Metrics Server is an in-memory solution: the data it fetches from nodes and pods lives in memory and is not stored on disk. Because the Metrics Server is in-memory, it is not possible to retrieve historical data about Kubernetes resources. To get historical data it is necessary to use advanced to…
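As a hedged sketch of reading the live (non-historical) data described above, the Kubernetes Python client can query the metrics.k8s.io API served by the Metrics Server. This assumes metrics-server is already installed in the cluster and a kubeconfig is available; the "default" namespace is just an example:

```python
# Sketch: read current node and pod metrics exposed by metrics-server.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

# Node-level CPU/memory usage (point-in-time only; metrics-server keeps no history).
node_metrics = custom.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "nodes")
for item in node_metrics["items"]:
    print(item["metadata"]["name"], item["usage"]["cpu"], item["usage"]["memory"])

# Pod-level usage in one namespace.
pod_metrics = custom.list_namespaced_custom_object(
    "metrics.k8s.io", "v1beta1", "default", "pods"
)
for item in pod_metrics["items"]:
    for c in item["containers"]:
        print(item["metadata"]["name"], c["name"], c["usage"]["cpu"], c["usage"]["memory"])
```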

What Is Kubernetes.....

What Is Kubernetes..... Running a container on a laptop is relatively simple. But connecting containers across multiple hosts, scaling them, deploying applications without downtime, and service discovery, among other aspects, can be difficult. Kubernetes addresses those challenges from the start with a set of primitives and a powerful open and extensible API. The ability to add new objects and controllers allows easy customization for various production needs. According to the kubernetes.io website, Kubernetes is "an open-source system for automating deployment, scaling, and management of containerized applications". Kubernetes builds on 15 years of experience at Google with a project called Borg. Kubernetes is inspired by Borg, the internal system used by Google to manage its applications (e.g. Gmail, Apps, GCE). Methodology of Kubernetes Deploying c…

NSX-T Data Center 2.4 Management and Control Plane agents

As illustrated in the previous article, the NSX-T DC 2.4 management plane and central control plane are now converged into one NSX Manager node. MPA (Management Plane Agent): this agent is located on each transport node and communicates with the NSX Manager. NETCPA: it provides communication between the central control plane and the hypervisor. The management plane and the central control plane (CCP) run on the same virtual appliance, but they perform different functions; their technical aspects are covered below. The NSX cluster can scale to a maximum of 3 NSX Manager nodes running the management plane and CCP. Communication process: the nsx-mpa agent on the transport node communicates with the NSX Manager over a RabbitMQ channel on port 5671. The CCP communicates with the transport node through nsx-proxy on port 1235. The task of the NSX Manager is to push the configuration to the CCP, and the CCP configures the datapla…
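A small sketch that only checks TCP reachability of the two channels mentioned above (the RabbitMQ channel on 5671 and the nsx-proxy/CCP channel on 1235); the hostname is a placeholder and this is not an NSX tool, just a generic connectivity probe:

```python
# Sketch: verify TCP reachability of the management-plane (5671) and
# control-plane (1235) channels described above. Hostname is a placeholder.
import socket

CHANNELS = [
    ("nsx-manager.lab.local", 5671),  # MPA <-> NSX Manager over RabbitMQ
    ("nsx-manager.lab.local", 1235),  # nsx-proxy <-> Central Control Plane
]

for host, port in CHANNELS:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} reachable")
    except OSError as exc:
        print(f"{host}:{port} NOT reachable ({exc})")
```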

NSX-T Control Plane Components

NSX-T Control Plane Components In NSX-T Data Center the control plane is split into 2 components: the Central Control Plane (CCP) and the Local Control Plane (LCP). Let's discuss the Central Control Plane first. Central Control Plane (CCP): the CCP computes and disseminates the ephemeral runtime state based on the configuration from the management plane and the topology reported by the data plane elements. Local Control Plane (LCP): it runs at the compute endpoint, i.e. on the transport node (ESXi, KVM, bare metal). It computes the local ephemeral runtime state for the endpoint based on updates from the CCP and local data plane information. The LCP pushes stateless configuration to the forwarding engines in the data plane and reports information back to the CCP. This process eases the task of the CCP and enables the platform to scale to thousands of different types of endpoints (hypervisors, containers, hosts, bare metal or public cloud).

Architecture layout of NSX-T Data Center

Architecture layout of NSX-T Data Center As we all know, NSX is one of VMware's flagship products for networking and security. It runs on any device, any cloud and any application. At present you can run it and connect it on most public clouds, such as Alibaba, IBM Cloud, AWS or Azure. Let's talk about the all-rounder of NSX, NSX Transformers (NSX-T), which can communicate with various hypervisors and platforms such as ESXi, KVM, containers, OpenStack and many more. To continue the conversation about NSX-T Data Center, let's discuss its major elements. There are 3 main elements of NSX-T Data Center: 1) Management Plane 2) Control Plane 3) Data Plane. In NSX-T Data Center version 2.4 the management and control planes are converged, meaning they are now available on a single VM, or you can say in one OVF. 1) Management Plane: it is designed with advanced clustering technology, which allows the platform to process l…

CDO Mode in NSX Controller

CDO (Controller Disconnected Operation) Mode in NSX Controller. CDO mode ensures data plane connectivity in a multisite environment when the primary site loses connectivity. You can enable CDO mode on the secondary site to avoid any temporary data plane connectivity issues. When the primary site is down or not reachable, the CDO logical switch is used only for control plane purposes, and therefore it is not visible under the Logical Switches tab.

About NSX VTEP Reports

NSX VTEP Reports The NSX Controller provides VXLAN directory services. There are basically 3 types of tables involved: 1) MAC table 2) ARP table 3) VTEP table. MAC table: the MAC table includes the VNI, the MAC address and the VTEP ID that reported it. If an unknown unicast frame is received by a VTEP, the VTEP sends a MAC table request to the NSX Controller for the destination MAC address. If the NSX Controller has the MAC address in the MAC table, it replies to the VTEP with information on where to forward the frame. If the NSX Controller does not have the MAC address in the MAC table, the VTEP floods the frame to the other VTEPs. ARP table: the ARP table is used to suppress broadcast traffic. IP reports populate the ARP table: the VTEPs send a copy of each MAC address and IP mapping that they have, and this report is called the IP report. The NSX Controller creates an ARP ta…
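Purely as a toy model of the lookup-or-flood decision described above (not an NSX API), the controller's MAC table can be pictured as a dictionary keyed by (VNI, MAC address); all values below are illustrative:

```python
# Toy model of the MAC-table lookup described above: a VTEP asks the controller
# for an unknown destination MAC; if the controller has no entry, the VTEP
# floods the frame to the other VTEPs. VNIs, MACs and VTEP IDs are made up.

mac_table = {
    # (VNI, MAC address) -> reporting VTEP
    (5001, "00:50:56:aa:bb:01"): "vtep-10.10.10.1",
    (5001, "00:50:56:aa:bb:02"): "vtep-10.10.10.2",
}

def resolve_destination(vni, dst_mac, all_vteps):
    """Return the single VTEP to forward to, or the flood list if unknown."""
    vtep = mac_table.get((vni, dst_mac))
    if vtep is not None:
        return [vtep]          # controller answered: unicast toward one VTEP
    return list(all_vteps)     # unknown unicast: flood to the other VTEPs

print(resolve_destination(5001, "00:50:56:aa:bb:02", ["vtep-10.10.10.1", "vtep-10.10.10.2"]))
print(resolve_destination(5001, "00:50:56:ff:ff:ff", ["vtep-10.10.10.1", "vtep-10.10.10.2"]))
```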