
Rolling Updates and Rollbacks in Kubernetes



In a typical environment we have several applications deployed and running. Each application carries a version, and from time to time the vendor releases a new version that brings new features and fixes for earlier bugs.

Updating our applications to leverage those new features then becomes a necessary task.

So how do we plan the strategy for upgrading our applications in a production environment? Updating every instance at once is risky, as it would hamper the stability of the environment.

Kubernetes has a default deployment strategy called RollingUpdate, in which we do not destroy all the application instances at once. Instead, we take down Pods running the older version and bring up Pods running the new version one by one. This way the application never goes down and the upgrade is seamless.


The upgrade strategy can be specified in the deployment definition. If no strategy is specified, the system assumes the default, RollingUpdate.
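As a minimal sketch of where the strategy lives in a deployment file (the name, labels, replica count and image are illustrative; the field names follow the standard apps/v1 Deployment spec):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate          # default; the alternative is Recreate
    rollingUpdate:
      maxUnavailable: 1          # at most 1 Pod below the desired count during the update
      maxSurge: 1                # at most 1 extra Pod above the desired count
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.1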

To view the rollout status of your application deployment:
> kubectl rollout status deployment/myapp-deployment
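The command waits until the rollout completes; on success the output looks roughly like this (Pod counts are illustrative):

Waiting for deployment "myapp-deployment" rollout to finish: 2 out of 5 new replicas have been updated...
deployment "myapp-deployment" successfully rolled out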


To view the revisions and history of the deployment:
> kubectl rollout history deployment.apps/<deployment-name>
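Sample output (revisions are illustrative; CHANGE-CAUSE is filled in only when the kubernetes.io/change-cause annotation is set on the deployment, otherwise it shows <none>):

REVISION  CHANGE-CAUSE
1         <none>
2         kubectl set image deployment/myapp-deployment nginx=nginx:1.9.1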

Now, if we want to update our application, say to a newer image version, what is the process?

There are two ways to accomplish this.

The first is to update the image version under the spec section in the deployment file and then run the command below.
> kubectl apply -f <deployment-file>
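For example, the relevant fragment of the deployment file (container name is illustrative, matching the sketch above) would change like this:

spec:
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1   # bumped from nginx:1.7.1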

Another way is to use the set image command, which changes the image version of the application directly on the live object:
> kubectl set image deployment/myapp-deployment nginx=nginx:1.9.1

However, doing so does not change the image version in the deployment.yml file, so the live object and the file drift apart; be careful to reconcile the definition file before making changes to it in the future.
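One way to check what the live deployment is actually running (a sketch; the deployment and container names are the illustrative ones from above):

> kubectl get deployment myapp-deployment -o jsonpath='{.spec.template.spec.containers[0].image}'
nginx:1.9.1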

Now let's discuss the upgrade process under the hood.

When a new deployment is created with, say, 5 replicas of a Pod, it automatically creates a ReplicaSet (replica set 1) that runs those Pods. When we initiate the upgrade, the deployment creates a second ReplicaSet (replica set 2): one Pod in replica set 1 is taken down, a new Pod with the updated version is created in replica set 2, and so on until all Pods have been replaced.
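Mid-rollout you can see both ReplicaSets side by side (names, counts and hash suffixes are illustrative; Kubernetes generates the suffixes):

> kubectl get replicasets
NAME                          DESIRED   CURRENT   READY   AGE
myapp-deployment-7b5d8f9d4f   3         3         3       30m
myapp-deployment-6795f7b9c5   2         2         2       15s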


Rollback Strategy

Suppose that after upgrading the application we find something wrong, or the application is not behaving as it should in the new build.

Since the updated application is misbehaving, we need to undo the change and roll back to the previous version.

Kubernetes provides the option to roll back the deployment to the previous version. To do so we can use the "undo" command:

> kubectl rollout undo deployment/<deployment-name>

This command destroys the Pods and containers in the new ReplicaSet and brings back the old Pods and containers in the old ReplicaSet.
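If you need to go back further than the immediately previous version, undo also accepts a specific revision number taken from the rollout history (the --to-revision flag is standard kubectl; the revision shown is illustrative):

> kubectl rollout undo deployment/myapp-deployment --to-revision=1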

List of Commands.

To create a deployment:
> kubectl create -f deployment-definition.yml

To list the available deployments:
> kubectl get deployments

To update the deployment, either by applying the definition file or by setting the image directly:
> kubectl apply -f deployment-definition.yml
> kubectl set image deployment/myapp-deployment nginx=nginx:1.9.1

To view the rollout status of the deployment:
> kubectl rollout status deployment/myapp-deployment

To view the rollout history of the deployment:
> kubectl rollout history deployment/myapp-deployment

To undo the rollout of the deployment:
> kubectl rollout undo deployment/myapp-deployment



