
Managing Kubernetes Deployments

In “Introduction to Kubernetes Pods,” we discussed how to create pods using the deployment controller. As you might remember, deployments ensure that the actual state of your application always matches the desired number of pod replicas (managed through a ReplicaSet) specified in the deployment spec. In addition to this basic functionality, the deployment controller also offers a variety of options for managing your deployments on the fly. In this tutorial, we discuss some of the most common ways to manage deployments: performing rolling updates, rollouts, and rollbacks, and scaling your applications. By the end of this tutorial, you’ll have everything you need to properly manage stateless apps in both test and production environments. Let’s start!

Prerequisites

To complete the examples in this tutorial, you’ll need:

  • a running Kubernetes cluster. See the Supergiant GitHub wiki for more information about deploying a Kubernetes cluster with Supergiant. Alternatively, you can install a single-node Kubernetes cluster on a local system using Minikube.
  • the kubectl command-line tool installed and configured to communicate with the cluster. See how to install kubectl here.

Step 1: Create the Deployment

In this example, we’re going to define a deployment that creates a ReplicaSet of three Apache HTTP server pods (the httpd container) pulled from the public Docker Hub repository. Correspondingly, three pod replicas are the initial desired state of the deployment. Let’s take a look at the deployment resource object:
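A minimal manifest along these lines matches the values used throughout this tutorial (the apache-server name, three replicas of the httpd:2-alpine image, and a 40% maxSurge); the app: apache-server label and the 40% maxUnavailable value are illustrative assumptions:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: apache-server
    labels:
      app: apache-server
  spec:
    replicas: 3                  # desired number of pod replicas
    selector:
      matchLabels:
        app: apache-server       # must match the pod template labels below
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 40%            # allow up to 140% of the desired pods during an update
        maxUnavailable: 40%      # allow at most 40% of the desired pods to be unavailable
    template:
      metadata:
        labels:
          app: apache-server
      spec:
        containers:
        - name: httpd
          image: httpd:2-alpine  # Apache HTTP server image from the public Docker Hub
          ports:
          - containerPort: 80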

Let’s discuss key fields of this spec:

.metadata.name — the name of the deployment. The name must consist of lowercase alphanumeric characters, ‘-’, or ‘.’, and must start and end with an alphanumeric character (e.g., ‘apache-server’).

.metadata.labels — the deployment’s labels.

.spec.replicas — the number of pod replicas for the deployment. The default value is 1.

.spec.selector — specifies a label selector for the pods targeted by the deployment. This field must match .spec.template.metadata.labels in the PodTemplateSpec of the deployment. Note: if you have multiple controllers with identical selectors, they might not behave correctly and conflicts might arise, so by all means avoid overlapping selectors in your controllers. Also, in the API version apps/v1, a deployment’s label selector is immutable after it is created. Even in earlier API versions, where changing the selector is still possible, doing so is not recommended.

.spec.strategy — the strategy used to replace old pods with new ones. Possible values for this field are ‘Recreate’ and ‘RollingUpdate’, the latter being the default.

  • Recreate: when the Recreate strategy is specified, all existing pods are terminated before new ones are created. This strategy may be acceptable for testing purposes, but it is not recommended for highly available applications that need to be running at all times.
  • RollingUpdate: rolling updates allow updates to take place with zero downtime by incrementally replacing existing pod instances with new ones. If the RollingUpdate strategy is selected, you can tune its behavior with the maxSurge and maxUnavailable parameters (see the discussion below).

.spec.strategy.rollingUpdate.maxUnavailable — specifies the maximum number of pods that can be unavailable during the update. You can set the value as an absolute number (e.g., 4) or as a percentage of the desired pods (e.g., 20%). The default value is 25%. For example, if you set maxUnavailable to 40%, the old ReplicaSet can be scaled down to 60% of the desired pods as soon as the rolling update begins. The controller can then start scaling up the new ReplicaSet, ensuring that throughout the process the total number of unavailable pods never exceeds 40% of the desired count.

.spec.strategy.rollingUpdate.maxSurge — specifies the maximum number of pods that can be created over the desired number. As with maxUnavailable, you can define maxSurge as an absolute number or as a percentage of the desired pods. The default value is 25%. For example, if you set maxSurge to 40%, the deployment controller can immediately scale the deployment up to 140% of the desired pods. As old pods are killed and new ones are created, the controller ensures that the total number of running pods never exceeds 140% of the desired count.

Note: the deployment spec supports a number of other fields, such as .spec.revisionHistoryLimit, which is discussed later in this tutorial.

Now that you understand the deployment spec, let’s save the deployment object in deployment.yaml and create the deployment by running the following command:
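For example:

  kubectl create -f deployment.yaml --record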

Please note that we are using the --record flag, which records the command with which the deployment was created and simplifies tracking the deployment’s history later on.

Let’s verify that the deployment was created:
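For instance:

  kubectl get deployments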

The output should show that 3 pod replicas are currently running, up to date, and available to users. Thus, the deployment is in the desired state, as we expected.

It might be useful to read a detailed description of the deployment by running kubectl describe deployment apache-server:

This description contains a synopsis of all parameters defined in the deployment spec, along with the deployment status and the actions performed by the deployment controller, such as scaling up our first ReplicaSet. As you can see in the last line of this output, the ReplicaSet created is named apache-server-558f6f49f6. A ReplicaSet’s name is formatted as [DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH-VALUE].

You can verify that the ReplicaSet was created by running:
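For example:

  kubectl get rs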

Pods in this ReplicaSet have their names formatted as [DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH-VALUE]-[POD-ID]. To see the pods running, let’s filter the kubectl get pods command by our pod label selector:
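Assuming the app: apache-server label from the manifest sketch above:

  kubectl get pods -l app=apache-server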

That’s it! Your deployment is ready to be managed. Let’s first try to scale it.

Step 2: Scaling the Deployment

Assuming that your application’s load has increased, you can easily scale up the deployment by running the following command:
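For example, to go from three to five replicas:

  kubectl scale deployment apache-server --replicas=5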

You can now verify that the deployment has 5 pod replicas by running the following command:
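One option is:

  kubectl get deployment apache-server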

Scaling down the deployment works the same way: we just specify fewer replicas than before. For example, to go back to three replicas again, just run:
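Something like:

  kubectl scale deployment apache-server --replicas=3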

Note: scaling operations are not saved in the deployment revision history, so you can’t return to a previous number of replicas by using rollbacks. Decoupling scaling from deployment updates is a deliberate design choice that allows manual scaling and auto-scaling to work independently of rollouts. Horizontal pod autoscaling is a big topic that will be discussed in future tutorials; if you need more details, please consult the official documentation.

Step 3: Updating the Deployment

As you remember from the spec, our deployment is defined with the rolling update strategy, which allows updating applications with zero downtime. Let’s test how this strategy works with our maxSurge  and maxUnavailable  parameters.

Open two terminal windows. In the first window, you are going to watch the Deployment’s pods as they are created and terminated:
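Again assuming the app: apache-server label, a watch looks like this:

  kubectl get pods -l app=apache-server --watch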

When you run this command, you’ll see the current pods in your deployment, and status changes will stream in as they happen.

In the second terminal, let’s update the Apache HTTP server’s container image (remember we initially defined the deployment with the httpd:2-alpine  image):
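Assuming the container in the pod template is named httpd, as in the manifest sketch:

  kubectl set image deployment/apache-server httpd=httpd:2.4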

This command updates the httpd container image to version 2.4. Now, let’s look at the first terminal to see what the deployment controller is doing:

As you can see, the deployment controller immediately instantiated two new pods, apache-server-5949cdc484-bhz4x and apache-server-5949cdc484-fltjn, in the new ReplicaSet. Afterward, it started terminating the pod named apache-server-558f6f49f6-587k4, scaling the old ReplicaSet down to 2 pods. Next, the controller scaled the new ReplicaSet up to 3 replicas and gradually scaled the old ReplicaSet down to 0.

You can find a more detailed description of this process by running kubectl describe deployment apache-server :

Now, we can verify that when scaling the ReplicaSets up and down, the deployment controller followed the rules specified in the maxSurge and maxUnavailable fields. At first glance, this seems to contradict the output above. Indeed, as the deployment description shows, the controller initially scaled the new ReplicaSet up to 2 pods, which implies we already had 5 pods (3 old pods and 2 new ones) at that moment. This number corresponds to approximately 166.6% of the desired state. However, as you remember, our maxSurge value is 40%, which means that the maximum number of running pods should be at most 140% of the desired state. That’s weird, right? The thing is that, even though the controller initially scaled the new ReplicaSet up to two pods, it did not actually start creating them until one pod in the old ReplicaSet was fully terminated. The two containers in the new ReplicaSet were pending until the maxSurge and maxUnavailable requirements were met.

Let’s experiment by updating maxSurge to 70% and maxUnavailable to 30%. Run kubectl edit deployment/apache-server to open the deployment in an editor (vim by default). When inside the editor, press “a” to insert text and change the maxSurge field to 70% and maxUnavailable to 30%. Once that’s done, save the changes by pressing ESC and then typing :x. You should see a success message confirming that the deployment was edited.
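After the edit, the strategy section of the spec should look roughly like this:

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 70%
      maxUnavailable: 30%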

Now, let’s try to update our deployment once again:
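The exact image used for this second update doesn’t matter much; switching back to the original tag is one option and is consistent with the rollback target used later in this tutorial:

  kubectl set image deployment/apache-server httpd=httpd:2-alpine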

Run kubectl describe deployment apache-server again and look at the events at the bottom of the deployment’s description:

As you can see, the controller created a new ReplicaSet and immediately scaled it up to three instances. That is because our maxSurge value is very high (70%). The controller then gradually scaled the old ReplicaSet down to 0.

Step 4: Rolling Back the Deployment

Sometimes, when a deployment update fails for some reason, you might need to roll it back to a previous version. In this case, you can take advantage of the deployment’s rollback functionality. It works as follows: each time a new rollout is made, a new deployment revision is created. Revisions form a rollout history that provides access to the previous versions of your deployment. By default, Kubernetes keeps old ReplicaSets around, enabling rollbacks to earlier points in the revision history. However, if you set .spec.revisionHistoryLimit to 3, for example, the controller will keep only the 3 latest revisions. Correspondingly, if you set the value of this field to 0, all old ReplicaSets will be automatically deleted on new updates and you won’t be able to roll back at all.
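For instance, keeping only the three latest revisions takes a single field in the deployment spec:

  spec:
    revisionHistoryLimit: 3   # keep only the 3 most recent revisions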

Note: revisions are not triggered by manual scaling or auto-scaling. A deployment revision is created only when the pod template is changed (e.g., when updating the labels or the container image). Therefore, when you roll back to a previous version of your deployment, only the pod template part is rolled back.

Let’s show how to roll back using a practical example. As in the previous examples, let’s open two terminal windows and watch the deployment’s pods in the first one:
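As before:

  kubectl get pods -l app=apache-server --watch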

In the second terminal, let’s update the container image of the deployment and intentionally make a typo in the container image tag (the httpd:23 version does not exist):
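Again assuming the httpd container name:

  kubectl set image deployment/apache-server httpd=httpd:23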

Let’s watch what happens in the first terminal window:

Since our container image tag is wrong, we get ErrImagePull and ImagePullBackOff errors for all three pod replicas. The controller will keep retrying the image pull, but the same errors will recur: our rollout is stuck in an image pull loop. We can also verify this by running:
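For example:

  kubectl rollout status deployment/apache-server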

Press Ctrl-C to exit the above rollout status watch.

You can also see that two ReplicaSets now co-exist: the old ReplicaSet apache-server-558f6f49f6 has two pods because one was already terminated according to the maxUnavailable policy, and the new ReplicaSet has 0 ready replicas because of the image pull loop discussed above.

Now, it’s evident that the deployment’s update process is stuck. How can we fix this issue? Rolling back to the previous stable revision is the most obvious solution.

To see the available revisions, let’s check the revision history. It’s available because we used the --record flag when creating the deployment:
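For example:

  kubectl rollout history deployment/apache-server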

You can also see a detailed description of each revision by specifying the --revision  parameter:
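For instance, to inspect the second revision (the revision number here is just illustrative):

  kubectl rollout history deployment/apache-server --revision=2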

Now, let’s roll back to the previous stable revision with the httpd:2-alpine image:
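The revision number below is illustrative; use whichever number your own history shows for the httpd:2-alpine image:

  kubectl rollout undo deployment/apache-server --to-revision=3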

Alternatively, to roll back to the previous version, you can just run:
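That is:

  kubectl rollout undo deployment/apache-server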

Check the revision history again and you’ll see that a new entry was added:

Also, the rollback generated a DeploymentRollback  event that can be seen in the Deployment’s description:

That’s it! Now you know what to do if a deployment update fails: choose the revision you want and roll back to it. As simple as that!

Conclusion

In this tutorial, we’ve walked you through the basic options for managing Kubernetes deployments. The deployment controller is extremely powerful at scaling, updating, and rolling back your stateless applications. It also includes a number of other useful features not covered in this article, such as pausing and resuming a deployment.

If you want to learn more about deployments, check out the official Kubernetes documentation. However, the deployment management techniques you’ve learned in this article are already enough to prepare you for effectively managing stateless applications in production, ensuring zero downtime and reliable version control at any scale.
