
Managing Stateful Apps with Kubernetes StatefulSets

In the first part of the StatefulSets series, we discussed the key purposes and concepts of StatefulSets and walked you through the process of creating a working StatefulSet. We saw how a pod's sticky identity and stable network identity can be leveraged to create apps that are stateful by design. Like deployments, StatefulSets also offer ways of managing your applications.

In the second part of the series, we look deeper into how to use StatefulSets to scale and update stateful applications, harnessing the power of ordered pod creation and controlled updates. Let's begin!

To complete examples from this article, you’ll need:

  • a running Kubernetes cluster. See the Supergiant GitHub wiki for more information about deploying a Kubernetes cluster with Supergiant. As an alternative, you can install a single-node Kubernetes cluster on a local system using Minikube.
  • a kubectl command line tool installed and configured to communicate with the cluster. See how to install kubectl here.
  • a working StatefulSet, created by following the simple steps described in the first part of this series.

Scaling a StatefulSet

As you might already know, deployments and ReplicationControllers allow users to dynamically scale their applications. For example, once a deployment is running, you can easily adjust the number of pod replicas in it to match your application's needs. Kubernetes offers similar functionality for StatefulSets. To illustrate how scaling works, let's open two terminal windows. We will use the first window to watch the process of pod termination and creation. In the second one, we will scale our application.

In the first terminal window run:
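Assuming the pods created in the first part carry an app=httpd label (adjust the selector to whatever your manifest uses):

    kubectl get pods -w -l app=httpd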

You'll see the current state of your StatefulSet's pods.
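Assuming the StatefulSet from the first part is named apache-http (its pods are apache-http-0 through apache-http-2), the listing will look roughly like this (ages will differ):

    NAME            READY   STATUS    RESTARTS   AGE
    apache-http-0   1/1     Running   0          5m
    apache-http-1   1/1     Running   0          4m
    apache-http-2   1/1     Running   0          4m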

In the second window, let's scale our StatefulSet up from three to six replicas.
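A single kubectl scale command does this; apache-http is the StatefulSet name assumed above, so substitute your own if it differs:

    kubectl scale statefulsets apache-http --replicas=6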

Now, if you look into the first terminal window, you'll notice that when scaling up, the order of pod creation is identical to that of creating a StatefulSet from scratch. New pod replicas are created sequentially, with Kubernetes always waiting until the previous pod is running and ready before starting the next one. In this way, Kubernetes manages the ordered creation of pods to prevent conflicts and ensure high availability of your application.

Scaling down a StatefulSet looks similar. Suppose now that your application's load has decreased and you don't need six replicas anymore. To address this, let's scale our application down.
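We simply lower the replica count with the same kubectl scale command:

    kubectl scale statefulsets apache-http --replicas=3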

As you see, there is no standalone command for scaling down a StatefulSet: again, we just specify the desired number of replicas, and Kubernetes works to achieve that state. To scale up and down, you can also use the kubectl patch command, which updates the StatefulSet resource object with the desired number of pod replicas.
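For example, the following merge patch sets the replica count to three:

    kubectl patch statefulsets apache-http -p '{"spec":{"replicas":3}}'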

As you might have noticed, when scaling down, pods are also deleted sequentially, but in reverse ordinal order (apache-http-5 first, then apache-http-4, then apache-http-3).

That's it! You have learned how to scale applications up and down. However, what happens to the PersistentVolumes and PersistentVolumeClaims attached to the pods when those pods are terminated, as they were when we scaled down? One of the StatefulSet's limitations is that deleting a pod or scaling the StatefulSet down does not result in the deletion of volumes bound to the StatefulSet. According to the official documentation, "This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources."

We can easily verify that the PVs and PVCs associated with the pods in the StatefulSet were not deleted.
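Listing the PersistentVolumeClaims in the current namespace is enough:

    kubectl get pvc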

The response should be:
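(Assuming the volume claim template in your StatefulSet manifest is named www; the volume names, capacities, and ages below are illustrative.)

    NAME                STATUS   VOLUME         CAPACITY   ACCESS MODES   AGE
    www-apache-http-0   Bound    pvc-0a1b2c3d   1Gi        RWO            40m
    www-apache-http-1   Bound    pvc-1b2c3d4e   1Gi        RWO            39m
    www-apache-http-2   Bound    pvc-2c3d4e5f   1Gi        RWO            38m
    www-apache-http-3   Bound    pvc-3d4e5f6a   1Gi        RWO            15m
    www-apache-http-4   Bound    pvc-4e5f6a7b   1Gi        RWO            14m
    www-apache-http-5   Bound    pvc-5f6a7b8c   1Gi        RWO            13m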

As you see, all six PVCs are still there, even though we successfully scaled the StatefulSet down in the previous step.

Updating a StatefulSet

Automated updates of StatefulSets are supported in Kubernetes 1.7 and later. The platform supports two update strategies: RollingUpdate and OnDelete. You can specify one of them in the .spec.updateStrategy.type field of your StatefulSet resource object.

Let's first illustrate the workings of the RollingUpdate strategy, which can be used to upgrade the container images of the containers running in a StatefulSet's pods. RollingUpdate is the default strategy and was already in effect in the previous tutorial, so we need not change anything in our spec.

Let's now upgrade the container image used by our StatefulSet's pods.
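One way to do this is with a JSON patch against the first container's image field; httpd:2.4.35 is an illustrative target version, so substitute any version that differs from the one in your manifest:

    kubectl patch statefulset apache-http --type='json' \
      -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"httpd:2.4.35"}]'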

Now, if you look at the output in the first terminal window, you'll notice that the pods are terminated and recreated one at a time, in reverse ordinal order. This is how a rolling update works.

Kubernetes implements rolling updates as a way to keep applications available at all times. Rolling updates allow updates to take place with zero downtime by incrementally replacing pod instances with new ones. Unlike deployments, however, StatefulSets do not support proportional scaling. As a side note, proportional scaling maintains a desired number of available replicas during a rollout via the maxSurge and maxUnavailable parameters of the deployment. For example, if we set maxUnavailable=2, the deployment controller will not allow the number of unavailable replicas to exceed 2. This may be useful if the specified container image is unreachable for some reason: Kubernetes will simply halt the rollout once the maxUnavailable limit is reached.
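For comparison, a deployment would declare these parameters in its rollout strategy like this (a minimal sketch with illustrative values):

    spec:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 2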

That said, StatefulSets do implement a form of fault tolerance: the StatefulSet controller ensures that if an update to a pod fails, the pod is restored with the previous container image version. In this way, the controller attempts to keep the application healthy in the presence of failures.

However, let's go back to our example. We can now check whether the container images were updated.
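A jsonpath query that prints each pod's name alongside its container image works well here (again assuming the app=httpd label):

    kubectl get pods -l app=httpd \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'

If the update succeeded, all three pods should report the new version:

    apache-http-0   httpd:2.4.35
    apache-http-1   httpd:2.4.35
    apache-http-2   httpd:2.4.35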

Great! As you see, the container image was upgraded to the new version and all three pods in our StatefulSet are now using it.

Staging an Update

In the previous example, we saw how to update all containers in a StatefulSet. However, what if we needed to update only some pods while leaving the container images in the others unchanged? To achieve this, Kubernetes allows setting a partition, which prevents updates to pods with an ordinal index lower than the partition's value. Let's see how it works.

First, we'll need to add a partition parameter under the .spec.updateStrategy.rollingUpdate field of our StatefulSet.
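This can be done with a merge patch (apache-http is still our assumed StatefulSet name):

    kubectl patch statefulset apache-http \
      -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":1}}}}'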

We set the partition to 1, which means that only pods with an ordinal index equal to or greater than 1 will be updated. Now, let's update the container images for the pods in our StatefulSet.
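We'll patch the image again, moving to another illustrative version:

    kubectl patch statefulset apache-http --type='json' \
      -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"httpd:2.4.37"}]'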

Let's check whether the container images were updated.
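We can reuse the same jsonpath query as before:

    kubectl get pods -l app=httpd \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'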

The response should be:
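(With the illustrative image versions used above, and pod 0 still on the old version.)

    apache-http-0   httpd:2.4.35
    apache-http-1   httpd:2.4.37
    apache-http-2   httpd:2.4.37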


As you see, the rolling update did not upgrade the first pod to the new container image. That is because its ordinal index (0) is lower than the value of the partition (1). The container image for this pod will not be upgraded even if we delete the pod and let the controller recreate it.
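Try it:

    kubectl delete pod apache-http-0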

Let's verify that we have the same result.
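Once the pod has been recreated, we can query its image directly:

    kubectl get pod apache-http-0 -o jsonpath='{.spec.containers[0].image}'

This should still print the pre-partition version (httpd:2.4.35 in our sketch).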

If at some point you decide to decrease the partition, the StatefulSet controller will automatically update any pods that match the new partition value. Let's illustrate this by patching the partition in our StatefulSet to 0.
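The merge patch is the same as before, with a lower partition value:

    kubectl patch statefulset apache-http \
      -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'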

Now, if we check container images again, we’ll see something like this:
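(Again with the illustrative image versions used above.)

    apache-http-0   httpd:2.4.37
    apache-http-1   httpd:2.4.37
    apache-http-2   httpd:2.4.37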

As you see, the StatefulSet controller automatically updated the apache-http-0 pod because of the changed partition, although we did not manually trigger the update by changing the .spec.template.spec.containers[0].image field.

OnDelete

If you want to prevent pods from being updated automatically when a change is made to the StatefulSet's .spec.template field, mirroring the legacy behavior of Kubernetes 1.6 and prior, you can use the OnDelete strategy. It is enabled by setting .spec.updateStrategy.type to OnDelete; the controller will then create updated pods only when you delete the old ones manually.
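In the manifest, that looks like this:

    spec:
      updateStrategy:
        type: OnDelete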

Cleaning up

To clean up after these examples are completed, we'll need to do a cascading delete.
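Deleting the StatefulSet object with kubectl cascades to its pods by default (apache-http is our assumed StatefulSet name):

    kubectl delete statefulset apache-http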

This command will terminate the pods in your StatefulSet in reverse order {N-1..0}. Note that this operation deletes only the StatefulSet and its pods, not the headless service associated with your StatefulSet. To finish cleaning up, we'll also need to delete our httpd-service Service manually.
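Since the Service name comes from the first part of the series, the command is simply:

    kubectl delete service httpd-service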

Finally, let’s delete the StorageClass and PVCs used in this tutorial:
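(Assuming the claim template is named www, as above, and substituting the name of the StorageClass you created in the first part.)

    kubectl delete pvc www-apache-http-{0..5}
    kubectl delete storageclass <your-storage-class>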

Conclusion

That’s it! Hopefully, now you have a better understanding of available options for managing your stateful apps with StatefulSets. This abstraction offers users an opportunity to scale and update apps in a controlled and predictable way. Although StatefulSets have certain limitations, including the need to delete bound PVCs manually, they are otherwise extremely powerful for the vast array of tasks involved in managing your stateful applications in Kubernetes.
