
Creating Liveness Probes for Your Node.js Application in Kubernetes

Kubernetes is extremely powerful at tracking the health of running applications and intervening when something goes wrong. In this tutorial, we teach you how to create liveness probes that test the health and availability of your applications. Liveness probes can catch situations when the application is no longer responding or unable to make progress, so that kubelet can restart the container. We address the case of HTTP liveness probes, which send a request to the application’s back end (e.g., some server) and decide whether the application is healthy based on its response. We’ll show examples of both successful and failed liveness probes. Let’s start!

Benefits of Liveness Probes

Normally, when Kubernetes notices that your application has crashed, kubelet will simply restart it.

However, there are situations when the application has crashed or deadlocked without actually terminating. That’s exactly the kind of situation where liveness probes shine! With a few lines in your pod or deployment spec, liveness probes can turn your Kubernetes application into a self-healing organism, providing:

  • zero downtime deployments
  • simple and efficient health monitoring implemented in any way you prefer
  • identification of potential bugs and deficiencies in your application

Now, we are going to show these benefits in action by walking you through examples of successful and failed liveness probes.

Tutorial

In this tutorial, we create a liveness probe for a simple Node.js server. The liveness probe will send HTTP requests to certain server routes, and the responses from the server will tell Kubernetes whether the liveness probe has passed or failed.

Prerequisites

To complete the examples in this tutorial, you’ll need:

  • a running Kubernetes cluster. See the Supergiant GitHub wiki for more information about deploying a Kubernetes cluster with Supergiant. As an alternative, you can install a single-node Kubernetes cluster on a local system using Minikube.
  • the kubectl command-line tool installed and configured to communicate with the cluster. See how to install kubectl here.

Step 1: Creating a Node.js App Prepared for Liveness Probes

To implement a working liveness probe, we had to design a containerized application capable of responding to it. For this tutorial, we containerized a simple Node.js web server with two routes configured to process requests from liveness probes. The application was containerized with the Docker container runtime and pushed to a public Docker repository. The code that implements the basic server functionality and routing lives in the server.js file; a minimal sketch of it looks like this:
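```js
// server.js: a minimal sketch of the containerized server
// (reconstructed for illustration; the original code may differ)
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.url === '/') {
    // Web root: send a basic greeting from the server
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello from the Node.js server!');
  } else if (req.url === '/health-check') {
    // Liveness probe route: 200 tells kubelet the app is healthy
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Health check passed');
  } else if (req.url === '/bad-health') {
    // Failing route: 500 makes the liveness probe fail
    res.writeHead(500, { 'Content-Type': 'text/plain' });
    res.end('Health check failed');
  } else {
    res.writeHead(404);
    res.end('Not found');
  }
});

// The server listens on port 8080, which the container exposes to Kubernetes
server.listen(8080);
```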

In this file, we’ve configured three server routes responding to client GET requests. The first one serves requests to the server’s web root path /, sending a basic greeting from the server.

The second path, /health-check, returns a 200 HTTP success status telling a liveness probe that our application is healthy and running. By default, any HTTP status code greater than or equal to 200 and less than 400 indicates success; any other status code indicates failure.

Finally, if a liveness probe accesses the third route, /bad-health, the server will respond with a 500 status code telling kubelet that the application has crashed or deadlocked.

This application is just a simple example to illustrate how you can configure your server to respond to liveness probes. All you need to implement HTTP liveness probes is to allocate some paths in your application and expose your server’s port to Kubernetes. As simple as that!

Step 2: Configuring Your Pod to Use Liveness Probes

Let’s create a pod spec defining a liveness probe for our Node.js application. A minimal version might look like this (the image name is a placeholder for the container we pushed earlier):
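```yaml
# liveness.yaml: a sketch of the pod spec; replace the placeholder
# image name with the actual image in your Docker repository
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: <your-dockerhub-user>/nodejs-server:latest  # placeholder image name
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /health-check
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 3
```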

Let’s discuss key fields of this spec related to liveness probes:

  • spec.containers.livenessProbe.httpGet.path — the path on the HTTP server that processes the liveness probe. Note: by default, spec.containers.livenessProbe.httpGet.host is set to the pod’s IP. Since we will access our application from within the cluster, we don’t need to specify an external host.
  • spec.containers.livenessProbe.httpGet.port — the name or number of the port on which to access the HTTP server. A port number must be in the range of 1 to 65535.
  • spec.containers.livenessProbe.initialDelaySeconds — the number of seconds to wait after the container has started before the first liveness probe is initiated.
  • spec.containers.livenessProbe.periodSeconds — how often to perform the liveness probe. The default value is 10 seconds and the minimum value is 1.
  • spec.containers.livenessProbe.failureThreshold — the number of consecutive probe failures after which Kubernetes gives up. For a liveness probe, giving up means restarting the container. The default value is 3 and the minimum value is 1.

Let’s save this spec in liveness.yaml and create the pod by running the following command:
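```sh
kubectl create -f liveness.yaml
```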

As you see, we defined /health-check as the server path for our liveness probe. In this case, our Node.js server will always return the 200 success status code. This means that the liveness probe will always succeed and the pod will continue running.

Let’s get a shell to our application container to see responses sent by the server:
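```sh
# the pod name matches the one from the spec sketch above
kubectl exec -it liveness-http -- /bin/bash
```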

Once inside the container, install cURL to send GET requests to the server:
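```sh
# assuming a Debian-based container image
apt-get update && apt-get install -y curl
```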

Now, we can access the server to check the response from the /health-check route (don’t forget that the server is listening on port 8080):
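```sh
# -i includes the HTTP status line in the output
curl -i http://localhost:8080/health-check
```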

If the liveness probe passes (as in this example), the pod will continue running without any errors or restarts. However, what happens when the liveness probe fails?

To illustrate that, let’s change the server path in the livenessProbe.httpGet.path field to /bad-health. First, exit the container’s shell by typing exit, then change the path name in the liveness.yaml file. Once the changes are made, delete the pod:
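```sh
kubectl delete pod liveness-http
```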

Then, let’s create the pod one more time:
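```sh
kubectl create -f liveness.yaml
```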

Now, our liveness probe will be sending requests to the /bad-health path, which returns a 500 HTTP error. This error will make kubelet restart the container. Since our liveness probe always fails, the container will be restarted over and over again. Let’s verify that the liveness probe actually fails:
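```sh
kubectl describe pod liveness-http
```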

Check the pod events at the end of the pod description.

First, as you might have noticed, the first liveness probe ran exactly three seconds after the container started, as specified in spec.containers.livenessProbe.initialDelaySeconds. Afterward, the probe failed with a 500 status code, which triggered killing and recreating the container.

That’s it! Now you know how to create liveness probes to check the health of your Kubernetes applications.

Note: In this tutorial, we used two server routes that always return either success or error status codes. This is enough to illustrate how liveness probes work; in production, however, you’ll need one route that actually evaluates the health of your application and sends either a success or failure response back to kubelet.

Step 3: Cleaning Up

Our tutorial is over, so let’s clean up after ourselves.

  1. Delete the pod:
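```sh
kubectl delete pod liveness-http
```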

  2. Delete the liveness.yaml file from the directory where you saved it.

Conclusion

As you saw, liveness probes are extremely powerful for keeping your applications healthy and ensuring zero downtime. In the next tutorial, we’ll learn about readiness probes, another important health check procedure in Kubernetes. The kubelet uses them to decide when a container is ready to start accepting traffic. Stay tuned for our blog updates to find out more!
