Working with Kubernetes Containers

Kubernetes offers a broad variety of features for running and managing containers in your pods. In addition to specifying container images, pull policies, container ports, volume mounts, and other container-level settings, users can define commands and arguments that change the default behavior of running containers.

Kubernetes also exposes container start and termination lifecycle events to user-defined handlers that can be triggered by these events. In this article, we discuss available options for running containers in Kubernetes and walk you through basic steps to define commands for your containers and attach lifecycle event handlers to them. Let’s start!

Containers in Kubernetes

When we think about Kubernetes containers, the Docker container runtime immediately springs to mind. Indeed, until the 1.5 release, Kubernetes supported only two container runtimes: the popular Docker and rkt, both of which were deeply integrated into the kubelet source code.

However, as more container runtimes appeared over the past few years, Kubernetes developers decided to decouple the underlying container runtime from the deep layers of the Kubernetes platform. That’s why the 1.5 release came out with the Container Runtime Interface (CRI), a plugin interface that allows using a wide variety of container runtimes without recompiling the kubelet. Since that release, Kubernetes has simplified working with any container runtime compatible with the CRI. Docker containers, however, remain among the most popular with Kubernetes users, so we implicitly refer to them when discussing operations with Kubernetes containers in this article.

Kubernetes wraps the underlying container runtime to provide basic functionality for containers such as pulling container images. As you might already know, container settings are defined in pods — Kubernetes abstractions that act as the interface between Kubernetes orchestration services and running applications.

Each container defined within a pod has an image property that supports the same syntax as the docker command does. Other basic fields of the container spec are the ports and the image pull policy (see the example below).
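A pod spec along these lines might look like the following sketch (the pod and container names are illustrative assumptions; the image and port match the Apache HTTP Server example discussed below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: httpd-demo
spec:
  containers:
  - name: httpd                 # illustrative container name
    image: httpd:2-alpine       # image pulled from the public Docker Hub registry
    ports:
    - containerPort: 80         # port opened for the Apache HTTP Server
    imagePullPolicy: Always     # always pull the image from the registry
```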

Note: Since pods can have multiple containers, containers are represented as an array of objects with indices starting from 0 (see the syntax below). Keeping this in mind, let’s discuss the container properties specified in the example:

spec.containers[0].name — the container’s name. This value must consist of lowercase alphanumeric characters or ‘-’, and it must start and end with an alphanumeric character (e.g., ‘my-name’ or ‘123-abc’).

spec.containers[0].image — the container’s image, pulled from some container registry. In this example, we pull the httpd:2-alpine image from the public Docker Hub registry. Kubernetes also comes with native support for private registries. In particular, the platform supports Google Container Registry (GCR), Amazon EC2 Container Registry (ECR), and Azure Container Registry (ACR). For registry-specific prerequisites and configuration, see the official Kubernetes documentation.

spec.containers[0].ports[0].containerPort — the port(s) to open for the container. In this example, port 80 is opened for our Apache HTTP Server.

spec.containers[0].imagePullPolicy — a policy that defines how the container image is pulled from the registry. The default pull policy is IfNotPresent, which makes the kubelet skip pulling an image if it already exists locally. If you want to always pull an image, you can set this field to Always, as we did in this example. You can also have Kubernetes always pull a container image by using the image’s :latest tag, although this practice is deprecated. You should avoid using the :latest tag in production because it makes it difficult to track which version of the image is running. Please note that if you don’t specify an image tag, :latest is assumed, implicitly triggering the Always pull policy. (See the official documentation for more details about best practices for configuring containers in production.)

The discussed pod spec illustrates basic container settings that satisfy many use cases. However, Kubernetes offers even more options for managing containers in your pod. In our tutorial below, we walk you through a simple process of defining commands and arguments for containers and demonstrate how to use container lifecycle hooks to control the container behavior when it starts or terminates.


To complete the examples in this tutorial, you’ll need the following prerequisites:

  • a running Kubernetes cluster. See the Supergiant GitHub wiki for more information about deploying a Kubernetes cluster with Supergiant. Alternatively, you can install a single-node Kubernetes cluster on a local machine using Minikube.
  • kubectl command line tool installed and configured to communicate with the cluster. See how to install kubectl here.

Part 1: Defining Command and Arguments for Containers

Kubernetes allows you to define commands, and arguments for those commands, in a pod’s resource object for its containers to use. Here’s how this works, using a simple example with the BusyBox container.
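A spec matching the description below might look like the following sketch (the variable names and values are illustrative assumptions; the restartPolicy is set so the pod can complete without being restarted):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  restartPolicy: OnFailure      # let the pod reach the Completed state
  containers:
  - name: busybox
    image: busybox
    command: ["sh"]
    args: ["-c", "MIN=5; SEC=60; echo $(( MIN * SEC ))"]   # illustrative values
```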

In this example, we:

  • create a pod named “demo” with a single container running the BusyBox image from the Docker Hub registry. BusyBox combines tiny versions of many common UNIX utilities into a single small executable.
  • specify a command for the running container to use. The command is the ‘sh‘ utility, which in BusyBox is the Almquist shell (ash).
  • specify arguments for the shell command to use. In our case, the args property takes an array of two elements. The first is the command’s -c flag (since ‘sh‘ accepts other commands as well), and the second is the execution code for that command. In the execution code, we set two variables, MIN and SEC, and perform an arithmetic operation with them. The operation’s result is echoed to stdout.

Let’s save this spec in the command-demo.yaml  file and create the pod using kubectl .

Now, if you run kubectl get pod demo , you’ll find out that the pod has successfully completed:
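The output should look something like this (the exact age will vary):

```
NAME      READY     STATUS      RESTARTS   AGE
demo      0/1       Completed   0          12s
```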

You can check the output produced by the command defined above by looking into the pod’s logs.
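For example:

```bash
kubectl logs demo
```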

As you see, the BusyBox shell computed the correct value and echoed it to stdout. It’s as simple as that!

In the example above, we defined command and arguments for that command as two separate spec fields. However, Kubernetes supports merging both commands and arguments into a single array of values like this:
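For instance, the command and args fields of the container spec could be collapsed into a single command array like this (the variable values are illustrative):

```yaml
command: ["sh", "-c", "MIN=5; SEC=60; echo $(( MIN * SEC ))"]
```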

This will produce the same result as above.

Also, instead of providing command arguments as variables within a script, you can put them in environment variables for containers. We can use the spec.containers.env field with a set of name/value pairs in it like this:
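A sketch of such a spec might look like this (the variable names and string values are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: busybox
    image: busybox
    env:                        # illustrative environment variables
    - name: MESSAGE1
      value: "Hello,"
    - name: MESSAGE2
      value: "World!"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE1) $(MESSAGE2)"]   # $(VAR) syntax is expanded by Kubernetes
```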

Note: the environment variables must be referenced using the $(VAR) syntax. This is required for them to be expanded in the command or args field.

This simple command concatenates two strings stored in the environment variables. Environment variables offer a convenient way to store arbitrary data separately from the command execution context.

You may be wondering how the commands and arguments you define change the default behavior of the container. The process is quite straightforward. The default command run by a Docker container is defined in the image’s Entrypoint field, and the default arguments for that command are defined in its Cmd field. Depending on whether your command and args container settings override the Entrypoint, the Cmd, both, or neither, the following rules apply:

  • If no command or args are supplied for a container, the defaults defined in the Docker image are used.
  • If a command is supplied but no args, only the supplied command is used. The default Entrypoint and the default Cmd defined in the Docker image are ignored.
  • If only args are supplied, the default Entrypoint is run with the args that you supplied.
  • If both a command and args are supplied, they override the default Entrypoint and the default Cmd of the Docker image.

Part 2: Attaching Container Lifecycle Hooks

As you know, containers have a finite lifecycle. It might be useful to attach handlers (functions) to various events of the container lifecycle to make them aware of these events and run code triggered by them. Kubernetes implements this functionality with container lifecycle hooks.

The platform exposes two lifecycle hooks to containers: PostStart  and PreStop . The first hook executes immediately after the container is created. However, there is no guarantee that the handler in the PostStart  hook will execute before the container’s EntryPoint or user-defined command.

The PreStop hook, in turn, is called immediately before a container is terminated. This is a synchronous hook, so it must complete before the call to delete the container can be sent.

Containers can access lifecycle hooks by implementing and registering a handler (function) for that hook. There are two types of hook handlers in Kubernetes:

  • Exec — this handler executes a specific command inside the cgroups and namespaces of the container. Resources consumed by the command in the handler are counted against available container resources.
  • HTTP — Executes an HTTP request against a specific endpoint on the container.

This is the basic theory behind lifecycle hooks. Now, let’s see how to actually attach handlers to PostStart and PreStop Lifecycle Hooks.
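The spec discussed below might look like the following sketch (the greeting text, mount path, and output file name are illustrative assumptions; the pod name lifecycle matches the shell prompt shown later in this tutorial):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle
spec:
  containers:
  - name: httpd
    image: httpd                # Debian-based image, so apt-get is available later
    lifecycle:
      postStart:
        exec:
          command:
          - /bin/sh
          - -c
          - echo "Hello from the PostStart handler!" > /usr/local/apache2/htdocs/index.html
      preStop:
        exec:
          command:
          - /bin/sh
          - -c
          - for i in $(seq 1 15); do date >> /hostdata/date-output.txt; sleep 1; done
    volumeMounts:
    - name: host-volume
      mountPath: /hostdata      # illustrative mount path
  volumes:
  - name: host-volume
    hostPath:
      path: /home/<user-name>/tmp   # replace with your own path
```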

In this pod spec, we create the PostStart and PreStop hook handlers for the container running the Apache HTTP server. Both handlers are of the Exec type because they execute a specific command in the container environment. To illustrate how the PreStop handler works, we created a hostPath volume on the local node. Before saving this spec, you’ll need to specify your own path for the hostPath volume; it should be a directory under your user folder that doesn’t require root permissions. For example, you can use /home/<user-name>/tmp on Linux or /Users/<user-name>/tmp on Mac.

Once that’s done, save the spec above in the lifecycle-demo.yaml  and create the pod running the following command:
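For example:

```bash
kubectl create -f lifecycle-demo.yaml
```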

Let’s start with the analysis of the PostStart  handler. As you see from the spec, it creates the index.html  file containing a custom response from Apache HTTP server. Let’s get a shell to our pod’s httpd  container to verify that the PostStart  event fired and the handler executed the code.
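Assuming the pod is named lifecycle (as the shell prompt below suggests), the shell can be opened like this:

```bash
kubectl exec -it lifecycle -- /bin/bash
```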

Note: If we had two containers in the pod, we would also have to specify the container name in the command above, because the shell always attaches to a specific container. For example, assuming we had a Ruby container alongside the httpd container, we could get a shell to the Ruby container by running the following command:
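Something like this, where ruby is the hypothetical name of the second container:

```bash
kubectl exec -it lifecycle -c ruby -- /bin/bash
```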

However, since we have only one container running in the pod, the command above works fine.

This command will get you into the httpd container’s file system and network environment: root@lifecycle:/usr/local/apache2. To verify this, you can run the Linux ls command, which lists the contents of the current directory:

Now, let’s see if our server returns the custom greeting written by the PostStart handler. To accomplish that, we’ll need to install cURL inside the container and send a GET request to the server.
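Assuming the httpd image is Debian-based, cURL can be installed with apt-get from inside the container:

```bash
apt-get update && apt-get install -y curl
```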

Once cURL is installed, we can access the server on localhost (as you remember, all containers in a pod are addressable via localhost):
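The response should be whatever your PostStart handler wrote to index.html:

```bash
curl localhost
```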

Great! As you see, the PostStart  handler created the index.html  file with our custom greetings.

Let’s move on to the PreStop handler. It is an Exec handler that runs a 15-iteration loop, on each pass writing the current date to a file in the hostPath directory we mounted. Let’s verify that it works.

First, let’s exit the container by typing exit  since we don’t need to be inside it anymore:

Next, let’s delete the pod to trigger the PreStop  handler:
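Assuming the pod is named lifecycle:

```bash
kubectl delete pod lifecycle
```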

Finally, let’s check if the handler wrote the output of the above command to the file inside our /Users/<user-name>/tmp directory (remember to use your own path to that file). Open the file in your favorite text editor:

Great! The handler worked as expected. Now you know how to attach handlers to container lifecycle events. One important thing to remember is that, at the moment, it’s not easy to debug these handlers if they fail, because the logs for a hook handler are not exposed in pod events. However, failed handlers do broadcast their own error events: if a PostStart handler fails, it sends the FailedPostStartHook event, and if a PreStop handler fails, it sends the FailedPreStopHook event. You can see these details by running kubectl describe pod <pod_name>.


That’s it! As we’ve seen, Kubernetes offers a powerful API for working with containers including configuring image pull policies and container images. You also learned how to define commands and arguments for your containers to change their default behavior. In addition, we found out how to use container lifecycle hooks and container runtime environment to manage various events in the container lifecycle and interact with the running containers.
