Introduction to Kubernetes Pods

Kubernetes (K8s) is an open-source platform that automates the deployment, scaling, and management of containerized applications, workloads, and services. The platform offers a set of abstraction layers for container management that is agnostic about the underlying container technology (e.g., Docker, rkt) used for packaging. Among other things, Kubernetes facilitates the efficient management of computing resources, deployment, scheduling, horizontal and vertical scaling, updates, and security through various abstractions.

In this article, we discuss Kubernetes Pods as one of the central concepts in Kubernetes. We first focus on Pod architecture and functions, basic use cases, and benefits, and then proceed to deployment options. We discuss four broad options: direct Pod creation, the Deployment Controller, and the Replication Controller (all native Kubernetes approaches), as well as Supergiant (our Kubernetes-as-a-Service platform that simplifies the deployment and management of K8s clusters and resources via a centralized UI that abstracts away the Kubernetes API). By the end of this tutorial, you will have a better understanding of the available options for deploying Pods in Kubernetes and beyond.

What Is a Pod?

A Pod is the basic unit of deployment in Kubernetes, providing a set of abstractions and services for applications running on a Kubernetes cluster. It can include one or more containers (e.g., Docker, rkt) that share storage and network resources as well as Linux cgroups and namespaces, are co-scheduled and co-located, and share the same life cycle.

However, why would we use Pods at all instead of conventional containers, which can also provide a good level of isolation? The answer is that the Pod model offers higher-level abstractions that make it easy to plug Kubernetes services into containers and applications. Pods augment the container model by automatically handling co-scheduling, coordinated replication, resource sharing, dependency management, and the shared fate of applications running in a Pod. Thus, a Pod can be imagined as a “logical host” that contains relatively tightly coupled application containers and consumes Kubernetes orchestration services to manage them.

Pod’s Shared Resources and Communication

Containers in a Pod run in a shared context that includes a set of Linux cgroups and namespaces along with Kubernetes-native facets of isolation. Also, similarly to containers, Pods may have shared volumes that can be accessed by applications within the Pod. These volumes are defined in the Pod spec and mounted into each container’s filesystem.

In addition, Pods are created with internal communication mechanisms. Each Pod gets a unique IP address and a port space shared by all containers running in it. Containers can communicate via localhost, but because they share a network namespace (IP and ports), they must also coordinate port usage to avoid conflicts. In turn, Pods can communicate with each other using their IP addresses in a flat shared network such as Flannel, with each Pod’s hostname set to the Pod’s name.

Uses of Pods

The most basic use of Pods is running a single container: the Pod serves as a wrapper around a single container (e.g., a Docker container), and Kubernetes manages the Pod rather than the container directly.

A more advanced use of Pods is running multiple tightly coupled containers. In this scenario, a Pod wraps several co-located containers that share resources but have distinct responsibilities. For example, imagine a Pod encapsulating two containers, one of which serves static files while the other acts as a ‘sidecar’ container performing operations on those files (e.g., updates and transformations).

These two approaches enable a number of use cases for Pods including the following:

  • hosting vertically integrated application stacks (e.g., MEAN and LAMP) that include a number of tightly coupled applications
  • content management systems (CMS), file loaders and local cache managers
  • log shippers, backup, compression, snapshotting
  • monitoring adapters, event publishers, data change watchers, etc.
  • network tools like proxies, bridges, and adapters

Pod Life Cycle

Pods are created and deployed with a unique ID (UID) and scheduled to Nodes, where they live until termination or deletion. (Note: Pods die together with the Nodes on which they run.) It is noteworthy that a Pod is not re-scheduled to a new Node after termination. Rather, Kubernetes creates an identical Pod with the same name if needed, but with a new UID. When a Pod dies, its shared volumes are also detached.


Pod Lifecycle


A Pod’s life cycle includes a number of phases defined in the PodStatus object. Possible values of the phase field include the following:

  • Pending: The Pod has been accepted by the system, but one or several container images have not yet been downloaded or installed.
  • Running: The Pod has been scheduled to a specific Node, and all of its containers are running.
  • Succeeded: All containers in the Pod terminated successfully and will not be restarted.
  • Failed: At least one container in the Pod terminated with a failure, i.e., it either exited with a non-zero status or was terminated by the system.
  • Unknown: The state of the Pod cannot be obtained, typically due to a communication error.

Benefits of Pods

In addition to better isolation and access to Kubernetes orchestration services, Pods offer a number of other important advantages in comparison to running multiple programs in a single (Docker) container:

  • Transparency: Thanks to Pods, containers are visible to infrastructure and OS, enabling the provision of various services such as process management and resource monitoring.
  • Decoupling software dependencies: Running a single container for each Pod allows independent versioning, deployment, and upgrading of containers that make up an application.
  • Simplicity: Users don’t need to use their own process managers to manage signal and exit code propagation.
  • Efficiency: Thanks to the delegation of infrastructure services to the system, containers can be more lightweight with Pods.
  • Pluggability: Running containers in Pods allows plugging in Kubernetes schedulers and controllers.
  • High-Availability Applications: Pods can be replaced in advance of their termination and deletion, ensuring high availability of your applications.

Deploying Pods

Kubernetes provides several options for creating and managing Pods:

  • direct creation of Pods via Pod templates
  • using a Deployment Controller
  • using a Replication Controller
  • using a Kubernetes-as-a-Service provider like Supergiant


To try these options, you’ll need to put several prerequisites in place:

  • A running Kubernetes cluster: If you don’t have a Kubernetes cluster yet, you can run a local single-node cluster with Minikube or link your cloud account to Supergiant and deploy a cluster there.
  • Kubectl command-line tool: You can find instructions for installing kubectl in the official Kubernetes documentation.

Direct Pod Creation Using Pod Templates

In most cases, you don’t need to create Pods directly. (Note: Deployments are the recommended way to create Pods in Kubernetes.) However, manual creation of Pods may be useful for development and testing purposes.

To deploy a Pod directly, you first must define a Pod template, which is a Pod specification describing the Pod’s runtime, container images used, and other application-specific settings (e.g., ports, proxies). Users can create Pod templates using YAML or JSON syntax. YAML is used in the example below.
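For example, a minimal Pod template in YAML might look like the following sketch (the names and the nginx image here are illustrative, not part of the original example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod           # illustrative Pod name
  labels:
    app: my-pod
spec:
  containers:
  - name: my-container   # illustrative container name
    image: nginx:latest  # any container image from a registry
    ports:
    - containerPort: 80
```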

As you can see, a Pod template does not specify the desired state of the Pod, such as the number of replicas to bring up. Therefore, if the template changes later, running Pods are not affected. This approach radically simplifies the platform’s semantics and increases deployment flexibility.

Creating a Pod from Scratch

In this example, we show you the whole process of creating a Pod from a Pod Template.

Step 1. Define a New Pod using a Pod Template

Create a Pod template for the popular Redis data structure store, pulled from the Docker Hub container repository. Save this template in the redis-pod.yaml file for later use.
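A template matching the settings described below might look like this sketch (the Pod name and the containerPort value of 6379, Redis’s default port, are our assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis:latest     # pull the latest Redis release from Docker Hub
    ports:
    - containerPort: 6379   # Redis's default port (assumed)
  restartPolicy: Never      # do not restart the Pod if it fails
```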

This Pod template specifies the following important settings:

  • apiVersion: the Kubernetes API version used.
  • spec.containers.image: the Redis container image to be downloaded from Docker Hub (we are downloading the latest release).
  • spec.containers.ports.containerPort: the port assigned to the Redis container.
  • restartPolicy: the restart policy for the Pod. Available options are Always, OnFailure, and Never. In this example, we ask Kubernetes to never restart the Pod if it fails.

Step 2. Create a Pod

Once our Pod template is edited and saved, we can use the kubectl CLI to create the Pod. (See the prerequisites above for instructions on installing kubectl.)
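Assuming the template was saved as redis-pod.yaml in the current directory, the command would be:

```shell
kubectl create -f redis-pod.yaml
```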

Step 3. Check the Pod

We can now see the updated list of running Pods using the following command:
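A sketch of the command and its output (the AGE value is illustrative):

```shell
kubectl get pods

# Output (illustrative):
# NAME      READY     STATUS    RESTARTS   AGE
# redis     1/1       Running   0          1m
```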

The console output shows the name of the Pod, its status, the number of restarts, and the Pod’s age.

Deleting the Pod

A Pod can be deleted using the following command, where the optional --grace-period=<seconds> parameter allows users to override the default grace period (30 seconds).
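For the Redis Pod from our example, the command might look like this (the grace period value of 10 seconds is illustrative):

```shell
kubectl delete pod redis --grace-period=10
```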

For force deletion, set --grace-period to 0 and specify the additional flag --force. This works in kubectl versions >=1.5.
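For our Redis Pod, a force deletion would look like:

```shell
kubectl delete pod redis --grace-period=0 --force
```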

Limitations of Direct Pod Creation

Creating individual Pods directly in Kubernetes is quite rare because Pods are designed to be relatively ephemeral entities. Since Pods do not self-heal, a Pod created manually will not be restarted if a Node fails or if the scheduling operation fails. Similarly, a Pod won’t be recreated after an eviction due to the shortage of compute resources or Node maintenance. As a result, Pods created directly can be easily lost and need to be created from scratch again.

Therefore, the best way to create Pods with native Kubernetes tools is to use controllers. A controller is a Kubernetes object that creates and manages multiple Pods, enabling replication and rollouts and offering self-healing capabilities. For example, if a Node fails, the controller can schedule the affected Pods onto a different Node, thereby maintaining the desired cluster state.

Creating Pods with a Replication Controller

A ReplicationController maintains the desired number of Pods, removing extra Pods and creating new ones if there are fewer than expected. In contrast to manually created Pods, the Pods created by a ReplicationController are automatically replaced upon failure.

To create a new ReplicationController, we first define a template and save it to a new file named httpd-rc.yaml. In the example below, we create a ReplicationController that brings up three replicas of the Apache HTTP Server.
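Such a template might look like the following sketch (the name httpd and the app: httpd label are assumptions consistent with the discussion below):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: httpd
spec:
  replicas: 3            # maintain three replicas
  selector:
    app: httpd           # manage Pods carrying this label
  template:
    metadata:
      labels:
        app: httpd       # label shared by all replicas
    spec:
      containers:
      - name: httpd
        image: httpd:latest   # Apache HTTP Server from Docker Hub
        ports:
        - containerPort: 80
```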

Then, run kubectl create  to start Pods.
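Assuming the template above was saved as httpd-rc.yaml:

```shell
kubectl create -f httpd-rc.yaml
```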

We can now check on the status of the ReplicationController  using the following command:
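One command that produces the details described below (namespace, images, ports, replica state) is kubectl describe; the ReplicationController name httpd matches our template:

```shell
kubectl describe replicationcontrollers/httpd
```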

As you see, the ReplicationController  has created three httpd  Pods with a shared httpd label. The kubectl  output above also displays information about namespaces and container images used, assigned ports, replicas’ state, and IDs.

Finally, to delete the ReplicationController  run:
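Assuming the ReplicationController is named httpd as in our template:

```shell
kubectl delete replicationcontroller httpd
```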

As you might have noticed, creating Pods with a ReplicationController is quite simple. A ReplicationController, however, has certain limitations: you must create a new replication controller for each app upgrade, switch between replication controllers manually, and revert failed changes by hand. Thus, using a Deployment Controller is recommended if you want to create replicas while automating other operations such as rolling updates and rollbacks of failed changes.

Creating Pods using Deployment Controller

A Deployment Controller can be used to create new ReplicaSets and to make declarative updates to existing Deployments. In the following example, we define a Deployment that creates a ReplicaSet of three Apache HTTP Server (httpd) Pods.

The first thing we need to do is create a Deployment object and save it to a new file, e.g., httpd-deployment.yaml.
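A Deployment manifest matching the fields described below might look like this sketch (apps/v1 is assumed here; older clusters may require apps/v1beta1):

```yaml
apiVersion: apps/v1        # use apps/v1beta1 on older clusters
kind: Deployment
metadata:
  name: httpd-deployment
spec:
  replicas: 3              # three httpd replicas
  selector:
    matchLabels:
      app: httpd           # how the controller finds its Pods
  template:
    metadata:
      labels:
        app: httpd         # label shared by all Pods
    spec:
      containers:
      - name: httpd
        image: httpd:latest   # latest httpd image from Docker Hub
        ports:
        - containerPort: 80
```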

The Deployment object defined above does the following:

  • Creates a Deployment named httpd-deployment, indicated by the metadata.name field.
  • Creates three httpd replicas, as specified in the spec.replicas field.
  • The selector field defines how the Deployment Controller finds the right Pods to manage. In our example, all Pods share the label app: httpd.
  • The template.spec field specifies that each Pod runs a single container named httpd, which uses the latest version of the httpd Docker Hub image.
  • The Deployment opens port 80 on each Pod’s container.

Once all edits are made, we can create the Deployment:
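Assuming the manifest was saved as httpd-deployment.yaml:

```shell
kubectl create -f httpd-deployment.yaml
```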

We can then see the newly created Deployment and three running httpd  replicas using kubectl get deployments . The output will be similar to the following:
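A sketch of the command and its output (the AGE value is illustrative):

```shell
kubectl get deployments

# Output (illustrative):
# NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
# httpd-deployment   3         3         3            3           1m
```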

The output above contains the following information:

  • NAME: the names of the Deployments in your Kubernetes cluster
  • DESIRED: the number of replicas K8s wants to see running
  • CURRENT: how many replicas K8s actually sees running
  • UP-TO-DATE: how many of the currently running replicas match the desired state of the Deployment (container image, labels, or other manifest fields)
  • AVAILABLE: how many currently running replicas have successfully passed their readiness probe
  • AGE: the age of the Deployment resource itself, not of the replicas/Pods

We can also check the Deployment rollout status running kubectl rollout status deployment/httpd-deployment . If your Deployment has been successfully rolled out, you’ll see the following output:
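For this era of kubectl, the success message takes roughly this form:

```
deployment "httpd-deployment" successfully rolled out
```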

To see the ReplicaSet (rs) created by the Deployment, run kubectl get rs:
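A sketch of the command and its output, using the template hash value mentioned below (the AGE value is illustrative):

```shell
kubectl get rs

# Output (illustrative AGE):
# NAME                          DESIRED   CURRENT   READY     AGE
# httpd-deployment-2955525241   3         3         3         2m
```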

The ReplicaSet’s name is formatted as [DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH-VALUE], where 2955525241 is the Pod template’s hash value, automatically generated when the Deployment is created.

We can also run kubectl get pods --show-labels to see the labels automatically generated for each Pod:
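A sketch with one of the three replicas shown (the Pod name suffix and AGE are illustrative):

```shell
kubectl get pods --show-labels

# Output (illustrative):
# NAME                                READY     STATUS    RESTARTS   AGE       LABELS
# httpd-deployment-2955525241-c7fj2   1/1       Running   0          2m        app=httpd,pod-template-hash=2955525241
```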

Deleting a Deployment

If you want to delete the Deployment, simply run:
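Assuming the Deployment is named httpd-deployment as above:

```shell
kubectl delete deployment httpd-deployment
```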

That’s it! You can now create Pods with the Deployment Controller and use kubectl to check their status and replicas.

Deploying Pods with Supergiant

Supergiant is a Kubernetes-as-a-Service platform that simplifies deployment and management of Kubernetes clusters and resources. It provides an easy-to-use UI for application deployment and grants access to Helm repositories containing hundreds of popular applications.

In a nutshell, a Helm repository is a collection of packages that contain common configuration for Kubernetes applications, including an app’s runtime, ports, dependencies, communication, and networking settings. Supergiant ships with the /stable branch of the official Kubernetes Charts repository, which includes approximately 160 preconfigured apps. These charts are well-tested packages that comply with the technical requirements of Kubernetes.

The process of deploying apps in Supergiant is quite simple. To deploy a new app from a given repository, click “Apps” in the main navigation menu and then “Deploy New App“. Then, in the application list, select the app you wish to deploy and edit its configuration (see the deployment process in the GIF below). Each chart has its specific parameters and options that can be found in the official documentation for the chart.


Supergiant: app deploy

At a minimum, you should specify a cluster to which to deploy the app and create a user-friendly name for the deployment. After all edits are made, click the “Submit” button and watch your application be deployed in a fraction of the time. On successful deployment, the app’s status will change to “Running,” as displayed in the cluster stats.

Adding Custom Repositories

Supergiant allows adding custom repositories (both private and public), which means that you can access any Pods or applications you like. To add a new repository, select “App & Helm Repositories” under the Settings drop-down menu in the upper header. Enter the name and URL of the new repository in the blank fields and click “Add new Repo”. Supergiant will register the new repository and refresh the list of available apps in your apps list.

Supergiant: adding a Helm repo


For example, in the GIF above, we’ve added an official Kubernetes Charts incubator repository that includes apps that have not yet passed all requirements for /stable repository.


As we have seen, Pods are powerful Kubernetes abstractions that enable co-scheduling, replication, communication, updating, and other operations with tightly coupled containers. As a Kubernetes user, you have a wide array of options for deploying Pods to your cluster, including direct Pod creation, the Deployment Controller, and the ReplicationController. Supergiant provides access to these native Kubernetes API components while also enabling easy deployment of Helm charts via repositories accessible in the easy-to-use Supergiant dashboard. Supergiant abstracts the deployment of Pods even further, making the task easier for developers not familiar with complex Kubernetes concepts. In the next tutorials, we’ll dive deeper into other Kubernetes concepts and components, so stay tuned for upcoming content.
