Containerization is rapidly becoming a standard for managing applications in large production environments. As reported by Datadog, half of the companies that run over 1,000 hosts have already adopted containers.
To run containers efficiently, companies are increasingly turning to container orchestration. According to a CNCF survey, over 80% of companies running containers also used container orchestration at the end of 2018, up from the 45% reported by The New Stack in 2016.
The container orchestration market has evolved rapidly over the past couple of years, and there are now over a dozen mature container orchestration platforms available. However, Kubernetes is increasingly the first choice among container users. For example, Datadog reports Kubernetes usage rising from 22.5% in October 2017 to 32.5% in October 2018.
Does this mean that Kubernetes is the only game in town as of 2019? In this article, we answer that question by looking at the key pros and cons of the platform.
The article is organized as follows. First, we discuss Kubernetes architecture and key features, and then we focus on major advantages and disadvantages of the platform. Let’s get started!
If you are new to the Supergiant blog, this might be the first time you’ve heard of Kubernetes. Below is our short non-technical introduction to the platform. For an in-depth review, please consult the official docs.
Kubernetes (K8s for short) is an open-source container orchestration platform introduced by Google in 2014. It is the successor of Borg, Google’s in-house orchestration system, which accumulated over a decade of the tech giant’s experience running large enterprise workloads in production. In 2014, Google decided to further the container ecosystem by sharing Kubernetes with the cloud native community. Kubernetes became the first graduated project of the newly created Cloud Native Computing Foundation (CNCF), an organization conceived by Google and the Linux Foundation as the main driver of the emerging cloud native movement.
So, what’s the deal with Kubernetes anyway?
The platform’s main purpose is to automate the deployment and management (e.g., updates, scaling, security, networking) of containerized applications in large distributed clusters. To this end, the platform offers a number of API primitives, deployment options, networking, container and storage interfaces, built-in security, and other useful features.
To make it sound less complicated, here’s what a basic process of running applications in Kubernetes looks like.
First, you package your application with all its dependencies into a Linux container (for example, with Docker).
Then, you create an API resource in Kubernetes where you specify the container image to use, the number of replicas to run, ports, volumes, update policy, configuration, and other parameters.
Third, you register the API object with the Kubernetes API server.
Thereafter, Kubernetes works to maintain the desired state of the API resource you declared. For example, it tries to run the number of replicas you specified, re-schedules the app onto another node if the node hosting it fails, performs liveness and readiness probes, etc.
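The steps above can be sketched as a single Deployment manifest. This is a minimal example; the app name, image, and replica count are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical application name
spec:
  replicas: 3               # desired state: keep three Pod replicas running
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.17   # illustrative container image
        ports:
        - containerPort: 80
```

Registering this object with the API server (e.g., `kubectl apply -f deployment.yaml`) is all it takes; from then on, Kubernetes continuously reconciles the cluster toward three healthy replicas.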
In sum, Kubernetes provides a way to maintain the desired state of your application. With this platform, you declare how you want your app to run, and Kubernetes takes care of the rest. Kubernetes also allows administrators to efficiently manage cluster resources both on-premises and in the cloud.
The next question is: what components does Kubernetes consist of? You can run Kubernetes on a single node, but a production cluster usually consists of one or more masters and a set of non-master (worker) nodes.
A Kubernetes master runs the Control Plane, which is responsible for maintaining the desired cluster state we discussed above. In turn, the Control Plane consists of several components with unique roles (see the image below):
Image: Kubernetes architecture
Applications deployed by users usually run on non-master nodes. These nodes communicate with the master via kubelet, a central node component that performs many orchestration tasks such as registering nodes with the API server, starting and killing containers, monitoring containers, executing liveness probes, collecting container and node metrics, etc.
Also, nodes run kube-proxy, a program that reflects Kubernetes networking services on each node.
As the image above suggests, Kubernetes architecture is quite complex, and we’ve just touched the tip of the iceberg in this discussion. For a more in-depth overview of the Kubernetes architecture, consider reading the following article.
Now that you have a basic idea of how Kubernetes works under the hood, let’s discuss the key advantages of the platform.
In the open-source world, the popularity of software has many positive implications: it goes hand in hand with frequent community contributions, faster development and release cycles, and better maintainability (bug fixes and feature updates).
At this time, Kubernetes is the most popular orchestration platform in the world, with a vibrant community of end users, developers, and maintainers. Let the facts speak for themselves: as of July 18, 2019, Kubernetes had 55,314 GitHub stars compared to just 5,632 for Docker Swarm and 4,226 for Apache Mesos, and it had been forked 19,212 times compared to only 1,113 for Docker Swarm and 1,610 for Apache Mesos (see the image below).
Also, Kubernetes is supported by all major cloud providers, including AWS, Google Cloud, Microsoft Azure, and IBM Cloud, each of which offers a managed Kubernetes service.
The conclusion is obvious: companies adopting Kubernetes will get a mature container orchestration platform. Kubernetes adopters will not fall behind the competition and technology advances because Kubernetes is the main driver of container innovation right now.
Companies seeking to adopt container orchestration should understand that orchestration is only one piece of a puzzle. To run containers in production, they will also need many supporting tools, interfaces, and services such as security, monitoring, logging, networking, and more.
Kubernetes facilitates the integration of these additional tools and services into the platform, and the Kubernetes community works hard to keep them compatible with Kubernetes. You can find tools for virtually any use case and task, including monitoring (e.g., Prometheus), logging (e.g., Fluentd and Elasticsearch), tracing (e.g., Jaeger), service meshes (e.g., Istio and Linkerd), and CI/CD (e.g., Jenkins and Spinnaker).
Further, all major databases, application stacks, and networking solutions are actively developing Helm charts (pre-packaged applications that can be easily deployed in Kubernetes). The platform’s users can easily get up and running with anything ranging from MySQL or WordPress to big data analytics and Machine Learning using customizable community-developed charts.
Kubernetes is a platform built around open source and cloud native standards. One important aspect of this is its broad support for container runtimes. Docker is very popular, but users may need other container runtimes as well, and Kubernetes takes care of this need.
In v1.5, Kubernetes introduced the Container Runtime Interface (CRI), an interface that allows a broad array of container runtimes to be plugged in without recompiling the platform. Kubernetes currently supports such container runtimes as Docker, containerd, CRI-O, frakti, and other CRI-compliant runtimes.
Kubernetes offers many API resources and primitives for running virtually any kind of application, including Pods, Deployments, StatefulSets, DaemonSets, Jobs, CronJobs, and Services. Other container orchestrators offer far fewer workload and deployment options.
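For instance, a scheduled task can be expressed as a CronJob; the name and command below are hypothetical:

```yaml
apiVersion: batch/v1beta1    # batch/v1 in newer Kubernetes versions
kind: CronJob
metadata:
  name: nightly-report       # hypothetical job name
spec:
  schedule: "0 2 * * *"      # standard cron syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: busybox
            command: ["sh", "-c", "echo generating report"]
          restartPolicy: OnFailure
```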
Kubernetes has excellent support for stateful applications that require stable and persistent storage. The platform supports local persistent storage, cloud storage, network storage, software-defined storage (SDS), and many other options. With Kubernetes in-tree volume plugins, you can attach any popular storage solution to your containers and reliably persist your application data.
Also, Kubernetes supports the Container Storage Interface (CSI) and FlexVolume, allowing users to attach any storage solution that ships a CSI or FlexVolume plugin. As a result, you can place your Kubernetes workloads on any storage infrastructure, be it standard block storage or distributed network storage.
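As a sketch, a stateful Pod typically claims storage through a PersistentVolumeClaim; the names and size here are illustrative, and the actual volume is provisioned by whatever storage backend the cluster uses:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi            # illustrative size
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: db
    image: postgres:11
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim   # binds the Pod to the claim above
```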
Efficient networking is very important when running containers in a distributed environment and/or using microservices. However, creating a viable networking solution for your containerized workloads from scratch may involve a lot of work. Kubernetes helps companies avoid “reinventing the wheel.”
Kubernetes is based on a flat networking model in which each Pod gets a unique IP address reachable across nodes. This flat network is, in essence, an overlay network that can be configured with CNI-compliant plugins such as Weave Net, Contiv, and Cilium. Because Kubernetes supports the Container Networking Interface (CNI), users can easily configure cluster networking with any of these plugins.
In addition to cluster-wide Pod networking, Kubernetes offers a number of networking features, including Services for stable virtual IPs and load balancing, built-in DNS-based service discovery, Ingress for HTTP(S) routing, and network policies for controlling traffic between Pods.
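For example, a Service gives a set of Pods a stable virtual IP and DNS name; this minimal sketch assumes Pods labeled `app: demo-app` exist:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service     # hypothetical name; also becomes the DNS name
spec:
  selector:
    app: demo-app        # traffic is load-balanced across Pods with this label
  ports:
  - port: 80             # stable Service port
    targetPort: 8080     # port the containers actually listen on
```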
Kubernetes facilitates agile CI/CD pipelines with its built-in rolling update feature. A rolling update is a sequential application update in which newer versions of an application temporarily co-exist with older ones. This approach enables a smooth transition between versions in production without downtime.
Also, in Kubernetes you can easily implement various update patterns such as Blue/Green deployments, canary releases, and A/B testing. You can also roll back to a previous version of your app if something goes wrong.
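Rollout behavior is controlled by the Deployment's update strategy. In this fragment, at most one extra Pod is created and at most one Pod is unavailable at any point during the update:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app           # hypothetical name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one Pod above the desired count
      maxUnavailable: 1    # at most one Pod down during the rollout
  # selector and Pod template omitted for brevity
```

A failed rollout can be reverted with `kubectl rollout undo deployment/demo-app`.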
Kubernetes provides an advanced resource management model for managing compute resources at various levels, from individual containers to entire clusters. For example, the resource requests and limits feature allows you to allocate specific amounts of CPU and RAM to containers and to create different classes of Pods depending on their resource requirements.
Also, K8s administrators can intelligently control the use of resources across teams, applications, and users by separating a cluster into logical areas called Namespaces. You can set default requests and limits for a Namespace, or define resource quotas. All these features make resource management in Kubernetes extremely flexible and agile.
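A minimal sketch of both levels, with illustrative numbers and a hypothetical team-a Namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: team-a
spec:
  containers:
  - name: app
    image: nginx:1.17
    resources:
      requests:            # guaranteed minimum, used for scheduling decisions
        cpu: 250m
        memory: 256Mi
      limits:              # hard ceiling enforced at runtime
        cpu: 500m
        memory: 512Mi
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"      # total CPU requests allowed in the Namespace
    limits.memory: 16Gi    # total memory limits allowed in the Namespace
```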
Kubernetes supports a number of security features for your clusters, including role-based access control (RBAC), Secrets for storing sensitive data, network policies, Pod security policies, and TLS encryption for API traffic.
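As an example of RBAC, this sketch grants read-only access to Pods in one namespace (the role, binding, and user names are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]               # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                    # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```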
Kubernetes is built with extensibility and pluggability as core design principles. Users can extend K8s with custom APIs, resources, storage plugins, and more. The most popular extension points include Custom Resource Definitions (CRDs), custom controllers and operators, admission webhooks, API aggregation, and the CRI, CSI, and CNI plugin interfaces.
These features make Kubernetes a highly extensible platform that provides flexibility and allows for deep customization.
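For instance, a Custom Resource Definition teaches the API server a brand-new resource type; the example.com group and Backup kind below are made up:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com    # must be <plural>.<group>
spec:
  group: example.com           # hypothetical API group
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
```

Once this is applied, `kubectl get backups` works like any built-in resource, and a custom controller can reconcile Backup objects.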
Since v1.6, Kubernetes has shipped with the cloud-controller-manager, a master component that enables integration with major cloud providers by running cloud-specific checks and control loops inside the cluster. Kubernetes also simplifies the use of cloud-based load balancers, Ingress controllers, cloud storage, and other resources specific to a given cloud.
Kubernetes is great but, like anything else, not perfect. Moreover, transitioning to Kubernetes comes with numerous challenges that companies seeking to adopt it must address. Here’s our list of cons and challenges.
Kubernetes is not an easy platform to learn, even for the most experienced developers and DevOps engineers. Teams seeking to adopt Kubernetes have a long way to go from understanding basic K8s concepts and primitives to mastering advanced development and operations concepts. This journey takes time and requires considerable effort.
Moreover, it’s not just about Kubernetes; it’s about the entire cloud-native ecosystem. Learners should develop skills in a variety of subjects such as networking, cloud computing, distributed applications, distributed logging, service meshes, and many more. Therefore, if you want to get up and running with container orchestration quickly, Kubernetes may not be the best option for you.
However, if you decide to learn Kubernetes or to train your team, the goal is realistic. There are a number of training courses offered by reputable Kubernetes service providers. For example, Supergiant.io offers training courses at foundational, intermediate, and advanced levels. You can learn more about them here.
Kubernetes consists of multiple components that must be configured and installed separately to initialize a cluster. If you install Kubernetes manually, you also have to configure security, which includes creating a certificate authority and issuing certificates. Other important pre- and post-installation tasks include installing a CNI networking plugin, setting up an etcd cluster, joining worker nodes, configuring cluster DNS, and setting up RBAC.
Looking at all these tasks, it is easy to see why Kubernetes has earned a reputation for being hard to install, configure, and manage. Luckily, however, the Kubernetes community has developed a number of cluster provisioning tools, such as kubeadm and Kops (Kubernetes Operations), that simplify Kubernetes installation.
Companies seeking to adopt Kubernetes quickly can also use managed Kubernetes in the cloud or turnkey solutions like the Supergiant Toolkit, which lets you easily spin up clusters via a simple UI.
Kubernetes does not provide a high-availability (HA) mode by default. To create a fault-tolerant cluster, you have to manually configure HA for your etcd cluster, master components such as kube-apiserver, load balancers, nodes, and applications. By contrast, alternatives like Docker Swarm and Mesos/Marathon at least ship with built-in master HA via a Raft or ZooKeeper quorum.
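With kubeadm, for example, building an HA control plane involves pointing every node at a load balancer placed in front of the masters; the endpoint below is a placeholder:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controlPlaneEndpoint: "lb.example.com:6443"   # load balancer fronting all masters
```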
If you want to get up and running with Kubernetes fast, you may not have time to develop in-house Kubernetes expertise, so you’ll probably turn to established Kubernetes experts. That may pose a problem because K8s talent is not cheap: according to PayScale, the average salary for the skill “Kubernetes” was $116,000 as of June 2019. The budgets of many small and medium enterprises are simply too small to allow hiring established Kubernetes experts at this pay rate.
As this analysis demonstrates, Kubernetes is a great choice for companies seeking to adopt container orchestration. It is suitable for any compute environment, application type, and business-specific requirement. Also, Kubernetes offers many useful features out of the box, such as security, broad container runtime support, persistent storage, and networking. It also grants companies the flexibility to extend the platform, which makes Kubernetes extremely customizable and configurable.
Overall, the Supergiant team considers Kubernetes to be the best container orchestration platform at this time and offers services that can help you adopt Kubernetes faster and more efficiently.
If you need help in the design, deployment, and management of your Kubernetes clusters, Supergiant offers support from certified support administrators and engineers with many years of experience in running Kubernetes in production.
Learn more here.
We offer on-site Kubernetes training courses that cater to your team’s experience and goals. Whether you need an introduction to Kubernetes or are seeking an advanced class to help you pass the Certified Kubernetes Administration exam, Supergiant has the right course for you.
Learn more here.