Top Reasons Businesses Should Move to Kubernetes Now

Posted by Mike Johnston on May 26, 2016

If you haven’t been able to find your way out of the datacenter lately, you may not have noticed that the container management market is exploding.

Until the last couple of years, containers were a cool concept, but running them at scale, and in a way that is actually as performant as bare metal (or even as VMs), was not really possible. Shared-service companies that ventured into multi-tenant infrastructures based on Docker delivered horrible performance to their customers. The solutions they used were hand-rolled, and, unless you had the sheer capital and a dev army to build something good, most of them were pretty poor. Managing inter-container networking, persistent storage, and autoscaling (along with many other standard infrastructure features) in an automated way just wasn’t in the cards.


One organization with the capacity to produce a really good container management platform was Google, which had been running containers internally for years on its Borg container management system. In 2014 the Google gods chose to release a system built on those lessons to the open source wild as the Kubernetes project.

In this article I will go deeper into why Kubernetes is such a game changer in the container space, and in this series I will continually take you down the rabbit hole into Kubernetes features that you may not know about or that may totally blow your mind.



As the title suggests, this particular article is targeted at both business decision makers and DevOps engineers who want the reduced overhead and performance benefits of a containerized infrastructure.

This series will guide you through how to use Kubernetes in your business and how it makes financial sense. I will link to technical documents throughout the series, but it does not make sense for me to go into too much technical detail when most of the technical information is readily available through your friendly Google search bar.

I will try to break concepts down in an understandable way, and if you or your team would like to dive deeper into the business case or technical details of any of my articles, I encourage you to hit us up in the comments, visit our Supergiant subreddit, or join our Supergiant Slack channel.

Containers

The first concept I want to cover is containers. At a high level, I have noticed a lot of confusion about what containers actually are, and this has led to some misconceptions about how they should be used and whether they are right for your infrastructure.

Let’s get something out of the way right now: Containers are not virtual machines.

Containers, at their heart, are a way to fence an application off from other areas or applications on your servers. Applications inside a container run directly on the underlying server, sharing the host kernel, without the overhead of virtualization. They also have the benefit of being portable and self-contained, which means all the time and money spent configuring servers with dependencies, support libraries, etc. is, for the most part, a thing of the past.



With containers, all of your hardware can receive the same configuration, and the containers run as self-contained apps on top of a pile of compute power.

I will go into making great containers later in the series, but you can see really quickly that containers allow you to standardize your infrastructure across the board, lowering engineering costs.

What does this mean for my technology team?

Developers no longer need to be at odds with engineers. A developer can create, test, and build an app entirely in a container and then simply hand the container over to the engineering staff or container management platform. The engineer or container management service just needs to know the container's network and storage requirements. They really don’t need much knowledge about the container’s function -- just that it runs. This speeds time to market and lowers engineering costs by bypassing the server-level engineering normally needed to launch applications.

In general, containers allow your team to work better together and to iterate/develop faster.

So what does Kubernetes bring to the table?

Now that you plan to run your applications in a container-based infrastructure, how do you herd the cats? A large infrastructure could have hundreds if not thousands of applications, and they all need networking, storage, and alerting/management. These features must be automated. There has been a flood of new container management platforms in the last couple of years. Some are good; some are horrible. After much review, trial, and error at scale, we think the best solution available right now is Kubernetes.

Some solutions are really good at networking, while others are really good at managing persistent storage -- but we feel Kubernetes is the only one with the “whole package.”

Kubernetes clusters can automatically handle networking, storage, autoscaling, logs, alerting, etc. for all your containers. It has been our experience that Kubernetes clusters are extremely low maintenance. Once they are set up and properly configured, you can expect your applications to run with extremely low downtime, great performance, and a greatly reduced need for support intervention. Once we began to deploy our services on Kubernetes, our support team saw a dramatic decrease in support issues and was able to reallocate its time to building even stronger relationships with our customers. I think we can agree that is where we would like our support teams to be able to focus their efforts.
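
To make the “automatic networking” part concrete, here is a minimal sketch of what an engineer actually hands to Kubernetes (the names, image, and replica count are placeholders, and API versions vary with your cluster release): a Deployment that keeps a few copies of a container running, and a Service that gives them one stable, load-balanced network address. No hand-rolled wiring required.

```yaml
# A hypothetical stateless web app: Kubernetes keeps 3 copies running
# and replaces them if they die.
apiVersion: extensions/v1beta1   # on newer clusters: apps/v1 (which also needs a spec.selector)
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.9          # any container image your developers hand over
        ports:
        - containerPort: 80
---
# A stable, load-balanced network endpoint for those replicas.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

In practice, that little bit of declaration is the whole handoff I described above: the engineer needs the image name, a port, and not much else.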

Another unsung hero that Kubernetes brings to the table is its compute management scheme.

I am not kidding when I say this: Kubernetes can significantly reduce your hardware costs by better utilizing the hardware you are paying for. We have seen 40%-50% reductions in our hardware costs by utilizing the Kubernetes resource scheme, and it does this without impacting application performance or customer experience at all. We even noticed an improvement to our overall application performance across the board. Read more about this below.
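
The mechanism behind those savings is worth a quick peek. Every container can declare resource requests and limits, and the Kubernetes scheduler packs containers onto nodes based on what they request instead of reserving a whole VM for a half-idle app. A hedged sketch (this is a fragment of a pod spec, and the numbers are made up purely for illustration):

```yaml
# Fragment of a pod/deployment spec. The scheduler bin-packs pods onto
# nodes using "requests"; "limits" cap runaway consumption.
containers:
- name: api
  image: example/api:1.0      # placeholder image
  resources:
    requests:
      cpu: 250m               # reserve a quarter of a CPU core
      memory: 256Mi
    limits:
      cpu: 500m               # hard ceiling; CPU is throttled above this
      memory: 512Mi
```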

Improvements to Kubernetes

After all my gushing above, it may seem like there isn’t anything left to improve, but there is. We call it Supergiant.

What does Supergiant do differently from Kubernetes, and why wouldn’t I just use Kubernetes?

The nice thing about Supergiant is that you can just use Kubernetes. We don’t modify Kubernetes in any way (we feel that would be sacrilege), so the Supergiant open-source CLI can be used to deploy and manage Kubernetes clusters even if you have no intention of using the rest of Supergiant at all.


When we built Supergiant, our goal was to add value by focusing on performance and ease of management, so we built an easy-to-use UI and we abstracted more complicated features into easy-to-understand forms: installation, persistent storage, load balancing, and hardware (read: cost) autoscaling.

Installing Kubernetes on your own hardware can be a tough hurdle for a lot of teams, so we made it easy to install. I won’t go into more detail here, but you can use the CLI, or you can get started faster with our installer script that runs through a handful of CLI actions. Our installer script may be accessed from the Install Supergiant page on our site.

Supergiant gives you tools to manage persistent storage that allow you to migrate data, change hard drive size, type, etc. on the fly. We focused on performance and ease of management for stateful containers, like databases, and we made them accessible to users and organizations who don’t have 6 months of lead time to learn a new management platform.
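
For context, the raw Kubernetes primitive underneath that tooling is the PersistentVolumeClaim. The sketch below is plain Kubernetes, not Supergiant’s own syntax (the name and size are placeholders); Supergiant wraps objects like this in its UI and adds the resize and migration conveniences mentioned above.

```yaml
# A claim for 20Gi of persistent disk that a database pod can mount,
# independent of which node the pod lands on.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```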

Supergiant also augments Kubernetes hardware management by squeezing even better utilization out of existing hardware and by autoscaling hardware where and when it is needed.

In essence, Supergiant packages up the internal tools we wrote to make Kubernetes work well with our Qbox.io hosted Elasticsearch infrastructure.

Once we had built them, we wanted to make these tools available, free forever, to anyone who wants the same benefits we got.

Thanks for your time. Check us out at supergiant.io, Twitter, Reddit, Slack, and GitHub!
