Supergiant Blog

Product releases, new features, announcements, and tutorials.

Supergiant Announces Winners of Echo Drawing

Posted by Mark Brandon on August 31, 2016

The Supergiant team had a great time at the Linux Foundation’s LinuxCon and ContainerCon last week in Toronto. It was a special week for the Linux community, celebrating 25 years since Linus Torvalds sent an email missive requesting help on the kernel he had been developing, giving birth to the open-source movement. Torvalds himself stopped by the Supergiant booth.

Visitors to our booth were entered into a drawing to win an Amazon Echo. We had hoped to do this drawing at the event, but we didn’t actually get the list until after we got home. To show that everything was on the up and up, we recorded this video of Mark Brandon demonstrating the Echo and Alexa Voice Assistant before he randomly drew the names of the winners.


The two winners were John-Alan Simmons, CTO of Conference Cloud, and Salman Saidi from Intel.


How to Install Supergiant Container Orchestration Engine on AWS EC2

Posted by Brian Sage on August 22, 2016

The goal of this walkthrough is to help you provision Supergiant on an Amazon Web Services EC2 test server. By the end of this tutorial, you will be ready to deploy your first App on Supergiant. Supergiant is the easiest open-source container orchestration system to install and use. AWS hardware usage rates apply.

You won’t need to download any source code or binaries. We’ve packaged everything into an Amazon machine image for AWS; however, if you want to take a look at the source, it’s all on GitHub.



Prepare for Launch

Sign into your AWS console and prepare to launch a new EC2 instance through a series of wizarding steps.

Step 1

From the EC2 console, push the Launch Instance button, choose Community AMIs, and search for "supergiant". You should be able to find the latest stable Supergiant AMI release quickly.

Note: when we release new versions of Supergiant, the AMI ID will change and will be different for each region, but we will always release new AMIs under the name "Supergiant".


Search for the latest Supergiant AMI release, then press Select to choose your instance type.
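
If you prefer the command line, you can look up the same AMI with the AWS CLI. This is a minimal sketch, assuming you have the CLI configured for your target region and that the image name starts with "Supergiant" (the exact naming is an assumption):

# List the newest public AMI whose name starts with "Supergiant" in the current region
aws ec2 describe-images \
  --filters "Name=name,Values=Supergiant*" "Name=is-public,Values=true" \
  --query "sort_by(Images, &CreationDate)[-1].[ImageId,Name,CreationDate]" \
  --output table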

Step 2

Supergiant creates a dashboard that helps you manage any number of Supergiant Kubernetes instances. We recommend a single m4.large instance to run the latest version of Supergiant, but you can experiment with what works best for you. After you select your instance type, click Next: Configure Instance Details.


Select your instance type, then click Next: Configure Instance Details.
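
If you script your infrastructure instead of clicking through the wizard, the equivalent launch looks roughly like the sketch below. The AMI ID, key pair, and security group are placeholders; substitute values from your own account:

# Launch one m4.large instance from the Supergiant AMI (all IDs below are placeholders)
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type m4.large \
  --key-name my-key-pair \
  --security-group-ids sg-xxxxxxxx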

Step 3

There’s nothing we need to change here. You can adjust whatever you like, but the defaults are sensible enough. When you’ve made your changes (or not), click Review and Launch.


Configuring these settings is out of scope for this tutorial. To get started, we can leave everything here as-is. Click Review and Launch.

Step, er... 7

The wizard skips a few steps, but that works for us today. The only thing we skipped that we need to change is the security group, so we can allow HTTP and HTTPS traffic. Click Edit security groups on this screen.


Click Edit security groups to add rules to allow HTTP and HTTPS traffic.

Add Security

The Dashboard will listen for HTTP traffic on port 9090 when it's ready, so we need to click Add Rule to allow HTTP traffic to that port.

0.8.x UPDATE: The Dashboard now listens for HTTP traffic on port 80.

0.9.x UPDATE: For better security, Supergiant now creates a self-signed certificate and serves the dashboard over HTTPS on port 443.


Click Add Rule and set the Type to HTTP and the Port Range to 80. Click Add Rule one more time and set the Type to HTTPS and the Port Range to 443. I’ve changed the Source to My IP in this example. Click Review and Launch when done.
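
The same two rules can also be added from the AWS CLI. A quick sketch, with the security group ID and source IP as placeholders (swap in your own, or use 0.0.0.0/0 to allow any source):

# Allow HTTP (80) and HTTPS (443) from a single source IP
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
  --protocol tcp --port 80 --cidr 203.0.113.10/32
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
  --protocol tcp --port 443 --cidr 203.0.113.10/32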

Review and Launch Supergiant

It’s time to launch. If you want to use tags to identify this EC2 server, now is the time to add them. When you’re emotionally prepared for all the excitement, click Launch, and you will be asked what key pair you wish to use. Select whatever option you prefer.

Review changes, add tags if you want them, and click Launch.

Access the Dashboard

To access the dashboard, we need to get the randomly generated dashboard password from the server’s logs. From the Launch Status page, click the server’s ID to go to the EC2 console with only your new instance visible in the list. With the instance selected, click Actions > Instance Settings > Get System Log, then find your Supergiant Login Info near the bottom of the log.


With the instance selected, click Actions > Instance Settings > Get System Log.


Find your Supergiant Login UserName and Password near the bottom of the System Log.
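
If you’d rather stay in the terminal, the AWS CLI can pull the same system log. The grep pattern is an assumption about how the login lines are labeled, so adjust it to match what you see in your log:

# Fetch the console output and search for the Supergiant login info (instance ID is a placeholder)
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text | grep -i -A 2 supergiant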

Use your instance’s public DNS or IP address to access it with a web browser on port 80 (or over HTTPS on port 443 for 0.9.x and later), then use the UserName and Password from the System Log to authenticate. You’re now ready to administer Kube clusters using your Supergiant dashboard!
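
As a quick reachability check before you open a browser, you can hit the dashboard with curl. This sketch assumes a 0.9.x install serving HTTPS with its self-signed certificate (the -k flag skips certificate verification), with the public DNS name as a placeholder:

# Expect an HTTP response header block once the dashboard is up
curl -k -I https://ec2-XX-XX-XX-XX.compute-1.amazonaws.com/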

Access the Supergiant Community

Remember, if you have trouble, or if you want to talk to other users about how they're making the most of Supergiant, our community hangs out on the Supergiant public Slack channel.

We would love any feedback you want to leave. We're working hard to add features that our community wants most, so all your comments are helpful to us. Visit our Slack channel, and post your questions and comments.

Where-To Next?

This tutorial is the first in a series to help you get started with Kubernetes using Supergiant. From here, we recommend checking out the other tutorials in the series.


Stop By Our Booth at LinuxCon + ContainerCon!

Posted by Adam Vanderbush on August 17, 2016

Supergiant will join other containerization experts at LinuxCon and ContainerCon in Toronto on August 22-24 to introduce its new open-source container orchestration system. LinuxCon is the place to learn from the best and the brightest, delivering content from the leading maintainers, developers, and project leads in the Linux community and from around the world. ContainerCon expands upon the Linux Foundation’s success in Linux by bringing together leaders in the development and deployment of containers and the Linux kernel to continue to innovate on the delivery of open-source infrastructure.

By co-locating with LinuxCon, ContainerCon will bring together a diverse range of experts from cloud computing and Linux to offer a general technical conference that is open to everyone, creating a place where companies on the leading edge can network with users and developers to advance computing.

Supergiant is the first production-grade container orchestration system that makes it easy to manage auto-scaling, clustered, stateful datastores.

Scan your badge at our booth for your chance to win one of two Amazon Echos.


Supergiant began in 2015 when the team at Qbox.io needed a production-ready, scalable solution for their Hosted Elasticsearch Service. After massive initial internal success, it was refined to easily launch and manage any containerized application. Supergiant solves many huge problems for developers who want to use scalable, distributed databases in a containerized environment.

Built on top of Kubernetes, Supergiant exposes many underlying key features and makes them easy to use as top-layer abstractions. A Supergiant installation can be up and running in minutes, and users can take advantage of automated server management/capacity control, auto-scaling, shareable load balancers, volume management, extensible deployments, and resource monitoring through an effortless user interface.

We also have free astronauts, shirts, and stickers. 


Supergiant uses a packing algorithm that typically reduces compute-hour usage by 25%. It also enhances predictability, enabling customers with fluctuating resource needs to more fully leverage forward pricing such as AWS Reserved Instances.

Supergiant is available on GitHub. It is free to download and usable under the Apache 2.0 license. It currently works with Amazon Web Services; Google Cloud, Rackspace, and OpenStack are next on the roadmap. Supergiant is a member of and contributor to The Linux Foundation and the Cloud Native Computing Foundation.


Deploy a MongoDB Replica Set with Docker and Supergiant

Posted by Ben Hundley on July 19, 2016

Despite a number of annoying challenges, it is totally possible to run distributed databases at scale with Docker containers. This tutorial will show you how to deploy a MongoDB replica set in 5 steps using Docker containers, Kubernetes, and Supergiant.


NOTE: This tutorial was deprecated when Components were removed in version 0.11.x. It is valid for Supergiant versions 0.10.x and earlier.


About the Tools

Docker is much-loved for bundling applications with dependencies into containers. I’m going to assume you already have knowledge of Docker, or you wouldn’t be reading this tutorial. We’re going to use the official MongoDB container already on DockerHub.

Kubernetes solves many container orchestration challenges for us, including networking (for clustering) and external storage (for state and data persistence). However, the sequence of Kubernetes commands needed to deploy a clustered application with storage is far from straightforward. We’ll use Supergiant to solve these problems.

Supergiant solves Kubernetes complications by allowing pre-packaged and re-deployable application topologies. In more specific terms, Supergiant lets you use Components, which are somewhat similar to microservices. Components represent an almost-uniform set of Instances of software (e.g., Elasticsearch, MongoDB, your web application, etc.). They roll up all the various Kubernetes and cloud operations needed to deploy a complex topology into a compact entity that is easy to manage. If you don’t already have Supergiant running on AWS, you can set it up pretty quickly from the Install Supergiant page.

So let’s get down to business. This tutorial uses the Supergiant API directly with cURL for clarity on all the configuration and inputs, and we’ll have a running replica set in just 5 steps.
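
The examples below assume an API_HOST environment variable that points at your Supergiant server. The value here is only a placeholder; the scheme and port depend on your install:

# Placeholder: set this to the address of your Supergiant API
export API_HOST="http://your-supergiant-host"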

Step 1: Create an App

An App allows you to group your components. For example, you might have an app named “my-site-production”, where one of the components is “my-mongodb”.

curl -XPOST $API_HOST/v0/apps -d '{
 "name": "test"
}'

Step 2: Create an Entrypoint

An Entrypoint represents a cloud load balancer (such as an ELB on AWS). When you create components that expose network ports, you can optionally allow external access by assigning them to entrypoints, which then gives the component a publicly-reachable address. This will allow us to communicate with MongoDB from anywhere outside of the Kubernetes cluster.

Note: we’ll do this for our tutorial, but if you don’t need external access, it is smarter to leave communication on the private network by just using the private address without an entrypoint.

curl -XPOST $API_HOST/v0/entrypoints -d '{
 "domain": “example.com"
}'

Step 3: Create a Component

The component contains only 2 attributes: name and custom_deploy_script. Custom Deploy Scripts allow components to extend the standard Supergiant deployment flow. The deploy script used here is supergiant/deploy-mongodb, which configures a replica set based on the component information.

curl -XPOST $API_HOST/v0/apps/test/components -d '{
 "name": "mongo",
 "custom_deploy_script": {
   "image": "supergiant/deploy-mongodb:latest",
   "command": [
     "/deploy-mongodb",
     "--app-name",
     "test",
     "--component-name",
     "mongo"
   ]
 }
}'

Step 4: Create a Release

A Release holds all the configuration for a component. You can think of it like a commit to a git repo. Creating new releases allows you to adjust configuration, and deploy changes when needed.

curl -XPOST $API_HOST/v0/apps/test/components/mongo/releases -d '{
 "instance_count": 3,
 "volumes": [
   {
     "name": "mongo-data",
     "type": "gp2",
     "size": 10
   }
 ],
 "containers": [
   {
     "image": "mongo",
     "command": [
       "mongod",
       "--replSet",
       "rs0"
     ],
     "cpu": {
       "min": 0,
       "max": 0.25
     },
     "ram": {
       "min": "256Mi",
       "max": "1Gi"
     },
     "mounts": [
       {
         "volume": "mongo-data",
         "path": "/data/db"
       }
     ],
     "ports": [
       {
         "protocol": "TCP",
         "number": 27017,
         "public": true,
         "per_instance": true,
         "entrypoint_domain": "example.com"
       }
     ]
   }
 ]
}'

Since the release is the real meat of the matter, let’s look at the parts that can be adjusted without altering the actual topology. First, there’s the volumes section, in which the name, size, and type of the EBS volume (hard drive) can be edited. Then there are cpu and ram, both of which control the allotted min/max (or reserve/limit) range for each instance of the component. The mounts section corresponds to volumes, so make sure that the volume value matches the name used for the drive.

Step 5: Deploy

This will deploy the Component as outlined by the Release.

curl -XPOST $API_HOST/v0/apps/test/components/mongo/deploy

When the deploy finishes, you can retrieve the assigned address of each Instance of the component like so:

Request:

curl $API_HOST/v0/apps/test/components/mongo/releases/current/instances/0

Response:

{
 "id": "0",
 "base_name": "mongo-0",
 "name": "mongo-020160715200942",
 "status": "STARTED",
 "cpu": {
   "usage": 7,
   "limit": 250
 },
 "ram": {
   "usage": 36270080,
   "limit": 1073741824
 },
 "addresses": {
   "external": [
     {
       "port": "27017",
       "address": "supergiant-example-com-XXXXXXXXXX.us-east-1.elb.amazonaws.com:30682"
     }
   ],
   "internal": [
     {
       "port": "27017",
       "address": "mongo-0-public.test.svc.cluster.local:27017"
     }
   ]
 }
}

Using the external address of the first instance (from the addresses section of the response above), we can connect to the MongoDB shell remotely (from your local computer, for instance) like so:

mongo supergiant-example-com-XXXXXXXXXX.us-east-1.elb.amazonaws.com:30682

Your output upon connecting should look like this (the important part being the prompt, rs0:PRIMARY>, confirming the replica set is configured):

MongoDB shell version: 2.4.10
connecting to: supergiant-example-com-XXXXXXXXXX.us-east-1.elb.amazonaws.com:30682/test
...
rs0:PRIMARY>

Note: if you’re looking to deploy larger, sharded MongoDB clusters, you could use the following layout (a sketch of the differing container commands follows the list):

  • Component for each shard (optionally as a replica set), just as defined above. Runs the mongod process (see command section of the container definition).

  • Component for the config server replica set. Runs mongod with the --configsvr option.

  • Component for the routing layer. Runs mongos.

View Deploy a Sharded Cluster in the MongoDB manual for an overview of the setup.
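
As a rough sketch, the three component types differ mainly in the command their containers run; the release bodies otherwise follow the same structure as Step 4. The replica set names, the config component hostname, and the 27019 port are assumptions (27019 is MongoDB’s default config server port), and the exact --configdb format depends on your MongoDB version:

Shard components (one per shard), exactly as in Step 4:

"command": ["mongod", "--replSet", "rs0"]

Config server component:

"command": ["mongod", "--configsvr", "--replSet", "cfg0"]

Routing component (point mongos at the config servers’ internal addresses):

"command": ["mongos", "--configdb", "cfg0/mongo-config-0-public.test.svc.cluster.local:27019"]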

And there you have it -- a MongoDB replica set running as containers, with data stored reliably on detachable external drives.

This setup can be resized at any time (CPU, RAM, or disk volumes) by creating a new release that defines the new resource allocations you want. Supergiant will then gently rebuild each container on deploy.
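
For example, a follow-up release that grows each volume to 20 GB and raises the CPU and RAM allocations might look like the sketch below (the new numbers are arbitrary; everything else is unchanged from Step 4), followed by another deploy:

curl -XPOST $API_HOST/v0/apps/test/components/mongo/releases -d '{
 "instance_count": 3,
 "volumes": [
   {
     "name": "mongo-data",
     "type": "gp2",
     "size": 20
   }
 ],
 "containers": [
   {
     "image": "mongo",
     "command": [
       "mongod",
       "--replSet",
       "rs0"
     ],
     "cpu": {
       "min": 0,
       "max": 0.5
     },
     "ram": {
       "min": "512Mi",
       "max": "2Gi"
     },
     "mounts": [
       {
         "volume": "mongo-data",
         "path": "/data/db"
       }
     ],
     "ports": [
       {
         "protocol": "TCP",
         "number": 27017,
         "public": true,
         "per_instance": true,
         "entrypoint_domain": "example.com"
       }
     ]
   }
 ]
}'

curl -XPOST $API_HOST/v0/apps/test/components/mongo/deploy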


Why Join the Cloud Native Computing Foundation

Posted by Mark Brandon on June 20, 2016

Why Qbox joined and why you should consider it, too

As announced today by the Linux Foundation, Qbox has joined the Cloud Native Computing Foundation. The CNCF is a collaborative project of the non-profit Linux Foundation that brings together market participants in the containerization space to formulate and promote standards.  

Seeded in 2015 when Google contributed the Kubernetes project, CNCF now includes dozens of companies across the globe. By working together, the participants hope to drive the business value of Cloud Native applications like Kubernetes, Prometheus, Docker, Rocket, and the Open Container Initiative and to work toward a more hardware-agnostic future.

Qbox decided to join because our Supergiant project is based on Kubernetes, the most prominent project of the CNCF. Supergiant seeks to extend the benefits of containerization to stateful distributed apps. We created it to manage our Hosted Elasticsearch business, achieving eye-popping performance improvements while cutting our AWS bills in half.

However, it can just as easily be extended to other NoSQL technologies like Couchbase, Redis, and MongoDB. (Speaking of MongoDB, if you’re attending MongoDB World (#MDBW2016) next week in New York City, come visit the Supergiant Core Team at Booth #22… you may also use the sponsor code “Supergiant20” to get 20% off your registration.)

By joining with CNCF and The Linux Foundation, we’ll be in a position to learn about best practices in this skyrocketing space, participate in events, network with other interoperable companies, and possibly contribute to or otherwise influence the roadmap of these groundbreaking projects. Without organizations like these, technologies might go back to the bad old days when for-profit corporations would spend years in a standardization tug of war. Competing agendas pulled new technologies in directions that were not always mutually beneficial, hampering adoption and holding the future back. Recall the operating system wars of the 1990s, which started with half a dozen or more proprietary server OSes. Today, Linux is the undisputed champ in the enterprise.

Of course, managing these non-profit projects that are built around freely downloadable software is expensive. Companies that make or use these technologies should consider giving back by joining the CNCF, contributing code, or sponsoring CNCF and LF events. Just as with movies and music, if the creatives making the art never get paid, we’ll have less of it.


Supergiant Shows DB Containerization at MongoDB World

Posted by Adam Vanderbush on June 17, 2016

Supergiant will join other NoSQL experts at MongoDB World in New York on June 28 and 29 to introduce its new open-source container orchestration system. MongoDB World is a leading technology conference that focuses on database best practices and networking with peers and industry professionals. Supergiant is the first production-grade container orchestration system that makes it easy to manage auto-scaling, clustered, stateful datastores.

Supergiant began in 2015 when the team at Qbox.io needed a production-ready, scalable solution for their Hosted Elasticsearch Service. After massive initial internal success, it was refined to easily launch and manage any containerized application. Supergiant solves many huge problems for developers who want to use scalable, distributed databases in a containerized environment.

Come visit us and get some swag

Supergiant Lego Characters

Built on top of Kubernetes, Supergiant exposes many underlying key features and makes them easy to use as top-layer abstractions. A Supergiant installation can be up and running in minutes, and users can take advantage of automated server management/capacity control, auto-scaling, shareable load balancers, volume management, extensible deployments, and resource monitoring through an effortless user interface.

Want a shirt? We have hundreds.

Supergiant T-Shirts

Supergiant uses a packing algorithm that typically reduces compute-hour usage by 25%. It also enhances predictability, enabling customers with fluctuating resource needs to more fully leverage forward pricing such as AWS Reserved Instances.

Supergiant is available on GitHub. It is free to download and usable under the Apache 2.0 license. It currently works with Amazon Web Services; Google Cloud, Rackspace, and OpenStack are next on the roadmap. Supergiant is a member of and contributor to The Linux Foundation and the Cloud Native Computing Foundation.
