In this blog post, we will compare traditional web application architecture with the microservice architecture that is all the rage these days.
We will explain why microservice architecture is often preferred over the traditional approach for web applications and why Supergiant is a great fit for building microservices.
Traditional web application architecture consists of three major components: the client (the browser or other front end), the server-side application, and the database.
In this kind of architecture, the server-side application acts as a single large structure. If we need to introduce a significant change or a new feature, we have to modify and redeploy that entire server-side application.
This architecture was a good practice before the age of the cloud, but cloud deployment has made scaling these types of applications problematic.
The scaling issue arises when many instances of the application are running and we need to make a change. Whether the change is small or big, it must be rolled out to every instance of the application, which means longer change cycle times.
The popularity of microservice architecture has skyrocketed among developers for a number of reasons, such as platforms moving to the cloud and the increasing number of devices beyond conventional computers (read: IoT) interacting with a single application.
There are no strict definitions or guidelines that tell us how a microservice application should be designed or architected, but it's agreed across the industry that good microservice architecture builds an application as several small components that interact with each other independently, so that the failure of one does not bring down the rest.
When each component service is independently deployable and scalable, this architecture gains a significant advantage over the traditional type because an entity that requires change is independent and does not affect other services. Therefore, not only is the deployment of changes simpler, but the scaling of a particular service is also greatly simplified and easier to control. Traditionally, all of the components had to be scaled together, or at least it took additional effort to scale only the needed ones.
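To make the contrast concrete, here is a minimal sketch in Python of two microservices running as independent HTTP servers. The service names and payloads (`cart`, `catalog`) are hypothetical; the point is that either service can be stopped, redeployed, or scaled without touching the other.

```python
# Minimal sketch: two "microservices" as independent HTTP servers.
# Service names and payloads are hypothetical.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_handler(payload: bytes):
    """Build a request handler that always returns the given payload."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(payload)
        def log_message(self, *args):  # keep the example output quiet
            pass
    return Handler

def serve(payload: bytes) -> HTTPServer:
    """Start one service on its own OS-assigned port, in its own thread."""
    server = HTTPServer(("127.0.0.1", 0), make_handler(payload))
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def get(server: HTTPServer) -> str:
    port = server.server_address[1]
    return urllib.request.urlopen(f"http://127.0.0.1:{port}/").read().decode()

cart = serve(b"cart ok")
catalog = serve(b"catalog ok")
cart_resp, catalog_resp = get(cart), get(catalog)
print(cart_resp, catalog_resp)

# Shut down one service; the other keeps serving independently.
cart.shutdown()
cart.server_close()
print(get(catalog))
catalog.shutdown()
catalog.server_close()
```

In a real deployment each service would of course live in its own container on its own host, but the property being illustrated is the same: replacing the cart service does not interrupt the catalog service.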
Here’s an illustration that depicts the difference between the scaling methodologies for a traditional monolithic service and a typical microservice for the same application.
Using this illustration, we note that the microservices architecture scales only those services that need scaling, whereas with traditional monolithic architecture, the entire application has to be scaled.
Another advantage of microservices architecture is that its methodology generally favors development around business capabilities instead of the older concept of component-based development. For example, if a feature like “add to cart” needed a change, the old model required the whole chain of teams, from the UI team to the DB team, to coordinate and stay vigilant, because a change in one place could break things elsewhere.
In the microservices model, updates are much simpler, and the chance of unintended breakage is much lower because everything runs independently.
Microservice-based architectures are heavily decentralized because they focus on the reusability of individual components. Many applications use pre-written microservice libraries for quick deployment and modify them for their own use case, making development much faster.
With the advent of the cloud, developers have naturally drifted toward microservices when designing applications, hence the growing acceptance of container-based app deployment systems. With the adoption of microservices architecture, the need to isolate services from one another has become a priority, and this is easily achieved with containerization.
Containers are self-contained application execution environments with their own allocations of memory, CPU, and other resources. They enable more efficient, more fluid utilization of system resources than virtual machines do, because VMs carry the additional weight of a full guest operating system.
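As a brief illustration, assuming Docker as the container runtime and a hypothetical image name, those per-container resource allocations are typically expressed as flags when the container is launched:

```shell
# Run a container detached, with its own CPU and memory allocation.
# "my-service:1.0" is a hypothetical image name.
# --memory caps the container at 256 MiB of RAM; --cpus allows
# it at most half of one CPU core.
docker run --detach --memory=256m --cpus=0.5 my-service:1.0
```

Every container on the host shares the same kernel, but each gets its own isolated slice of resources.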
As containers have become popular, many container management systems have emerged to fill conceptual and operational gaps. Since containers share the kernel of the host they run on, what happens when we need many containers spread across many hosts? When one container is ready to scale, how will the host hardware be allocated and optimized? Such scalability questions were answered very efficiently by the arrival of Kubernetes, the cloud-scale orchestration platform for containers.
Kubernetes manages clusters of hardware for use by multiple containers. The result is better container performance without unwanted resource waste.
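In Kubernetes terms, the unit of scaling for a stateless service is typically a Deployment. A minimal sketch (with hypothetical names and image) might look like this, where `replicas` controls how many copies of just this one service run:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cart                 # hypothetical service name
spec:
  replicas: 3                # scale only this service, independently
  selector:
    matchLabels:
      app: cart
  template:
    metadata:
      labels:
        app: cart
    spec:
      containers:
        - name: cart
          image: my-registry/cart:1.0   # hypothetical image
```

Scaling up or down is then a matter of changing `replicas` (or running `kubectl scale deployment cart --replicas=5`), leaving every other service untouched.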
Now comes Supergiant, which is built on top of Kubernetes. Supergiant turns Kubernetes into a more efficient cluster deployment platform by equipping admins with easy configuration options, automatic load balancer management, and tighter resource allocation controls for maximum efficiency.
To keep our illustration simple, let's consider how a traditional web application would look on Supergiant. Here we have two main components: the web-app component and the database component.
As a microservice architecture grows, the optimizations performed by Supergiant dramatically benefit the performance of the entire system. Also, this architecture results in better microservice abstraction.
Learn more about Kubernetes, which Supergiant is built on top of.
Imagine an application composed of many services. We create a good microservice architecture by deploying each service as an independent container. When it comes time to scale hardware for our application cloud, Supergiant handles it for us, and our microservices remain resilient.
Until this point, everything seems straightforward. But what if we had to make a significant change to one of the services of our application? In a normal container cluster, this means tracking down and replacing that particular service individually, repeating the same steps on every node, which is a tedious and time-consuming task.
Supergiant saves devops time because it can propagate such a change to every node in our cluster with a single command. This time savings applies not only to individual service changes and replacements but also to security, logging, any other per-service operation, or even the application as a whole.
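For comparison, the Kubernetes layer that Supergiant drives underneath can already roll a service change out to every replica with a single command. The deployment and image names below are hypothetical:

```shell
# Roll out version 2.0 of the cart service to every replica;
# Kubernetes replaces the running pods gradually (a rolling update).
kubectl set image deployment/cart cart=my-registry/cart:2.0

# Watch the rollout progress until it completes.
kubectl rollout status deployment/cart
```

One command updates the service everywhere it runs, with no need to touch each node by hand.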