Introduction to Istio Service Mesh for Kubernetes

Modern cloud-native applications are moving away from the monolithic approach toward a microservices architecture, in which an application is split into multiple loosely coupled parts (referred to as microservices) that share communication interfaces, access control rules, and security policies. Microservices might be written in different programming languages, and they might have multiple instances and versions running concurrently on different platforms (e.g., in the cloud or on-premises). As a result, a microservices application comes to resemble a network of services, which introduces new complexities and challenges for maintenance, monitoring, performance, and security.

Microservices applications have requirements that are difficult to meet within the application code. For example, in terms of traffic management, microservices applications need to have the following features:

  • Routing and load balancing traffic between different microservices and between different versions of the same microservice.
  • Implementing failover management features like circuit breakers and timeouts.
  • Splitting traffic between different microservices in a controlled manner.
  • Monitoring traffic.

In addition, in terms of security, microservices should be able to:

  • Defend against man-in-the-middle attacks with traffic encryption.
  • Have mutual TLS and fine-grained access controls.
  • Use auditing tools to record which users performed which actions, and when.

Finally, remember that interactions between microservices in the cloud are hard to monitor using traditional monitoring approaches. In order to gain real-time insights into their performance and to be able to identify issues, we might need distributed tracing, monitoring, and logging features specifically designed for the cloud-native environment.

To address all these challenges, microservices require efficient load balancing, authentication and authorization, traffic routing, encryption, circuit breaking, tracing, monitoring, and more. All these features should also be added to the application without affecting its core functionality. This is where the concept of a service mesh becomes so important!

What Is a Service Mesh?

A service mesh is a configurable infrastructure and network layer for microservices applications that enables efficient interaction between them and integrates all the functionality described above. A service mesh is normally implemented through a proxy instance, called a sidecar, that is added to each service instance. Sidecars do not affect the application code and abstract the service mesh functionality away from the microservices. This allows developers to concentrate on developing and maintaining the application, while the operations team manages the service mesh in the distributed environment.

The most popular service meshes are Linkerd and Istio, the latter built on top of the Envoy proxy. In this tutorial, we’ll discuss the Istio Service Mesh, launched by Google, IBM, and Lyft in 2017. Its architecture and features are discussed below.

Istio Service Mesh

Istio is a platform-independent service mesh that can run in a variety of environments, both in the cloud and on-premises, on platforms such as Kubernetes, Mesos, and Consul-based infrastructure. The platform lets you create a network of microservices with service-to-service authentication, monitoring, load balancing, traffic routing, and the other service mesh features described above. You create the Istio service mesh for your microservices application by adding a special sidecar proxy that intercepts all network calls between your microservices and subjects them to Istio checks and user-defined traffic rules.

The Istio architecture consists of two basic components: the Data Plane and the Control Plane (see the image below).

Data Plane

The data plane is based on a set of intelligent Envoy proxies deployed as sidecars inside the Pods of the relevant Services. Istio leverages such Envoy features as dynamic service discovery, load balancing, TLS termination, circuit breakers, HTTP/2 and gRPC proxying, health checks, staged rollouts with percentage-based traffic splits, fault injection, and telemetry.

Control Plane

The control plane configures and manages the Envoy proxies to route traffic to microservices. It is also used to configure Mixer, a general-purpose policy and telemetry hub that can enforce access control and usage policies and collect metrics from the proxies to guide its decisions. Mixer uses request-level attributes extracted by the proxies to create and manage its policy decisions.

In general, the Control Plane functionality involves the following:

  • Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
  • Fine-grained control of traffic behavior including rich routing rules, circuit breakers, retries, failovers, and fault injection. Istio makes it easy to set up A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits.
  • A policy layer with support for access controls, rate limits, and quotas.
  • Creation of metrics, logs, and traces for all ingress and egress traffic within your cluster. Istio’s custom dashboard provides valuable insights into the performance and health of your services.
  • Strong identity-based authentication and authorization, and encryption of service communication at scale to secure your applications.

Istio Architecture (Source: Istio Documentation)

In addition to Mixer discussed above, this functionality is implemented by the following Istio components:

  • Pilot. The core component used for traffic management in Istio, Pilot configures and manages traffic routing and service discovery for the Envoy sidecars, and it ensures resiliency through failure-recovery features such as timeouts, retries, and circuit breakers.
  • Citadel. This is a security component of Istio that offers strong service-to-service and user authentication and ships with built-in identity and credentials management. Citadel can also be used to encrypt traffic in the Istio service mesh.
  • Galley. This component is responsible for validating user-created Istio API configuration on behalf of the other Istio Control Plane components.

All these features make Istio a powerful infrastructure layer for microservices running in your Kubernetes cluster. Istio’s traffic management, security, access control, and failover management capabilities make it an indispensable component of modern cloud-native applications.

In what follows, we’ll guide you through installing Istio and its components in the local Minikube cluster. By the end of this tutorial, you’ll have Istio installed and configured on your infrastructure and understand how to use basic traffic routing capabilities of this service mesh. Let’s get started!


In order to test the examples presented below, you’ll need the following:

  • A running Kubernetes cluster. See Supergiant documentation for more information about deploying a Kubernetes cluster with Supergiant. As an alternative, you can install a single-node Kubernetes cluster on a local system using Minikube.
  • A kubectl command line tool installed and configured to communicate with the cluster. See how to install kubectl here.

Note: all examples in this tutorial assume that you are using a Minikube cluster deployed on a local machine. To run Istio locally, you’ll need Minikube version 0.28.0 or later.

Step 1: Prepare Minikube for Istio

In order to install Istio’s control plane add-ons and other applications for telemetry, the Istio documentation recommends starting Minikube with 8192 MB of memory and 4 CPUs:

For Kubernetes version > 1.9, you should start Minikube with the following configuration (see the code below).
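A likely start command, based on the flags recommended in the Istio 1.0-era documentation (the Kubernetes version shown is an example; pick one supported by your Minikube release):

```shell
# Start Minikube with enough resources for Istio's control plane and telemetry add-ons
minikube start --memory=8192 --cpus=4 \
    --kubernetes-version=v1.10.0 \
    --vm-driver=your_vm_driver_choice
```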

Note: Please replace --vm-driver=your_vm_driver_choice with your preferred VM driver option for Minikube (e.g., virtualbox). For available VM drivers for Minikube, consult the official Minikube documentation.

Upon running this command, Minikube will be configured for Istio installation.

After starting Minikube, you might also consider removing taints applied to Minikube because they can affect the scheduling of Istio Pods. To remove taints, run the following command:
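A likely sequence, assuming the standard node-role.kubernetes.io/master taint key; inspect your node first, because many Minikube versions apply no taints at all:

```shell
# Inspect the taints currently applied to the Minikube node
kubectl describe node minikube | grep -i taints

# Remove the master taint from all nodes (the trailing "-" removes the taint)
kubectl taint nodes --all node-role.kubernetes.io/master-
```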

Step 2: Install Istio on your Minikube Cluster

The first thing you need to do to install Istio on Minikube is to get the latest release of Istio containing various CRDs, YAML manifests, and the Istio command line tools. To get the latest release of Istio for macOS and Linux, run the following command:
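In the Istio 1.0 era, the documented one-liner looked like this; pin ISTIO_VERSION if you want a specific release:

```shell
# Download and unpack the Istio release into the current directory
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.0.5 sh -
```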

Next, you need to go to the Istio package directory. For example, if the downloaded package is named istio-1.0.5, run:
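Assuming the release unpacked into istio-1.0.5:

```shell
cd istio-1.0.5
```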

If you run the ls command inside the package, you’ll see the following assets:

  • Installation .yaml files for Kubernetes in the install/ directory.
  • Sample applications in the samples/ folder. We’ll use one of these applications later to demonstrate Istio traffic management features.
  • The bin/ directory with the istioctl client binary. We’re going to use this binary to manually inject Envoy as a sidecar proxy and to create routing rules and policies.
  • The istio.VERSION configuration file.

To use the istioctl client, you have to add its path to the PATH environment variable. On macOS and Linux, run:
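One way to do this, relative to the package directory you just entered:

```shell
# Add the istioctl binary directory to the PATH for the current shell session
export PATH=$PWD/bin:$PATH
```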

The install directory stores Istio’s Custom Resource Definitions (CRDs) used to install Istio. CRDs allow you to extend the Kubernetes API with your own custom resources, and Istio uses them extensively to build its own API on top of Kubernetes. We can install Istio’s CRDs by running the following command:
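In the Istio 1.0 package layout, the CRD manifest shipped under the Helm templates; the path may differ in other releases:

```shell
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
```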

In a few seconds, Istio’s CRDs will be committed to the kube-apiserver. Let’s move on!

Step 3: Install Istio’s Core Components

There are several options for installing Istio’s core components, described in Istio’s quick start guide for Kubernetes. We will install Istio with default mutual TLS authentication between sidecars, which is enough for demonstration purposes. In a production environment, however, you should opt for installing Istio using the Helm chart, which allows for more control and customization of Istio in your Kubernetes cluster.

To install Istio with default mutual TLS authentication between sidecars, run:
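With the Istio 1.0 release layout, the mutual-TLS demo profile is applied like this:

```shell
kubectl apply -f install/kubernetes/istio-demo-auth.yaml

# Verify that the control plane Pods come up in the istio-system namespace
kubectl get pods -n istio-system
```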

Awesome! You have Istio and its core components like Pilot, Citadel, and Envoy installed in your Kubernetes cluster. We are ready to use Istio to create a service mesh!

Step 4: Deploy Bookinfo Application

Now that Istio and its core components are installed, we will demonstrate how the service mesh works using the Bookinfo sample application from the package’s samples/ folder. The Bookinfo application displays book information similarly to catalog entries in online bookstores. Each book entry features a book description, book meta details (ISBN, page count), and a few book reviews (with or without ratings).

Bookinfo is a typical example of a microservices application that practically screams to be managed with Istio.

Why is this so? The app is broken into four separate microservices (productpage, details, reviews, and ratings), each written in a different programming language. The productpage microservice is written in Python, the details microservice in Ruby, the reviews microservice in Java, and the ratings microservice in Node.js. In addition, there are three versions of the reviews microservice. The versions differ in how they display ratings and whether they call the ratings service. Managing this heterogeneity definitely requires a service mesh that can connect loosely coupled microservices together and route traffic between different versions of the same microservice.

To apply service mesh capabilities to the Bookinfo app, we don’t need to change anything in its code. All we need to do is to enable the Istio environment by injecting Envoy sidecars alongside each microservice described above. Once injected, each Envoy sidecar will intercept incoming and outgoing traffic to the microservices and provide hooks needed for the Istio Control Plane (see the blog’s intro) to enable traffic routing, load balancing, telemetry, and access control for this application.

Before deploying the Bookinfo app, let’s first look at the contents of the bookinfo.yaml  file that contains all manifests:
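An abridged excerpt (the details service only) gives the flavor of the file; each microservice gets a similar Service/Deployment pair, and the image tag shown here is illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: details-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      containers:
      - name: details
        image: istio/examples-bookinfo-details-v1:1.8.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
```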

The manifests above contain one Deployment and one Service for each microservice of the app, including all three versions of the reviews microservice. To install the Bookinfo app, we will use manual sidecar injection, which adds an Envoy sidecar exposing Istio capabilities to each microservice of the app. We use the istioctl kube-inject command to manually modify the bookinfo.yaml file before creating the Deployments:
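With the Istio 1.0 sample layout, manual injection can be done inline with process substitution:

```shell
# Inject Envoy sidecars into the manifests, then apply the result
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)
```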

Alternatively, you can deploy the app by enabling automatic sidecar injection. In this case, you should label the default namespace with istio-injection=enabled:
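The label is applied with a single command:

```shell
kubectl label namespace default istio-injection=enabled
```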

And then simply deploy the app using kubectl :
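Assuming the sample path from the Istio 1.0 package:

```shell
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
```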

Note: Automatic sidecar injection requires Kubernetes 1.9 or later.

Both commands will launch four microservices and start all three versions of the reviews service. In a realistic scenario, you would need to deploy new versions of a microservice over time instead of deploying them simultaneously.

Now, let’s see if the Bookinfo Services were successfully created:

and if the Bookinfo Pods are running:
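Both checks are plain kubectl queries; with sidecar injection in place, each Bookinfo Pod should eventually show 2/2 ready containers (the app container plus the Envoy sidecar):

```shell
# List the Bookinfo Services
kubectl get services

# List the Bookinfo Pods and their readiness
kubectl get pods
```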

Cool! The Bookinfo app was successfully deployed. Now, let’s configure the Istio Gateway to make the app available from outside of your cluster.

Step 5: Enable Istio Gateway

An Istio Gateway configures a load balancer for HTTP/TCP traffic at the edge of the service mesh and enables ingress traffic for an application. Essentially, we need an Istio Gateway to make our applications accessible from outside of the Kubernetes cluster. After enabling the gateway, users can also use standard Istio rules to control HTTP(S) and TCP traffic entering a Gateway by binding a VirtualService to it.

We can define the ingress gateway for the Bookinfo application using the sample gateway configuration located in samples/bookinfo/networking/bookinfo-gateway.yaml. The file contains the following manifests for the Gateway and the VirtualService:
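The file’s contents in the Istio 1.0 samples look like this:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
```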

The Gateway manifest simply creates an Istio gateway for all incoming HTTP traffic for all hosts. To make the Gateway work for our Bookinfo application, we also bind a VirtualService listing the application’s routes to the Gateway.

In essence, a VirtualService is Istio’s abstraction that defines a set of rules that control how requests for a given microservice are routed within an Istio service mesh. We can use virtual services to route requests to different versions of the same microservice or to a completely different microservice than was requested. We bind a VirtualService to a given Gateway by specifying the gateway’s name in the gateways field of the configuration (see the manifest above). Now that you understand how Gateways and VirtualServices work, let’s enable them by running the following command:
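Assuming the sample path from the Istio 1.0 package:

```shell
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
```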

Confirm that the gateway has been created by running the following command:
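A simple kubectl query suffices:

```shell
kubectl get gateway
# should list bookinfo-gateway
```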

Step 6: Set the INGRESS_HOST and INGRESS_PORT Variables for Accessing the Gateway

The next step is setting the INGRESS_HOST  and INGRESS_PORT  variables for accessing the gateway. First, you need to determine if your cluster is running in an environment with external load balancers. To check this, run the following command:
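One way to check, assuming the default istio-ingressgateway Service name:

```shell
# Look at the EXTERNAL-IP column of the ingress gateway Service
kubectl get svc istio-ingressgateway -n istio-system
```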

If the EXTERNAL-IP value is <pending> or <none>, the environment does not provide an external load balancer for the ingress gateway. This is what we expect when running this tutorial on Minikube. In this case, you can access the Istio Gateway using the Service’s NodePort.

If you don’t have an external load balancer, set the ingress ports by running the following command:
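The Istio documentation extracts the NodePorts from the ingress gateway Service with jsonpath queries along these lines:

```shell
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
    -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
    -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
```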

Setting the ingress IP depends on the cluster provider. For Minikube, we use:
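On Minikube, the node IP is simply the Minikube VM’s IP:

```shell
export INGRESS_HOST=$(minikube ip)
```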

Check out the Istio official documentation if you are using other providers:

Now everything is ready to set the GATEWAY_URL :
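Combining the host and port from the previous steps:

```shell
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
```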

Let’s see if the environment variable was created:
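A quick echo confirms the value:

```shell
echo $GATEWAY_URL
# e.g., 192.168.99.100:31380 (the exact host and port depend on your cluster)
```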

If you have an environment with external load balancers, you should follow the instructions here.

Awesome! Let’s now confirm that the Bookinfo application is running:
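One way to confirm is to request the product page and check the HTTP response code:

```shell
curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage
# a 200 response code means the app is reachable through the gateway
```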

Because we used the Istio Gateway and the VirtualService bound to it, you can also access the Bookinfo application in your browser by visiting http://$GATEWAY_URL/productpage. Here is how it looks:


Istio Bookinfo

Try refreshing the Product Page several times, and you’ll notice that different versions of the reviews microservice are displayed. One version has no stars, while the others have stars of different colors (red and black):


That’s because, by default, Istio serves the three versions of the reviews microservice in a round-robin style (no stars, black stars, red stars). We will change this behavior later by using Istio to control the version routing.

Step 7: Set Default Destination Rules

The first thing we need to do to implement version routing with Istio is to define subsets in destination rules.

Subsets are named versions of a service. They can be different API versions of the app or iterative changes to the same service deployed in different environments (staging, prod, dev, etc.). Subsets can be used for various scenarios such as A/B testing and canary rollouts. The choice of which version to serve can be based on headers, the URL, and weights assigned to each version (see our blog about traffic splitting in Traefik for more information about traffic weights).

A destination, in turn, refers to the network-addressable service to which a request or connection is sent after a routing rule is processed. The destination must refer to a service in the service registry (e.g., a Kubernetes Service or a Consul service).

In what follows, we create default destination rules for the Bookinfo services. The destination rules manifest looks as follows:
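An abridged excerpt (the reviews rule only; the other services get analogous rules without the extra subsets). Because we installed Istio with mutual TLS, each rule carries an ISTIO_MUTUAL traffic policy:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```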

Since we deployed Istio with default mutual TLS authentication, we need to execute the following command:
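In the Istio 1.0 samples, the mutual-TLS variant of the destination rules is applied with:

```shell
kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml
```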

You’ll need to wait a few seconds for the destination rules to be enabled. Then, you can display the rules with the following command:
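A plain kubectl query prints the applied rules:

```shell
kubectl get destinationrules -o yaml
```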

Step 8: Implement Request Routing

Now, we are ready to change the default round-robin behavior for traffic routing. In this example, we will first route all traffic to v1 (version 1) of each microservice. Then, we will route traffic based on the value of an HTTP request header.

In order to route only to one version, we need to apply new VirtualServices that set the default version for the microservices. In this example, our virtual service will route to the v1 of all microservices in the application. The manifest looks as follows:
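An abridged excerpt showing the reviews entry; the details, ratings, and productpage VirtualServices follow the same pattern:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
```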

As you see, we have four VirtualServices, one for each microservice: details, ratings, reviews, and productpage. Each virtual service routes traffic to the v1 of its microservice. This is specified in the destination property, which points to a specific subset defined in the destination rules we created above. For example:
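For the reviews service, the relevant fragment is:

```yaml
route:
- destination:
    host: reviews
    subset: v1
```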

To apply the VirtualServices, run the following command and wait a few seconds:
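Assuming the sample path from the Istio 1.0 package:

```shell
kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
```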

Now, you can display the defined routes with the following command:
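As with the destination rules, a kubectl query prints the routes:

```shell
kubectl get virtualservices -o yaml
```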

You can also see the corresponding subset definitions by running:
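The subset definitions live in the destination rules applied earlier:

```shell
kubectl get destinationrules -o yaml
```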

Awesome! We have configured Istio to route to the v1 of the Bookinfo microservices. You can test the configuration by refreshing the /productpage several times. You’ll see that the same version of the reviews microservice, with no rating stars, is displayed no matter how many times you refresh. This is because traffic routing is configured to send all traffic to reviews:v1, which does not call the star rating service.

Step 9: Enable User-Based Routing

In the next example, we will implement user-based traffic routing, in which requests from a specific user are routed to a specific service version. For example, all traffic from the user John will be routed to reviews:v2, and all traffic from the user Mary will be routed to reviews:v3.

We will implement this functionality by matching on a custom end-user header that is added to all outbound HTTP requests to the reviews service.

Let’s take a look at the virtual service manifest to understand how this works:
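A sketch of such a manifest, using the user names from this tutorial (the stock Istio sample uses a different user name):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: john
    route:
    - destination:
        host: reviews
        subset: v2
  - match:
    - headers:
        end-user:
          exact: mary
    route:
    - destination:
        host: reviews
        subset: v3
  - route:
    - destination:
        host: reviews
        subset: v1
```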

This manifest checks the end-user request header: if its value matches “john,” the request is routed to reviews:v2; if it matches “mary,” the request is routed to reviews:v3. In all other cases, requests are routed to reviews:v1.

Enable these rules by running:
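Assuming you saved the manifest above as reviews-user-routing.yaml (a hypothetical filename):

```shell
kubectl apply -f reviews-user-routing.yaml
```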

Let’s confirm that the rules have been applied. First, log in to the Bookinfo app as john (any password will do) and refresh the browser. You’ll see black star ratings next to each review no matter how often you refresh. Next, log in as mary. If you refresh the page now, you’ll see the red star ratings displayed by the v3 of the reviews microservice. Finally, if you log in as any other user, or don’t log in at all, the v1 of the reviews microservice, with no ratings, will be displayed. It’s as simple as that! We have successfully configured Istio to route traffic based on user identity.

Step 10: Clean Up

Now that the tutorial is over, let’s clean up after ourselves.

Delete Bookinfo App:

Remove the application virtual services:

Delete the routing rules and terminate the application pods:
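The three steps above map onto the following commands, assuming the Istio 1.0 sample paths and the manifests applied earlier in this tutorial:

```shell
# Delete the Bookinfo Deployments and Services
kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml

# Remove the application virtual services
kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml

# Delete the gateway and the destination rules, terminating what remains
kubectl delete -f samples/bookinfo/networking/bookinfo-gateway.yaml
kubectl delete -f samples/bookinfo/networking/destination-rule-all-mtls.yaml
```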

If you wish, you can also delete Istio from the cluster.

If you installed Istio with istio-demo.yaml  run:

If you installed Istio with istio-demo-auth.yaml run:
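Either way, the uninstall mirrors the install; run the variant that matches your installation:

```shell
kubectl delete -f install/kubernetes/istio-demo.yaml       # plain demo install
kubectl delete -f install/kubernetes/istio-demo-auth.yaml  # mutual TLS install (used in this tutorial)
```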

You can also delete all Istio CRDs if needed:
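Assuming the same CRD manifest path used during installation:

```shell
kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system
```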


That’s it! Now you understand the basic purpose and architecture of service meshes and how to implement them in Kubernetes using the Istio service mesh. In this article, we touched just the tip of the Istio iceberg. In particular, you learned how to install Istio and its core components in Kubernetes and use some of its intelligent routing features.

In the next tutorial, we’ll focus in more detail on traffic management with Istio, touching upon such topics as fault injection, traffic splitting, controlling Ingress and Egress traffic, circuit breaking, and some other important topics. Stay tuned to the Supergiant blog to find out more!
