Understanding Network Policies in Kubernetes

As you remember from an earlier Supergiant tutorial, the Kubernetes networking model allows Pods and containers running on different nodes to easily communicate with each other.

Containers in the same Pod can access each other via localhost, and Pods can access other Pods using the Service name, or the Fully Qualified Domain Name (FQDN) if they live in different namespaces. In both cases, kube-dns or any other DNS service deployed to your cluster will ensure that DNS names are properly resolved and Pods can reach each other.

This flat networking model is great when you want all Pods to access all other Pods. However, there are scenarios where you want to limit access to certain Pods. For example, you may want to make some Pods “isolated”, forbidding any access to them, or to limit access from Pods or Services that are not expected to interact with a selected group of Pods. Kubernetes helps you achieve this with the NetworkPolicy resource. In what follows, we’ll show you how to define a NetworkPolicy to create “isolated” Pods or to limit access to a certain group of Pods. Let’s get started!

Tutorial

To complete the examples used below, you’ll need the following prerequisites:

  • A running Kubernetes cluster. See Supergiant documentation for more information about deploying a Kubernetes cluster with Supergiant. As an alternative, you can install a single-node Kubernetes cluster on a local system using Minikube >=0.33.1.
  • The kubectl command-line tool installed and configured to communicate with the cluster. See how to install kubectl here.

Network Policy is just an API resource that defines a set of rules for Pod access. However, to enforce a network policy, we need a network plugin that supports it. We have a few options: network plugins such as Calico, Cilium, and Weave Net all support the NetworkPolicy resource.

If you are running Minikube, Cilium is the simplest solution to test network policies. Let’s go ahead and deploy it to our local cluster.

Step 1: Deploy Cilium to Minikube

To deploy Cilium, you should have Minikube >=0.33.1  started with the following arguments:
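The exact flags come from the Cilium Getting Started guide for your Cilium version; a typical invocation enables the CNI network plugin and gives the VM a bit more memory (the memory value below is an assumption, adjust it to your machine):

    # Start Minikube with the CNI network plugin so Cilium can manage Pod networking
    minikube start --network-plugin=cni --memory=4096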

After Minikube is started, we need to deploy the Cilium DaemonSet, Cilium RBAC, and the configuration Cilium needs to connect to the etcd instance deployed to Minikube.

First, find your Kubernetes version. It’s displayed in the console when you start Minikube:
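If you missed that output, you can also query the versions at any time with kubectl:

    # Print the client and server Kubernetes versions
    kubectl version --short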

If you want a specific K8s version for running Minikube, use the --kubernetes-version  flag with your preferred version.

Note: there are some issues when deploying Cilium with Kubernetes 1.8 and 1.9. See the details here.

Next, find the YAML file with the Cilium manifests for your Kubernetes version in the official Cilium Getting Started guide here. Finally, deploy Cilium to Minikube using the manifests for your version (e.g., we run Kubernetes 1.13.0):
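The deployment itself is a single kubectl create against the manifest from the guide; the URL below is a placeholder, so substitute the manifest URL listed for your Kubernetes version:

    # Deploy the Cilium DaemonSet, RBAC rules, and etcd configuration.
    # Replace <cilium-minikube-manifest-url> with the URL from the Cilium
    # Getting Started guide for your Kubernetes version.
    kubectl create -f <cilium-minikube-manifest-url>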

This should deploy Cilium to the kube-system namespace. To see the list of Cilium Pods, you can run:
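For example (the k8s-app=cilium label is what Cilium's manifests typically apply; if your version uses a different label, simply list all Pods in the namespace):

    # List Cilium Pods in the kube-system namespace; wait until they are Running
    kubectl get pods --namespace=kube-system -l k8s-app=cilium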

In a production multi-node environment, the Cilium DaemonSet will place one Pod per node. Each Pod will then enforce network policies on the traffic using Berkeley Packet Filter (BPF). Also, note that for production use of Cilium you’ll need a key-value store (e.g., etcd). See the Cilium Kubernetes Integration Guide to learn more.

Step 2: Deploy Apache Web Server

Next, we need to deploy an app that we want to be managed by a Network Policy. We’re going to create a simple Apache HTTPD Deployment with two replicas. The manifest for this Deployment is pretty straightforward:
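A minimal manifest along these lines matches the labels and port used throughout this tutorial (the Deployment name httpd-deployment mirrors the Service name used later; the image tag is an assumption):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: httpd-deployment
      labels:
        app: httpd
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: httpd
      template:
        metadata:
          labels:
            app: httpd
        spec:
          containers:
          - name: httpd
            # Official Apache HTTPD image listening on port 80
            image: httpd:2.4
            ports:
            - containerPort: 80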

Create the Deployment:
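Assuming the manifest above is saved as httpd-deployment.yaml (the filename is arbitrary):

    kubectl create -f httpd-deployment.yaml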

The best way to access Pods in the Deployment is to expose them using a Service. Let’s do it with a simple one-liner like this:
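kubectl expose creates a ClusterIP Service that reuses the Deployment’s name and selector, so the Service ends up being called httpd-deployment:

    # Expose the Deployment inside the cluster on port 80
    kubectl expose deployment httpd-deployment --port=80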

Now let’s check if everything worked as we expected:
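Both the Deployment with its two Pods and the new Service should show up:

    # Verify that the Deployment, the Pods, and the Service are up
    kubectl get deployments,pods,services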

Great! The Apache Deployment and the Service are ready to go, and we can apply a NetworkPolicy to them now.

Step 3: Define Network Policy

As you remember, all Pods in your cluster are non-isolated by default, which means they can be accessed by any other Pods. However, if we apply a NetworkPolicy to a particular Pod, that Pod will then reject all connections that are not allowed by that NetworkPolicy. We can define a Network Policy using the following spec:
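A spec that implements the rules discussed below might look like this (the policy name is arbitrary):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: httpd-network-policy
      namespace: default
    spec:
      # Select the Pods this policy applies to (our Apache endpoints)
      podSelector:
        matchLabels:
          app: httpd
      policyTypes:
      - Ingress
      - Egress
      ingress:
      - from:
        # Allow traffic from Pods labeled role=frontend in this namespace ...
        - podSelector:
            matchLabels:
              role: frontend
        # ... OR from any Pod in namespaces labeled project=dev
        - namespaceSelector:
            matchLabels:
              project: dev
        ports:
        - protocol: TCP
          port: 80
      egress:
      - to:
        # Allow outgoing traffic only to this external CIDR range
        - ipBlock:
            cidr: 10.0.0.0/24
        ports:
        - protocol: TCP
          port: 5978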

This NetworkPolicy manages the group of Pods specified in the spec.podSelector field. Thus, all Pods with the label app=httpd (i.e., our Apache endpoints) will be selected by this Network Policy. Please note that if this field is left empty, a Network Policy selects all Pods in its namespace.

The next step is to specify the type of network policy applied to the selected Pods. We can apply an Ingress policy to control incoming traffic, an Egress policy to control outgoing traffic from the selected Pods, or both.

Ingress rules list the traffic sources allowed to access the group of Pods specified in spec.podSelector. These sources can be specified by Pod selector, IP range, or namespace selector. For example, the podSelector under ingress matches, by label, the Pods that are allowed to access our Apache HTTPD Deployment. These Pods must run in the same namespace where the NetworkPolicy is deployed.

Also, we can allow access from all Pods living in a particular namespace by using the namespaceSelector field. If you specify both namespaceSelector and podSelector in a single array entry, as in the example below, you select particular Pods within particular namespaces. Please note that to enable this behavior, both namespaceSelector and podSelector must belong to the same array element. This works as if we are ANDing the two source types.
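For example, a single from entry like the following selects only Pods labeled role=frontend that run in namespaces labeled project=dev (note that there is no dash before podSelector, so both selectors sit in the same array element):

    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            project: dev
        podSelector:
          matchLabels:
            role: frontend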

This example is different from the manifest above, where namespaceSelector and podSelector are two separate entries matched independently of each other. That works as ORing the two traffic source types.

We can use the ipBlock field to allow Ingress or Egress traffic from or to particular IP CIDR ranges. Pod IPs are ephemeral, so these ranges should be cluster-external IPs. For more details about using IP ranges for Ingress and Egress, please consult this Kubernetes network policies doc.
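For instance, an ipBlock entry can also carve out exceptions within the allowed range (the addresses below are purely illustrative):

    egress:
    - to:
      - ipBlock:
          cidr: 10.0.0.0/24
          except:
          - 10.0.0.128/25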

Finally, we can specify the ports on which to allow connections to our Apache Pods. In the manifest above, we allow connections on TCP port 80, the port on which our HTTPD server listens.

Egress rules are very similar to Ingress rules except that they define the destination of the traffic. As in the case of Ingress, the Egress rules may be based on podSelector , namespaceSelector , and ipBlock .

So, let’s summarize what the NetworkPolicy  above does. It allows connections to all the Pods with the label app=httpd  (Apache web server) on TCP port 80  in the default  namespace from:

  • Any Pod that has the label role=frontend .
  • Any Pod in a namespace with the label project=dev .

Our Egress rules allow connections from any Pod in the “default” namespace with the label app=httpd  to the CIDR range 10.0.0.0/24  on TCP port 5978 .

Now that you understand how NetworkPolicy  works, let’s go ahead and create it:
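Assuming the NetworkPolicy manifest above is saved as network-policy.yaml (the filename is arbitrary):

    kubectl create -f network-policy.yaml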

Step 4: Testing the Network Policy

Because we’ve already deployed Cilium, we can expect our NetworkPolicy to work. Let’s test it by creating a Pod with the label app:busybox, which differs from the one specified in the Ingress Pod selector. We’ll try to connect to the Apache server from within the Busybox container using the wget command:
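A throwaway Pod along the following lines works for the test; it runs wget against the Service in a loop so we can watch the outcome in the logs (the Pod name, the loop, and the timeout value are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        # Repeatedly try to fetch the Apache welcome page through the Service;
        # -T sets a connection timeout so blocked requests fail quickly
        command: ["sh", "-c", "while true; do wget -T 5 -O - http://httpd-deployment; sleep 5; done"]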

Save this manifest to busybox.yaml  and create the Pod:
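Create it as usual:

    kubectl create -f busybox.yaml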

Let’s stream the Busybox logs to see what happens:
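The Pod name below matches the manifest above:

    # Follow the Busybox Pod's logs; the wget attempts should eventually time out
    kubectl logs -f busybox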

After some time, the request to the Service httpd-deployment  timed out. That’s because the Busybox Pod’s label does not match the podSelector  label in the Ingress rule of the Network Policy. Thus, the Pod is not allowed to access Pods in your Apache Web Server deployment.

Hmm! Let’s create another Pod with a different label:
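The manifest is identical to the first one except for the Pod name and the role=frontend label (the name busybox-frontend is an assumption):

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-frontend
      labels:
        role: frontend
    spec:
      containers:
      - name: busybox
        image: busybox
        # The same wget loop against the Apache Service
        command: ["sh", "-c", "while true; do wget -T 5 -O - http://httpd-deployment; sleep 5; done"]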

As you see, this Pod has the label role:frontend, which is allowed by our Ingress rules. Let’s create the Pod:
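Assuming the manifest is saved as busybox-frontend.yaml:

    kubectl create -f busybox-frontend.yaml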

Now, if you check the Pod’s logs, you’ll find that the wget command returned no errors. This means that the Pod has successfully connected to your Apache Service:
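For example:

    # This time wget should fetch Apache's default page without errors
    kubectl logs -f busybox-frontend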

Conclusion

That’s it! In this tutorial, you learned how to use the NetworkPolicy resource to control traffic and access to your Deployments and Pods. This feature is very useful when you want to limit access to sensitive Pods from within and outside of the cluster.

Here we used Cilium as the network controller for the Network Policy, but you can try out other options such as Calico or Weave Net, among others. Also, check out our latest tutorial to learn how service meshes can be used for more advanced use cases of service-to-service communication in Kubernetes.
