ALL THINGS KUBERNETES

Advanced Network Rules Configuration in Kubernetes with Cilium

As you remember from previous tutorials, Kubernetes imposes a number of fundamental networking requirements on any networking implementation. These include a unique IP for each Pod and NAT-less routing of traffic across nodes. The platform also ships with a number of useful API primitives for configuring networking access and security, including LoadBalancer, Ingress, and NetworkPolicies. However, these are not implemented by default. For example, to implement Ingress or NetworkPolicy, you'll need an Ingress controller or a CNI-compliant networking plugin that supports them. Cilium is one of the most popular of these plugins and is easy to install for testing purposes.

In this article, we’ll demonstrate how to use Cilium to configure access to external domains and show you how to use L7 Network Security policies for fine-grained control over HTTP/API access by the application. Let’s get started!

Why Use Networking Plugins with Kubernetes?

As we’ve mentioned already, Kubernetes does not implement such things as multi-host networking (i.e., Pod-to-Pod communication across nodes). We need to use CNI-compliant networking plugins like Weave Net or Cilium to do so.

You might also consider CNI-compliant networking plugins to configure various aspects of networking, including:

  • Networking security. For example, network access control, routing, and protocol-level security for protocols such as REST/HTTP, gRPC, or Kafka.
  • Load balancing. For example, how network traffic is distributed evenly across an application's Pods.
  • Networking performance. For example, whether kube-proxy operates in userspace or kernel space.
  • Container isolation and access from other containers using Network Policies.
  • Container access to external services via egress rules.
  • Network monitoring, troubleshooting, and packet tracing.

What Is Cilium?

Cilium is a CNI-compliant networking plugin whose purpose is to provide multi-host network connectivity for Linux containers and a way to define granular network-layer and application-layer security policies.

Cilium's developers sought to align network security management at the kernel level with the requirements and challenges of the container environment. In particular, until very recently, defining network policies such as ingress/egress for containers was based on iptables, the Linux network filtering technology built on top of the netfilter kernel framework. iptables is used by Kubernetes to route network packets from Service IPs to backend Pods or to block access to certain ports and protocols. However, iptables was originally designed mainly for firewalling, and it is not a good fit for the container environment.

This environment is characterized by a highly volatile container lifecycle, with frequently changing IPs and IP routing rules. iptables, however, filters on IP addresses and TCP/UDP ports, which churn constantly in a dynamic container environment. Moreover, iptables can't easily scale to hundreds of thousands of networking rules and load-balancing table entries that need to be updated at an increasingly high frequency.

Also, traditional Linux approaches do not implement granular network policies. For example, they lack the ability to filter on individual application protocol requests such as HTTP GET, POST or DELETE. They normally operate at Layer 3 and 4; a protocol running on a particular port is either fully trusted or blocked entirely.

Cilium addresses these shortcomings with the Berkeley Packet Filter (BPF), a technology that enables the dynamic insertion of network security, visibility, and control logic within the Linux kernel. In plain language, BPF allows developers to write a small program, load it into the kernel, and run it when certain events happen, such as the arrival of a network packet. This program can then enforce security policies and monitor packets at the kernel level. Thus, with BPF, we can filter packets and update network rules in a more granular and fine-grained manner.

By leveraging Linux BPF, Cilium gains the ability to enforce security rules based on service/Pod/container identity rather than on IP addresses, as in traditional systems. It can also filter on application-layer network events. As a result, BPF makes it simple to apply security policies in a dynamic container environment, decoupling security from addressing and providing stronger security isolation.

In addition to BPF, Cilium adds the following functionality to the Kubernetes cluster:

  • Pod-to-Pod connectivity via Multi-Host Networking. With Cilium, each Pod gets an IP address from the node prefix and can talk to Pods on other nodes.
  • Advanced usage of the NetworkPolicy resource to control security and access parameters of Pods. For example, create isolated Pods, limit access to certain Pods etc.
  • A Custom Resource Definition (CRD) that extends the Kubernetes NetworkPolicy resource. The CiliumNetworkPolicy CRD extends policy control to add Layer 7 policy enforcement on ingress and egress for the HTTP and Kafka protocols.
  • Egress support for CIDRs to secure access to external services.
  • Policy enforcement for external headless services, automatically restricting access to the set of Kubernetes endpoints configured for a Service.
  • ClusterIP implementation to provide distributed load-balancing for Pod-to-Pod traffic.

Tutorial

In this tutorial, we'll show you how to use Cilium to manage egress policies and control network access at Layer 7 (HTTP), leveraging the power of the BPF technology discussed above.

To complete the examples, you’ll need the following prerequisites:

  • A running Kubernetes cluster. See Supergiant documentation for more information about deploying a Kubernetes cluster with Supergiant. As an alternative, you can install a single-node Kubernetes cluster on a local system using Minikube >=0.33.1.
  • A kubectl command line tool installed and configured to communicate with the cluster. See how to install kubectl here.

In the first part of this tutorial, we’ll install Cilium on Minikube and show how to use its advanced security settings to lock egress access to external services.

Step #1: Deploy Cilium on Minikube

First, let’s deploy Cilium on our Minikube instance. To deploy Cilium, you should have Minikube >=0.33.1 started with the following arguments:
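A typical invocation from the Cilium getting-started guides of this era enables the CNI network plugin and gives the VM extra memory for the Cilium agent (the exact flags may vary with your Minikube version):

```shell
# Start Minikube with the CNI network plugin so Cilium can manage Pod networking
minikube start --network-plugin=cni --memory=4096
```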

We need to deploy the Cilium DaemonSet, RBAC, and several configuration files. The Cilium website provides manifests for specific Kubernetes versions. You can find the version you are using in the console when you start Minikube.

Finally, deploy Cilium to Minikube using the manifests for your version (e.g., we run Kubernetes 1.13.3):
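The command looks along these lines; the URL below follows the layout of the Cilium 1.4-era repository and is an assumption, so substitute the manifest that matches your Kubernetes version:

```shell
# Apply the all-in-one Cilium manifest (DaemonSet, RBAC, ConfigMap) for Kubernetes 1.13
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.4/examples/kubernetes/1.13/cilium-minikube.yaml
```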

This should deploy Cilium to the kube-system namespace. To see the list of Cilium Pods, you can run:
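For example (the k8s-app=cilium label is the one Cilium's own manifests apply; adjust if yours differ):

```shell
# List the Cilium agent Pods deployed by the DaemonSet
kubectl get pods -n kube-system -l k8s-app=cilium
```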

Step #2: Locking Down External Access from a Pod with Cilium’s DNS-Based Policies

DNS-based policies are very useful for controlling access to external services/domains. In this example, we use a Cilium DNS-based policy to allow egress from a Pod to a specific FQDN (Fully Qualified Domain Name) and block access to all other domains from that Pod. The Pod will be able to send egress traffic only to the allowed destination.

First, we'll create a Pod running a netperf container, a tool used for network bandwidth testing.
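A minimal manifest might look like the following; the image name and label values are assumptions, and any image containing netperf will do:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    org: test-org        # hypothetical label value
    class: netperf       # hypothetical label value
spec:
  containers:
  - name: netperf
    image: tutum/netperf  # assumed netperf image
```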

Please note that we defined two labels: org and class. We need these labels to reference the Pod in the Cilium network policy later.

Let’s create the Pod:
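Assuming the manifest was saved as test-pod.yaml (the filename is an assumption):

```shell
kubectl create -f test-pod.yaml
```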

Next, let’s create a Cilium network policy. As you already know, CiliumNetworkPolicy  is a CRD that extends K8s built-in NetworkPolicy  functionality. For more information about Network Policies in Kubernetes, see our latest tutorial here.

The following Cilium network policy blocks access of test-pod  to any external domain other than api.twitter.com .
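A sketch of such a policy follows, assuming the label values from our test Pod and Cilium >=1.4, where the toFQDNs selector is available. Note that DNS traffic to kube-dns must also be allowed so Cilium can observe lookups and learn the FQDN-to-IP mappings:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: fqdn-policy
spec:
  endpointSelector:
    matchLabels:
      org: test-org      # must match the labels on test-pod
      class: netperf
  egress:
  # Allow DNS lookups through kube-dns so Cilium can resolve FQDNs in this policy
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  # Allow egress only to api.twitter.com
  - toFQDNs:
    - matchName: "api.twitter.com"
```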

We apply this custom network policy to our test-pod by specifying its labels in the endpointSelector. Furthermore, in the spec.egress field, we specify the FQDNs that the Pods managed by this policy are allowed to reach. In our case, we allow the Pod to access only api.twitter.com.

Now, let’s go ahead and create this policy:
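Assuming the policy was saved as fqdn-policy.yaml (the filename is an assumption):

```shell
kubectl create -f fqdn-policy.yaml
```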

Step #3: Testing the Policy

Let’s verify that the policy works as expected. Simply get a shell to a running test-pod  and curl api.twitter.com :
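For example:

```shell
# Run curl inside the test Pod against the allowed domain
kubectl exec -it test-pod -- curl -sI https://api.twitter.com
```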

This command should return Twitter’s HTML page as a response.

Now, try to access another domain, e.g., kubernetes.io:
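For example:

```shell
# This domain is not in the allowed FQDN list, so the request should be blocked
kubectl exec -it test-pod -- curl -v https://kubernetes.io
```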

The connection hangs and, after a timeout, terminates with exit code 7 ("Failed to connect to host or proxy"). This response indicates that the egress request to kubernetes.io was blocked.

That’s it! We have implemented a DNS-based egress policy for our Pod using Cilium.

Example: HTTP-Aware L7 Network Policy

As we've mentioned, Cilium leverages BPF for the fine-grained definition of network policies at the protocol or application layer. The motivation behind this approach is quite simple. Sometimes we want to allow API requests only to specific endpoints, such as GET /listing, and ban access to all other endpoints: POST /listing, DELETE /listing, etc. This approach enforces least-privilege isolation and enables stronger security in communication between microservices.

To illustrate the L7 Cilium network policy, we have created a Docker image with a simple Node.js application that returns HTTP responses on the /discount  API endpoint. We’ll run this application in the following Pod:
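A manifest along these lines can be used; the image name and the container port are assumptions, so substitute the image of your own test application. Note the type: l7-test label, which the network policy will select on:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: l7-test-pod
  labels:
    type: l7-test
spec:
  containers:
  - name: nodejs-app
    image: example/nodejs-discount:latest  # hypothetical image serving the /discount endpoint
    ports:
    - containerPort: 3000                  # assumed application port
```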

Save this manifest to l7-test-pod.yaml  and run:
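For example:

```shell
kubectl create -f l7-test-pod.yaml
```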

Check if everything went smoothly:
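For example:

```shell
# The Pod should be in the Running state
kubectl get pods -o wide
```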

Next, let’s create a client Pod that will access the Node.js application of the first Pod:
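A sketch of the client Pod manifest follows; the image is an assumption, and any image that ships curl will do. The org: client-pod label is what the ingress policy will match on:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    org: client-pod
spec:
  containers:
  - name: client
    image: byrnedo/alpine-curl   # assumed image with curl installed
    command: ["sleep", "3600"]   # keep the Pod running so we can exec into it
```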

Create the Pod:
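Assuming the manifest above was saved as client-pod.yaml (the filename is an assumption):

```shell
kubectl create -f client-pod.yaml
```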

Great! Now, let’s define a new Cilium Network Policy with the L7 network security rules:
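A sketch of the policy, assuming the labels and port used in the Pod manifests above:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l7-policy
spec:
  endpointSelector:
    matchLabels:
      type: l7-test          # selects the Node.js server Pod
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: client-pod      # only the client Pod may connect
    toPorts:
    - ports:
      - port: "3000"         # assumed application port
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/discount"  # only GET /discount is allowed
```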

This policy will apply to the Pods with the label type: l7-test, i.e., the Pod we created above. It will filter requests from the Pods that match the label org: client-pod, which is specified in the ingress[].fromEndpoints field.

The first part of the ingress policy is a simple policy that filters only on the IP protocol (network Layer 3) and the TCP protocol (network Layer 4). This is widely referred to as an L3/L4 network security policy. It is defined in the ingress[].toPorts[].ports part of the ingress definition.

However, we also defined an L7 policy that filters requests at the HTTP protocol level. It is specified in the ingress[].toPorts[].rules section of the ingress configuration. The policy allows only HTTP GET requests to the /discount API endpoint of our Node.js application.

Now that you understand how this L7 policy works, let’s go ahead and apply it to our Pod:
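Assuming the policy was saved as l7-policy.yaml (the filename is an assumption):

```shell
kubectl create -f l7-policy.yaml
```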

You can now see the policy as the CiliumNetworkPolicy  resource:
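For example:

```shell
kubectl get ciliumnetworkpolicies
# or, using the short name:
kubectl get cnp
```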

Let’s now test the policy. First, we’ll need to find the IP of the Node.js Pod (a server Pod):
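For example:

```shell
# The IP column shows the Pod's cluster IP
kubectl get pod l7-test-pod -o wide
```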

We'll try to access the Node.js Pod on this IP and the /discount route. Get a shell to the running client-pod and run a curl GET request on this path (note: use the IP of your Pod):
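For example (substitute your Pod's IP for the placeholder; port 3000 is an assumption from the manifests above):

```shell
kubectl exec -it client-pod -- curl -XGET http://<POD_IP>:3000/discount
```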

As you can see, our client-pod has successfully gotten a discount from the l7-test-pod. Now, let's try to access the same endpoint, but with a POST request from the client Pod:
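For example (again, substitute your Pod's IP for the placeholder):

```shell
# The L7 policy allows only GET, so this request should be rejected
kubectl exec -it client-pod -- curl -XPOST http://<POD_IP>:3000/discount
```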

We can't POST to /discount because the ingress L7 policy prohibits the client Pod from accessing this API endpoint with that method. We've successfully implemented an L7 network security policy with Cilium.

Conclusion

In this tutorial, we discussed the architecture of the Cilium networking plugin, which leverages advanced Linux BPF technology for the fine-grained definition and management of network rules and policies. This plugin is CNI-compliant, so you can use it for multi-host networking and network policies in Kubernetes. We demonstrated how you can use Cilium to whitelist a Pod's egress connections to a specific domain and block access to all others.

Also, we learned how Cilium leverages BPF to create fine-grained network policies that filter packets at the HTTP/application layer. Using these policies, you can allow or deny Pods access to certain API endpoints of other Pods without changing the application code. You no longer need to write and update complex API request access rules in your application code. HTTP-layer network policies can be managed at the cluster level, which decouples network security from the application logic.