Kubernetes DNS for Services and Pods

As we know, the Kubernetes master stores all service definitions and updates. Client pods that need to communicate with backend pods load-balanced by a service, however, also need to know where to send their requests. They could store network information in container environment variables, but this is not viable in the long run: if the network details or the set of backend pods change, client pods would be unable to communicate with them.

The Kubernetes DNS system is designed to solve this problem. Kube-DNS and CoreDNS are two established DNS solutions for defining DNS naming rules and resolving pod and service DNS names to their corresponding cluster IPs. With DNS, a Kubernetes service can be referenced by a name that corresponds to any number of backend pods managed by the service. The naming scheme also follows a predictable pattern, making the addresses of various services easier to remember. Services can be referenced not only via a Fully Qualified Domain Name (FQDN) but also via the name of the service alone (within the same namespace).

In this blog post, we discuss the design of the Kubernetes DNS system and show practical examples of using DNS with services and debugging DNS issues. Let’s get started!

How Does Kubernetes DNS Work?

In Kubernetes, you can set up a DNS system with two well-supported add-ons: CoreDNS and Kube-DNS. CoreDNS is the newer add-on and became the default DNS server as of Kubernetes v1.12. However, Kube-DNS may still be installed as the default DNS system by certain Kubernetes installer tools.

Both add-ons schedule a DNS pod (or pods) and a service with a static IP on the cluster, and both are named kube-dns in the metadata.name field for interoperability. When the cluster is configured by the administrator or installation tools, the kubelet is started with the --cluster-dns=<dns-service-ip> flag, which tells it to point each container’s DNS resolver at the DNS service. When configuring the kubelet, the administrator can also specify the name of the local domain using the --cluster-domain=<default-local-domain> flag.
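For example, a kubelet wired to the cluster DNS might be launched with flags like these (the service IP and domain below are common defaults used for illustration, not values taken from this article):

```bash
# Tell the kubelet which DNS server and local domain to configure
# in each container's /etc/resolv.conf (values are illustrative)
kubelet --cluster-dns=10.96.0.10 --cluster-domain=cluster.local
```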

Kubernetes DNS add-ons currently support forward lookups (A records), port lookups (SRV records), reverse IP address lookups (PTR records), and some other options. In the following sections, we discuss the Kubernetes naming schema for pods and services within these types of records.

Service DNS Records

In general, Kubernetes services support A, CNAME, and SRV records.

A Record

An A record is the most basic type of DNS record, used to point a domain or subdomain to a certain IP address. The record consists of the domain name, the IP address it resolves to, and a TTL in seconds. TTL stands for Time To Live and is a sort of expiration date put on a DNS record: it tells the DNS server how long it should keep a given record in its cache.
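For example, a typical A record in zone-file notation looks like this (the domain and IP are illustrative):

```
example.com.    3600    IN    A    93.184.216.34
```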

Kubernetes assigns different A record names for “normal” and “headless” services. As you remember from our earlier tutorial, “headless” services are different from “normal” services in that they are not assigned a ClusterIP and don’t perform load balancing.

“Normal” services are assigned a DNS A record for a name of the form your-svc.your-namespace.svc.cluster.local (the root domain name may be changed in the kubelet settings). This name resolves to the cluster IP of the service. “Headless” services are also assigned a DNS A record for a name of the same form. In contrast to a “normal” service, however, this name resolves to the set of IPs of the pods selected by the service. DNS will not resolve this set to a specific IP automatically, so clients should take care of load balancing or round-robin selection from the set.
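To illustrate the difference, here is what lookups against a hypothetical “normal” and “headless” service might return (all names and IPs below are made up for this sketch):

```bash
# "Normal" service: a single answer, the ClusterIP
nslookup normal-svc.default.svc.cluster.local
# Name:    normal-svc.default.svc.cluster.local
# Address: 10.100.0.15

# "Headless" service: one answer per backing pod
nslookup headless-svc.default.svc.cluster.local
# Name:    headless-svc.default.svc.cluster.local
# Address: 172.17.0.4
# Address: 172.17.0.5
# Address: 172.17.0.6
```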

CNAME

CNAME records are used to point a domain or subdomain to another hostname. To achieve this, a CNAME uses an existing A record as its value; that A record, in turn, resolves to a specified IP address. In Kubernetes, CNAME records can be used for cross-cluster service discovery with federated services. In this scenario, there is a common service across multiple Kubernetes clusters that can be discovered by all pods no matter which cluster they live in. Such an approach enables cross-cluster service discovery, which is a big topic in its own right to be discussed in another tutorial.
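Another place where Kubernetes DNS serves a CNAME is a service of type ExternalName, which simply maps a service name to an external hostname. A minimal sketch (the service name and external domain are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
  namespace: default
spec:
  type: ExternalName
  # A query for external-db.default.svc.cluster.local returns
  # a CNAME record pointing at this hostname
  externalName: db.example.com
```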

SRV Records

SRV records facilitate service discovery by describing the protocol and address of certain services.

An SRV record usually defines a symbolic name and the transport protocol (e.g., TCP) as part of the domain name, and specifies the priority, weight, port, and target for a given service (see the example below).
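For example, a pair of SRV records for a SIP service might look like this in zone-file notation (the domain, port, and hostnames are illustrative; the priorities and weights match the discussion below):

```
_sip._tcp.example.com.  86400  IN  SRV  10  70  5060  sipserver1.example.com.
_sip._tcp.example.com.  86400  IN  SRV  10  20  5060  sipserver2.example.com.
```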

In the example above, _sip is the service’s symbolic name and _tcp is the transport protocol used by the service. The record’s content defines a priority of 10 for both records. Additionally, the first record has a weight of 70 and the second one has a weight of 20. The priority and weight are often used to encourage the use of certain servers over others. The final two values in the record define the port and the hostname to connect to in order to communicate with the service.

In Kubernetes, SRV records are created for named ports that are part of a “normal” or “headless” service. The SRV record takes the form _my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster.local . For a regular service, this resolves to the port number and the domain name my-svc.my-namespace.svc.cluster.local . In the case of a “headless” service, the name resolves to multiple answers, one for each pod backing the service. Each answer contains the port number and the domain name of the pod, of the form auto-generated-name.my-svc.my-namespace.svc.cluster.local .
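For instance, if a service exposed a port named http over TCP, an SRV lookup might go like this (the service, namespace, and the answer shown are hypothetical):

```bash
nslookup -type=SRV _http._tcp.my-svc.my-namespace.svc.cluster.local
# _http._tcp.my-svc.my-namespace.svc.cluster.local
#         service = 0 100 80 my-svc.my-namespace.svc.cluster.local.
```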

Pod DNS Records

A Records

If DNS is enabled, pods are assigned a DNS A record of the form pod-ip-address.my-namespace.pod.cluster.local . For example, a pod with the IP 172.12.3.4 in the namespace default , in a cluster with the domain cluster.local , would have an entry of the form 172-12-3-4.default.pod.cluster.local .

Pod’s Hostname and Subdomain Fields

The default hostname for a pod is defined by the pod’s metadata.name value. However, users can change the default hostname by specifying a new value in the optional hostname field. Users can also define a custom subdomain in the subdomain field. For example, a pod with hostname set to custom-host and subdomain set to custom-subdomain in the namespace my-namespace will have the fully qualified domain name (FQDN) custom-host.custom-subdomain.my-namespace.svc.cluster.local .
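A minimal pod spec using these fields might look like the sketch below. Note that for the FQDN to actually resolve, a headless service named custom-subdomain must exist in the same namespace; the image is just a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: my-namespace
spec:
  hostname: custom-host          # overrides the default (metadata.name)
  subdomain: custom-subdomain    # needs a matching headless service
  containers:
  - name: app
    image: nginx
```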

Tutorial

Now we’ll demonstrate how to address services by their DNS names, check DNS resolution, and debug DNS issues when they occur. To complete the examples below, you’ll need the following prerequisites:

  • A running Kubernetes cluster. See Supergiant documentation for more information about deploying a Kubernetes cluster with Supergiant. As an alternative, you can install a single-node Kubernetes cluster on a local system using Minikube.
  • A kubectl command line tool installed and configured to communicate with the cluster. See how to install kubectl here.

First, let’s create a deployment of three Python HTTP servers that listen on port 80 for connections and return a custom greeting containing the pod’s hostname.
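A manifest along the following lines will do; the object names, labels, and the inline Python server are our sketch rather than a canonical version:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tut-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tut-app
  template:
    metadata:
      labels:
        app: tut-app
    spec:
      containers:
      - name: http-server
        image: python:3.7
        ports:
        - containerPort: 80
        # A tiny HTTP server that greets clients with the pod's hostname
        command: ["python", "-c"]
        args:
          - |
            import http.server, socket
            class Greeter(http.server.BaseHTTPRequestHandler):
                def do_GET(self):
                    msg = "Hello from %s\n" % socket.gethostname()
                    self.send_response(200)
                    self.end_headers()
                    self.wfile.write(msg.encode())
            http.server.HTTPServer(("", 80), Greeter).serve_forever()
```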

Let’s create the deployment:
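```bash
# Assuming the manifest above was saved as tut-deployment.yaml
kubectl create -f tut-deployment.yaml
# deployment.apps "tut-deployment" created   (illustrative output)
```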

Next, we need to create a service that will discover the deployment’s pods and distribute client requests among them. Below is a manifest for a “normal” service that will be assigned a ClusterIP.
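```yaml
# A sketch of the service manifest; "tut-service" is the name used
# later in this tutorial, and the selector matches the deployment above
apiVersion: v1
kind: Service
metadata:
  name: tut-service
spec:
  selector:
    app: tut-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

Save this manifest as tut-service.yaml and create it with kubectl create -f tut-service.yaml .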

Please note that the spec.selector field of the service should match the spec.template.metadata.labels of the pods created by the deployment.

Finally, we need to create a client pod that will curl the service by its name. This way, we don’t need to know the IPs of the service’s endpoints or depend on the ephemeral nature of Kubernetes pods.
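A simple client along these lines would work; the image and the endless curl loop are our sketch, but note that it targets http://tut-service by name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tut-client
spec:
  containers:
  - name: client
    image: curlimages/curl
    command: ["/bin/sh", "-c"]
    # Curl the service by its DNS name every few seconds
    args:
      - while true; do curl -s http://tut-service; sleep 3; done
```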

Please note that we are using the name of the service instead of its ClusterIP or the IPs of the pods created by the deployment. We can use the DNS name of the service (“tut-service”) because our Kubernetes cluster runs the Kube-DNS add-on, which watches the Kubernetes API for new services and creates DNS records for each of them. If Kube-DNS is enabled across your cluster, all pods can perform name resolution of services automatically. However, you can certainly continue to use the ClusterIP of your service.

Once the client pod is created, let’s check its logs to verify that the service’s name resolved to the correct backend pods:
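```bash
kubectl logs tut-client
# Hello from tut-deployment-5c8f7d9b4-x2k9v
# Hello from tut-deployment-5c8f7d9b4-7hqzn
# Hello from tut-deployment-5c8f7d9b4-zr6ww
# (the pod names in your greetings will differ)
```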

The responses above indicate that Kube-DNS has correctly resolved the service name to the service’s ClusterIP and that the service has successfully forwarded the client requests to its backend pods. In turn, each selected pod returned its custom greeting, which you can see in the output above.

Using nslookup to Check DNS Resolution

Now, let’s verify that DNS works correctly by looking up the FQDN defined by the A record. To do this, we’ll need to get a shell to a running pod and use the nslookup command inside it.

First, let’s find the pods created by the deployment:
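```bash
# The label and pod names follow the sketch manifests above
kubectl get pods -l app=tut-app
# NAME                             READY   STATUS    RESTARTS   AGE
# tut-deployment-5c8f7d9b4-7hqzn   1/1     Running   0          5m
# tut-deployment-5c8f7d9b4-x2k9v   1/1     Running   0          5m
# tut-deployment-5c8f7d9b4-zr6ww   1/1     Running   0          5m
```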

Select one of these pods and get a shell to it using the command below (use your unique pod’s name):
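```bash
# Substitute your own pod's name; the python:3.7 image ships with bash
kubectl exec -it tut-deployment-5c8f7d9b4-7hqzn -- /bin/bash
```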

Next, we’ll need to install the nslookup command, which is available in the BusyBox package:
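```bash
# Inside the container; the python:3.7 image is Debian-based
apt-get update && apt-get install -y busybox
```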

After BusyBox is installed, let’s check the DNS name of the service:
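```bash
busybox nslookup tut-service.default.svc.cluster.local
# Server:    10.96.0.10
# Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
#
# Name:      tut-service.default.svc.cluster.local
# Address 1: 10.109.90.121 tut-service.default.svc.cluster.local
# (the DNS server IP shown is illustrative)
```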

In the command above, we used the naming schema for the service’s A record. Let’s verify that the DNS lookup resolved the service’s DNS name to the correct IP (A record):
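```bash
kubectl get service tut-service
# NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
# tut-service   ClusterIP   10.109.90.121   <none>        80/TCP    15m
```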

That looks correct! You can see that the ClusterIP of the service is 10.109.90.121, the same IP to which the DNS lookup resolved.

Debugging DNS

If the nslookup command fails for some reason, you have several debugging and troubleshooting options. But how do you know that the DNS lookup failed in the first place? If DNS fails, you’ll usually get responses like this:
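```bash
busybox nslookup tut-service
# Server:    10.96.0.10
# Address 1: 10.96.0.10
#
# nslookup: can't resolve 'tut-service'
# (illustrative failure output)
```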

The first thing to do in case of such an error is to check whether the DNS configuration is correct. Let’s take a look at the resolv.conf file inside the container:
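```bash
# Still inside the container's shell
cat /etc/resolv.conf
```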

Verify that a search path and a name server are set up correctly as in the example below (note that search path may vary for different cloud providers):
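```
# The nameserver must point at the kube-dns/coredns ClusterIP
# (the IP below is an illustrative default)
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```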

If /etc/resolv.conf has all the correct entries, you’ll need to check whether the kube-dns/coredns add-on is enabled. On Minikube, run:
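```bash
minikube addons list
# - dashboard: enabled
# - kube-dns: enabled
# - ingress: disabled
# (output truncated)
```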

As you can see, we have kube-dns enabled. If your DNS add-on is not running, you can try to enable it with the following command:
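```bash
minikube addons enable kube-dns
```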

Alternatively, you can check whether the kube-dns/coredns pods are running:
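```bash
# CoreDNS pods carry the k8s-app=kube-dns label as well
kubectl get pods -n kube-system -l k8s-app=kube-dns
# NAME                       READY   STATUS    RESTARTS   AGE
# coredns-576cbf47c7-lxmlz   1/1     Running   0          1d
```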

If the pod is running, there might be something wrong with the global DNS service. Let’s check it:
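```bash
kubectl get svc kube-dns -n kube-system
# NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
# kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   1d
```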

You might also need to check whether DNS endpoints are exposed:
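```bash
kubectl get endpoints kube-dns -n kube-system
# NAME       ENDPOINTS                     AGE
# kube-dns   172.17.0.2:53,172.17.0.2:53   1d
```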

These debugging steps will usually reveal the problem with your DNS configuration, or they will simply show that a DNS add-on needs to be enabled in your cluster configuration.

Conclusion

To summarize, Kubernetes enables efficient service discovery with its built-in DNS add-ons: Kube-DNS or CoreDNS.

The Kubernetes DNS system assigns domain and subdomain names to pods, ports, and services, which allows them to be discovered by other components inside your Kubernetes cluster.

DNS-based service discovery is very powerful because you don’t need to hard-code network parameters like IPs and ports into your application. Once a set of pods is managed by a service, you can easily access them using the service’s DNS name.