You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts.
Create a simple Pod to use as a test environment
Note: This example creates a pod in the default namespace. DNS name resolution for services depends on the namespace of the pod. For more information, review DNS for Services and Pods.
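If you do not already have a suitable Pod, a minimal manifest along the following lines works; the Pod name dnsutils and the test image are only examples, and any image that includes nslookup will do:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    # Example image; substitute any image that ships nslookup/dig.
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command:
      - sleep
      - "infinity"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
```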
Use that manifest to create a Pod:
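For example, assuming the manifest above was saved as dnsutils.yaml:

```shell
kubectl apply -f dnsutils.yaml
```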
…and verify its status:
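```shell
kubectl get pods dnsutils
```

The Pod should report a STATUS of Running before you continue.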
Once that Pod is running, you can exec nslookup in that environment. If you see something like the following, DNS is working correctly.
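For example (the addresses shown are illustrative and will differ in your cluster):

```shell
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
```

```
Server:    10.0.0.10
Address 1: 10.0.0.10

Name:      kubernetes.default
Address 1: 10.0.0.1
```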
Check the local DNS configuration first
Take a look inside the resolv.conf file. (See Customizing DNS Service and Known issues below for more information)
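Using the dnsutils Pod from above:

```shell
kubectl exec -ti dnsutils -- cat /etc/resolv.conf
```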
Verify that the search path and name server are set up like the following (note that search path may vary for different cloud providers):
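For example (the cluster domain, search path, and nameserver IP shown are typical defaults and will vary by cluster and provider):

```
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.32.0.10
options ndots:5
```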
Errors such as the following indicate a problem with the CoreDNS (or kube-dns) add-on or with associated Services:
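With the BusyBox-style nslookup found in many minimal images, a failure may look roughly like this; the exact wording depends on the nslookup implementation and your cluster DNS IP:

```
Server:    10.0.0.10
Address 1: 10.0.0.10

nslookup: can't resolve 'kubernetes.default'
```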
Check if the DNS pod is running
Use the kubectl get pods command to verify that the DNS pod is running.
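For example:

```shell
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
```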
Note: The value for label k8s-app is kube-dns for both CoreDNS and kube-dns deployments.
If you see that no CoreDNS Pod is running or that the Pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.
Check for errors in the DNS pod
Use the kubectl logs command to see logs for the DNS containers.
For CoreDNS:
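```shell
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
```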
See if there are any suspicious or unexpected messages in the logs.
Is DNS service up?
Verify that the DNS service is up by using the kubectl get service command.
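For example:

```shell
kubectl get svc --namespace=kube-system
```

A Service named kube-dns should be listed; the Service keeps this name even when the CoreDNS add-on is deployed.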
Are DNS endpoints exposed?
You can verify that DNS endpoints are exposed by using the kubectl get endpoints command.
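For example:

```shell
kubectl get endpoints kube-dns --namespace=kube-system
```

The ENDPOINTS column should list the addresses of the DNS Pods.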
If you do not see the endpoints, see the endpoints section in the debugging Services documentation.
For additional Kubernetes DNS examples, see the cluster-dns examples in the Kubernetes GitHub repository.
Are DNS queries being received/processed?
You can verify if queries are being received by CoreDNS by adding the log plugin to the CoreDNS configuration (aka Corefile). The CoreDNS Corefile is held in a ConfigMap named coredns. To edit it, use the command:
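```shell
kubectl -n kube-system edit configmap coredns
```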
Then add log in the Corefile section per the example below:
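After the change, the server block might look roughly like the following; your cluster's Corefile will likely contain additional or different plugins, and only the log line is being added here:

```
.:53 {
    log
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```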
After saving the changes, it may take up to a minute or two for Kubernetes to propagate these changes to the CoreDNS pods.
Next, make some queries and view the logs per the sections above in this document. If CoreDNS pods are receiving the queries, you should see them in the logs.
Here is an example of a query in the log:
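A logged query looks something like the following; the client address, query ID, and timing shown are illustrative:

```
[INFO] 10.244.0.3:58242 - 50069 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 74 0.000537717s
```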
Does CoreDNS have sufficient permissions?
CoreDNS must be able to list service and endpoint related resources to properly resolve service names.
Sample error message:
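A missing permission typically surfaces in the CoreDNS logs as a Kubernetes API "forbidden" error; the exact message varies, but it is along these lines:

```
Failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
```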
First, get the current ClusterRole of system:coredns:
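```shell
kubectl describe clusterrole system:coredns
```

The output should include list and watch permissions for services, endpoints, and (on current versions) endpointslices.discovery.k8s.io, along the lines of:

```
PolicyRule:
  Resources                        Non-Resource URLs  Resource Names  Verbs
  ---------                        -----------------  --------------  -----
  endpoints                        []                 []              [list watch]
  namespaces                       []                 []              [list watch]
  pods                             []                 []              [list watch]
  services                         []                 []              [list watch]
  endpointslices.discovery.k8s.io  []                 []              [list watch]
```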
If any permissions are missing, edit the ClusterRole to add them:
kubectl edit clusterrole system:coredns -n kube-system

Example insertion of EndpointSlices permissions:
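```yaml
# Standard RBAC rule granting CoreDNS read access to EndpointSlices.
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
```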
Are you in the right namespace for the service?
DNS queries that don’t specify a namespace are limited to the pod’s namespace.
If the namespace of the pod and service differ, the DNS query must include the namespace of the service.
This query is limited to the pod’s namespace:
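For example, using the dnsutils Pod from earlier (replace <service-name> with the Service you are looking up):

```shell
kubectl exec -i -t dnsutils -- nslookup <service-name>
```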
This query specifies the namespace:
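```shell
kubectl exec -i -t dnsutils -- nslookup <service-name>.<namespace>
```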
To learn more about name resolution, see DNS for Services and Pods.
Known issues
- Some Linux distributions (for example Ubuntu) use a local DNS resolver by default (systemd-resolved). systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet's --resolv-conf flag to point to the correct resolv.conf (with systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm automatically detects systemd-resolved and adjusts the kubelet flags accordingly.
- Kubernetes installs do not configure the nodes' resolv.conf files to use the cluster DNS by default, because that process is inherently distribution-specific. This should probably be implemented eventually.
- Linux's libc (a.k.a. glibc) has a default limit of 3 DNS nameserver records, and Kubernetes needs to consume 1 of them. This means that if a local installation already uses 3 nameservers, some of those entries will be lost. To work around this limit, the node can run dnsmasq, which provides more nameserver entries. You can also use kubelet's --resolv-conf flag.
- If you are using Alpine version 3.3 or earlier as your base image, DNS may not work properly due to a known issue with Alpine. Kubernetes issue 30215 details more information on this.