Tags: dns, kubernetes, kube-dns

How to expose kube-dns service for queries outside cluster?


I'm trying to expose the "kube-dns" service so that it can be queried from outside the Kubernetes cluster. To do this, I edited the "Service" definition, changing "type" from "ClusterIP" to "NodePort", which seemed to work fine.

However, when I attempt to query on the node port, I'm able to get a TCP session (testing with Telnet) but can't seem to get any response from the DNS server (testing with dig).

I've had a look through the logs on each of the containers on the "kube-dns" Pod but can't see anything untoward. Additionally, querying the DNS from within the cluster (from a running container) appears to work without any issues.

Has anyone tried to expose the kube-dns service before? If so, are there any additional setup steps or do you have any debugging advice for me?

The service definition is as follows:

$ kubectl get service kube-dns -o yaml --namespace kube-system
apiVersion: v1
kind: Service
metadata:
...
spec:
  clusterIP: 10.0.0.10
  ports:
  - name: dns
    nodePort: 31257
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    nodePort: 31605
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
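For what it's worth, the same type change can be made without hand-editing the definition; a quick sketch using `kubectl patch` (assuming kube-dns lives in the usual `kube-system` namespace):

```shell
# Switch the kube-dns Service from ClusterIP to NodePort in place.
# Node ports are assigned automatically unless you pin them in the spec.
kubectl patch service kube-dns --namespace kube-system \
  -p '{"spec": {"type": "NodePort"}}'
```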

Solution

  • Are you querying on the TCP port or the UDP port? Note that dig uses UDP by default, so it will only get an answer from the UDP node port.

    I changed my kube-dns to be a NodePort service:

    $ kubectl describe services kube-dns --namespace kube-system
    Name:           kube-dns
    Namespace:      kube-system
    Labels:         k8s-app=kube-dns
                    kubernetes.io/cluster-service=true
                    kubernetes.io/name=KubeDNS
    Selector:       k8s-app=kube-dns
    Type:           NodePort
    IP:         10.171.240.10
    Port:           dns 53/UDP
    NodePort:       dns 30100/UDP
    Endpoints:      10.168.0.6:53
    Port:           dns-tcp 53/TCP
    NodePort:       dns-tcp 30490/TCP
    Endpoints:      10.168.0.6:53
    Session Affinity:   None
    

    and then queried the UDP node port from outside of the cluster, and everything appeared to work:

    $ dig -p 30100 @10.240.0.4 kubernetes.default.svc.cluster.local
    
    ; <<>> DiG 9.9.5-9+deb8u6-Debian <<>> -p 30100 @10.240.0.4 kubernetes.default.svc.cluster.local
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45472
    ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
    
    ;; QUESTION SECTION:
    ;kubernetes.default.svc.cluster.local. IN A
    
    ;; ANSWER SECTION:
    kubernetes.default.svc.cluster.local. 30 IN A   10.171.240.1
    
    ;; Query time: 3 msec
    ;; SERVER: 10.240.0.4#30100(10.240.0.4)
    ;; WHEN: Thu May 26 18:27:32 UTC 2016
    ;; MSG SIZE  rcvd: 70
    

    Right now, Kubernetes does not allow NodePort services to share the same port number for TCP and UDP (see Issue #20092). That makes things a little awkward for something like DNS, which conventionally serves both protocols on port 53.

    EDIT: The bug was fixed in Kubernetes 1.3.
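Since the UDP and TCP listeners land on different node ports, it helps to be explicit about which protocol dig is using. A small hypothetical helper to make that choice obvious (the IP and port numbers below are the ones from the `describe` output above; substitute your own cluster's values):

```shell
# Hypothetical helper: query kube-dns through a node port, forcing TCP on request.
# dig defaults to UDP, so the TCP node port needs the +tcp flag.
query_kube_dns() {
  local proto="$1" host="$2" port="$3" name="$4"
  if [ "$proto" = "tcp" ]; then
    dig +tcp -p "$port" @"$host" "$name"
  else
    dig -p "$port" @"$host" "$name"
  fi
}

# UDP node port from the output above:
# query_kube_dns udp 10.240.0.4 30100 kubernetes.default.svc.cluster.local
# TCP node port:
# query_kube_dns tcp 10.240.0.4 30490 kubernetes.default.svc.cluster.local
```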