Tags: dns, kubernetes, consul, kube-dns, skydns

Consul DNS and Kubernetes


I am running servers that register with Consul, external to my Kubernetes 1.8.x cluster. Consul also runs inside my Kubernetes cluster (configured by Helm) and is peered with the external Consul cluster. Kube-dns is configured to use the internal Consul pods as a stub domain ("stubDomains") with the following ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {
      "consul": [
        "10.244.0.xxx:8600",
        "10.244.1.xxx:8600",
        "10.244.2.xxx:8600"
      ]
    }

When everything is working, kube-dns resolves the external Consul domain names as expected. The problem is that when a Consul pod crashes, it restarts with a new IP address, and the stubDomains entries go stale.

Is there a way to recover from Consul pod crashes without having to manually change the IP addresses listed in the kube-dns ConfigMap?


Solution

  • I ended up modifying the "consul-ui" service (the one with a cluster IP address) to expose the Consul DNS port. I copied the following entries from the "consul" service (the headless one, without a cluster IP) into the ["spec"]["ports"] section of the "consul-ui" service:

      {
        "name": "consuldns-tcp",
        "protocol": "TCP",
        "port": 8600,
        "targetPort": 8600,
        "nodePort": 30766
      },
      {
        "name": "consuldns-udp",
        "protocol": "UDP",
        "port": 8600,
        "targetPort": 8600,
        "nodePort": 32559
      }
    

    Then I used the service's cluster IP address instead of the pod IP addresses in the kube-dns ConfigMap. Because the cluster IP is stable across pod restarts, kube-dns no longer needs to be updated when a Consul pod crashes.
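
    For reference, here is a minimal sketch of what the updated kube-dns ConfigMap might look like. The address "10.96.x.x" is a placeholder for whatever cluster IP the "consul-ui" service was actually assigned, not a value from the original setup:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: kube-dns
        namespace: kube-system
      data:
        stubDomains: |
          {
            "consul": [
              "10.96.x.x:8600"
            ]
          }

    With this in place, a single stable IP covers all Consul pods, and the service load-balances the DNS queries across whichever pods are currently healthy.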