Tags: amazon-web-services, dns, kubernetes, kube-dns

Create specific A record entry in Kubernetes local DNS


I have a Kubernetes v1.4 cluster running in AWS with auto-scaling nodes. I also have a Mongo Replica Set cluster with SSL-only connections (FQDN common-name) and public DNS entries:

  • node1.mongo.example.com -> 1.1.1.1
  • node2.mongo.example.com -> 1.1.1.2
  • node3.mongo.example.com -> 1.1.1.3

The Kubernetes nodes are part of a security group that allows access to the mongo cluster, but only via their private IPs.

Is there a way of creating A records in the Kubernetes DNS that return the private IPs when the public FQDNs are queried?

The first thing I tried was a script & ConfigMap combination to update /etc/hosts on startup (ref. "Is it a way to add arbitrary record to kube-dns?"), but that is problematic as other Kubernetes services may also update the hosts file at different times.
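
For illustration, the idea looked roughly like this (a minimal sketch with illustrative names and IPs, not the exact manifests I used):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-hosts
data:
  extra-hosts: |
    192.168.0.1 node1.mongo.example.com
    192.168.0.2 node2.mongo.example.com
    192.168.0.3 node3.mongo.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: mongo-client
spec:
  containers:
    - name: app
      image: alpine:3.4
      # append the mapped entries to /etc/hosts before starting the real process
      command: ["sh", "-c", "cat /config/extra-hosts >> /etc/hosts && exec sleep 3600"]
      volumeMounts:
        - name: mongo-hosts
          mountPath: /config
  volumes:
    - name: mongo-hosts
      configMap:
        name: mongo-hosts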

I also tried a Service & Endpoints configuration:

---
apiVersion: v1
kind: Service
metadata:
  name: node1.mongo.example.com
spec:
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
---
apiVersion: v1
kind: Endpoints
metadata:
  name: node1.mongo.example.com
subsets:
  - addresses:
      - ip: 192.168.0.1
    ports:
      - port: 27017

But this fails, as a Service name must be a valid DNS label (no dots), so it cannot be an FQDN...


Solution

  • While not so obvious at first, the solution is quite simple. The kube-dns image in recent versions includes dnsmasq as one of its components. If you look into its man page, you will see some useful options. Based on that reading, you can choose a path similar to this:

    Create a ConfigMap to store your DNS mappings:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-dns
      namespace: kube-system
    data:
      myhosts: |
        10.0.0.1 foo.bar.baz
    

    With that ConfigMap applied in your cluster, you can now make some changes to the kube-dns-vXX deployment your cluster uses.
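
    For example, assuming the ConfigMap above is saved as kube-dns-hosts.yaml (the file name is arbitrary):

    kubectl apply -f kube-dns-hosts.yaml
    # verify that the myhosts key is present
    kubectl -n kube-system get configmap kube-dns -o yaml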

    Define a volume that will expose the ConfigMap to dnsmasq:

      volumes:
      - name: hosts
        configMap:
          name: kube-dns
    

    and mount it in the dnsmasq container of the kube-dns deployment/rc template:

        volumeMounts:
        - name: hosts
          mountPath: /etc/hosts.d
    

    and finally, add a small config flag to your dnsmasq arguments:

        args:
        - --hostsdir=/etc/hosts.d
    

    Now, once you apply these changes to the kube-dns-vXX deployment in your cluster, it will mount the ConfigMap and use the files mounted in /etc/hosts.d/ (in the typical hosts-file format) as an additional source of records for dnsmasq. So if you now query foo.bar.baz from your pods, it will resolve to the respective IP. These entries take precedence over public DNS, so it should fit your case perfectly.
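
    Put together, the relevant fragment of the kube-dns-vXX deployment might look roughly like this (a sketch only; the container name, the existing dnsmasq args and the overall manifest layout differ between Kubernetes versions):

    spec:
      template:
        spec:
          containers:
          - name: dnsmasq
            args:
            # ...existing dnsmasq args for your version go here...
            - --hostsdir=/etc/hosts.d   # read extra hosts files from the mounted directory
            volumeMounts:
            - name: hosts
              mountPath: /etc/hosts.d
          volumes:
          - name: hosts
            configMap:
              name: kube-dns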

    Mind that dnsmasq does not watch the ConfigMap for changes, so it has to be restarted manually whenever the mappings change.
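
    One way to do that (assuming the pods carry the standard k8s-app=kube-dns label) is to delete them and let the deployment recreate them:

    kubectl -n kube-system delete pods -l k8s-app=kube-dns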

    Tested and validated this on a live cluster just a few minutes ago.
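
    To double-check resolution from inside the cluster, something along these lines should work (the busybox image and pod name are just examples):

    kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup foo.bar.baz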