Is it possible to use a private DNS server in Kubernetes? For example, an application needs to connect to an external DB by its hostname. The DNS entry that resolves the IP is stored in a private DNS server.
My AKS (Azure Kubernetes Service) cluster is running version 1.17, which already uses the new CoreDNS.
My first try was to use that private DNS like on a VM, by configuring the /etc/resolv.conf file of the pods:
dnsPolicy: "None"
dnsConfig:
  nameservers:
    - 10.76.xxx.xxx
    - 10.76.xxx.xxx
  searches:
    - az-q.example.com
  options:
    - name: ndots
      value: "2"
Then I tried to use a ConfigMap to adjust CoreDNS:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["10.76.xxx.xxx", "10.76.xxx.xxx"]
But my pod keeps running into an error on deployment:
$ sudo kubectl logs app-homepage-backend-xxxxx -n ingress-nginx
events.js:174
      throw er; // Unhandled 'error' event
      ^

Error: getaddrinfo ENOTFOUND az-q.example.com az-q.example.com:636
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:56:26)
What am I missing?
In order to achieve what you need, I'd go with a dnsPolicy: ClusterFirst definition in the pod manifests, plus a definition of a stub zone (private DNS zone) in your cluster's DNS subsystem.
To identify the cluster DNS stack, check the pods running in the kube-system namespace. Most likely you'll find one of these two: CoreDNS or kube-dns.
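A quick way to check:

$ kubectl get pods -n kube-system | grep -E 'coredns|kube-dns'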
In case your cluster DNS runs on CoreDNS, add a server block for the private zone to the coredns ConfigMap.
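Here is a minimal sketch of such a modification, assuming CoreDNS with the forward plugin; keep your existing Corefile contents as they are and only append the private-zone server block, substituting your real nameserver IPs for the placeholders from your question:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
    }
    # private zone: forward only this domain to the private DNS servers
    az-q.example.com:53 {
        errors
        cache 30
        forward . 10.76.xxx.xxx 10.76.xxx.xxx
    }

On AKS specifically, the built-in coredns ConfigMap is reconciled by the platform, so the same server block is better placed in the coredns-custom ConfigMap in kube-system (in a data key ending in .server).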
If you run on the older kube-dns system, the equivalent modification goes into the kube-dns ConfigMap.
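A sketch using the stubDomains key, which forwards only the private zone to your nameservers (unlike upstreamNameservers, which redirects all out-of-cluster lookups):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"az-q.example.com": ["10.76.xxx.xxx", "10.76.xxx.xxx"]}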
It's important to say that if you would like this modification to apply to pods running in host network mode (as many pods in the kube-system namespace do), you need to add a dnsPolicy: ClusterFirstWithHostNet stanza to their manifests.
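For example, a hypothetical host-networked pod would combine the two settings like this (name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: host-net-app                   # hypothetical pod name
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet   # use cluster DNS despite host networking
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image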