dns, kubernetes, kube-dns

kubernetes pods replying with unexpected source for DNS queries


I have a kubernetes + flannel setup. Flannel config is {"Network": "10.200.0.0/16", "SubnetLen":24, "Backend": {"Type": "vxlan"}}.
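
For reference, flanneld typically reads that network config from etcd; the key below is the conventional one and is an assumption about this particular setup:

$ etcdctl set /coreos.com/network/config \
    '{"Network": "10.200.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'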

I started the apiserver with --service-cluster-ip-range=10.32.0.0/24. As I understand it, pod addresses are managed by flannel, while the service-cluster-ip-range is handled by kube-proxy through iptables rules. I ran kube-dns and tried executing dig from the Kubernetes worker node against the service for a deployment I am running.
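
A quick way to confirm both address ranges (standard kubectl; the kube-dns service name, namespace, and label below are the usual defaults and may differ in other setups):

$ kubectl get svc kube-dns -n kube-system -o wide              # ClusterIP should be 10.32.0.10, inside 10.32.0.0/24
$ kubectl get pods -n kube-system -o wide -l k8s-app=kube-dns  # pod IP should be in flannel's 10.200.0.0/16 range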

$ dig phonebook.default.svc.cluster.local @10.32.0.10 +short
10.32.0.7

However, when I run the same command from one of the containers running in the pod, I get:

$ dig phonebook.default.svc.cluster.local
;; reply from unexpected source: 10.200.16.10#53, expected 10.32.0.10#53
;; reply from unexpected source: 10.200.16.10#53, expected 10.32.0.10#53
;; reply from unexpected source: 10.200.16.10#53, expected 10.32.0.10#53

; <<>> DiG 9.9.5-9+deb8u8-Debian <<>> phonebook.default.svc.cluster.local
;; global options: +cmd
;; connection timed out; no servers could be reached

Any idea what might be wrong here?


Solution

  • Adding the --masquerade-all flag to kube-proxy solved this for me. Without it, kube-proxy's iptables rules do not masquerade the request, so the DNS reply comes back directly from the kube-dns pod address (10.200.16.10) rather than the service address (10.32.0.10), and dig discards it, causing the lookup to time out. A sketch of applying the flag follows.
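
A minimal sketch, assuming kube-proxy runs as a systemd service with its flags on the command line (the binary path and unit name are assumptions about this setup and may differ):

# Append --masquerade-all to the kube-proxy command line, e.g. in its systemd unit:
#   ExecStart=/usr/local/bin/kube-proxy ... --masquerade-all
$ sudo systemctl daemon-reload
$ sudo systemctl restart kube-proxy

# kube-proxy should now mark all service-bound traffic for masquerading;
# the generated nat rules can be inspected with:
$ sudo iptables -t nat -S | grep -i masq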