Tags: kubernetes, amazon-eks, coredns

Pods in EKS: can't resolve DNS (but can ping IP)


I have 2 EKS clusters, in 2 different AWS accounts and, I assume, behind different firewalls (which I don't have access to). The first one (Dev) is all right; however, with the same configuration, the UAT cluster's pods are struggling to resolve DNS. The nodes themselves can resolve DNS and seem to be all right.

1) ping 8.8.8.8 works

--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms

2) I can ping the IP of Google (and others), but not the actual DNS names.
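
A quick way to see the failure from inside the cluster is a throwaway pod; this is just a generic check (the pod name and busybox image are arbitrary placeholders, not something from my setup):

    # Run a temporary pod and try a lookup
    kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup google.com
    # With UDP 53 blocked, the lookup times out with something like
    # ";; connection timed out; no servers could be reached", while ping 8.8.8.8 still works.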

Our configuration:

  1. Everything is configured with Terraform.
  2. The worker node and control plane SGs are the same as the Dev ones, so I believe those are fine.
  3. Added 53 TCP and 53 UDP rules on the inbound and outbound NACLs (just to be sure port 53 was really open). Also added 53 TCP and 53 UDP outbound rules for the worker nodes.
  4. We are using ami-059c6874350e63ca9 with Kubernetes version 1.14.

I am unsure whether the problem is a firewall somewhere, CoreDNS, my configuration needing an update, or a "stupid mistake". Any help would be appreciated.
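
For reference, CoreDNS itself can be ruled in or out with standard checks; these are generic kubectl commands rather than anything specific to my setup:

    # Are the CoreDNS pods running and ready?
    kubectl -n kube-system get pods -l k8s-app=kube-dns
    # Does the cluster DNS service have endpoints?
    kubectl -n kube-system get svc kube-dns
    kubectl -n kube-system get endpoints kube-dns
    # Any errors in the CoreDNS logs?
    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50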


Solution

  • After days of debugging, here is what the problem was: I had allowed what I thought was all traffic between the nodes, but that rule only covered TCP, not UDP.

    It basically came down to one line in AWS: in the worker nodes' SG, add an inbound rule from the worker nodes themselves on port 53, protocol DNS (UDP).

    If you use Terraform, it should look like this:

    resource "aws_security_group_rule" "eks-node-ingress-cluster-dns" {
      description = "Allow pods DNS"
      from_port                = 53
      protocol                 = 17
      security_group_id        = "${aws_security_group.SG-eks-WorkerNodes.id}"
      source_security_group_id = "${aws_security_group.SG-eks-WorkerNodes.id}"  
      to_port                  = 53
      type                     = "ingress"
    }
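
    Depending on the setup, a matching rule for TCP 53 may also be worth adding, since DNS falls back to TCP for larger responses. The "one line in AWS" above can also be expressed as a single AWS CLI call; this is just an illustration, with the security group ID as a placeholder:

    # Hypothetical CLI equivalent of the rule above (sg-0123456789abcdef0 stands in for the worker node SG ID)
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol udp \
        --port 53 \
        --source-group sg-0123456789abcdef0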