kubernetes · iptables · kube-proxy

Unable to access nginx pod across nodes using ClusterIP


I have created an nginx deployment and an nginx service (ClusterIP) to access the nginx pod, but I am not able to reach the pod through the cluster IP from any node other than the one where the pod is scheduled.
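
For reference, a minimal sketch of how such a setup might have been created; the question does not show the exact commands, and the run=nginx label is inferred from the service selector visible in the output below:

kubectl run nginx --image=nginx --port=80    # on older kubectl releases this creates a deployment labelled run=nginx
kubectl expose deployment nginx --port=80    # ClusterIP is the default service type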

I also looked through the iptables rules, but I do not see a DNAT entry there.

root@kdm-master-1:~# k get all -A -o wide | grep nginx
default       pod/nginx-6db489d4b7-pfkm9                 1/1     Running   0          3h16m   10.244.1.3   kdm-worker-1   <none>           <none>
default       service/nginx        ClusterIP   10.102.239.131   <none>        80/TCP                   3h20m   run=nginx
default       deployment.apps/nginx     1/1     1            1           3h32m   nginx        nginx                      run=nginx
default       replicaset.apps/nginx-6db489d4b7     1         1         1       3h32m   nginx        nginx                      pod-template-hash=6db489d4b7,run=nginx
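
Before looking at iptables, it is worth confirming that the service actually selects the pod:

kubectl get endpoints nginx    # should list the pod IP, i.e. 10.244.1.3:80 here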

The iptables NAT rules:

root@kdm-master-1:~# iptables -L -t nat | grep nginx
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.102.239.131       /* default/nginx:80-80 cluster IP */ tcp dpt:http
KUBE-SVC-OVTWZ4GROBJZO4C5  tcp  --  anywhere             10.102.239.131       /* default/nginx:80-80 cluster IP */ tcp dpt:http
# Warning: iptables-legacy tables present, use iptables-legacy to see them
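
Two follow-up checks suggested by this output: the KUBE-SVC chain should jump to a KUBE-SEP chain that performs the DNAT to the pod IP, and the warning hints that rules may also live in the legacy tables. A sketch of both (chain name copied from the output above):

iptables -t nat -L KUBE-SVC-OVTWZ4GROBJZO4C5 -n    # should reference a KUBE-SEP-* chain carrying the DNAT target
iptables-legacy -t nat -L -n | grep nginx          # inspect the legacy tables mentioned in the warning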

Please advise how I can resolve this.


Solution

  • Set net.ipv4.ip_forward=1 in /etc/sysctl.conf on every node.

    Run sysctl --system to reload the setting (a sketch follows below).

    This will resolve the issue, and the pod will be reachable from any node.
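
A minimal sketch of the fix, run as root on each node; the final curl test uses the cluster IP from the output above:

sysctl net.ipv4.ip_forward                           # check the current value; 0 means the kernel will not forward packets between interfaces
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf   # persist the setting
sysctl --system                                      # reload all sysctl configuration files
curl http://10.102.239.131                           # verify from a node other than kdm-worker-1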