I have a Kubernetes cluster (node01–node03). There is a Service with a NodePort (31000) to access a pod. The pod is running on node03. I can access the service via http://node03:31000 from any host. On each node I can access the service via http://[name_of_the_node]:31000. But I cannot access the service via http://node01:31000, even though there is a listener (kube-proxy) on node01 at port 31000. The iptables rules look okay to me. Is this how it's intended to work? If not, how can I troubleshoot further?
A `NodePort` service is exposed on every node in the cluster. The documentation (https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) clearly says:

> each Node will proxy that port (the same port number on every Node) into your Service

So, from both inside and outside the cluster, the service can be accessed as `NodeIP:NodePort` on any node in the cluster, and kube-proxy will route the traffic (via iptables) to the node that has the backend pod.
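For reference, a minimal `NodePort` Service manifest might look like the sketch below (the name, labels, and ports other than 31000 are illustrative, not taken from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical service name
spec:
  type: NodePort
  selector:
    app: my-app           # hypothetical pod label
  ports:
    - port: 80            # ClusterIP port inside the cluster
      targetPort: 8080    # container port on the backend pod
      nodePort: 31000     # opened on every node, as in the question
```

With this in place, `kubectl get svc my-service` should show the 31000 mapping, and any `NodeIP:31000` should reach the pod.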
However, if the service is accessed as `NodeIP:NodePort` from outside the cluster, first make sure that `NodeIP` itself is reachable from the host you are connecting from.
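A quick way to check this, step by step, from the client host (hostnames from the question; `ping`, `nc`, and `curl` assumed available):

```shell
# 1. Is the node reachable at all?
ping -c 3 node01

# 2. Is anything accepting TCP connections on the NodePort?
#    (-z: connect-only scan, -v: report the result)
nc -zv node01 31000

# 3. Does an actual HTTP request succeed?
curl -sv http://node01:31000/
```

If step 2 succeeds but step 3 hangs, the connection reaches kube-proxy but the forwarded traffic to the pod is being dropped, which points at the iptables issue below.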
If `NodeIP:NodePort` cannot be accessed on a node that is not running the backend pod, it may be caused by the default `DROP` policy on the `FORWARD` chain (which Docker 1.13 introduced for security reasons). Here is more info about it. Also see step 8 here. A workaround is to add the following rule on the node (note that this accepts all forwarded traffic, so it is quite permissive):

```
iptables -A FORWARD -j ACCEPT
```

The k8s issue for this is here and the fix is here (the fix should be included in k8s 1.9).
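To check whether this is what is happening on node01, inspect the `FORWARD` chain before applying the workaround (requires root; exact output formatting varies by iptables version):

```shell
# Show the FORWARD chain's default policy, rules, and per-rule
# packet counters. "Chain FORWARD (policy DROP)" with no ACCEPT
# rule matching pod/NodePort traffic points at the Docker 1.13
# behaviour described above.
sudo iptables -L FORWARD -n -v
```

Retrying the failing request while watching the counters (the `-v` column) can confirm whether forwarded packets are hitting the default `DROP` policy.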
Three other options to enable external access to a service are:

- `ExternalIPs`: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
- `LoadBalancer` with an external, cloud-provider's load-balancer: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
- `Ingress`: https://kubernetes.io/docs/concepts/services-networking/ingress/
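As one example of the first option, `externalIPs` is just a field on the Service spec; a sketch (the name, labels, and IP are illustrative, and the IP must actually route to a cluster node):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical service name
spec:
  selector:
    app: my-app           # hypothetical pod label
  ports:
    - port: 80
      targetPort: 8080
  externalIPs:
    - 198.51.100.10       # example IP that routes to a cluster node
```

Traffic arriving at 198.51.100.10:80 on any node is then routed by kube-proxy to the backend pod, without needing a NodePort in the 30000+ range.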