I'm managing a small Kubernetes cluster on Azure with Postgres. The cluster is accessible through an Nginx ingress controller with a static IP.
The ingress routes to a ClusterIP Service, which points to a pod that uses a Postgres instance. The Postgres instance blocks all IPs, with a few exceptions for my own IP and the static IP of the ingress. This worked well until I pushed an update this morning, when to my amazement I saw an error in the logs: the pod's IP address differs from the static ingress IP, and it gets a permission error because of it.
My question: how is it possible that my pod, behind a ClusterIP Service, has a different outbound IP address than the static ingress IP I assigned? Note that the pod is easily reached through the Ingress.
Ingresses and Services handle only incoming traffic to your pods. The source IP of outgoing traffic depends on the Kubernetes networking implementation you use. By default, all outgoing connections from pods are source NAT-ed at the node level, which means a pod's traffic leaves with the IP of the node it runs on. So you will want to allow your worker node IP addresses in your Postgres firewall.