I have a tool that uses `nmap` to run a basic port scan on a host and check for open ports. It's set up in a Docker container, and on my local machine it works perfectly (shows the expected ports as open, etc.).
When I deploy this container to a Kubernetes cluster in Google Cloud and trigger the scan, all of the ports show up as filtered.
I know that all 1,000 ports showing up as filtered generally means there's a firewall rule somewhere that's causing packets to drop, but I can't figure out where it is.
Our infrastructure setup is:

- Kubernetes cluster running in Google Cloud (GKE)
- Istio service mesh, with the Envoy sidecar injected into our pods
Here's what I've tried (didn't work):

- Adding an egress firewall rule in GCP that allows everything (all ports, all protocols) on all of my instances

Is there anything I can do to further debug this or figure out where the firewall rules might be applied?
Thanks for your help.
Here's how I ended up solving this:
I made use of the `traffic.sidecar.istio.io/includeOutboundIPRanges` annotation on my pod, setting it to the CIDR of the service mesh. With that in place, only requests leaving the pod for destinations inside the mesh are sent through Envoy; everything else bypasses Envoy entirely, which lets the `nmap` probes scan properly instead of showing up as filtered. A sketch of the pod spec is below.
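For illustration, here's roughly what that annotation looks like on a Deployment's pod template. The Deployment name, image, and CIDR value are placeholders; use whatever range your mesh/services actually occupy:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: port-scanner                # hypothetical name for the scanning workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: port-scanner
  template:
    metadata:
      labels:
        app: port-scanner
      annotations:
        # Only outbound traffic destined for this CIDR is redirected through the
        # Envoy sidecar; anything outside it (e.g. nmap probes to external hosts)
        # leaves the pod directly.
        traffic.sidecar.istio.io/includeOutboundIPRanges: "10.8.0.0/16"   # placeholder: your mesh/service CIDR
    spec:
      containers:
        - name: scanner
          image: my-nmap-scanner:latest   # placeholder image containing the nmap tool
```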
There are probably a number of other ways to get this to work given how much customization there seems to be available in Istio, but this satisfied my requirements.