docker, kubernetes, google-cloud-platform, google-kubernetes-engine, nmap

nmap shows all ports as filtered in GCP with Kubernetes/Docker


I have a tool that uses nmap to run a basic port scan on a host to check for open ports. It's set up in a Docker container and works perfectly on my local machine (shows the expected ports as open, etc.).

When I deploy this container to a Kubernetes cluster in Google Cloud and trigger the scan, the ports always show up as filtered.

I know that all 1,000 ports showing up as filtered generally means there's a firewall rule somewhere that's causing packets to drop, but I can't figure out where it is.

Our infrastructure setup is:

  • GKE for Kubernetes on GCP
  • Docker containers deployed and managed by Kubernetes
  • Istio service mesh

Here's what I've tried (didn't work):

  • Updated the egress firewall rule in GCP to allow everything (all ports and protocols) on all my instances
  • Added a NAT gateway to the network to make sure it could reach external hosts
  • Made sure Istio allowed all outbound traffic (no restrictive egress rules); roughly what these changes looked like is sketched after this list
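
For reference, here's approximately what those changes looked like. The rule, router, and region names are placeholders for my actual setup:

    # Broad egress allow rule (rule/network names are placeholders):
    gcloud compute firewall-rules create allow-all-egress \
      --network=default --direction=EGRESS --action=ALLOW \
      --rules=all --destination-ranges=0.0.0.0/0

    # Cloud NAT so private nodes can reach external hosts:
    gcloud compute routers create nat-router --network=default --region=us-central1
    gcloud compute routers nats create nat-config --router=nat-router --region=us-central1 \
      --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges

    # Confirm Istio's outbound traffic policy isn't REGISTRY_ONLY:
    kubectl -n istio-system get configmap istio -o yaml | grep -i -A 1 outboundTrafficPolicy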

Is there anything else I can do to debug this further or figure out where the firewall rules might be applied?

Thanks for your help.


Solution

  • Here's how I ended up solving this:

    I made use of the traffic.sidecar.istio.io/includeOutboundIPRanges annotation on my pod and set it to the CIDR of the service mesh.

    By default, Istio's sidecar intercepts all outbound TCP from the pod, so nmap's probes never reach the target directly and the reported port states reflect Envoy rather than the real host. With the annotation in place, only requests going out of the pod to destinations within my mesh were sent through Envoy; everything else bypassed it, allowing the nmap scans to work properly instead of showing everything as filtered. A sketch of applying the annotation follows.
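
    A minimal sketch of applying it, assuming a Deployment named scanner and a mesh CIDR of 10.8.0.0/16 (both placeholders). The annotation goes on the pod template, since istio-init reads it at pod startup to set up the iptables interception rules:

        # Patch the pod template so newly created pods only have Envoy
        # intercept traffic bound for the mesh CIDR.
        # "scanner" and "10.8.0.0/16" are placeholders -- substitute your own.
        kubectl patch deployment scanner --type merge -p '{
          "spec": {"template": {"metadata": {"annotations": {
            "traffic.sidecar.istio.io/includeOutboundIPRanges": "10.8.0.0/16"
          }}}}
        }'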

    There are probably a number of other ways to get this to work given how much customization seems to be available in Istio, but this satisfied my requirements.
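
    For anyone debugging a similar setup, two checks that may help confirm which path traffic takes (pod and container names below are placeholders, and listing iptables from inside the pod typically requires root/NET_ADMIN):

        # --reason makes nmap print why each port got its state
        # (e.g. no-response => filtered).
        kubectl exec -it scanner-pod -c scanner -- nmap --reason -Pn -p 80,443 example.com

        # The Istio NAT chains show which destination ranges are still
        # being redirected to Envoy (port 15001).
        kubectl exec -it scanner-pod -c scanner -- iptables -t nat -L ISTIO_OUTPUT -n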