
docker desktop kubernetes - how to map ports with ClusterFirstWithHostNet


I'm using Kubernetes from Docker for Windows and I've encountered a problem. I use a StatefulSet with the following part of its config:

    spec:
      terminationGracePeriodSeconds: 300
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet

In classic Kubernetes this spec exposes all ports from the pod on the node IP, so all of them can be accessed through it. I'm trying to develop the same thing on Kubernetes from Docker for Windows, but it seems that I cannot access the node by its IP (as I can in minikube or microk8s); instead, Docker for Windows maps localhost to the cluster.

So here is the problem: this config exposes all ports on the node IP, which is for example 192.168.65.4, but I cannot reach that address from Windows. I can only access the cluster via localhost, and that only exposes certain standard ports, for example 443. So when my service runs on, say, port 10433, there is no access from localhost:10433, and no access through the node IP either.

Is there any way to configure this to work like classic Kubernetes, where all ports are exposed? I know that a single port can be exposed through a NodePort, but it's important for me to expose all ports from the pod to imitate real Kubernetes behaviour.


Solution

  • In general, Docker host networking doesn't work on non-Linux platforms. It's accepted as a valid Docker option, but the "host" network isn't actually the physical system's network. This probably applies to the Kubernetes setup embedded in Docker Desktop as well.

    It should be pretty rare to need host networking, and even more unusual in Kubernetes. Host networking disables the normal inter-container communication mechanisms. Kubernetes in particular has a complex network environment and there is usually more than one node; opting out of the network setup like this can make it all but impossible to reach your service, either from inside the cluster or outside.

    Instead of host networking, you should use the normal Kubernetes networking setup. Pretty much every Deployment you create will need a matching Service, and if you set that Service to have type: NodePort then it will be accessible from outside the cluster (try both the assigned nodePort: number and the service's cluster-internal port:; it's not clear which port Docker Desktop actually uses).
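
    A minimal sketch of such a Service, assuming a Deployment whose pods are labeled app: my-app and listen on port 10433 (all names and numbers here are placeholders, not taken from the question):

        apiVersion: v1
        kind: Service
        metadata:
          name: my-app              # hypothetical name
        spec:
          type: NodePort
          selector:
            app: my-app             # must match the pod labels in the Deployment
          ports:
            - port: 80              # cluster-internal port
              targetPort: 10433     # port the container actually listens on
              nodePort: 30433       # optional; must fall in the 30000-32767 range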

    For some purposes, the easiest approach is to set up a local port-forward. A command like

        kubectl port-forward deployment/some-deployment 8888:3000
    

    will set up a port-forward from port 8888 on the local system to port 3000 on some pod managed by the named deployment. This forwards to a single pod (if you have multiple replicas, it targets only one of them), it's slower than a direct connection, and the port-forward will fail occasionally, but this is good enough for maintenance tasks like database migrations.
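
    kubectl can also forward to a Service by name, which saves looking up the backing Deployment; the forward still lands on a single pod behind the Service (some-service here is a placeholder):

        kubectl port-forward service/some-service 8888:3000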

    "imitate real kubernetes behaviour"

    In the environments I normally work in, each cluster has dozens to hundreds of nodes, and the nodes can't be directly accessed from outside the cluster. It's also reasonably common to configure a PodSecurityPolicy to disallow host networking, since it can be viewed as a security concern.
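
    A minimal sketch of such a policy, with permissive defaults everywhere except host networking (the name is a placeholder, and note that the PodSecurityPolicy API has been deprecated and removed in newer Kubernetes releases):

        apiVersion: policy/v1beta1
        kind: PodSecurityPolicy
        metadata:
          name: no-host-network     # hypothetical name
        spec:
          hostNetwork: false        # the relevant restriction
          hostPID: false
          hostIPC: false
          seLinux:
            rule: RunAsAny          # permissive defaults for everything else
          runAsUser:
            rule: RunAsAny
          supplementalGroups:
            rule: RunAsAny
          fsGroup:
            rule: RunAsAny
          volumes:
            - '*'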