I am running the simple deployment below (a webapp with 5 replicas behind a load balancer) on a single-node Docker Desktop Kubernetes cluster with Windows as the host. The webapp simply serves a page that displays its pod's hostname, so I can verify that the load balancer distributes incoming traffic.
When I curl the load balancer from another pod within the cluster, it distributes the traffic as expected: successive curl responses contain different hostnames.
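For reference, this is essentially the in-cluster check I run (the throwaway pod name and curl image are incidental):

```shell
# Spin up a temporary curl pod and hit the Service ten times.
# Each curl invocation opens a fresh TCP connection, so responses
# should show different pod hostnames.
kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  sh -c 'for i in $(seq 1 10); do curl -s http://mywebapp-svc; echo; done'
```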
Unfortunately, I run into problems when I port-forward traffic from outside the cluster, i.e. from my Windows host, to that load balancer. I am using the following command to do so:
kubectl port-forward service/mywebapp-svc :80
Now I can access the webapp from a local browser, but all traffic seems to go to the same pod (even after clearing the cache or switching browsers). A curl from the Windows host to the forwarded port also always hits the same pod.
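The behaviour is easy to reproduce from the host with a fixed local port (8080 here is arbitrary; with `:80` kubectl picks a random local port instead):

```shell
# Forward a fixed local port to the Service; runs in the background
# so the curl loop below can run in the same shell.
kubectl port-forward service/mywebapp-svc 8080:80 &

# Every response shows the same hostname, even though each curl
# invocation opens a brand-new TCP connection.
for i in $(seq 1 10); do curl -s http://localhost:8080; echo; done
```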
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
  labels:
    app: mywebapp
spec:
  replicas: 5
  selector:
    matchLabels:
      app: mywebapp
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
        - name: mycontainer
          image: mywebapp:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mywebapp-svc
spec:
  type: LoadBalancer
  selector:
    app: mywebapp
  ports:
    - port: 80
      protocol: TCP
Two effects are at play here. First, a Kubernetes Service balances per TCP connection and has no clue about the HTTP traffic on top of it, so a browser holding a connection open with HTTP/1.1 keep-alive will keep hitting the same pod; separate curl invocations, each opening a fresh connection, will show the distribution. Second, `kubectl port-forward service/...` bypasses the Service's load balancing entirely: it selects a single pod behind the Service when the forward starts and tunnels all traffic to that one pod, which is why curl from the host also always reaches the same replica.
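You can see the per-connection behaviour directly with curl inside the cluster: a single curl process given several URLs reuses one TCP connection (like a browser with keep-alive), while separate invocations each open a new one (the curl image is just a convenient container for this):

```shell
# One curl process, one TCP connection, three HTTP requests:
# every response shows the same pod hostname.
kubectl run curl-once --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://mywebapp-svc http://mywebapp-svc http://mywebapp-svc

# Separate curl processes, one TCP connection each:
# the hostnames now vary across the replicas.
kubectl run curl-many --rm -it --image=curlimages/curl --restart=Never -- \
  sh -c 'for i in $(seq 1 10); do curl -s http://mywebapp-svc; echo; done'
```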