I have a Kubernetes Service:
kind: "Service"
apiVersion: "v1"
metadata:
name: "aggregator"
labels:
name: "aggregator"
spec:
ports:
- protocol: "TCP"
port: 8080
targetPort: 8080
selector:
name: "aggregator"
createExternalLoadBalancer: true
sessionAffinity: "ClientIP"
This service worked fine when I had one node and one master, but the moment I increased the number of nodes, some pods in the cluster could no longer connect to it. When I curl the endpoint reported by kubectl describe services aggregator, I get "No Route to Host".
The issue was the kube-proxy systemd service. I had:
ExecStart=/opt/bin/kube-proxy \
  --master=<MASTER_INTERNAL_IP>:8080 \
  --logtostderr=true
However, kube-proxy requires https:// in front of the master's IP address. Which raises the question: how does the first node still work with that same systemd service, given that all nodes are running the same version of Kubernetes?
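For reference, a working unit based on the fix above would look like this (the master address stays a placeholder, and I've kept the original port; adjust both for your setup):

ExecStart=/opt/bin/kube-proxy \
  --master=https://<MASTER_INTERNAL_IP>:8080 \
  --logtostderr=true

After editing the unit, pick up the change with a reload and restart on each node:

sudo systemctl daemon-reload
sudo systemctl restart kube-proxy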