Using Kubernetes on Azure Container Service (not the new AKS, though).
I'm deploying a front-end app like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 2
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: etc/etc
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: frontend
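(For reference, my understanding of the intended port chain here, as an annotated sketch assuming the app really listens on 3000 inside the container:)

```yaml
ports:
- port: 80          # port the Service / Azure load balancer exposes externally
  targetPort: 3000  # must match the containerPort declared in the pod spec
```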
I can see from the logs that it started correctly.
From kubectl get services
I can see that it has been assigned an external IP. But when I try to access that IP over HTTP, the request just hangs.
I can also see in the Azure Portal that the Azure Load Balancer was created and is pointing to the correct external IP and backend pool.
Can anyone tell me if I somehow messed up the port assignments in the pod definition?
--
Update: Somehow it started working on its own (or so it seemed). But when I tried to re-create the Service on its own, separately from the Deployment, it stopped working.
This is my config:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: meteor
spec:
  externalTrafficPolicy: Cluster
  ports:
  - port: 80
    protocol: TCP
    targetPort: http-server
  selector:
    app: frontend
  sessionAffinity: ClientIP
  type: LoadBalancer
It creates the external IP for the load balancer, and I can see that it is properly matching the pods, but I get a timeout when I try to connect to the external IP. Meanwhile, the load balancer that was created as part of the deployment continues to work just fine.
It looks like the problem was a mis-specification of the targetPort: it referenced a named port (http-server) that was never defined on the container, whose only port is the unnamed containerPort: 3000, so the Service had nothing to route to. Adjusting the targetPort to the correct value and replacing the Service definition solved the problem.
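For reference, a named targetPort only resolves if a container port in the pod spec carries that exact name. A minimal sketch of the two consistent fixes (assuming the container listens on 3000):

```yaml
# Either point the Service at the numeric port:
#   spec.ports[0].targetPort: 3000
#
# ...or name the container port in the Deployment so that
# targetPort: http-server can resolve to it:
ports:
- name: http-server   # matches targetPort: http-server in the Service
  containerPort: 3000
```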