This is my first deployment in GKE, so I'm pretty new to the concepts, but I understand where the tools are going; I just need the experience to be confident.
First, I have a cluster with about five services, two of which I want to expose via an external load balancer. I've defined an annotation for GCloud to set these up under load balancing, and that seems to be working. I've also added an annotation to set up network endpoint groups (NEGs) for the services. Here's how one is configured in the deployment and service manifests.
---
# api-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f ./docker-compose.yml
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: api
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f ./docker-compose.yml
        kompose.version: 1.21.0 ()
      creationTimestamp: null
      labels:
        io.kompose.service: api
    spec:
      containers:
      - args:
        - bash
        - -c
        - node src/server.js
        env:
        - name: NODE_ENV
          value: production
        - name: TZ
          value: America/New_York
        image: gcr.io/<PROJECT_ID>/api
        imagePullPolicy: Always
        name: api
        ports:
        - containerPort: 8087
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
status: {}
---
# api-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    cloud.google.com/neg: '{"ingress": true}'
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  type: LoadBalancer
  ports:
  - name: "8087"
    port: 8087
    targetPort: 8087
status:
  loadBalancer: {}
I think I may be missing some kind of configuration here, but I'm unsure.
I've also seen that I can define liveness checks in the YAML by adding:
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
I also have my ingress configured like this:
---
# master-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master-application-ingress
  annotations:
    ingress.kubernetes.io/secure-backends: "true"
spec:
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: api
          servicePort: 8087
  - http:
      paths:
      - path: /ui
        backend:
          serviceName: ui
          servicePort: 80
I've also seen that it can use just the port for TCP checks, but I've already defined these checks in my application and in the load balancer. I guess what I want to know is where I should be defining these checks.
Also, I have an issue where the NEGs created by the annotation are empty. Or is this normal for manifest-created NEGs?
The health check is created based on your readinessProbe, not your livenessProbe. Make sure you have a readinessProbe configured in your pod spec before creating the Ingress resource.
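For example, here is a minimal readinessProbe sketch for the api container in the deployment above. It assumes the app serves an HTTP health endpoint at /healthz on its container port 8087; both the path and the timings are assumptions, so use whatever your application actually exposes:

readinessProbe:
  httpGet:
    path: /healthz          # assumed health endpoint in your app
    port: 8087              # container port from the deployment above
  initialDelaySeconds: 10   # example values, tune for your startup time
  periodSeconds: 10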
As for the empty NEG, this might be due to a mismatch in the health check. The NEG relies on the readiness gate feature (explained here); since you only have a livenessProbe defined, it is entirely possible the health check is misconfigured and therefore failing.
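One way to verify this, as a sketch assuming the pods carry the io.kompose.service: api label from the manifests above:

# The wide output includes a READINESS GATES column; NEG-backed pods should show
# the cloud.google.com/load-balancer-neg-ready gate as 1/1 once the health check passes.
kubectl get pods -l io.kompose.service=api -o wide

# List the NEGs GKE created for the annotated service and check their SIZE (endpoint count).
gcloud compute network-endpoint-groups list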
You should also have an internal IP for the internal LB you created; can you reach the pods that way? If both are failing, the health check is most likely the issue, since the NEG won't add pods it sees as not ready to the group.
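A quick way to check that, assuming the Service is named api as in the manifest above:

# For a LoadBalancer Service with the "Internal" annotation, the EXTERNAL-IP column
# shows the internal LB address, reachable only from inside the VPC.
kubectl get service api

# From a VM or pod in the same network, try the API directly (placeholder address).
curl -v http://<INTERNAL_LB_IP>:8087/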