kubernetes, google-kubernetes-engine, kubernetes-ingress

GKE Ingress configuration for HTTPS-enabled Applications leads to failed_to_connect_to_backend


I have serious problems with the configuration of Ingress on a Google Kubernetes Engine cluster for an application which expects traffic over TLS. I have configured a FrontendConfig, a BackendConfig and defined the proper annotations in the Service and Ingress YAML structures.

The Google Cloud Console reports the backend as healthy, but if I connect to the given address it returns a 502, and a failed_to_connect_to_backend error appears in the Ingress logs.

These are my configurations:

FrontendConfig.yaml:

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontendconfig
  namespace: my-namespace
spec:
  redirectToHttps:
    enabled: false
  sslPolicy: my-ssl-policy

BackendConfig.yaml:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
  namespace: my-namespace
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
  logging:
    enable: true
    sampleRate: 1.0
  healthCheck:
    checkIntervalSec: 60
    timeoutSec: 5
    healthyThreshold: 3
    unhealthyThreshold: 5
    type: HTTP
    requestPath: /health
    # The containerPort of the application in Deployment.yaml (also used for the liveness and readiness probes)
    port: 8001

Ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
    # Frontend Configuration Name
    networking.gke.io/v1beta1.FrontendConfig: "my-frontendconfig"
    # Static IP Address Rule Name (gcloud compute addresses create epa2-ingress --global)
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
spec:
  tls:
  - secretName: my-secret
  defaultBackend:
    service:
      name: my-service
      port:
        number: 443

Service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    # Specify the type of traffic accepted
    cloud.google.com/app-protocols: '{"service-port":"HTTPS"}'
    # Specify the BackendConfig to be used for the exposed ports
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
    # Enables container-native load balancing via network endpoint groups (NEGs)
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: my-application
  ports:
    - protocol: TCP
      name: service-port
      port: 443
      targetPort: app-port # this port expects TLS traffic, not plain HTTP connections

The Deployment.yaml is omitted for brevity, but it defines liveness and readiness probes on a separate port, the one referenced in BackendConfig.yaml.
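For reference, a minimal sketch of what such a Deployment might look like. The container name, image, and the app-port value (8443) are assumptions for illustration, not taken from my actual setup; only the health-check port 8001 and the probe path /health come from the BackendConfig above:

Deployment.yaml (sketch):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-application
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-application
  template:
    metadata:
      labels:
        app: my-application
    spec:
      containers:
        - name: my-application                      # hypothetical name
          image: my-registry/my-application:latest  # hypothetical image
          ports:
            - name: app-port      # TLS port referenced by the Service's targetPort
              containerPort: 8443 # assumed value
            - name: health-port   # plain-HTTP port used by the BackendConfig health check
              containerPort: 8001
          livenessProbe:
            httpGet:
              path: /health
              port: 8001
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /health
              port: 8001
            periodSeconds: 10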

The interesting thing is: if I also expose the health-check port through Service.yaml (mapped to port 80), point the default backend to port 80, and define a rule with path /* leading to port 443, everything seems to work just fine. But I don't want to expose the health-check port outside my cluster, since it also serves some diagnostics information.
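For clarity, the working variant described above would look roughly like this in the Ingress spec (a sketch reconstructed from my description, assuming the Service maps port 80 to the health-check port):

Ingress.yaml (workaround variant, spec only):

spec:
  tls:
  - secretName: my-secret
  defaultBackend:
    service:
      name: my-service
      port:
        number: 80   # health-check port, exposed only in this variant
  rules:
    - http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: my-service
                port:
                  number: 443   # the actual TLS application port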

Question: How can I be sure that if I connect to the Ingress endpoint at `https://MY_INGRESS_IP/`, the traffic is routed as-is to the HTTPS port of the service/application, without getting the 502 error? Where did I go wrong in configuring the Ingress?


Solution

  • Actually, I solved it by attaching a Google-managed certificate to the Ingress. It "magically" worked without any other change, using a Service of type ClusterIP
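For anyone following along: a Google-managed certificate is attached via a ManagedCertificate resource plus an annotation on the Ingress. A minimal sketch, assuming a domain my-domain.example that you own and that already resolves to the Ingress static IP:

ManagedCertificate.yaml (sketch):

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-managed-cert
  namespace: my-namespace
spec:
  domains:
    - my-domain.example  # assumed domain; must point at the Ingress static IP

It is then referenced from the Ingress metadata:

metadata:
  annotations:
    networking.gke.io/managed-certificates: "my-managed-cert"

Note that provisioning can take some time after the DNS record is in place, and the certificate only becomes Active once Google can validate the domain.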