Tags: kubernetes, nginx, websocket, reverse-proxy, nginx-ingress

Kubernetes nginx websocket proxy closes connection after 50s despite timeout configuration


I have the following setup: a client application connects to a WebSocket application (SSL) via an nginx proxy that checks headers. The WebSocket is configured and works properly under active use.

Locally, when idle, everything works fine: nginx forwards the WebSocket with a proxy_read_timeout and the connection remains up.

But when deployed in Kubernetes, my WebSocket connection is closed after 50s (50.29 ± 0.05s).

  • I am using the following Kubernetes Server Version:
    version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.16", GitCommit:"51e33fadff13065ae5518db94e84598293965939", GitTreeState:"clean", BuildDate:"2023-07-19T12:19:24Z", GoVersion:"go1.20.6", Compiler:"gc", Platform:"linux/amd64"}
    
    It is not a public Google/AWS/etc. setup.
  • The Ingress controller is registry.k8s.io/ingress-nginx/controller:v1.9.3
  • I am using the Octavia load balancer.

I tried adding all the relevant nginx parameters (proxy timeouts, send timeouts, upstream timeouts):

    upstream myUpstream {
        server myUpstream:443;
        keepalive 32;
    }

    location /ws {
        proxy_http_version 1.1;
        proxy_pass_request_headers on;

        proxy_set_header Connection 'Upgrade';
        proxy_set_header Upgrade $http_upgrade;

        proxy_read_timeout 1800s;
        proxy_connect_timeout 1800s;
        proxy_send_timeout 1800s;
        send_timeout 1800s;

        proxy_pass http://myUpstream/ws;
    }
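As a side note, hardcoding `Connection 'Upgrade'` sends the upgrade header even on plain HTTP requests through the same location. The pattern from the nginx WebSocket proxying documentation derives it from `$http_upgrade` with a `map` (which must sit in the `http` context). A sketch, reusing the upstream name from above:

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    location /ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 1800s;
        proxy_pass http://myUpstream/ws;
    }

This does not change the timeout behaviour, but it keeps non-upgrade requests to the same location from being mislabeled.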

I also tried adding the relevant annotations to my service's Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "16m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "256k"
  name: url-routing

I tried lowering nginx's proxy_read_timeout to 30s, and in that case the WebSocket does close correctly after 30s (30.29 ± 0.05s) instead.

I tried editing the ingress controller configuration in Kubernetes, but that didn't seem to work.
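For reference, ingress-nginx takes its global defaults from its ConfigMap rather than from an edited nginx.conf (which the controller regenerates). A sketch of that approach, assuming the default ingress-nginx release names; the ConfigMap name and namespace may differ in your deployment:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-read-timeout: "1800"
  proxy-send-timeout: "1800"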

Another WebSocket accessing a different service in the same namespace as the nginx proxy does not have any disconnection issues.

What could be forcing a default 50s timeout, and how can I get around it?

I would rather avoid adding keep-alive pings to my WebSocket.


Solution

  • My colleagues found the issue.

    The 50s timeout was coming from the load balancer (Octavia) configuration. After increasing its timeout, the connection became stable.

    Hope it can help anybody.
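To make this concrete: when the cluster uses cloud-provider-openstack, Octavia's listener inactivity timeouts (in milliseconds) can be raised via annotations on the LoadBalancer Service that fronts the ingress controller. A sketch under that assumption; the Service name below is a placeholder and the annotation values are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # placeholder: your LoadBalancer Service
  annotations:
    # Octavia listener inactivity timeouts, in milliseconds
    loadbalancer.openstack.org/timeout-client-data: "1800000"
    loadbalancer.openstack.org/timeout-member-data: "1800000"
spec:
  type: LoadBalancer

The same timeouts can also be changed on the Octavia listener directly, but the annotation route keeps them from being reset when the load balancer is reconciled.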