I am getting this error in the logs.
[error] 117#117: *16706 upstream timed out (110: Operation timed out) while reading response header from upstream
I have tried every possible way to find where this exact 60s timeout is coming from.
I can add more detail on how I reproduce this error if needed. I don't see any timeout when I run the dotnet API (dockerized) locally; there the API runs for more than 5 minutes. But in the AKS cluster it times out at exactly 60s.
I am using these settings in my ConfigMap (for the nginx ingress controller). I have verified them by removing and re-adding them one by one, but the timeout does not change.
client-header-timeout: "7200"
keep-alive: "300"
keep-alive-requests: "100000"
keepalive-timeout: "300"
proxy-connect-timeout: "7200"
proxy-read-timeout: "7200"
proxy-send-timeout: "7200"
upstream-keepalive-requests: "100000"
upstream-keepalive-timeout: "7200"
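For context, these keys go in the controller's ConfigMap data section. A minimal sketch (the ConfigMap name and namespace here are assumptions based on a default ingress-nginx install; yours may differ):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name from a default install
  namespace: ingress-nginx         # assumed namespace
data:
  proxy-connect-timeout: "7200"
  proxy-read-timeout: "7200"
  proxy-send-timeout: "7200"
  upstream-keepalive-timeout: "7200"
```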
I have also tried adding these annotations to the Ingress resource/rule for that microservice.
nginx.ingress.kubernetes.io/client-body-timeout: "7200"
nginx.ingress.kubernetes.io/client-header-timeout: "7200"
nginx.ingress.kubernetes.io/client-max-body-size: 5000m
nginx.ingress.kubernetes.io/keep-alive: "300"
nginx.ingress.kubernetes.io/keepalive-timeout: "300"
nginx.ingress.kubernetes.io/large-client-header-buffers: 64 128k
nginx.ingress.kubernetes.io/proxy-body-size: 5000m
nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
nginx.ingress.kubernetes.io/proxy-connect-timeout: "7200"
nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "7200"
nginx.ingress.kubernetes.io/proxy-read-timeout: "7200"
nginx.ingress.kubernetes.io/proxy-send-timeout: "7200"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/send_timeout: "7200"
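To show how these are applied, a sketch of the Ingress resource carrying the annotations (the name, host, and service are hypothetical placeholders, not my real values):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-microservice            # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "7200"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "7200"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "7200"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com        # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-microservice   # hypothetical service
                port:
                  number: 80
```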
Nginx ingress controller version:
Release: v1.0.5
Build: 7ce96cbcf668f94a0d1ee0a674e96002948bff6f
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.9
We are using Azure Kubernetes Service (AKS), and CoreDNS resolves all the API URLs via an Active Directory server deployed on an Azure Windows virtual machine.
The backend API is in .NET Core (SDK 6.0.400, ASP.NET Core runtime 6.0.8).
(All the keepalive and request-timeout settings defined in the code have already been tested.)
Found the problem. Maybe I missed something, but it seems these settings:
proxy-read-timeout: "7200"
proxy-send-timeout: "7200"
do not affect the timeouts for backend gRPC communication (nginx's gRPC timeouts default to 60s, which explains the exact 60s cutoff). I had to add a "server-snippet" annotation to set these directives:
grpc_read_timeout 120s; grpc_send_timeout 120s; client_body_timeout 120s;
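For anyone hitting the same issue, a sketch of the annotation I mean (the Ingress name is a hypothetical placeholder, and the backend-protocol annotation is my assumption for a gRPC backend):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-service               # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"   # assumption: gRPC backend
    nginx.ingress.kubernetes.io/server-snippet: |
      grpc_read_timeout 120s;
      grpc_send_timeout 120s;
      client_body_timeout 120s;
```

Note that the proxy-read-timeout / proxy-send-timeout settings only cover plain HTTP proxying; gRPC traffic goes through nginx's grpc_pass module, which has its own grpc_read_timeout and grpc_send_timeout directives.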