I have a Sinatra app running on Thin, with Nginx as a reverse proxy, and it receives a lot of traffic. My users are reporting 502 errors, and looking at the Nginx logs I see a lot of these:
[warn] upstream server temporarily disabled while connecting to upstream
[error] connect() failed (111: Connection refused) while connecting to upstream
If I look at the logs from the Sinatra app, I see no errors.
I am starting Thin with the following:
--max-conns 15360 --max-persistent-conns 2048 --threaded start
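(For context, the full invocation looks roughly like this; the address, rackup file and log/pid paths below are placeholders I'm filling in for illustration, the port matches what Nginx proxies to, and only the three flags above are the ones I actually tuned:)

    # rough sketch of the full command; paths are placeholders
    thin --address 127.0.0.1 --port 6903 \
         --rackup config.ru \
         --log /var/log/thin/app.log --pid /var/run/thin/app.pid \
         --max-conns 15360 --max-persistent-conns 2048 --threaded start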
I have set the following for Nginx:
worker_processes auto;
worker_rlimit_nofile 65535;
events {
    worker_connections 15360;
}
The virtual host file for the Sinatra app:
server {
    server_name my_sinatra_app;

    # lots of bots try to find vulnerabilities in php sites
    location ~ \.php {
        return 404;
    }

    location / {
        proxy_pass http://localhost:6903;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_bypass $http_upgrade;

        # increase buffers
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

    listen 443 ssl; # managed by Certbot
    #...
    # SSL stuff
}
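As a side note for anyone debugging something similar: Nginx can log, per request, what it saw from the upstream. A minimal sketch, assuming it goes in the http block and using a made-up format name and log path ($upstream_addr, $upstream_status and $upstream_response_time are standard Nginx variables):

    # sketch: log what Nginx saw from the upstream for each request (http block)
    log_format upstream_debug '$remote_addr [$time_local] "$request" $status '
                              'upstream=$upstream_addr ustatus=$upstream_status '
                              'utime=$upstream_response_time';
    access_log /var/log/nginx/sinatra_upstream.log upstream_debug;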
Why is this happening? Too much traffic?
What's the solution? Do I keep increasing worker_connections and --max-conns until the errors stop?
The output of htop suggests the server can handle more load:
Any insight/advice?
EDIT
While I don't see any errors in the Sinatra log or the systemctl status output, I did notice that the service never runs for very long, so it seems the Thin server is crashing often. Any idea how I can debug this further?
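(For anyone hitting the same thing, these are the generic systemd commands I used to look for the crash; the unit name is a placeholder for whatever your service is called:)

    # follow the service's journal live and watch for the crash/restart cycle
    journalctl -u my_sinatra_app.service -f

    # or look back over recent history, including Thin's stdout/stderr
    journalctl -u my_sinatra_app.service --since "1 hour ago"

    # if systemd-coredump is set up, native (C/C++) crashes are listed here
    coredumpctl list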
EDIT 2
So the problem was actually with the Thin server: for some reason it kept crashing every few minutes with a C++ error, so Nginx would throw those errors while attempting to connect to Thin and failing (because Thin would be crashing/restarting).
The solution was to replace Thin with Puma; after that, no more issues.
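For completeness, the Puma setup is roughly this (a minimal sketch; the worker/thread counts are just starting values to tune, the only hard requirement is binding to the same port the Nginx proxy_pass points at):

    # config/puma.rb -- minimal sketch, counts are untuned starting values
    workers 2                            # roughly one process per core
    threads 1, 16                        # min/max threads per worker
    bind "tcp://127.0.0.1:6903"          # same port Nginx proxies to
    environment ENV.fetch("RACK_ENV", "production")
    preload_app!

Started with:

    bundle exec puma -C config/puma.rb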