I've implemented an API route in my backend with an artificial await, so it takes 1 second to return data. With nginx configured as below, I expected 2 requests sent at the same time to complete after 1 second. Instead they complete after 2 seconds.
After analysing the logs I can see that each instance of my backend does receive 1 request, but one of them receives its request ~1 s later, which suggests that nginx is processing the requests sequentially rather than in parallel.
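For context, the route behaves roughly like this minimal sketch (hypothetical; the actual framework and route path aren't shown here, only the ports from the upstream config below):

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(1)  # stands in for the artificial await
        body = b'{"ok": true}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging for the demo

def make_server(port=4001):
    # One instance listens on 4001, the other on 4002,
    # matching the upstream block in the nginx config.
    return HTTPServer(("127.0.0.1", port), SlowHandler)

if __name__ == "__main__":
    make_server().serve_forever()
```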
This is my reverse proxy configuration (/etc/nginx/conf.d/default.conf):
upstream backend {
    server host.docker.internal:4001;
    server host.docker.internal:4002;
}

server {
    listen 80;
    listen [::]:80;
    server_name something.com;

    location /api/ {
        proxy_pass http://backend;
    }
}
This is my /etc/nginx/nginx.conf:
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx/nginx.pid;

events {
    worker_connections 1024;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
My VPS has 4 cores, and on nginx startup I see 4 "Starting worker process" lines, so that part seems to be working fine. What else might be the case?
Turns out my browser was using only 1 agent (a single connection) to send the requests, so they were serialized on the client side before they ever reached nginx. When using 2 browsers/agents, it works as intended.
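To verify that nginx itself distributes the requests in parallel, you can fire them from a client that opens a separate connection per request. A minimal sketch (the /api/slow path and the localhost URL are placeholders for your actual route and proxy address):

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def timed_parallel_get(url, n=2, timeout=10):
    """Fire n GETs concurrently, each on its own TCP connection,
    and return (total elapsed seconds, list of status codes)."""
    def fetch(_):
        # urllib opens a fresh connection per request, so nothing is
        # serialized on the client side the way a single keep-alive
        # browser connection can be.
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=n) as pool:
        statuses = list(pool.map(fetch, range(n)))
    return time.monotonic() - start, statuses

if __name__ == "__main__":
    elapsed, statuses = timed_parallel_get("http://localhost/api/slow")
    print(f"{len(statuses)} requests finished in {elapsed:.2f}s: {statuses}")
```

With the two-instance upstream above, both requests should finish in roughly 1 second total rather than 2.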