I have Django running on nginx and uWSGI. The cached response loads very fast, but at other times the website takes more than 30s to load. I am unable to diagnose the root cause of the slowdown. Here is what I can provide as info to help narrow down the issue:
GTMetrix - What I can conclude from the waterfall report is that the waiting time for static files is too long, along with the initial server response time. Here is a more detailed breakdown: Link to the lighthouse parameters Waterfall report
nginx.conf - Here is the nginx config file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 75;
    types_hash_max_size 2048;
    client_max_body_size 5M;
    sendfile_max_chunk 512;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format upstream_time '$remote_addr - $remote_user [$time_local] '
                             '"$request" $status $body_bytes_sent '
                             '"$http_referer" "$http_user_agent" '
                             'rt=$request_time uct="$upstream_connect_time" '
                             'uht="$upstream_header_time" urt="$upstream_response_time"';

    access_log /var/log/nginx/access.log upstream_time;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable msie6;
    # And all the gzip mime types here

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    proxy_cache_path /data/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                     inactive=60m use_temp_path=off;

    server {
        location ~* \.(jpg|jpeg|png|gif|ico|css|js) {
            proxy_cache my_cache;
            proxy_cache_revalidate on;
            proxy_cache_min_uses 3;
            proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
            proxy_cache_lock on;
            expires 365d;
            proxy_pass http://example.net;
        }
    }
}
Nginx Project Config -
map $sent_http_content_type $expires {
    default                 off;
    text/html               epoch;
    text/css                max;
    application/javascript  max;
    ~image/                 max;
}
server {
    listen 80;
    server_name example.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/mysite/project_dir/app_dir;
        expires $expires;
    }

    location /images/ {
        expires $expires;
        root /home/mysite/project_dir/app_dir/static/images/;
    }

    location /media/ {
        expires $expires;
        root /home/mysite/project_dir/;
    }

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/mysite.sock;
        gzip_static on;
        proxy_buffering off;
        proxy_cache my_cache;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 3;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_lock on;
        expires 365d;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_set_header Connection "";
    }

    listen 443 ssl http2; # managed by Certbot
    # All the subsequent Certbot settings not tampered with
}
Logs - When I log nginx with the above config, the access log shows the upstream timings correctly only when the response is served from the cache. When the page takes >30s to load, all of the upstream_* timing values except the response time show a hyphen '-'.
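(For reference, one way to see where that time actually goes, independently of nginx, is curl's timing breakdown; example.com below just stands in for my actual domain.)

# Break a single request into DNS lookup, TCP connect, TLS handshake, first-byte and total time
curl -o /dev/null -s -w 'dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' https://example.com/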
UPDATE:
| Resource | Value |
|---|---|
| User CPU time | 964.000 msec |
| System CPU time | 52.000 msec |
| Total CPU time | 1016.000 msec |
| System CPU time | 1019.185 msec |
All the SQL queries are taking minimal time (10.78 ms), and the logger shows 0 errors.
I would highly appreciate it if anyone could help me diagnose the root cause of this slowdown. Thank you!
Phew! So I figured out the solution. I used https://www.webpagetest.org and arrived at the conclusion that the initial connection time was very high (~30s). When that happens, it is most likely a DNS or firewall issue. My issue was DNS-based: I had 2 IPs added as A records for my domain, and one of them was a private IP. The browser spent ~30s trying to reach that IP, and once the website finally loaded, the browser cached the response, so the subsequent response times were low. Simply removing the private IP's A record worked for me.
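(For anyone hitting the same symptom: listing the domain's A records makes a stray private address easy to spot. dig is standard BIND tooling; example.com below stands in for the real domain.)

# List all A records published for the domain.
# Any private/RFC 1918 address (10.x.x.x, 172.16-31.x.x, 192.168.x.x) in this list
# is unreachable from the public internet and will stall browsers that try it first.
dig +short example.com A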