Tags: nginx, openssl, gunicorn, firewall, ufw

Nginx not responding from outside the remote box


My website (Django/nginx/gunicorn/MySQL), hosted on a remote box, was working fine until I decided to restart the remote box for some reason. After the restart, when I run curl -IL -H -GET my.web.address on the remote box itself, it works fine. However, when I try the same command from outside, it reports curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to my.web.address.
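For reference, the point at which the handshake fails can be probed from the outside machine with standard tools (hostname is a placeholder, as above):

# verbose curl shows whether the TCP connection succeeds before TLS fails
curl -v https://my.web.address

# s_client prints the raw handshake; SSL_ERROR_SYSCALL typically means the
# connection was closed or dropped before any TLS data came back
openssl s_client -connect my.web.address:443 -servername my.web.address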

Any help is appreciated. Please find below the relevant things I checked.

I'm using a CentOS 7 system. I predominantly followed this tutorial.

nginx is listening on the right ports

tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      3425/nginx: master  
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      3425/nginx: master
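A listing like the one above can be reproduced with either of the usual tools (net-tools or iproute2 respectively; run as root to see the process names):

netstat -tlnp | grep nginx
ss -tlnp | grep nginx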

The firewall (see the ufw output below) is allowing my ports

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp (SSH)               ALLOW IN    Anywhere                  
224.0.0.251 5353/udp (mDNS) ALLOW IN    Anywhere                  
22                         ALLOW IN    Anywhere                  
80                         ALLOW IN    Anywhere                  
443                        ALLOW IN    Anywhere                  
22/tcp (SSH (v6))          ALLOW IN    Anywhere (v6)             
ff02::fb 5353/udp (mDNS)   ALLOW IN    Anywhere (v6)             
22 (v6)                    ALLOW IN    Anywhere (v6)             
80 (v6)                    ALLOW IN    Anywhere (v6)             
443 (v6)                   ALLOW IN    Anywhere (v6)
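For reference, the listing above is the kind of output produced by:

ufw status verbose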

sestatus

SELinux status: disabled

Output of nmap to check the port status

nmap -sT my.ip.address

PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
111/tcp  open  rpcbind
443/tcp  open  https
3306/tcp open  mysql

Content of the nginx.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

server blocks of my nginx.conf

server {
    server_name my.ip.address my.web.address;
    error_log /srv/www/myweb/logs/error.log;
    access_log /srv/www/myweb/logs/access.log;
    charset utf-8;

    location /static/ {
        alias /srv/www/myweb/latestRelease/mywebDB/app/src/static/;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/srv/www/myweb/latestRelease/mywebDB/mywebdb.sock;
    }


    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/my.web.address/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/my.web.address/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}
server {
    if ($host = my.ip.address) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    if ($host = my.web.address) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name my.ip.address my.web.address;
    return 404; # managed by Certbot

}
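After any change to these files, the configuration can be validated and nginx reloaded in place:

nginx -t                  # check the configuration syntax
systemctl reload nginx    # apply it without dropping connections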

I've checked the socket file: it is being created by the correct user, and the file permissions are set correctly.
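To take the nginx proxy out of the picture entirely, the gunicorn socket can also be queried directly on the box (this assumes a curl build with --unix-socket support, 7.40+; the stock CentOS 7 curl may be older):

ls -l /srv/www/myweb/latestRelease/mywebDB/mywebdb.sock
curl --unix-socket /srv/www/myweb/latestRelease/mywebDB/mywebdb.sock http://localhost/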

contents of the gunicorn.service file

[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=user
Group=nginx
WorkingDirectory=/srv/www/myweb/latestRelease/mywebDB/app/src
ExecStart=/srv/www/myweb/latestRelease/mywebDB/app/bin/gunicorn --workers 3 --bind unix:/srv/www/myweb/latestRelease/mywebDB/mywebdb.sock app.wsgi:application

[Install]
WantedBy=multi-user.target
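After editing the unit file, the service is picked up and inspected in the usual way:

systemctl daemon-reload
systemctl restart gunicorn
systemctl status gunicorn
journalctl -u gunicorn -e    # recent gunicorn log entries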

Meanwhile, just to ensure my issue is not related to the Let's Encrypt certificates, I changed my nginx.conf to serve HTTP only.
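A minimal HTTP-only block for such a test looks roughly like this (a sketch that reuses the socket path from the config above):

server {
    listen 80;
    server_name my.ip.address my.web.address;

    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://unix:/srv/www/myweb/latestRelease/mywebDB/mywebdb.sock;
    }
}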

Below is the output of curl -IL -GET my.domain.name when I ran it on the remote box:

HTTP/1.1 200 OK
Server: nginx/1.17.9
Date: Fri, 06 Mar 2020 08:26:42 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 7918
Connection: keep-alive
X-Frame-Options: SAMEORIGIN

I get the same output as above while running with the IP address.

When I run curl from my laptop, I get curl: (52) Empty reply from server as the response, with both the domain name and the IP address.

I pinged the server (by both IP and domain name) from my laptop, and packets are sent and received. The domain name is correctly mapped to the IP; I also validated it using nslookup.
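Ping only proves ICMP is getting through; the TCP path to the web ports can be checked separately from the laptop, for example:

nc -vz my.ip.address 80      # does a plain TCP connect succeed?
nc -vz my.ip.address 443
curl -v http://my.ip.address/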

Since I disabled HTTPS, the TLS versions shouldn't matter, should they?

Additionally, I disabled the firewall using ufw. Then I looked up the iptables -L rules. I'm a newbie in this area, but to me it looks like the remote server is set up to accept any incoming connection.

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:ssh
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited
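A more telling view of the same chain includes interfaces, packet counters and rule numbers (plain -L hides the interface, so the blanket ACCEPT above is presumably the loopback rule):

iptables -L INPUT -n -v --line-numbers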

Solution

  • So the problem is fixed!

    The problem was that, somehow, the UFW rules had not created the iptables entries to allow ports 80/443.

    I added them manually with the commands below:

    iptables -I INPUT -p tcp -m state --state NEW --dport 80 -j ACCEPT
    iptables -I INPUT -p tcp -m state --state NEW --dport 443 -j ACCEPT
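
    Note that rules inserted this way are not persistent across reboots. One option is to let ufw rebuild its own rules, another is to save the running ruleset (the latter assumes the iptables-services package on CentOS 7):

    # Option 1: have ufw regenerate its iptables rules (prompts for confirmation)
    ufw disable && ufw enable

    # Option 2: persist the running ruleset directly
    service iptables save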