Tags: nginx, pyramid, gunicorn, lets-encrypt

Nginx/Pyramid custom SSL port


By way of background, I have been using the following stack for some time with great success:

• NGINX - web proxy
• SSL - configured in nginx
• Pyramid web application, served by gunicorn

The above combo works great; here is a working configuration:

    server {
        # listen on port 80
        listen       80;
        server_name portalapi.example.com;
        # Forward all traffic to SSL
        return         301 https://www.portalapi.example.com$request_uri;
    }

    server {
        # listen on port 80
        listen       80;
        server_name www.portalapi.example.com;
        # Forward all traffic to SSL
        return         301 https://www.portalapi.example.com$request_uri;
    }

    #ssl server
    server {
        listen       443 ssl;
        ssl_certificate        /usr/local/etc/letsencrypt/live/portalapi.example.com/fullchain.pem;
        ssl_certificate_key    /usr/local/etc/letsencrypt/live/portalapi.example.com/privkey.pem;
        server_name  www.portalapi.example.com;

        client_max_body_size 10M;
        client_body_buffer_size 128k;

        location ~ /.well-known/acme-challenge/ {
            root /usr/local/www/nginx/portalapi;
            allow all;
        }

        location / {
            proxy_set_header Host $host;
            proxy_pass  http://10.1.1.16:8005;
            #proxy_intercept_errors on;
            allow   all;
        }

        error_page   404 500 502 503 504  /index.html;
        location = / {
            root   /home/luke/ecom2/dist;
        }
    }

Now, this is how I serve my public-facing apps, and it works very well. For all my internal applications, I used to simply direct users to an internal domain, for example http://subdomain.company.domain; again, this worked well for a long time.

Now, in the wake of the KRACK attack, although we have some very thorough firewall rules to prevent a lot of attacks, I want to force all internal traffic through SSL. I don't want to use a self-signed certificate; I want to use Let's Encrypt so I can auto-renew certificates, which makes administration much easier (and cheaper).

In order to use Let's Encrypt, I need a public-facing DNS record and server to perform the ACME challenge (for auto-renewal). Again, this was a very easy thing to set up in nginx, and the config below works perfectly for serving static content:

What it does: if a user from the internet accesses intranet.example.com, they simply see a forbidden message. However, if a local user tries, they get redirected to intranet.example.com:8002, and port 8002 is only reachable locally, so there is no way external users can access a webpage on this site.

geo $local_user {
    192.168.155.0/24 0;
    172.16.10.0/28 1;
    172.16.155.0/24 1;
}

server {
    listen       80;
    server_name  intranet.example.com;

    client_max_body_size 4M;
    client_body_buffer_size 128k;

    # Space for lets encrypt to perform challenges
    location ~ /\.well-known/ {
            root /usr/local/www/nginx/intranet;
    }

    if ($local_user) {
        # If user is local, redirect them to SSL proxy only available locally
        return         301 https://intranet.example.com:8002$request_uri;
    }

    # Default: all non-local users see the forbidden page
    location / {
            root   /home/luke/forbidden_html;
            index  index.html;
    }
}


# This server block is only available to local users inside geo $local_user.
# It listens on an internal port only, so it is never available to
# external networks.
server {
        listen       8002 default ssl; # listen on a port only accessible locally
        server_name  intranet.example.com;
        ssl_certificate    /usr/local/etc/letsencrypt/live/intranet.example.com/fullchain.pem;
        ssl_certificate_key    /usr/local/etc/letsencrypt/live/intranet.example.com/privkey.pem;

        client_max_body_size 4M;
        client_body_buffer_size 128k;

        location / {
            allow   192.168.155.0/24;
            allow   172.16.10.0/28;   # also add allow/deny rules in this block (extra security)
            allow   172.16.155.0/24;

            root   /home/luke/ecom2/dist;
            index  index.html;

            deny   all;
        }


}

Now, here comes the Pyramid/nginx marrying problem: if I use the same configuration as above, but with the below settings for my server on 8002:

server {
    listen       8002 default ssl; # listen on a port only accessible locally
    server_name  intranet.example.com;
    ssl_certificate    /usr/local/etc/letsencrypt/live/intranet.example.com/fullchain.pem;
    ssl_certificate_key    /usr/local/etc/letsencrypt/live/intranet.example.com/privkey.pem;

    client_max_body_size 4M;
    client_body_buffer_size 128k;

    location / {
        allow   192.168.155.0/24;
        allow   172.16.10.0/28;   # also add allow/deny rules in this block (extra security)
        allow   172.16.155.0/24;
        # Forward all requests to the Python application server
        proxy_set_header Host $host;
        proxy_pass   http://10.1.1.16:6543;
        proxy_intercept_errors on;
        deny   all;
    }

}

I run into all sorts of problems. First off, inside Pyramid I was using the following code in my views/templates:

request.route_url # get route url for desired function

Now, using request.route_url with the above settings should cause https://intranet.example.com:8002/login to route to https://intranet.example.com:8002/welcome, but in reality this setup forwards the user to http://intranet.example.com/welcome. Again, this is not correct.

And if I use route_url with the NGINX proxy setting:

proxy_set_header Host $http_host;

nginx returns a 400 error:

400: The plain HTTP request was sent to HTTPS port

And a request to https://intranet.example.com:8002/ gets reverted to http://intranet.example.com/login (omitting the port and https).

Then I used the same nginx setting (Host $http_host), but thought I would change to using:

request.route_path

My theory was that this should force everything to stay on the same URL prefix, and just forward a user from https://intranet.example.com:8002/login to https://intranet.example.com:8002/welcome, but in reality this setup behaved the same way as using route_url.

With

proxy_set_header Host $http_host;

I still get an error when navigating to https://intranet.example.com:8002:

400: The plain HTTP request was sent to HTTPS port

And a request to https://intranet.example.com:8002/ still gets reverted to http://intranet.example.com/login (omitting the port and https).

Can anyone assist with the correct setup so that I can serve my application on https://intranet.example.com:8002?

EDIT:

Have also tried:

    location / {
        allow   192.168.155.0/24;
        allow   172.16.10.0/28;   # also add in allow/deny rules in this block (extra security)
        allow   172.16.155.0/24;
        # Forward all requests to python application server
        proxy_set_header Host $host:$server_port;
        proxy_pass   http://10.1.1.16:8002;
        proxy_intercept_errors on;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # root   /home/luke/ecom2/dist;
        # index  index.html;


        deny   all;
    }

Which gives the same result.


Solution

  • The issue here is, obviously, the missing port in the Location HTTP response headers that your backend produces.

    Now, why is the port missing? Most certainly, because of the following code:

    proxy_set_header Host $host;
    
    • Note that $host itself does not contain $server_port, unlike $http_host, so your backend has no way of knowing which port you meant if you just use $host by itself (see the sketch below for concrete values).

    • Note that the default setting, proxy_redirect default, expects the Location header to correspond to the value of proxy_pass in order to do its magic (according to the documentation), so your explicit Host header setting likely interferes with that logic.
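
    For illustration, here is a minimal sketch of what each variable would hold for a request to https://intranet.example.com:8002/login (the URL is taken from your setup; the values shown reflect standard nginx behaviour):

        # Assumed request: https://intranet.example.com:8002/login
        #   $host        -> intranet.example.com        (port is stripped)
        #   $http_host   -> intranet.example.com:8002   (Host header as sent by the browser)
        #   $server_port -> 8002
        proxy_set_header Host $http_host;        # or: proxy_set_header Host $host:$server_port;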


    As such, from the nginx point of view, I see multiple possible independent solutions:

    • remove proxy_set_header Host, and let proxy_redirect do its magic;
    • set proxy_set_header Host appropriately, to include the port number, e.g., using $host:$server_port or $http_host as you see fit (if that doesn't work, then perhaps the deficiency is actually within your upstream app itself, but fear not -- read below);
    • provide a custom proxy_redirect setting, e.g., proxy_redirect https://pyramid.lan/ / (equivalent to proxy_redirect https://pyramid.lan/ https://pyramid.lan:8002/), which will ensure that all the Location responses will have the proper port; the only way this wouldn't work is if your upstream does non-HTTP redirects with the missing port.
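
    As a rough example, a sketch of your 8002 server block combining the second and third options could look like the following (the upstream address, certificate paths and networks are copied from your question; normally you would pick either the Host header approach or the proxy_redirect approach rather than relying on both):

        server {
            listen       8002 default ssl;
            server_name  intranet.example.com;
            ssl_certificate        /usr/local/etc/letsencrypt/live/intranet.example.com/fullchain.pem;
            ssl_certificate_key    /usr/local/etc/letsencrypt/live/intranet.example.com/privkey.pem;

            location / {
                allow   192.168.155.0/24;
                allow   172.16.10.0/28;
                allow   172.16.155.0/24;
                deny    all;

                proxy_pass http://10.1.1.16:6543;

                # Option 2: pass a Host header that still contains the external port,
                # so the backend generates URLs with :8002.
                proxy_set_header Host $http_host;

                # Option 3: rewrite the port back into Location headers that the
                # backend emits without it (adjust the left-hand values to whatever
                # your backend actually sends).
                proxy_redirect http://intranet.example.com/  https://intranet.example.com:8002/;
                proxy_redirect https://intranet.example.com/ https://intranet.example.com:8002/;
            }
        }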