I've tried everything:
@Starlette:
from starlette.routing import Mount, Route
from starlette.staticfiles import StaticFiles

routes = [
    # Mount serving the stylesheets and scripts that end up as insecure links
    Mount("/static/", StaticFiles(directory=parent + fs + "decoration" + fs + "static"), name="static"),
    Route(....),
    Route(....),
]
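For reference, this is roughly how the templates build those static URLs (the template directory and file names here are placeholders, not my real ones):

from starlette.requests import Request
from starlette.templating import Jinja2Templates

templates = Jinja2Templates(directory="templates")

async def homepage(request: Request):
    # In the template, {{ url_for('static', path='/app.css') }} builds the asset
    # URL from the incoming request, so its scheme (http vs https) follows what
    # the app believes the request's scheme is.
    return templates.TemplateResponse("index.html", {"request": request})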
@Uvicorn:
--forwarded-allow-ips=domain.com
--proxy-headers
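These two flags are what let Uvicorn rewrite the request scheme from X-Forwarded-Proto; as far as I know, --forwarded-allow-ips expects IP addresses (or "*"), so a hostname like domain.com may not match the proxy. Roughly the same thing can be done in code with Uvicorn's proxy-headers middleware (trusting only localhost here, just as an example):

from uvicorn.middleware.proxy_headers import ProxyHeadersMiddleware

# Equivalent of --proxy-headers --forwarded-allow-ips=127.0.0.1:
# trust X-Forwarded-For / X-Forwarded-Proto only from the local nginx.
app = ProxyHeadersMiddleware(app, trusted_hosts="127.0.0.1")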
@url_for:
_external=True
_scheme="https"
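To check whether the app ever sees https at all, a throwaway debug route like this (not part of my real app, the asset path is made up) shows the scheme and the URL that url_for produces for a static file:

from starlette.requests import Request
from starlette.responses import JSONResponse
from starlette.routing import Route

async def debug_scheme(request: Request):
    # If the proxy headers are honoured, scheme should be "https" and the
    # generated static URL should start with https:// as well.
    return JSONResponse({
        "scheme": request.url.scheme,
        "static_url": str(request.url_for("static", path="/app.css")),
    })

routes.append(Route("/debug-scheme", debug_scheme))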
@nginx:
proxy_set_header Subdomain $subdomain;
proxy_set_header Host $http_host;
proxy_pass http://localhost:7000/;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $server_name;
proxy_redirect http://$http_host/ https://$http_host/;
include proxy_params;
server {
    if ($host = sub.domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name sub.domain.com;
    return 404; # managed by Certbot
}
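To confirm what actually arrives at the application behind this nginx config, another throwaway route (again only for debugging) can echo the forwarded headers:

from starlette.requests import Request
from starlette.responses import JSONResponse

async def debug_headers(request: Request):
    # With the config above, X-Forwarded-Proto should arrive as "https".
    return JSONResponse({
        "x-forwarded-proto": request.headers.get("x-forwarded-proto"),
        "x-forwarded-for": request.headers.get("x-forwarded-for"),
        "client": request.client.host if request.client else None,
    })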
If I open a .css or .js link directly, nginx serves it over https.
When I tell Firefox to allow the mixed (insecure) content, the whole page renders correctly on the production server.
Let's Encrypt works perfectly for the whole domain; there are no issues with the certificate.
The problem, after all, was passing an unquoted * instead of "*" through bash.
The shell expanded the glob, so the FORWARDED_ALLOW_IPS parameter received all the filenames in the current directory instead of the literal character "*".
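One way to avoid the shell expansion entirely is to pass the value from Python instead of the command line; a minimal sketch, assuming the app object lives in app.py (module path and host are assumptions, the port matches the proxy_pass above):

import uvicorn

# Passing the wildcard as a Python string means no shell globbing can mangle it.
uvicorn.run(
    "app:app",
    host="127.0.0.1",
    port=7000,
    proxy_headers=True,
    forwarded_allow_ips="*",
)

Quoting the asterisk on the command line (--forwarded-allow-ips="*") fixes it just as well.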