Tags: docker, nginx, docker-compose, reverse-proxy

docker-compose nginx proxy_pass to upstream containers not behaving as expected


I'm trying to get a basic reverse proxy working to handle multiple websites, based on [this tutorial][1], but adapting it to use a single docker-compose file and proxy_pass to upstream containers. This seems to be the most concise approach, since this is my learning/testing server and I will be starting and stopping containers often. I want to get this locked down before I start adding more complex app containers. I'm not sure in which part of the configuration I should be forwarding ports, because most of the questions and tutorials online don't use upstream containers.

EDIT - the default server was not listening on 443; fixing this removed one point of confusion. Now I only get the expected index.html from x.x.x.x/ and the reverse proxy's custom 404 page from x.x.x.x/site1 or x.x.x.x/site2 (or anything else).

From what I've read, ports are handled internally by Docker as long as the containers are linked (on the same Docker network), and even the expose statement is not required in docker-compose.yml as long as the containers are started with docker-compose up.
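A quick way to sanity-check that claim (assuming the containers defined below are already running) is to curl one container from another by name and internal port; no ports: or expose: entries are involved in this container-to-container request:

# From inside the proxy container, reach the website container by name
# on its internal port 80 (the default for the nginx image).
sudo docker exec reverse-proxy curl -s http://website1-container:80/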

And I've tried forwarding custom ports to the containers with this in docker-compose.yml

ports:
  - 8081:443

and this in nginx default.conf

upstream docker-site1 {
    server website1-container:8081;
}

But this gives me 502 Bad Gateway

I am using named containers and external networks to keep names static, in an effort to keep inter-container networking separate from the host, and to take advantage of Docker features in that regard.
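For reference, external networks like these are not created by docker-compose; they have to exist before docker-compose up. A sketch of how they might have been created (assumed commands, not shown in the original post):

# Create the externally managed networks once, before docker-compose up.
sudo docker network create public
sudo docker network create website1
sudo docker network create website2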

I've spent two days on this now and I really need some direction to keep from going around in circles!

EDIT - still going around in circles. Updated default.conf thanks to lmsec, and also added /site1 to the volume path in docker-compose.yml.

My docker-compose.yml (in the top level directory) EDITED - my best working config

version: '3.6'
services:
  proxy:
    build: ./proxy/
    container_name: reverse-proxy
    hostname: reverse-proxy

    networks:
      - public
      - website1
      - website2

    ports:
      - 80:80
      - 443:443


  site1_app:
    build: ./site1/
    volumes:
      - ./site1/html:/usr/share/nginx/html/site1
    container_name: website1-container
    hostname: website1-container
    networks:
      - website1
 
  site2_app:
    build: ./site2/
    volumes:
      - ./site2/html:/usr/share/nginx/html/site2
    container_name: website2-container
    hostname: website2-container
    networks:
      - website2

networks:
  public:
    external: true
  website1:
    external: true
  website2:
    external: true

Dockerfile in ./proxy/

FROM nginx:1.20-alpine

COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./backend-not-found.html /var/www/html/backend-not-found.html
COPY ./index.html /var/www/html/index.html

#  Proxy and SSL configurations
COPY ./includes/ /etc/nginx/includes/
# Proxy SSL certificates
COPY ./ssl/ /etc/ssl/certs/nginx/

The website Dockerfiles only contain FROM nginx:1.20-alpine.

default.conf in ./proxy/ EDITED - my best working config so far; JS, CSS and images don't link (see the note after this config)

# Default
server {
    # listen on port 80 (http)
    listen 80 default_server;
    server_name _;
    
    location / {
        # redirect any requests to the same URL but on https
        return 301 https://$host$request_uri;
    }
}
    
server {
  listen 443 ssl http2 default_server;

  server_name _;
  root /var/www/html;

  charset UTF-8;

  # Path for SSL config/key/certificate
  ssl_certificate /etc/ssl/certs/nginx/proxy.crt;
  ssl_certificate_key /etc/ssl/certs/nginx/proxy.key;
  include /etc/nginx/includes/ssl.conf;


  error_page 404 /backend-not-found.html;
  location = /backend-not-found.html {
    allow   all;
  }

  location / {
    index index.html;
  }
  location /site1 {
    include /etc/nginx/includes/proxy.conf;
    proxy_pass http://website1-container;
  }
  location /site2 {
    include /etc/nginx/includes/proxy.conf;
    proxy_pass http://website2-container;
  }


  access_log off;
  log_not_found off;
  error_log  /var/log/nginx/error.log error;
}
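A guess about why the JS/CSS/images don't link (an assumption, since the site HTML isn't shown): if the pages reference assets by absolute paths such as /style.css, the browser requests them from the proxy root, which matches the location / block above instead of /site1, so they never reach the backend. One illustrative workaround, if site1's assets live under a dedicated prefix (the /assets/ path here is hypothetical), is an extra location in the 443 server block:

  # Hypothetical: map a bare asset prefix onto site1's subdirectory
  # on the backend.
  location /assets/ {
    include /etc/nginx/includes/proxy.conf;
    proxy_pass http://website1-container/site1/assets/;
  }

Referencing the assets with /site1/-prefixed (or relative) paths in the HTML avoids the problem without extra proxy configuration.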

proxy.conf in ./proxy/includes/

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_request_buffering off;
proxy_http_version 1.1;
proxy_intercept_errors on;

Each website container has its own network, which it shares with the proxy container. docker network inspect website1 shows both containers attached:

 {
    "Name": "website1",
    "Id": "9477470a8689d08776b38c4315882caff75573b7244f77091aa5e5438804ce36",
    "Created": "2021-06-21T02:52:25.402118801Z",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
        "Driver": "default",
        "Options": {},
        "Config": [
            {
                "Subnet": "192.168.160.0/20",
                "Gateway": "192.168.160.1"
            }
        ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
        "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
        "7c1a8b62864642afd5366ef88d762e4c5450eee02acb8c3f1890444b59379340": {
            "Name": "website1-container",
            "EndpointID": "f04d96343737574ca869270954461774f731851b781120119c21e02c0aa9968e",
            "MacAddress": "02:42:c0:a8:a0:02",
            "IPv4Address": "192.168.160.2/20",
            "IPv6Address": ""
        },
        "a88326952fb5f25f9084eb038f22f56b7331032a5ba71848ea6ada677a2ed998": {
            "Name": "reverse-proxy",
            "EndpointID": "b0c97c7f8dfe0febddbd6668481a009cce0c4f20dae3c3d3280dad0069c90394",
            "MacAddress": "02:42:c0:a8:a0:03",
            "IPv4Address": "192.168.160.3/20",
            "IPv6Address": ""
        }
    },
    "Options": {},
    "Labels": {}
}

I can access the website containers through this network and even get index.html with curl:

sudo docker exec reverse-proxy curl 192.168.160.2/site1/index.html

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0<!DOCTYPE html>
<html>
  <head>
    <title>Site 1</title>
  </head>
  <body>
    <h1>This is a sample "site1" response</h1>
  </body>
</html>
100   142  100   142    0     0  20285      0 --:--:-- --:--:-- --:--:-- 23666
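Two follow-up checks that may help narrow things down (assumed commands, reusing the names and ports defined above):

# Same request, but via Docker's DNS name instead of the raw IP.
sudo docker exec reverse-proxy curl -s http://website1-container/site1/index.html

# Through the proxy from the host: -k skips verification of the
# self-signed certificate, -L follows the 80 -> 443 redirect.
curl -skL http://localhost/site1/index.html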

I'm marking this question as closed. I have come to the conclusion that recent versions of Docker do not require any special port forwarding when using proxy_pass to a Docker container, although if required it can be done in docker-compose and nginx's default.conf, as lmsec's answer explains.


Solution

  • [..] in which part of the configuration I should be forwarding ports [..] using upstream containers.

    You may do it in the upstream definition (excerpt from the nginx docs below):

    upstream backend {
        server backend1.example.com       weight=5;
        server backend2.example.com:8080;
        # [..]
    }
    

    [..] when I request my server's root x.x.x.x, I get website1 and when I request x.x.x.x/site1 I get a 404 error.

    You did not define a default_server for HTTPS (443), so the first server block is used as the default for 443. (Not sure why you get a 404.)

    never got a response from website2

    You'll need to request site2 to get a response from it (because of server_name site2;). For testing purposes, you can put it in your hosts file.

    127.0.0.1     site1
    127.0.0.1     site2
    

    Here are some other tips to get started faster with nginx as a proxy:

    • server_name acts like a request filter;
    • use proxy_pass http://docker-site1/; (with the trailing /) when you want the part of the request URI that matches the location replaced by that /; without a URI part, the original request URI is passed to the backend unchanged;
    • you may proxy to different hosts or upstreams based on the URI (example below: /site2 and /site3).
    server {
      # Filter requests having 'Host: site1' (ignore the others)
      server_name site1;
    
      location / {
        # Send everything beginning with '/' to docker-site1
        proxy_pass http://docker-site1/;
      }
    
      location /site2/ {
        # Send everything beginning with '/site2/' to docker-site2
        #   removing the leading `/site2`
        proxy_pass http://docker-site2/;
      }
    
      location /site3/ {
        # Send everything beginning with '/site3/' to docker-site3
        #   keeping the leading `/site3`
        proxy_pass http://docker-site3/site3/;
      }
    }
    
    server {
      # do something else if the requested Host is site2
      server_name site2;  
    }
    

    Of course, this also works without upstream, with your servers' addresses in the proxy_pass instead of the upstream identifier.
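    For instance, a sketch reusing the container name from the compose file above (assuming the backend's default HTTP port 80):

    location /site1/ {
      # Proxy straight to the container by name, no upstream block needed.
      proxy_pass http://website1-container:80/;
    }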


    EDIT - Bonus: Docker(-compose) ports and networking

    site1_app:
      ports:
        - 8081:443
    
    • from "outside" Docker, you'll access site1_app's 443 port from localhost:8081 (or x.x.x.x:8081)
    • from another container on the same network, you'll access site1_app's 443 port from site1_app:443* (or https://site1_app)

    (Let's imagine site1_app also listens on port 80):

    • from "outside" Docker, you can't access site1_app's 80 port: it is not forwarded (here, only 443 is)
    • from another container on the same network, you'll access site1_app's 80 port from site1_app:80* (or http://site1_app); both access paths are illustrated after the footnote below

    *Not sure this works with docker-compose's version: '2', but it does with version: '3.9'.
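    A quick illustration of both paths (assumed commands; some-other-container stands for any container attached to the same network, and -k only skips verification of a self-signed certificate):

    # From the host: use the published port.
    curl -k https://localhost:8081/

    # From another container on the same network: use the service name
    # and the container's internal port.
    docker exec some-other-container curl -k https://site1_app:443/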

    The following lines you wrote allow you to call website1-container instead of site1_app:

    container_name: website1-container 
    hostname: website1-container
    

    So if you do:

    # 3
    upstream docker-site1 {
        server website1-container:8081;
    }
    server {
      # 1
      listen 80;
      listen 443 ssl http2;
      server_name site1;
    
      # [..] SSL config/key/certificate
    
      location / {
        # 2
        proxy_pass http://docker-site1/;
      }
    }
    
    

    Supposing you're setting the request header to Host: site1 (thanks to your hosts file or by forging the request headers yourself):

    1. the request, HTTP or HTTPS, arrives at the site1 server block
    2. it gets proxied to http://docker-site1/ (plain HTTP)
    3. docker-site1 is resolved as the server group containing only one server: website1-container:8081
    4. nginx therefore connects to port 8081 of the container, where nothing is listening (8081 is only the port published on the host), hence the 502
    5. even if something were listening there, site1_app probably wants HTTPS on its 443 port.

    So you should:

    1. use the container's internal port instead of the published one (see the sketch below),
    2. check that you send HTTP (resp. HTTPS) to a port expecting HTTP (resp. HTTPS).
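    A minimal sketch of such a correction (an assumption: the backend is the stock nginx image listening for plain HTTP on port 80 inside the Docker network; adjust if it really terminates TLS itself):

    # Hypothetical corrected upstream: target the port the container
    # listens on internally, not the host-published 8081.
    upstream docker-site1 {
        server website1-container:80;
    }

    server {
      listen 443 ssl http2;
      server_name site1;

      # [..] SSL config/key/certificate

      location / {
        # Plain HTTP to a port that actually expects plain HTTP.
        proxy_pass http://docker-site1/;
      }
    }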