
How to give static IPs to swarm services


The use case is:

I have multiple web applications deployed on a Docker Swarm cluster. Each web application has its own Nginx, and there is one proxy Nginx that forwards the traffic of each web app to its Nginx based on the server_name.

This is the configuration of the Nginx proxy:

server {
    listen 80;
    server_name www.app1.com;

    location / {
        # app_1 is the Swarm service name; Docker's DNS resolves it once,
        # when nginx loads this configuration
        proxy_pass http://app_1;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_http_version 1.1;
    }
}

server {
    listen 80;
    server_name www.app2.com;

    location / {
        # same for app_2
        proxy_pass http://app_2;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_http_version 1.1;
    }
}

Both app_1 and app_2 refer to their service names in the Docker stack:

version: '3.9'

services:
  app_1:
    {...}
  app_2:
    {...}

In the best case, both app_1 and app_2 are up, the Docker DNS resolves their IP addresses, and both apps run successfully. However, when one of them crashes, the Docker DNS can no longer resolve that service name to an IP address because the service doesn't exist anymore (it crashed), so the proxy Nginx also crashes and all web applications go down. The goal is to make the apps independent, so that a crash in one of them doesn't affect the others.

To achieve that, there are some solutions:

  1. Give each service a static IP, regardless of whether it's running or not (I think this is the best option).
  2. Configure Nginx to resolve the IP address at runtime through the Docker DNS resolver (I think it will affect performance); a sketch of this is shown after the question below.

The question is how to achieve the first point, or find an alternative that avoids the second point.
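
For reference, the second point would look roughly like this in the proxy configuration (a sketch, assuming the services share an overlay network so Docker's embedded DNS is reachable at 127.0.0.11):

server {
    listen 80;
    server_name www.app1.com;

    # ask Docker's embedded DNS at runtime and re-resolve every 10s
    resolver 127.0.0.11 valid=10s;

    location / {
        # putting the name in a variable forces nginx to resolve it per
        # request through the resolver above instead of once at startup,
        # so a missing app_1 no longer stops nginx from starting
        set $app1_upstream http://app_1;
        proxy_pass $app1_upstream;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

With this, requests to a crashed app fail for that host only; the proxy itself stays up.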


Solution

  • I found another solution:

    I wrote a little script that waits until all dependencies are up and running and then runs the nginx_proxy.

    this is the script
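
    A minimal sketch of what such a wait script can look like (it assumes getent is available in the Nginx image, which is the case for the Debian-based official image, and it simply polls every two seconds):

    #!/bin/sh
    # wait_for_dependencies: block until every service name listed in
    # $DEPENDENCIES (separated by |) resolves in Docker's DNS, then exec
    # the real command that was passed as arguments.
    set -e

    # turn "nginx_1|nginx_2" into "nginx_1 nginx_2" so we can loop over it
    deps=$(echo "$DEPENDENCIES" | tr '|' ' ')

    for dep in $deps; do
        echo "waiting for $dep ..."
        # getent fails as long as the service has no running task to resolve to
        until getent hosts "$dep" > /dev/null 2>&1; do
            sleep 2
        done
        echo "$dep is up"
    done

    # all dependencies resolve; hand over to the real entrypoint/command
    exec "$@"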

    also, add this to your nginx Dockerfile:

    # copy the wait script into the image and make it executable
    COPY --chmod=500 ./conf/wait_for_dependencies /wait_for_dependencies

    # wait for the dependencies, then start nginx through the stock entrypoint
    CMD /wait_for_dependencies /docker-entrypoint.sh nginx -g 'daemon off;'
    

    and pass the dependencies as an environment variable separated by |:

      nginx_proxy:
        {...}
        environment:
          - DEPENDENCIES=nginx_1|nginx_2
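
    With this in place, the nginx_proxy container keeps polling until nginx_1 and nginx_2 both resolve in Docker's DNS and only then starts Nginx, so the proxy no longer crashes at startup just because one of its backends isn't up yet.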