docker, reverse-proxy, docker-network

docker: split structure into useful networks


I'm not quite sure about the correct usage of docker networks.

I'm running a (single-host) reverse proxy and the containers for the application itself, but I would like to set up networks like proxy, frontend and backend - the last one for project1, assuming there could be multiple projects in the end. But I'm not even sure whether this structure is the way it should be done. I think the backend should only be accessible to the frontend, and the frontend should be accessible to the proxy.

So this is my current working structure with only one network (bridge) - which doesn't make sense:

  1. Reverse proxy (network: reverse-proxy):
    • jwilder/nginx-proxy
    • jrcs/letsencrypt-nginx-proxy-companion
  2. Database
    • mongo:3.6.2
  3. Project 1
    • one/frontend
    • one/backend
    • two/frontend
    • two/backend

So my first docker-compose looks like this:

version: '3.5'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/docker/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro

  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    networks:
      - reverse-proxy
    depends_on:
      - nginx-proxy
    volumes:
      - /var/docker/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:rw
    environment:
      NGINX_PROXY_CONTAINER: "nginx-proxy"

  mongodb:
    container_name: mongodb
    image: mongo:3.6.2
    networks:
      - reverse-proxy

volumes:
  html:

networks:
  reverse-proxy:
    external:
      name: reverse-proxy

That means I had to create the reverse-proxy network beforehand. I'm not sure if this is correct so far.
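
For reference, creating that external network beforehand looks something like this (a plain bridge network, no extra options assumed):

# create the external "reverse-proxy" network once, before docker-compose up
docker network create reverse-proxy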

The project applications - frontend containers and backend containers - are created by my CI using docker commands (not docker compose):

docker run \
  --name project1-one-frontend \
  --network reverse-proxy \
  --detach \
  -e VIRTUAL_HOST=project1.my-server.com \
  -e LETSENCRYPT_HOST=project1.my-server.com \
  -e LETSENCRYPT_EMAIL=mail@my-server.com \
  project1-one-frontend:latest

How should I split this into useful networks?


Solution

  • TL;DR: You can attach multiple networks to a given container, which lets you isolate traffic to a great degree.

    useful networks

    As a point of context, I'm inferring from the question that "useful" means there's some degree of isolation between services.

    I think the backend should only be accessible to the frontend, and the frontend should be accessible to the proxy.

    This is pretty simple with docker-compose. Just specify the networks you want at the top level, as you've already done for reverse-proxy:

    networks:
      reverse-proxy:
        external:
          name: reverse-proxy
      frontend:
      backend:
    

    Then something like this:

    version: '3.5'
    
    services:
      nginx-proxy:
        image: jwilder/nginx-proxy
        container_name: nginx-proxy
        networks:
          - reverse-proxy
        ports:
          - "80:80"
          - "443:443"
        volumes:
          ...
    
      frontend1:
        image: some/image
        networks:
          - reverse-proxy
          - backend
    
      backend1:
        image: some/otherimage
        networks:
          - backend
    
      backend2:
        image: some/otherimage
        networks:
          - backend
    
      ...
    

    Set up like this, only frontend1 can reach backend1 and backend2. I know this isn't an option, since you said you're running the application containers (frontends and backends) via docker run. But I think it's a good illustration of how to achieve roughly what you're after within Docker's networking.
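
    To sanity-check the isolation once the stack is up, you can ask Docker which containers sit on a given network. A minimal sketch, assuming the network really ends up named backend (Compose may prefix it with the project name):

    # list the containers attached to the backend network
    docker network inspect backend --format '{{range .Containers}}{{.Name}} {{end}}'
    # frontend1, backend1 and backend2 should show up here; nginx-proxy should not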

    So how can you do what's illustrated in docker-compose.yml above? I found this: https://success.docker.com/article/multiple-docker-networks

    To summarize, you can only attach one network using docker run, but you can use docker network connect <network> <container> to connect running containers to more networks after they're started.
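
    With the container name from your CI step and the network scheme above, that would look roughly like this:

    # attach the already-running frontend container to the backend network as well
    docker network connect backend project1-one-frontend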

    The order in which you create networks, run docker-compose up, or run your various containers in your pipeline is up to you. You can create the networks inside the docker-compose.yml if you like, or use docker network create and import them into your docker-compose stack as external networks. It depends on how you're using this stack, and that will determine the order of operations here.

    The guiding rule, probably obvious, is that the networks need to exist before you try to attach them to a container. The most straightforward pipeline might look like this:

    1. docker-compose up with all networks defined in the docker-compose.yml
    2. for each app container:

      docker run the container

      docker network connect the right networks (see the sketch below)
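
    A rough sketch of that pipeline as shell commands, reusing your container name and the network names from above (adjust to however your CI actually drives this):

    # 1. make sure the external network exists, then bring up the proxy stack
    docker network create reverse-proxy || true   # no-op if it already exists
    docker-compose up -d

    # 2. run each app container on the proxy-facing network first
    #    (same docker run as in your question; the -e VIRTUAL_HOST / LETSENCRYPT_* flags are omitted here)
    docker run --detach --name project1-one-frontend \
      --network reverse-proxy \
      project1-one-frontend:latest

    #    ...then connect it to the additional network(s) it needs
    #    (if "backend" is created by docker-compose, its real name may carry the
    #    compose project prefix, e.g. myproject_backend)
    docker network connect backend project1-one-frontend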