Tags: django, docker, docker-compose, django-storage

With docker compose, how do I access a service internally and externally using the same address?


My problem boils down to this: I have two services in docker compose: app and storage. I'm looking for a way to access the storage service (port 9000) from inside app and from outside using the same address.

app is a Django app using django-storages with S3 backend. storage is a minio server (S3 compatible, used only for development).

From app, I can access storage using http://storage:9000. From outside docker, I can access storage at http://localhost:9000, or http://0.0.0.0:9000, or even at http://192.168.xxx.yyy (using a different device on the network). No surprises there.

However, when the URL is generated, I don't know whether it's going to be used internally or externally (or both).

docker-compose.yml

services:

  app:
    build: backend/
    ports:
      - "8000:8000"
    volumes:
      - ./backend/src:/app/src
    command: /usr/local/bin/python src/manage.py runserver 0.0.0.0:8000

  storage:
    image: minio/minio:RELEASE.2019-06-19T18-24-42Z
    volumes:
      - storage:/data
    environment:
      MINIO_ACCESS_KEY: "DevelopmentAccessKey"
      MINIO_SECRET_KEY: "DevelopmentSecretKey"
    ports:
      - "9000:9000"
    command: minio server /data

volumes:
  storage:

I've looked into changing the backend to yield endpoint URLs depending on the context, but that is far from trivial (and it would only be for development; production uses external S3 storage, and I'd like to keep the two setups as similar as possible).

I've played around with docker-compose network configs but I cannot seem to make this work.

Any thoughts on how to approach this in docker-compose?

Additional info:

I've played around with host.docker.internal (and gateway.docker.internal), but to no avail. host.docker.internal resolves to 192.168.65.2; I can access storage from app with that IP, but from the browser 192.168.65.2:9000 just times out.

But it seems that using my computer's external IP works. If I use 192.168.3.177:9000, I can access storage from app, from the browser, and even from external devices (perfect!). However, this IP is not fixed and obviously not the same for my colleagues, so it seems all I need is a way to assign it dynamically when doing docker-compose up.


Solution

  • It's been a while but I thought I'd share how I ended up solving this issue for my situation, should anyone ever come across a similar problem. Relevant XKCD

    Practical solution

    After spending quite some time trying to make it work with Docker alone (see below), I ended up going down the practical road and fixing it on the Django side of things.

    Since I'm using Django Rest Framework to expose the URLs of objects in the store, I had to patch the default object URLs produced by the django-storages S3 backend, swapping the host when developing locally. Internally, Django uses the API key to connect directly to the object store, but externally the files are only accessible through signed URLs (private bucket). And because the hostname can be part of what is signed, it needs to be set correctly before the signature is generated (otherwise a dirty find-and-replace on the hostname would have sufficed).

    Three situations I had to patch:

    • signed urls (for viewing in the browser)
    • signed download urls (to provide a download button)
    • presigned post urls (for uploading)

    I wanted to use the host of the current request as the host of the object links (but on port 9000 for Minio). The advantages of this are:

    • works with localhost, 127.0.0.1, and whatever IP address my machine is assigned, so I can use localhost on my machine and my 192.168.x.x address from a mobile device for testing, without changing code
    • requires no setup for different developers
    • doesn't require a container restart when ip is changed

    The situations above were implemented as follows:

    # dev settings, should be read from env for production etc.
    
    AWS_S3_ENDPOINT_URL = 'http://storage:9000'
    AWS_S3_DEV_ENDPOINT_URL = 'http://{}:9000'
    
    # imports needed by the helpers below (they live outside of settings.py)
    from django.conf import settings
    from rest_framework import serializers
    from storages.backends.s3boto3 import S3Boto3Storage
    
    
    def get_client_for_presigned_url(request=None):
        # specific client for presigned urls
        endpoint_url = settings.AWS_S3_ENDPOINT_URL
    
        if request and settings.DEBUG and settings.AWS_S3_DEV_ENDPOINT_URL:
            endpoint_url = settings.AWS_S3_DEV_ENDPOINT_URL.format(request.META.get('SERVER_NAME', 'localhost'))
    
        storage = S3Boto3Storage(
            endpoint_url=endpoint_url,
            access_key=settings.AWS_ACCESS_KEY_ID,
            secret_key=settings.AWS_SECRET_ACCESS_KEY,
        )
        return storage.connection.meta.client
    
    class DownloadUrlField(serializers.ReadOnlyField):
        # example usage as pre-signed download url
        def to_representation(self, obj):
            url = get_client_for_presigned_url(self.context.get('request')).generate_presigned_url(
                "get_object",
                Params={
                    "Bucket": settings.AWS_STORAGE_BUCKET_NAME,
                    "Key": str(obj.file_object),  # file_object is key for object store
                    "ResponseContentDisposition": f'filename="{obj.name}"', # name is user readable filename
                },
                ExpiresIn=3600,
            )
            return url
    
    # similar for normal url and pre-signed post
    
    

    This gives me and other developers an easy-to-use, local, offline-available development object store, at the price of a small check in the code.
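
    For completeness, the pre-signed post case mentioned in the code comment above could look roughly like this. This is only a sketch: the field name UploadUrlField and re-using obj.file_object as the object key are illustrative assumptions, not code from my project.

    class UploadUrlField(serializers.ReadOnlyField):
        # illustrative counterpart for uploads: returns the url and form fields
        # the client needs to POST a file directly to the (Minio) bucket
        def to_representation(self, obj):
            client = get_client_for_presigned_url(self.context.get('request'))
            return client.generate_presigned_post(
                Bucket=settings.AWS_STORAGE_BUCKET_NAME,
                Key=str(obj.file_object),
                ExpiresIn=3600,
            )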

    Alternative solution

    I quickly found out that to fix it on the Docker side, what I really needed was to get the IP address of the host machine (not the one Docker exposes internally) and use that to create links to my Minio storage. As I mentioned in my question, this is not the same as the host.docker.internal address.

    Solution: use an environment variable to pass in the host IP.

    docker-compose.yml

    services:
     
      app:
        build: backend/
        ports:
          - "8000:8000"
        environment:
          HOST_IP: $DOCKER_HOST_IP
        volumes:
          - ./backend/src:/app/src
        command: /usr/local/bin/python src/manage.py runserver 0.0.0.0:8000
    
      storage:
        # ... same as in the question
    

    settings.py

        import os

        AWS_S3_ENDPOINT_URL = f'http://{os.environ["HOST_IP"]}:9000'
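
    If HOST_IP is not guaranteed to be set, a small fallback keeps the app booting without it. This is a sketch of the idea rather than what I actually ran; falling back to the internal storage hostname is an assumption, and external links will then only work from inside the compose network:

        import os

        # sketch: fall back to the in-network hostname when HOST_IP isn't provided
        AWS_S3_ENDPOINT_URL = 'http://{}:9000'.format(os.environ.get('HOST_IP', 'storage'))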
    

    When the environment variable DOCKER_HOST_IP is set while calling docker-compose up, this creates URLs that use that IP, properly signed. There are several ways to get the environment variable to docker-compose:

    • set it up in .bash_profile
    • pass it to the command with the -e flag
    • set it up in PyCharm

    For .bash_profile I used the following shortcut:

    alias myip='ifconfig | grep "inet " | grep -v 127.0.0.1 | cut -d\  -f2'
    export DOCKER_HOST_IP=$(myip)
    

    For PyCharm (very useful for debugging) the setup was a little trickier, since the default environment variables cannot be dynamic. You can, however, define a script that runs 'Before launch' for a run configuration. I created a command that sets the environment variable the same way as in .bash_profile, and it turns out PyCharm keeps that environment when running the docker-compose command, making it work the way I want.
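
    For illustration, here is a minimal sketch of what such a 'Before launch' helper could look like. This is an assumption-laden example rather than my exact script: it detects the outbound IP with a throwaway UDP socket (no packets are sent) and writes it to a .env file, which docker-compose reads from the project directory by default.

    # detect_host_ip.py - illustrative 'Before launch' helper
    import socket

    def current_host_ip() -> str:
        # connecting a UDP socket to a public address makes the OS pick the
        # outbound interface and its IP; nothing is actually transmitted
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.connect(("8.8.8.8", 80))
            return s.getsockname()[0]

    if __name__ == "__main__":
        # docker-compose substitutes variables from a .env file next to docker-compose.yml
        with open(".env", "w") as f:
            f.write(f"DOCKER_HOST_IP={current_host_ip()}\n")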

    Issues:

    • need to restart the container when the IP changes (wifi off/on, sleep/wake in a different location, ethernet unplugged)
    • needs a dynamic value in the environment, which is finicky to set up properly
    • doesn't work when not connected to a network (no ethernet and no wifi)
    • cannot use localhost, need to use the current IP address
    • only works for one IP address (so you need to pick one when using both ethernet and wifi)

    Because of these issues I ended up going with the practical solution.