postgresql, docker, docker-compose, docker-volume

Receiving an error from docker-compose that the user must own the data directory


Every time I try to build my image, I get the following error:

The server must be started by the user that owns the data directory.

The following is my docker-compose file:

version: "3.7"

services:
  db:
    image: postgres
    container_name: xxxxxxxxxxxx
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: $POSTGRES_DB
      POSTGRES_USER: $POSTGRES_USER
      POSTGRES_PASSWORD: $POSTGRES_PASSWORD

  nginx:
    image: nginx:latest
    restart: always
    container_name: xxxxxxxxxxxx-nginx
    volumes:
      - ./deployment/nginx:/etc/nginx
    logging:
      driver: none
    depends_on: ["radio"]
    ports:
      - 8080:80
      - 8081:443

  radio:
    build:
      context: .
      dockerfile: "./deployment/Dockerfile"
    image: test-radio
    command: './manage.py runserver 0:3000'
    container_name: xxxxxxxxxxxxxxx
    restart: always
    depends_on: ["db"]
    volumes:
      - type: bind
        source: ./api
        target: /app/api
      - type: bind
        source: ./xxxxxx
        target: /app/xxxxx
    environment:
      POSTGRES_DB: $POSTGRES_DB
      POSTGRES_USER: $POSTGRES_USER
      POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      POSTGRES_HOST: $POSTGRES_HOST
      AWS_KEY_ID: $AWS_KEY_ID
      AWS_ACCESS_KEY: $AWS_ACCESS_KEY
      AWS_S3_BUCKET_NAME: $AWS_S3_BUCKET_NAME

networks:
  default:

The image is built with the following run.sh file:

#!/usr/bin/env sh

if [ ! -f .pass ]; then
    openssl rand -base64 32 > .pass
fi

#export POSTGRES_DB="xxxxxxxxxxxxxxxxx"
#export POSTGRES_USER="xxxxxxxxxxxxxx"
#export POSTGRES_PASSWORD="xxxxxxxxxxxxxxxxxxxx"
#export POSTGRES_HOST="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

export POSTGRES_DB="xxxxxxxxxxxxxxxxxx"
export POSTGRES_USER="xxxxxxxxxxxxxxxxxxxx"
export POSTGRES_PASSWORD="`cat .pass`"
export POSTGRES_HOST="db"

export AWS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_S3_BUCKET_NAME=""

echo "Your psql password is in .pass do not commit this file."
echo "The app will be available on localhost:8080 shortly"

if [ -z "$1" ]; then
    docker-compose up
else
    docker-compose up $1
fi

I'm wondering whether my error is caused by attempting to use a shell script to deploy the service on a Windows machine.


Solution

  • Details on the issue

    The behavior observed by the OP definitely comes from a UID/GID mismatch, given that the specification

    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    

    (which can be viewed as a docker-compose equivalent of docker run -v "$PWD/postgres-data:/var/lib/postgresql/data" …) bind-mounts the $PWD/postgres-data folder inside the container, giving access to its files as is (including owner/group metadata).

    Also, note that the handling of owner/group metadata between host and containers only relies on the numeric UID and GID, not on the owner and group names.
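
    One quick way to confirm such a mismatch (a sketch only, assuming the official postgres image and a Linux host; the 999:999 value in the comment is an assumption to verify) is to compare the numeric owner of the host directory with the UID/GID of the postgres user inside the container:

      # Numeric owner (UID/GID) of the bind-mounted directory on the host
      ls -lnd ./postgres-data

      # UID/GID of the postgres user inside the container
      # (999:999 in the official image at the time of writing -- check your tag)
      docker run --rm postgres id postgres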

    For more information about UIDs and GIDs in a Docker context, see also that article on Medium.

    Workarounds if the bind-mount is necessary

    For completeness, several possible solutions to work around the bind-mount UID-mismatch issue (including the most straightforward one, which consists in changing the files' UID :) are described in this answer on Stack Overflow: How to have host and container read/write the same files with Docker? The chown route is sketched just below.
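
    For instance, the chown approach amounts to something like this (a sketch only: it assumes the postgres user of the official image maps to UID/GID 999, which should be verified first):

      # Align the host directory's numeric owner with the container's postgres user
      # (999:999 is an assumption -- verify with: docker run --rm postgres id postgres)
      sudo chown -R 999:999 ./postgres-data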

    Other Solutions

    Following @ParanoidPenguin's comment, you may want to use a named volume instead of the bind mount; a minimal compose sketch is given below, followed by a few remarks.
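
    A minimal sketch of that approach (the volume name pgdata is an arbitrary choice; the other keys are taken from the OP's compose file):

      version: "3.7"

      services:
        db:
          image: postgres
          volumes:
            # "pgdata" is an arbitrary volume name (any valid name works)
            - pgdata:/var/lib/postgresql/data
          environment:
            POSTGRES_DB: $POSTGRES_DB
            POSTGRES_USER: $POSTGRES_USER
            POSTGRES_PASSWORD: $POSTGRES_PASSWORD

      volumes:
        pgdata: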

    Remarks:

    • docker run -v PATH1:PATH2 … triggers a bind-mount of PATH1 (host) to PATH2 (container) if and only if PATH1 is absolute (i.e., starts with a /) (e.g., -v "$PWD:$PWD" is a common idiom)

    • docker run -v NAME:PATH2 … mounts volume NAME to PATH2 (container) if and only if NAME does not contain any / (i.e., matches the regexp [a-zA-Z0-9][a-zA-Z0-9_.-]*).

    • even if we don't run docker volume create foo beforehand, docker run -v foo:/data --rm -it debian will create the named volume foo if need be.

    • in order to populate a named volume with files (or, conversely, to back them up), you can use an ephemeral container of an image such as debian or ubuntu, combining a bind-mount and a volume mount at the same time:

      Add a file /home/user/bar.txt to a new volume foo

        file1=/home/user/bar.txt  # initial file
        uid=2000  # target User-ID in the volume
        gid=2000  # target Group-ID in the volume
        docker pull debian
        docker run -v "$file1:$file1:ro" -v foo:/data \
          -e file1="$file1" -e uid="$uid" -e gid="$gid" \
          --rm -it debian bash -exc \
          'cp -v -- "$file1" /data/bar.txt && chown -v $uid:$gid /data/bar.txt'
        docker volume ls
      

      Back up the foo volume in a tarball

        date=$(date +'%Y%m%d_%H%M%S')
        back="backup_$date.tar.gz"
        destdir=/home/user/backup
        mkdir -p "$destdir"
        docker run -v foo:/data -v "$destdir:/backup" -e back="$back" \
          --rm -it debian bash -exc 'tar cvzf "/backup/$back" /data'