Tags: django, docker, docker-compose, containers

Docker socket is not accessible in Dockerfile.prod


I have the following docker-compose file, which builds and starts four containers. One of them is a Django container, for which I mount /var/run/docker.sock as a volume so that the Django container can access the host's Docker engine.

version: '3.8'

services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
      - /var/run/docker.sock:/var/run/docker.sock
    expose:
      - 8000
    env_file:
      - ./.env.prod
    depends_on:
      - postgresdb
    restart: always

  postgresdb:
    container_name: postgresdb
    image: timescale/timescaledb:latest-pg11
    volumes:
      - ./:/imports
      - postgres_data:/var/lib/postgresql/data/
    command: 'postgres -cshared_preload_libraries=timescaledb'
    ports:
      - "5432:5432"
    env_file:
      - ./.env.prod.db
    restart: always

  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 80:80
    depends_on:
      - web
    restart: always
    
  volttron1:
    container_name: volttron1
    hostname: volttron1
    build:
      context: ./volttron
      dockerfile: Dockerfile
    image: volttron/volttron:develop
    volumes:
      - ./volttron/platform_config.yml:/platform_config.yml
      - ./volttron/configs:/home/volttron/configs
      - ./volttron/volttronThingCerts:/home/volttron/volttronThingCerts
    environment:
      - CONFIG=/home/volttron/configs
      - LOCAL_USER_ID=1000
    network_mode: host
    restart: always
    mem_limit: 700m
    cpus: 1.5

volumes:
  postgres_data:
  static_volume:
  media_volume:

The content of Dockerfile.prod for the Django web container is the following:

###########
# BUILDER #
###########

# pull official base image
FROM python:3.9.6-alpine as builder

# set work directory
WORKDIR /usr/src/app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install psycopg2 dependencies
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev

RUN apk add libc-dev
RUN apk add --update-cache
RUN apk add --update alpine-sdk && apk add libffi-dev openssl-dev && apk --no-cache --update add build-base

# lint
RUN pip install -U pip
RUN pip install flake8==3.9.2
COPY . .
RUN flake8 --ignore=E501,F401 ./hello_django

# install dependencies
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt


#########
# FINAL #
#########

# pull official base image
FROM python:3.9.6-alpine

# create directory for the app user
RUN mkdir -p /home/app

# create the app user
RUN addgroup -S app && adduser -S app -G app

# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
RUN mkdir $APP_HOME/mediafiles
WORKDIR $APP_HOME

# install dependencies
RUN apk update && apk add libpq
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
RUN pip install --no-cache /wheels/*

# copy entrypoint.prod.sh
COPY ./entrypoint.prod.sh .
RUN sed -i 's/\r$//g'  $APP_HOME/entrypoint.prod.sh
RUN chmod +x  $APP_HOME/entrypoint.prod.sh

# copy project
COPY . $APP_HOME

# chown all the files to the app user
RUN chown -R app:app $APP_HOME
RUN chmod 666 /var/run/docker.sock

# change to the app user
USER app

# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]

The problem is the statement RUN chmod 666 /var/run/docker.sock, which raises the following error:

chmod: cannot access "/var/run/docker.sock": No such file or directory

But why am I getting this error, when I have mounted /var/run/docker.sock in the docker-compose.yml file?


Solution

  • You're trying to chmod the docker.sock file at image build time, but volumes are only mounted when the container runs. During docker build the socket simply does not exist in the image's filesystem, which is why chmod reports "No such file or directory". Remove the RUN chmod line from the Dockerfile; if permissions are a problem, change them on the socket file on the host, or grant the container user access to the socket at run time.
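One way to apply that fix without making the host socket world-writable is Compose's group_add option. A minimal sketch of the web service, assuming the rest of the Compose file from the question stays as-is; the GID "999" is an assumption — use the value reported by `stat -c '%g' /var/run/docker.sock` on your host:

```yaml
# docker-compose.yml — web service only (sketch)
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    # Supplementary group so the non-root "app" user created in
    # Dockerfile.prod can read/write the mounted socket at run time.
    # "999" is the docker group's GID on the HOST (assumed here).
    group_add:
      - "999"
```

With this in place the `RUN chmod 666 /var/run/docker.sock` line can simply be deleted from Dockerfile.prod. The quick alternative, running `sudo chmod 666 /var/run/docker.sock` on the host, also works, but it gives every local user root-equivalent access to the Docker daemon.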