Tags: python, docker, ubuntu, containers, fastapi

Permission denied when uploading a file with Docker and FastAPI


I'm facing an error when I try to upload a file to a FastAPI endpoint. It works fine on localhost, but I get an error when I run it in a container.

The error I get:

File "/app/main.py", line 26, in say_hello
    with open(file_path, "wb") as f:
         ^^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: 'uploads/h.png'

My upload endpoint:

from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/letter")
async def say_hello(file: UploadFile = File(...)):
    file_path = f"uploads/{file.filename}"

    # Write the uploaded bytes into the uploads/ directory
    with open(file_path, "wb") as f:
        f.write(await file.read())
    # process_image is defined elsewhere in the project
    return process_image(file_path)

My Dockerfile:

# syntax=docker/dockerfile:1

ARG PYTHON_VERSION=3.12.3
FROM python:${PYTHON_VERSION}-slim as base

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

WORKDIR /app

ARG UID=10001
RUN adduser \
    --disabled-password \
    --gecos "" \
    --home "/nonexistent" \
    --shell "/sbin/nologin" \
    --no-create-home \
    --uid "${UID}" \
    appuser

# Set the permissions for all files
RUN chown -R appuser:appgroup /app

RUN --mount=type=cache,target=/root/.cache/pip \
    --mount=type=bind,source=requirements.txt,target=requirements.txt \
    python -m pip install -r requirements.txt

# Switch to the non-privileged user to run the application.
USER appuser

COPY . .

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

I've tried setting permissions for the default user by adding a chown to the Dockerfile, but nothing changed:

# Set the permissions for all files
RUN chown -R appuser:appgroup /app

Solution

  • Permissions on Copied Folder

    I believe that you need to change the file ownership after you have copied the files across onto your image. COPY creates files owned by root unless you pass --chown, so a chown that runs before the COPY has no effect on the files it copies.

    ARG PYTHON_VERSION=3.12.3
    FROM python:${PYTHON_VERSION}-slim as base
    
    ENV PYTHONDONTWRITEBYTECODE=1
    ENV PYTHONUNBUFFERED=1
    
    WORKDIR /app
    
    ARG UID=10001
    RUN adduser \
        --disabled-password \
        --gecos "" \
        --home "/nonexistent" \
        --shell "/sbin/nologin" \
        --no-create-home \
        --uid "${UID}" \
        appuser
    
    RUN --mount=type=cache,target=/root/.cache/pip \
        --mount=type=bind,source=requirements.txt,target=requirements.txt \
        python -m pip install -r requirements.txt
    
    COPY . .
    
    # Hand the uploads directory to the non-privileged user after the copy
    RUN chown -R appuser:appuser /app/uploads
    
    EXPOSE 8000
    
    # Switch to the non-privileged user to run the application
    USER appuser
    
    CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
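
    A slightly tidier variant, if you prefer, is to set the ownership during the copy itself with COPY's --chown flag, replacing both the plain COPY and the separate RUN chown layer:

    # One step: copy the files and hand them to appuser in the same layer
    COPY --chown=appuser:appuser . .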
    

    That will result in the /app/uploads directory on any container being writable by appuser. It assumes that there is an uploads directory on the host that will be copied across onto the image.
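
    If you want to confirm the ownership took effect, you can inspect the directory on a throwaway container. The fastapi-upload tag below is just a placeholder for whatever you tag your image with:

    docker build -t fastapi-upload .
    # Should print the directory owned by appuser, e.g. drwxr-xr-x ... appuser appuser
    docker run --rm fastapi-upload ls -ld /app/uploads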

    There is a problem with this, though: any files uploaded onto a running container will only exist within that container and will be lost when the container is removed.

    Volume Mount

    If you want to retain those files, then you probably want to do a volume mount when running the image. You'd pass something like -v ./uploads/:/app/uploads when you execute docker run. However, in this case you need to ensure that the uploads/ directory on the host is writable from the container too. One way to do this is to give user, group, and other write access to the folder on the host:

    chmod a+w uploads/
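
    Putting it together, a full run command would look something like this, again assuming the fastapi-upload placeholder tag ("$(pwd)" simply makes the host path absolute, which every Docker version accepts for bind mounts):

    # Publish the app's port and bind-mount the host's uploads/ over /app/uploads
    docker run -p 8000:8000 -v "$(pwd)/uploads:/app/uploads" fastapi-upload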
    

    There are probably better ways to do this, but this is certainly an option that works.
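
    For example, if you'd rather not open the folder to everyone, you could instead hand the host directory to the UID the container runs as (10001, per the ARG UID in the Dockerfile above), since bind mounts match permissions by numeric UID:

    # Match the host directory's owner to the container's appuser UID
    sudo chown -R 10001 uploads/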