I’m developing a Python application that uses Flask, running in a Docker container on a Linux server with NGINX. The application works perfectly on my local machine, but when I deploy it on the server, I encounter the following error:
ERROR:app:Exception: Traceback (most recent call last):
  File "/app/app.py", line 32, in analyze_face
    analyzer = FaceFeatureAnalyzer()  # Create an instance here
  File "/app/face_feature_analyzer/main_face_analyzer.py", line 43, in __init__
    self.face_app = FaceAnalysis(name='antelopev2', root=self.model_root)
  File "/usr/local/lib/python3.9/site-packages/insightface/app/face_analysis.py", line 43, in __init__
    assert 'detection' in self.models
AssertionError
Here is the code:
class FaceFeatureAnalyzer:
    def __init__(self):
        self.model_root = "/root/.insightface"
        self.model_path = os.path.join(self.model_root, "models/antelopev2")
        self.zip_path = os.path.join(self.model_root, "models/antelopev2.zip")
        self.model_url = "https://github.com/deepinsight/insightface/releases/download/v0.7/antelopev2.zip"

        # Initialize FaceAnalysis
        self.face_app = FaceAnalysis(name='antelopev2', root=self.model_root)
        self.face_app.prepare(ctx_id=0, det_size=(640, 640))
I have also tried downloading the model into the default directory, but that attempt results in the same error. Here is what I additionally tried:
class FaceFeatureAnalyzer:
    def __init__(self):
        # Initialize the InsightFace model
        self.face_app = FaceAnalysis(name='antelopev2')
        self.face_app.prepare(ctx_id=0, det_size=(640, 640))
        logger.info("Initialized FaceAnalysis with model 'antelopev2'.")
What I’ve Observed and Tried

Model download and extraction logs: during startup, the antelopev2 model is downloaded and extracted to /root/.insightface/models/antelopev2. The logs confirm this:
Download completed.
Extracting /root/.insightface/models/antelopev2.zip to /root/.insightface/models/antelopev2...
Extraction completed.
However, when checking the directory, it appears empty or the program cannot detect the models.
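To pin down whether the extraction step actually produced files, a small check run right after the "Extraction completed." log line can distinguish a missing directory from an empty one. This is a hypothetical helper, not part of the original code; the directory path is the one from the logs above:

```python
import os

def describe_model_dir(model_dir):
    """Report whether model_dir exists and which .onnx files it contains.

    Returns a dict so the caller can tell a missing directory apart
    from a directory that exists but holds no model files.
    """
    if not os.path.isdir(model_dir):
        return {"exists": False, "onnx": []}
    return {
        "exists": True,
        "onnx": sorted(f for f in os.listdir(model_dir) if f.endswith(".onnx")),
    }
```

Logging `describe_model_dir("/root/.insightface/models/antelopev2")` at startup would show whether the container's view of the directory matches what the host sees.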
Manually adding the models: previously, manually downloading the antelopev2 model and placing it in /root/.insightface/models/antelopev2 resolved the issue. I also set appropriate permissions using:
chmod -R 755 /root/.insightface/models/antelopev2
After making updates to the codebase and rebuilding the Docker container, the issue reappeared.
Directory contents: the following files exist in /root/.insightface/models/antelopev2:
1k3d68.onnx
2d106det.onnx
genderage.onnx
glintr100.onnx
scrfd_10g_bnkps.onnx
These are the expected .onnx files for antelopev2.
The application works locally without any errors. The issue only arises in the Docker container on the Linux server.
Even though the files are present and permissions are set correctly, the application seems unable to detect them. How can I debug or fix this issue?
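One way to get a more useful failure than the opaque `assert 'detection' in self.models` is to verify the expected files yourself before constructing `FaceAnalysis`. The sketch below is a hypothetical guard (the file list is taken from the directory contents above); it also checks a nested `antelopev2/antelopev2` folder, since some insightface versions reportedly unzip the archive one level deeper than the loader expects, which would explain files being present yet undetected:

```python
import os

# Expected antelopev2 files, per the directory listing in the question.
EXPECTED_FILES = {
    "1k3d68.onnx", "2d106det.onnx", "genderage.onnx",
    "glintr100.onnx", "scrfd_10g_bnkps.onnx",
}

def check_antelopev2(model_dir):
    """Raise FileNotFoundError naming the missing .onnx files, if any.

    Also looks inside model_dir/antelopev2, because some insightface
    versions reportedly extract the zip into a nested directory.
    """
    present = set(os.listdir(model_dir)) if os.path.isdir(model_dir) else set()
    nested = os.path.join(model_dir, "antelopev2")
    if os.path.isdir(nested):
        present |= set(os.listdir(nested))
    missing = EXPECTED_FILES - present
    if missing:
        raise FileNotFoundError(f"{model_dir} is missing: {sorted(missing)}")
```

Calling `check_antelopev2("/root/.insightface/models/antelopev2")` in `__init__`, just before creating `FaceAnalysis`, turns the AssertionError into a message that names the exact missing file. If the guard passes but the assertion still fires, the problem is more likely the nested-directory layout or a version mismatch than the files themselves.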
Dockerfile
FROM python:3.9-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Set the working directory in the container
WORKDIR /app
# Install system dependencies including libgl1-mesa-glx and others
RUN apt-get update && apt-get install -y --no-install-recommends \
    libgl1-mesa-glx \
    libglib2.0-0 \
    g++ \
    build-essential \
    && apt-get clean && rm -rf /var/lib/apt/lists/*
# Copy the requirements file into the container
COPY requirements.txt /app/
# Install Python dependencies
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application code into the container
COPY . /app
EXPOSE 7002
# Run the Flask application
CMD ["python", "app.py"]
docker-compose.yml
version: '3.8'

services:
  flask-app:
    build:
      context: ./backend
    container_name: flask-app
    ports:
      - "7000:7000"
    environment:
      - FLASK_RUN_HOST=0.0.0.0
      - FLASK_RUN_PORT=7000
    volumes:
      - ./backend:/app
    depends_on:
      - nginx

  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx:/etc/nginx/sites-enabled
      - ./nginx-certificates:/etc/letsencrypt
I ran into the same issue today with Ubuntu 22.04 and the antelopev2 model. I switched the model to buffalo_l and the error went away:
# antelopev2 + Ubuntu 22.04: assert 'detection' in self.models
app = FaceAnalysis(name='buffalo_l', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))
I don't know why; both antelopev2 and buffalo_l work on a Mac M3 Max, but antelopev2 just won't work on Linux.
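If you want to keep antelopev2 where it works and fall back to buffalo_l where it doesn't, the retry logic can be kept separate from insightface itself. This is a hypothetical sketch: `init_with_fallback` and the `factory` callable are names I'm introducing here, and in practice the factory would be something like `lambda n: FaceAnalysis(name=n)`:

```python
def init_with_fallback(factory, names):
    """Try each model pack name in order; return (name, app) for the first
    one that loads. FaceAnalysis raises AssertionError when a pack is
    missing its detection model, so we catch broadly and keep the last
    error for the final failure message."""
    last_err = None
    for name in names:
        try:
            return name, factory(name)
        except Exception as err:
            last_err = err
    raise RuntimeError(f"no model pack could be loaded from {names}") from last_err
```

Usage would then be `name, app = init_with_fallback(lambda n: FaceAnalysis(name=n), ["antelopev2", "buffalo_l"])`, followed by `app.prepare(ctx_id=0, det_size=(640, 640))` as in the snippet above. The trade-off is that buffalo_l and antelopev2 are different model packs, so recognition embeddings from the two are not interchangeable.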