Tags: docker, nginx, flask, gunicorn

How to run nginx and Gunicorn in the same Docker container


I am trying to deploy a Python Flask application with Gunicorn and nginx. I am trying to run both Gunicorn (WSGI) and nginx in the same container, but nginx is not started. By logging into the container I am able to start nginx manually. Below is my Dockerfile:


RUN apt-get clean && apt-get -y update

RUN apt-get -y install \
    nginx \
    python3-dev \
    curl \
    vim \
    build-essential \
    procps

WORKDIR /app

COPY requirements.txt /app/requirements.txt
COPY nginx-conf  /etc/nginx/sites-available/default
RUN pip install -r requirements.txt --src /usr/local/src

COPY . .

EXPOSE 8000
EXPOSE 80
CMD ["bash", "server.sh"]

My server.sh file looks like this:


# turn on bash's job control
set -m

gunicorn --bind :8000 --workers 3 wsgi:app
service nginx start   # or: /etc/init.d/nginx start

Gunicorn is started by server.sh, but nginx is not.

My aim is to later run these containers in Kubernetes. Should I (i) run nginx and Gunicorn in separate pods, (ii) run them in the same pod as separate containers, or (iii) run them in the same container in the same pod?


Solution

  • My aim is to later run these containers in Kubernetes. Should I (i) run nginx and Gunicorn in separate pods…

    Yes, this. It is very straightforward to set up (insofar as YAML files with dozens of lines count as "straightforward"): write a Deployment and a matching (ClusterIP-type) Service for the Gunicorn backend, and then a separate Deployment and matching (NodePort- or LoadBalancer-type) Service for the nginx proxy. In the nginx configuration, use a `proxy_pass` directive pointing at the name of the Gunicorn Service as the backend host name, as sketched below.

    There are a couple of advantages to doing this. If the Python service fails for whatever reason, you don't have to restart the nginx proxy as well. If you're handling enough load that you need to scale up the application, you can run a small fixed number of lightweight nginx proxies (maybe 3 for redundancy) alongside a larger number of backends sized to the load. And when you update the application, Kubernetes will delete and recreate the Deployment-managed Pods for you; again, keeping the proxies and backends in separate Deployments means you won't have to restart the proxies if only the application code changes.
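
    As a minimal sketch of that layout (the names `flask-backend` and `nginx-proxy` and the image `my-registry/flask-app` are placeholders, and the nginx configuration is cut down to just the `proxy_pass` line), the manifests might look like:

    # backend: Deployment plus ClusterIP Service
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: flask-backend
    spec:
      replicas: 3                      # scale this number to the load
      selector:
        matchLabels: {app: flask-backend}
      template:
        metadata:
          labels: {app: flask-backend}
        spec:
          containers:
            - name: gunicorn
              image: my-registry/flask-app   # the Gunicorn-only image below
              ports:
                - containerPort: 8000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: flask-backend              # this name is the proxy's backend host name
    spec:
      type: ClusterIP
      selector: {app: flask-backend}
      ports:
        - port: 8000
          targetPort: 8000
    ---
    # proxy: Deployment (nginx config from a ConfigMap) plus LoadBalancer Service
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-proxy
    spec:
      replicas: 3                      # a small fixed number, for redundancy
      selector:
        matchLabels: {app: nginx-proxy}
      template:
        metadata:
          labels: {app: nginx-proxy}
        spec:
          containers:
            - name: nginx
              image: nginx
              ports:
                - containerPort: 80
              volumeMounts:
                - name: conf
                  mountPath: /etc/nginx/conf.d
          volumes:
            - name: conf
              configMap: {name: nginx-conf}
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-conf
    data:
      default.conf: |
        server {
          listen 80;
          location / {
            proxy_pass http://flask-backend:8000;   # the backend Service name
          }
        }
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-proxy
    spec:
      type: LoadBalancer
      selector: {app: nginx-proxy}
      ports:
        - port: 80
          targetPort: 80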

    So, to address the first part of the question:

    I am trying to deploy a Python Flask application with Gunicorn and nginx.

    In plain Docker, for similar reasons, you can run two separate containers. You could manage this with Docker Compose, which has a much simpler YAML file layout; it would look something like this:

    version: '3.8'
    services:
      backend:
        build: . # Dockerfile just installs Gunicorn, CMD starts it
      proxy:
        image: nginx
        volumes:
          - ./nginx-conf:/etc/nginx/conf.d # could build a custom image too
            # configuration specifies `proxy_pass http://backend:8000`
        ports:
          - '8888:80'
    
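    The mounted `./nginx-conf` directory has to contain at least one configuration file. A minimal sketch of `./nginx-conf/default.conf` (the file name is arbitrary; `backend` is the Compose service name, which Docker's internal DNS resolves for you):

    # ./nginx-conf/default.conf
    server {
      listen 80;
      location / {
        proxy_pass http://backend:8000;   # "backend" = Compose service name
      }
    }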

    This sidesteps all of the trouble of trying to get multiple processes running in the same container. (In your server.sh, gunicorn runs in the foreground and never exits, so the script never reaches the line that starts nginx.) You can also simplify the Dockerfile you show:

    # Dockerfile
    FROM python:3.9
    RUN apt-get update \
     && DEBIAN_FRONTEND=noninteractive \
        apt-get install --no-install-recommends --assume-yes \
        python3-dev \
        build-essential
    # (don't install irrelevant packages like vim or procps)
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    EXPOSE 8000
    # (don't need a shell script wrapper)
    CMD gunicorn --bind :8000 --workers 3 wsgi:app
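
    With the Compose file and this Dockerfile in place, `docker compose up --build` builds the backend image and starts both containers, and the application is then reachable through the nginx proxy at http://localhost:8888.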