I'm building a dockerized app that has a frontend (Vite + TypeScript) and a backend (Flask), separated into two Docker images. I can build and run everything with Docker Compose and access the app without any issues on my computer. When I access it from another device on the same network, however, the backend doesn't work.
App Description
This is a simple application where the date and time are sent from the backend to the frontend, where they are displayed. If the date is displayed, then the app was "successful".
Issue
When I try to connect to the app from my phone on the same network, only the frontend works. I believe this is because my fetch code specifies localhost:port, as shown below (the container runs in production mode, so port 8000 is used):
export const host =
  import.meta.env.MODE !== 'production'
    ? 'http://localhost:5001'
    : 'http://localhost:8000'

export const get = (
  route: string,
  callback: (response: object) => void,
  errorCallback?: (response: object) => void,
) => {
  fetch(`${host}/${route}`)
    .then((response) => response.json())
    .then(callback)
    .catch((error) =>
      errorCallback ? errorCallback(error as Error) : console.error(error),
    )
}
When I change host to use the IP address of my computer instead, I'm able to access the app from my phone successfully (i.e., the date and time are displayed):
export const host =
  import.meta.env.MODE !== 'production'
    ? 'http://localhost:5001'
    : 'http://<ipaddress of computer>:8000'
I thought that, given how I set up the Docker Compose YAML, it shouldn't matter that the host was always localhost, but apparently it does.
Question: What is the correct way to set up my host variable and/or Docker Compose YAML so that I don't have to hard-code my computer's IP address in order to access the app from another device on the same network?
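For what it's worth, my current understanding of the root cause: the fetch runs in the phone's browser, so localhost resolves to the phone itself, not the computer hosting the containers. One workaround I considered is deriving the backend URL from the location the browser actually loaded the page from, instead of baking in a hostname. This is just an illustrative sketch (the backendHost helper is hypothetical, not part of my code):

```typescript
// Hypothetical helper: build the backend base URL from the location of the
// page the browser actually loaded, so no hostname is hard-coded into the
// production bundle.
function backendHost(
  loc: { protocol: string; hostname: string },
  port: number,
): string {
  return `${loc.protocol}//${loc.hostname}:${port}`
}

// In the browser this would be called as backendHost(window.location, 8000).
// A phone that loaded the page from http://192.168.1.5:7023 would then fetch
// from http://192.168.1.5:8000 rather than from its own localhost.
```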
Relevant docker files
compose.yaml
services:
  frontend:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      NODE_ENV: production
      TZ: America/New_York
    ports:
      - 7023:7023
    networks:
      - backend
    depends_on:
      - backend
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    environment:
      NODE_ENV: production
      TZ: America/New_York
    ports:
      - 8000:8000
    networks:
      - frontend
networks:
  frontend:
  backend:
./Dockerfile
# syntax=docker/dockerfile:1
# Comments are provided throughout this file to help you get started.
# If you need more help, visit the Dockerfile reference guide at
# https://docs.docker.com/go/dockerfile-reference/
ARG NODE_VERSION=18.18.0
################################################################################
# Use node image for base image for all stages.
FROM node:${NODE_VERSION}-alpine as base
# Set working directory for all build stages.
WORKDIR /usr/src/app
################################################################################
# Create a stage for installing production dependencies.
FROM base as deps
# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.npm to speed up subsequent builds.
# Leverage bind mounts to package.json and package-lock.json to avoid having to copy them
# into this layer.
RUN --mount=type=bind,source=package.json,target=package.json \
    --mount=type=bind,source=package-lock.json,target=package-lock.json \
    --mount=type=cache,target=/root/.npm \
    npm ci --omit=dev
################################################################################
# Create a stage for building the application.
FROM deps as build
# Download additional development dependencies before building, as some projects require
# "devDependencies" to be installed to build. If you don't need this, remove this step.
RUN --mount=type=bind,source=package.json,target=package.json \
    --mount=type=bind,source=package-lock.json,target=package-lock.json \
    --mount=type=cache,target=/root/.npm \
    npm ci
# Copy the rest of the source files into the image.
COPY . .
# Run the build script.
RUN npm run build
################################################################################
# Create a new stage to run the application with minimal runtime dependencies
# where the necessary files are copied from the build stage.
FROM base as final
# Use production node environment by default.
ENV NODE_ENV production
# Run the application as a non-root user.
USER node
# Copy package.json so that package manager commands can be used.
COPY package.json .
# Copy the production dependencies from the deps stage and also
# the built application from the build stage into the image.
COPY --from=deps /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/dist ./dist
# Expose the port that the application listens on.
EXPOSE 7023
# Run the application.
CMD npx http-server ./dist -p 7023
./backend/Dockerfile
# syntax=docker/dockerfile:1
# Comments are provided throughout this file to help you get started.
# If you need more help, visit the Dockerfile reference guide at
# https://docs.docker.com/go/dockerfile-reference/
ARG PYTHON_VERSION=3.11.4
FROM python:${PYTHON_VERSION}-slim as base
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
# Use production environment by default.
ENV FLASK_ENV production
WORKDIR /app
# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/go/dockerfile-user-best-practices/
ARG UID=10001
RUN adduser \
    --disabled-password \
    --gecos "" \
    --home "/nonexistent" \
    --shell "/sbin/nologin" \
    --no-create-home \
    --uid "${UID}" \
    appuser
# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy it into
# this layer.
RUN --mount=type=cache,target=/root/.cache/pip \
    --mount=type=bind,source=requirements.txt,target=requirements.txt \
    python -m pip install -r requirements.txt
# Switch to the non-privileged user to run the application.
USER appuser
# Copy the source code into the container.
COPY . .
# Expose the port that the application listens on.
EXPOSE 8000
# Run the application.
CMD gunicorn 'app:app' --bind=0.0.0.0:8000
My backend wasn't working when accessed from another device on the network because I needed nginx as a reverse proxy in front of it. In my nginx.conf file, I was able to refer to the backend by its Docker service name and container port.
nginx.conf
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    #include /etc/nginx/conf.d/*.conf;

    upstream loadbalancer {
        server backend:7001; #### docker service name and port ####
    }

    server {
        listen 7000; #### docker port ####
        server_name localhost;

        location / {
            root /usr/share/nginx/html;
            try_files $uri /index.html /index.htm =404;
        }

        location /api {
            proxy_pass http://loadbalancer;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
Then, I was able to set my host variable like this:

export const host =
  import.meta.env.MODE !== 'production' ? 'http://localhost:5001/api' : '/api'
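For this to work, the frontend image has to actually run nginx with that config instead of http-server. A rough sketch of how the final stage of the frontend Dockerfile could be rewritten (stage names and paths assume the build stage shown above; treat this as an outline, not my exact file):

```dockerfile
# Serve the built frontend with nginx instead of http-server, so the
# /api location can proxy requests to the backend service.
FROM nginx:alpine AS final

# Replace the default config with the nginx.conf shown above.
COPY nginx.conf /etc/nginx/nginx.conf

# Copy the built static files from the build stage into nginx's web root.
COPY --from=build /usr/src/app/dist /usr/share/nginx/html

# Match the `listen` port in nginx.conf.
EXPOSE 7000
```

Note that nginx can only resolve the backend hostname if the two services share a Compose network, so the frontend and backend services both need to be attached to the same network in compose.yaml.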