I recently tried to dockerize my FastAPI Python server (and also to replicate it). Before, I had only the MySQL server in a Docker container and everything worked fine, but once I also made a service out of my web server, it could no longer connect to the MySQL server, so the app doesn't work now.
Here is the snippet of the server's DB init connector in the app:
from fastapi import FastAPI
import mysql.connector

app = FastAPI()

dbconfig = {
    "host": "localhost",
    "database": "server_db",
    "user": "db_user",
    "password": "user-password"
}

# Checking DB connection
try:
    init_cnx = mysql.connector.connect(
        host='localhost',
        user='db_user',
        password='user-password'
    )
    cursor = init_cnx.cursor()
    cursor.execute("SHOW DATABASES LIKE 'server_db'")
    if cursor.fetchone() is None:
        # Create DB in case one doesn't exist
        cursor.execute("CREATE DATABASE server_db")
        cursor.execute("USE server_db")
        cursor.execute("CREATE TABLE Messages ("
                       "message_id INT NOT NULL AUTO_INCREMENT,"
                       "sender_name VARCHAR(32),"
                       "message_text VARCHAR(64),"
                       "created_at DATE,"
                       "user_messages_count INT,"
                       "PRIMARY KEY (message_id));")
        print('DB Created!')
    cursor.close()
    init_cnx.close()
except mysql.connector.Error as err:
    print("On init_cnx:", err)
# DB I/O function
async def execute_db_query(query, cursor_buffered=False):
    cnx = mysql.connector.connect(**dbconfig)
    try:
        cursor = cnx.cursor(buffered=cursor_buffered)
        cursor.execute("USE server_db")
        cursor.execute(query)
        result = cursor.fetchall()
        cnx.commit()
        print("Query executed successfully!")
        return result
    except Exception as e:
        print("Error executing query:", e)
    finally:
        if cnx:
            cnx.close()
# Get root function, just to check if app is connected to DB
@app.get("/")
async def get_root():
    try:
        entries_count = await execute_db_query("SELECT COUNT(*) FROM Messages", cursor_buffered=True)
        return {"Messages entries": entries_count[0][0]}
    except Exception as e:
        # str(e): exception objects themselves aren't JSON-serializable
        return {"Error": str(e)}
Dockerfile for server
FROM python:3.11.4-slim-bookworm
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY server.py .
EXPOSE 8000
CMD ["uvicorn", "server:app", "--host", "0.0.0.0", "--port", "8000"]
init.sql script
CREATE USER 'db_user'@'%' IDENTIFIED BY 'user-password';
GRANT ALL PRIVILEGES ON *.* TO 'db_user'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
And docker-compose.yml
version: "3.8"
services:
  db_mysql:
    image: mysql:8
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "root"
    volumes:
      - "./mysql/init.sql:/docker-entrypoint-initdb.d/init.sql"
      - "./mysql/db_mysql_data:/var/lib/mysql"
      - "./mysql/mysql_logs:/var/log/mysql"
    networks:
      - dummy_network
  server_1:
    image: dummy_msg_server
    ports:
      - "8081:8000"
    networks:
      - dummy_network
    #command: sh -c "sleep 60s"
    depends_on:
      - db_mysql
  server_2:
    image: dummy_msg_server
    ports:
      - "8082:8000"
    networks:
      - dummy_network
    #command: sh -c "sleep 60s"
    depends_on:
      - db_mysql
volumes:
  db_mysql_data: #external: true
networks:
  dummy_network:
    driver: bridge
One might suspect errors from calling the API before the MySQL container has fully initialized, but that's not it: I wait until the MySQL server logs that it's ready to handle requests, and only then try to connect to it.
I tried connecting by hostname and by IP address. I tried changing the python:3.11.4 image in the Dockerfile to an earlier Debian release and to the non-slim image. I tried explicitly putting the containers on one common network; Docker shows the containers are on the same network, and a curl request from the server container returns a response. Also, docker-compose.yml used to map ports 3306:3306 for the db_mysql service; I guess that's not the cause either.
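For context on the hostname attempts: in Compose, each service is reachable by its service name on the shared network, while "localhost" inside the server container points at the container itself. A minimal sketch of how the config could pick up the right host, assuming a hypothetical DB_HOST environment variable set to the db_mysql service name in the compose file:

```python
import os

# Inside the server container, "localhost" is the container itself, not the
# MySQL container. On the Compose network the service name "db_mysql" resolves
# to the MySQL container, so the host can come from an environment variable
# (DB_HOST is a hypothetical name, set e.g. via `environment:` in compose).
dbconfig = {
    "host": os.environ.get("DB_HOST", "localhost"),
    "database": "server_db",
    "user": "db_user",
    "password": "user-password",
}
```

With `DB_HOST: db_mysql` in the service's `environment:` section, the same code still falls back to localhost when run outside Docker.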
UPD 1. While investigating, I found that if the DB already exists, the app has no problem sending requests to it and getting proper responses. The only thing it can't do is create the DB using the creation script in the code.
(I should probably update the code blocks, since the project is at quite a different stage now.)
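One possible workaround for the DB-creation failure, since init.sql already runs on the MySQL container's first start: create the database and table there instead of from the app code. A sketch mirroring the schema above, not something from the original setup:

```sql
-- Sketch: extend init.sql so the MySQL entrypoint creates the database
-- and table on first start, and the app never has to.
CREATE DATABASE IF NOT EXISTS server_db;
USE server_db;
CREATE TABLE IF NOT EXISTS Messages (
    message_id INT NOT NULL AUTO_INCREMENT,
    sender_name VARCHAR(32),
    message_text VARCHAR(64),
    created_at DATE,
    user_messages_count INT,
    PRIMARY KEY (message_id)
);
```

Note that scripts in /docker-entrypoint-initdb.d only run when the data directory is empty, so an existing ./mysql/db_mysql_data volume would have to be removed for this to take effect.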
I ran into a problem where the server and database containers started simultaneously, which caused the failure: the first (and only) attempt to connect to the database server happened before it was ready to accept connections.
To solve this, I decided to add a health check to the docker-compose.yml file:
version: "3.8"
services:
  db_mysql:
    image: mysql:8
    restart: always
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: "root"
    volumes:
      - "./mysql/init.sql:/docker-entrypoint-initdb.d/init.sql"
      - "./mysql/db_mysql_data:/var/lib/mysql"
      - "./mysql/mysql_logs:/var/log/mysql"
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-proot"]
      timeout: 1s
      interval: 40s
      retries: 5
  server_1:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    ports:
      - "8081:8000"
    depends_on:
      db_mysql:
        condition: service_healthy
  server_2:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    ports:
      - "8082:8000"
    depends_on:
      db_mysql:
        condition: service_healthy
volumes:
  db_mysql_data: #external: true
With this configuration, the server containers won't start until the health check confirms that the database server is ready.
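An application-level complement to the compose healthcheck is to retry the first connection instead of failing once. A minimal sketch; the helper name connect_with_retry is my own, not part of mysql.connector:

```python
import time

def connect_with_retry(connect, attempts=10, delay=2.0):
    """Call a zero-argument connect() callable until it succeeds.

    Hypothetical helper: retries a failing connection attempt (e.g. a
    mysql.connector.connect wrapper) with a fixed delay between tries,
    re-raising the last error once the attempts are exhausted.
    """
    last_err = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as err:  # mysql.connector.Error in the real app
            last_err = err
            time.sleep(delay)
    raise last_err
```

In the app this would wrap the init connection, e.g. `connect_with_retry(lambda: mysql.connector.connect(**dbconfig))`, so a database that is still starting up no longer kills the one-and-only connection attempt.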
However, there is a possibly better approach: the wait-for-it.sh script. Some experienced backend developers I know, who also use Docker containers to split their apps into microservices, speak positively of it. I haven't tried it myself, but I'd recommend considering it as an alternative solution.
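For reference, a typical way to wire it in looks roughly like the snippet below. This is a sketch only: the script would have to be copied into the server image and made executable, and I haven't verified this setup myself.

```yaml
server_1:
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "8081:8000"
  depends_on:
    - db_mysql
  # wait-for-it.sh blocks until db_mysql accepts TCP connections on 3306,
  # then execs the real server command after "--".
  command: ["./wait-for-it.sh", "db_mysql:3306", "--timeout=60", "--",
            "uvicorn", "server:app", "--host", "0.0.0.0", "--port", "8000"]
```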