I have a stack in a swarm that works well on its own (at least I think it does...). It has a PostgreSQL server on port 5432 and a web server on port 80. The web server can be accessed properly from the outside.
For unit tests, I run only the database side, as a stack:
version: "3"
services:
  sql:
    image: sql
    networks:
      - service_network
    ports:
      - 5432:5432
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    volumes:
      - ./local_storage_sql:/var/lib/postgresql/data
    environment:
      # provide your credentials here
      - POSTGRES_USER=xxxx
      - POSTGRES_PASSWORD=xxxx
networks:
  service_network:
Then the unit tests start by connecting to the db from another simple Python container:
FROM python:latest
LABEL version="0.1.0"
LABEL org.sgnn7.name="unittest"
# Make sure we are fully up to date
RUN apt-get update -q && \
apt-get dist-upgrade -y && \
apt-get clean && \
apt-get autoclean
RUN python -m pip install psycopg2-binary
RUN mkdir /testing
COPY ./*.py /testing/
The test script fails when connecting:
conn = connect(dbname=database, user=user, host=host, password=password)
with:
File "/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
But it only fails when I run it inside the container. From outside, it works like a charm. I also tried setting up an external network and using it (same docker node):
docker run --rm --net service_network -t UnitTest-image py.test /testing
Obviously, I would have expected it to be more difficult to access the database from outside the network than from inside, so I clearly missed something here, but I don't know what...
When you deploy a stack from a Compose file, the full network name is created by combining the stack name with the base network name. So, let's say you deployed your stack with the name foo, like so:
docker stack deploy -c compose-file.yml foo
Then the full network name will be foo_service_network.
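You can verify the generated name before attaching anything to it (a quick check; foo is the stack name assumed in the example above):

```shell
# List networks whose name contains "service_network";
# the stack-scoped overlay shows up as foo_service_network
docker network ls --filter name=service_network
```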
When you run your Python container, you need to connect it to foo_service_network, not service_network:
docker container run --rm --net foo_service_network -t UnitTest-image py.test /testing
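Note also that the traceback in the question shows the test connecting to "localhost", which inside a container refers to the container itself. Once the test container is on the overlay network, the host should be the Compose service name, which Docker's embedded DNS resolves. A minimal sketch of building the connection target (dbname testdb is a placeholder; the service name sql comes from the Compose file above):

```python
def make_dsn(host="sql", port=5432, dbname="testdb", user="xxxx"):
    # Inside the overlay network, Docker DNS resolves the service name
    # ("sql"), so the host must be the service name, not "localhost".
    # dbname/user here are placeholders -- use your real credentials.
    return f"host={host} port={port} dbname={dbname} user={user}"

print(make_dsn())  # host=sql port=5432 dbname=testdb user=xxxx
```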
You can also customize the network name by specifying the name property in your Compose file (version 3.5 and up).
networks:
  service_network:
    name: service_network
In that case, you would connect your container to the network using that custom name:
docker container run --rm --net service_network -t UnitTest-image py.test /testing
Edit 1/28: Added Compose file example.
version: "3.7"
services:
  sql:
    image: sql
    networks:
      - service_network
    ports:
      - 5432:5432
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    volumes:
      - ./local_storage_sql:/var/lib/postgresql/data
    environment:
      # provide your credentials here
      - POSTGRES_USER=xxxx
      - POSTGRES_PASSWORD=xxxx
networks:
  service_network:
    name: service_network
    attachable: true
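Note the attachable: true line: swarm-scoped overlay networks reject standalone containers unless the network is marked attachable. With it in place, the full workflow might look like this (a sketch; the stack name foo and the UnitTest-image name are taken from the examples above):

```shell
# Deploy the stack; the network keeps its custom name "service_network"
docker stack deploy -c compose-file.yml foo

# A one-off test container can now join the same overlay network
docker container run --rm --net service_network -t UnitTest-image py.test /testing
```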