I have a situation where we use docker-compose to run a database populated with test data during development. The backend programmer (me) runs just the database alone. The frontend programmer runs the database + backend with a different configuration file.
In the first scenario the backend needs to find the database on localhost, port 1433. In the second, the backend needs to find the database on the hostname I've assigned to the Docker container, which is different from localhost. This means the configurations cannot be shared, which I have until now solved by having different configuration files.
Unfortunately I now need to put the hostname in the database (it is used for dynamic lookup), where I cannot maintain two different values.
Is there a way for a local administrator on Windows with docker-compose under Docker Desktop to have a host name that works well in both scenarios?
The root of the problem is that, without extra host configuration, it isn't possible to make container names visible to the host. So it's either editing the hosts file to add records (or even bringing up your own DNS server, running in Docker), or using `localhost`.
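For the hosts-file route, a single entry mapping the container's hostname to the loopback address lets the same connection string work in both setups. A minimal sketch (`db` is an assumed hostname here; substitute whatever name your compose file assigns to the database container):

```
# C:\Windows\System32\drivers\etc\hosts  -- edit as Administrator
# "db" stands in for the hostname assigned to the database container.
127.0.0.1  db
```

With this entry, the backend can always connect to `db:1433`: on the host the name resolves to localhost, and inside the Docker network it resolves to the container.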
It is possible to make one container use the network stack of another via the `network_mode` parameter (ignored in swarm mode). That gives both containers the ability to communicate with each other via `localhost`, and the developer can use `localhost` from the host as well.
Unfortunately, this method has some limitations: the `container:<name>` form references a container by name, so the container providing the network stack must already be running before the client starts. In practice this means defining the services in separate docker-compose files and bringing them up in order, or there is an error:

```
ERROR: Service 'client' uses the network stack of container 'some_unique_container_name' which does not exist.
```
Here's an example:

```yaml
# server.yml
version: "3"
services:
  server:
    image: nginx
    container_name: some_unique_container_name
```

```yaml
# client.yml
version: "3"
services:
  client:
    image: curlimages/curl
    network_mode: container:some_unique_container_name
    command: curl localhost
```
To bring it up, first run `docker-compose -f server.yml up -d`, then `docker-compose -f client.yml up`.
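One more detail: the shared network stack belongs to the server container, so any port the host should reach must be published there; a service using `network_mode: container:...` cannot declare `ports` of its own. A sketch extending `server.yml` (the `80:80` mapping is an assumption matching the nginx image above; for the database scenario it would be `1433:1433`):

```yaml
# server.yml -- publish ports on the service that owns the network stack
version: "3"
services:
  server:
    image: nginx
    container_name: some_unique_container_name
    ports:
      - "80:80"   # lets the developer reach the service on localhost from the host
```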