I have a Docker swarm with 6 services that I want logging to one central logging container. I have this working in my dev environment (non-swarm) - see the docker-compose file below.
When I try to carry the configuration over from the dev environment to my swarm prod environment, it fails.
First I bring up the log service:
docker service create --replicas 1 \
--name logserver \
--network phototankswarm \
--constraint=node.hostname==pi2 \
-p localhost:5514:514 \
kaninfod/pt-syslog
This starts up fine.
Then one of the services (e.g. Redis):
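Before wiring the other services to it, it can help to confirm where the task was scheduled and that the published port is actually listening. A minimal sketch (run on a swarm manager; `ss` availability varies by host, `netstat -lnt` is an alternative):

```shell
# Show which node the logserver task landed on and its state
docker service ps logserver

# On the node that published the port, check that 5514 is listening
ss -lnt | grep 5514
```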
docker service create --replicas 1 \
--name redis \
--network phototankswarm \
--constraint=node.hostname==pi1 \
--log-driver=syslog \
--log-opt syslog-facility="daemon" \
--log-opt tag="rails" \
--log-opt syslog-address="tcp://localhost:5514" \
-p 6379:6379 \
armhf/redis
And this fails with:
starting container failed: Failed to initialize logging driver: dial tcp [::1]:5514: getsockopt: connection refused
I had problems with the syslog-address when setting up the dev env... I find it strange that I have to use localhost rather than the Docker DNS name of the log container... but in dev it works with localhost.
This is the compose file I use for my dev env:
version: '3'
services:
  nginx:
    image: nginx
    depends_on:
      - api
      - syslog
    ports:
      - "80:8080"
    logging:
      driver: syslog
      options:
        syslog-facility: "daemon"
        tag: "nginx"
        syslog-address: "tcp://localhost:5514"
    networks:
      - phototankswarm
    env_file: .env.dev
    volumes:
      - ./frontend/nginx/conf.d:/etc/nginx/conf.d
      - ./frontend/public:/www
  db:
    image: mysql
    env_file: .env.dev
    depends_on:
      - syslog
    networks:
      - phototankswarm
    ports:
      - "3306:3306"
    volumes:
      - ./sql/data:/var/lib/mysql
    logging:
      driver: syslog
      options:
        syslog-facility: "daemon"
        tag: "mysql"
        syslog-address: "tcp://localhost:5514"
  redis:
    image: redis
    depends_on:
      - syslog
    networks:
      - phototankswarm
    logging:
      driver: syslog
      options:
        syslog-facility: "daemon"
        tag: "redis"
        syslog-address: "tcp://localhost:5514"
  api:
    image: pt-rails
    env_file: .env.dev
    networks:
      - phototankswarm
    command: >
      sh -c '
      bundle exec sidekiq -d && bundle exec rails s -p 3000 -b 0.0.0.0
      '
    volumes:
      - /Users/martinhinge/Pictures/docker/phototank:/media/phototank
      - ./backend:/usr/src/app
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
      - syslog
    logging:
      driver: syslog
      options:
        syslog-facility: "daemon"
        tag: "rails"
        syslog-address: "tcp://localhost:5514"
  syslog:
    image: syslog
    ports:
      - "localhost:5514:514"
    networks:
      - phototankswarm
    volumes:
      - /tmp:/var/log/syslog
networks:
  phototankswarm:
EDIT
Running $ docker run -it --rm --net container:dac082edcd6f hypriot/rpi-alpine-scratch netstat -lnt
yields:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.11:35314 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:514 0.0.0.0:* LISTEN
tcp 0 0 :::514 :::* LISTEN
Change:
--log-opt syslog-address="tcp://localhost:5514" \
to:
--log-opt syslog-address="tcp://logserver:514" \
Networking between containers uses DNS-based discovery as long as they are running on the same network. Localhost inside a container is a separate namespace from localhost on the Docker host outside the container, so the container will not see ports published on the Docker host's loopback address. You may be able to connect to your Docker host by its hostname instead of localhost; however, container-to-container networking is more portable.
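A quick way to check that DNS-based discovery works is to connect to the log service by name from a throwaway container on the same network (the image here is just an example with network tools; this only works with `docker run` if the overlay network was created with `--attachable`):

```shell
# Resolve and connect to the logserver service by its DNS name
# from inside the phototankswarm network:
docker run --rm --network phototankswarm nicolaka/netshoot \
  nc -zv logserver 514
```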
The only reason I can think this would work in dev with localhost is if your service is using host network mode rather than being connected to the overlay network.
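Applied to the redis service from the question, the full command would look like this (same flags as before, only the syslog-address changed; the `tag="rails"` option is kept as in the original, though it looks copied from the rails service):

```shell
docker service create --replicas 1 \
  --name redis \
  --network phototankswarm \
  --constraint=node.hostname==pi1 \
  --log-driver=syslog \
  --log-opt syslog-facility="daemon" \
  --log-opt tag="rails" \
  --log-opt syslog-address="tcp://logserver:514" \
  -p 6379:6379 \
  armhf/redis
```

Note that the address now uses the service name and the container-side port (514), not the port published on the host.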