Tags: network-programming, logging, docker, elastic-stack, docker-swarm

Docker: sending logs to localhost doesn't work but 0.0.0.0 does work, why?


UPDATE: Now I have a follow-up problem. When a container is started before logstash, its UDP logs do not arrive in logstash. According to this bug report, this is because

The linux kernel keeps state of each connection. Even though udp is connectionless

As far as I understand it, these connections are cached: after a restart logstash receives a new IP, so no packets arrive. This would not happen if I used localhost (instead of 0.0.0.0), as that IP never changes and Docker forwards the messages to the exposed ports.

Any idea how I can make this work with localhost (or with a service name, since I am using Docker Swarm)?


I have a dockerized application (logstash) which publishes port 12201/udp to listen for logs. According to the Docker documentation:

This binds port 8080 of the container to port 80 on 127.0.0.1 of the host machine.

However, if I send messages with netcat to localhost:12201/udp, the application receives nothing, whereas when I send messages to 0.0.0.0:12201/udp everything works as intended.
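Roughly, the test looks like this (a minimal sketch; the GELF payload is just an example message, and netcat option syntax may differ between implementations):

    # Works: the message shows up in logstash
    echo '{"version":"1.1","host":"test","short_message":"hello"}' | nc -u -w1 0.0.0.0 12201

    # Receives nothing: the same message sent to localhost never arrives
    echo '{"version":"1.1","host":"test","short_message":"hello"}' | nc -u -w1 localhost 12201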

Why? Does this introduce performance/security issues? Could it be a bug?

As far as I know, 0.0.0.0 translates to all IP addresses of the machine: the machine's own IP address plus the 172.x.x.x addresses/networks which Docker creates, according to the Docker container networking documentation.
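Those addresses can be listed on the host, for example (a sketch; interface names and subnets are typical defaults and can differ per setup):

    # Show the host's IPv4 addresses; docker0 and the bridge networks that
    # Compose creates usually get subnets inside 172.16.0.0/12.
    ip -4 addr show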

For some reason it seems that the published port is not mapped to localhost but to some other network. I am running my application in a swarm, and 0.0.0.0 works from any machine in the swarm.


Here the relevant part of my compose file:

networks:
  logging:

volumes:
  logging_data:

services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.3.1
    logging:
      driver: "json-file"
    networks:
      - logging
    ports:
      - "12201:12201"
      - "12201:12201/udp"
    entrypoint: logstash -e 'input { gelf { } } output { stdout{ } elasticsearch { hosts => ["http://elasticsearch:9200"] } }'
    depends_on:
      - elasticsearch


  test:
    image: ubuntu
    networks:
      - logging
    logging:
      driver: gelf
      options:
        gelf-address: "udp://0.0.0.0:12201"
        tag: "log-test-tagi-docker"
    entrypoint: /bin/sh -c 'while true; do date +%H:%M:%S:%3N ; sleep 1; done'
    depends_on:
      - logstash
      - elasticsearch
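A file like this would typically be deployed to the swarm with something along these lines (the stack name and file name are only placeholders):

    docker stack deploy -c docker-compose.yml logging_stack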

Solution

  • I found a way to avoid using 0.0.0.0. I now publish 127.0.0.1:12201:12201/udp. I can now use 127.0.0.1 from any machine in the swarm instead of 0.0.0.0. However port 12201 is still accessible from outside the swarm.

    logstash:
      image: docker.elastic.co/logstash/logstash:5.3.1
      logging:
        driver: "json-file"
      networks:
        - logging
      ports:
        - "127.0.0.1:12201:12201/udp"
    

    The problem that remains is that logs do not arrive after logstash is killed and restarted. That is because conntrack keeps state for the connection (on Linux) even though UDP is connectionless; it should be possible to fix this as described here (a sketch of one workaround is below).
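    One way that should work is to flush the stale UDP conntrack entries on the host after restarting logstash (a sketch; it assumes the conntrack-tools package is installed and that 12201 is the GELF port):

        # Delete cached UDP "connections" to the GELF port so new packets are
        # routed to the restarted logstash container instead of the old one.
        sudo conntrack -D -p udp --orig-port-dst 12201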

    I hope this helps someone with a similar issue.