Tags: docker, apache-kafka, sarama

docker-compose kafka - local machine client cannot produce message to kafka


I've read a lot of similar questions, but none of them answer my problem.

I'm trying to run some short integration tests using a version-3 docker-compose file with a single-node Kafka. On the client side I'm using Go's Shopify/sarama to consume and produce.

zookeeper:
  image: confluentinc/cp-zookeeper:5.2.2
  hostname: zookeeper
  container_name: zookeeper
  ports:
    - "2181:2181"
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
kafka:
  image: confluentinc/cp-enterprise-kafka:5.2.2
  hostname: kafka
  container_name: kafka
  depends_on:
    - zookeeper
  ports:
    - "29092:29092"
  expose:
    - 9092
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

Another container from the same docker-compose file connects to the broker with

- "BROKERS_URL=kafka:9092"

The consumer works just fine:

Sarama consumer up and running. {"brokers": ["kafka:9092"], "topics": ["validated"], "group": "event-service"}

But on the producer side, running directly on my machine, I get:

kafka: client has run out of available brokers to talk to (Is your cluster reachable?)

producer, err := sarama.NewSyncProducer([]string{"http://localhost:29092"}, nil)
...
msg := &sarama.ProducerMessage{
    Topic: "validated",
    Key:   sarama.StringEncoder(""),
    Value: sarama.ByteEncoder(payload),
}

partition, offset, err := producer.SendMessage(msg)
...

Nothing weird or extravagant here, but it's not working and I'm confused.

Also, `nc -vz localhost 29092` succeeds:

Connection to localhost port 29092 [tcp/*] succeeded!


Solution

  • Instead of

        KAFKA_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    

    you need

        KAFKA_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://0.0.0.0:29092
    

    Testing connectivity from my host machine using kafkacat shows that this works:

    ➜ kafkacat -b localhost:29092 -L
    Metadata for all topics (from broker 1: localhost:29092/1):
     1 brokers:
      broker 1 at localhost:29092 (controller)
     0 topics:
    

    The difference is that the listener now binds to all available interfaces (0.0.0.0). With your original configuration it bound only to the loopback interface (lo), so it accepted traffic arriving on localhost inside the container and nothing from outside.
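
    The loopback-vs-all-interfaces distinction can be reproduced with plain Go stdlib networking, no Kafka involved. This is a minimal sketch (not part of the original answer): a socket bound to 127.0.0.1 is only reachable via loopback, which is exactly why Docker's published port, forwarded to the container's external interface, could never reach the original listener.

    ```go
    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Bind to loopback only -- analogous to PLAINTEXT_HOST://localhost:29092.
    	// Traffic arriving on any other interface (e.g. the container's eth0,
    	// which is where Docker forwards the published port) is refused.
    	loopback, err := net.Listen("tcp", "127.0.0.1:0")
    	if err != nil {
    		panic(err)
    	}
    	defer loopback.Close()

    	// Bind to all interfaces -- analogous to PLAINTEXT_HOST://0.0.0.0:29092.
    	// This accepts connections no matter which interface they arrive on.
    	all, err := net.Listen("tcp", "0.0.0.0:0")
    	if err != nil {
    		panic(err)
    	}
    	defer all.Close()

    	fmt.Println("loopback listener:", loopback.Addr())
    	fmt.Println("all-interfaces listener:", all.Addr())
    }
    ```

    Note that the advertised listener stays as `localhost:29092`: the bind address (`KAFKA_LISTENERS`) controls which interfaces accept connections, while `KAFKA_ADVERTISED_LISTENERS` controls the address the broker hands back to clients.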