Tags: apache-kafka, kafka-consumer-api, pykafka

Not able to communicate with a Kafka cluster from a remote machine


I have been trying to send messages with pykafka from a machine outside the Kafka cluster (it does not have Kafka installed, only the necessary client libraries). Using the code snippet below, I send messages to a Kafka cluster node to be consumed, but the producer raises a timeout exception.

I have tried almost everything I could find, including the answers to the existing questions on Stack Overflow.

Question:

Is Kafka itself required on my non-Kafka machine in order to communicate with the cluster successfully? (I don't think so.)

Any help would be appreciated.

Server config (server.properties):

```
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://0.0.0.0:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://abc-Kka-00:9092
```
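
Note that the bootstrap address is only used for the initial metadata request; after that, pykafka connects to whatever hostname the broker advertises, so `abc-Kka-00` has to resolve and be reachable from the remote machine. A quick check you can run from the client side (a minimal sketch; the host and port are taken from the config above, adjust them to your setup):

```
# Run this on the remote (non-Kafka) machine.
# "abc-Kka-00" is the advertised hostname from server.properties above.
import socket

ADVERTISED_HOST = "abc-Kka-00"
ADVERTISED_PORT = 9092

try:
    # The client must be able to resolve the advertised hostname...
    ip = socket.gethostbyname(ADVERTISED_HOST)
    print("Resolved %s to %s" % (ADVERTISED_HOST, ip))
    # ...and open a TCP connection to the advertised port.
    with socket.create_connection((ADVERTISED_HOST, ADVERTISED_PORT), timeout=5):
        print("Broker reachable on port %d" % ADVERTISED_PORT)
except OSError as exc:
    print("Cannot reach advertised listener: %s" % exc)
```

If the hostname does not resolve on the client, either fix DNS / /etc/hosts there or advertise an address the client can reach.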

Python code:

```
from pykafka import KafkaClient

KAFKA_HOST = "a.b.c.d:9092"  # or the address you want

client = KafkaClient(hosts=KAFKA_HOST)
topic = client.topics["kafkat"]

# Synchronous producer: each produce() call blocks until the broker
# acknowledges the message (or the delivery report times out).
with topic.get_sync_producer() as producer:
    for i in range(10):
        message = "Test message " + str(i)
        encoded_message = message.encode("utf-8")
        producer.produce(encoded_message)
```


Error received:

```
pykafka.exceptions.ProduceFailureError: Delivery report not received after timeout
```
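
One way to see the real cause instead of this generic timeout is to use an asynchronous producer with delivery reports and inspect the exception attached to each report. A rough sketch, assuming the same topic and bootstrap address as above (`get_producer(delivery_reports=True)` and `get_delivery_report()` are pykafka producer APIs; check the version you are running):

```
from pykafka import KafkaClient

client = KafkaClient(hosts="a.b.c.d:9092")
topic = client.topics["kafkat"]

# An async producer with delivery reports lets us look at each failure.
with topic.get_producer(delivery_reports=True) as producer:
    producer.produce(b"probe message")
    # Block until the broker (or the internal timeout) answers for this message.
    msg, exc = producer.get_delivery_report(block=True)
    if exc is not None:
        print("Delivery failed:", exc)  # often a resolution/connection error to the advertised host
    else:
        print("Delivered at offset", msg.offset)
```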

Expected result:

The messages should be delivered to the broker and consumed on the Kafka node.
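
For completeness, a minimal consumer sketch to verify that the messages actually land in the topic (same assumed topic name and bootstrap address; run it on the Kafka node or any host that can reach the broker):

```
from pykafka import KafkaClient

client = KafkaClient(hosts="a.b.c.d:9092")
topic = client.topics["kafkat"]

# Stop iterating if no new message arrives within 5 seconds.
consumer = topic.get_simple_consumer(consumer_timeout_ms=5000)
for message in consumer:
    if message is not None:
        print(message.offset, message.value.decode("utf-8"))
```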

Solution

  • The issue was solved.

    The problem was that one of the hosts in the cluster had a conflicting configuration, which is why the messages were not getting through. After updating the configs, it worked like a charm.
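
To spot that kind of mismatch, you can ask the client what every broker in the cluster actually advertises and check that each name resolves from the producer machine. A rough sketch, assuming pykafka exposes the broker metadata via `client.brokers` (as recent versions do):

```
from pykafka import KafkaClient

client = KafkaClient(hosts="a.b.c.d:9092")

# Each entry is what that broker advertises to clients; every host listed here
# should be resolvable and reachable from the producer machine.
for broker_id, broker in client.brokers.items():
    print("broker %s advertises %s:%s" % (broker_id, broker.host, broker.port))
```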