I have deployed Kafka on minikube by following https://docs.bitnami.com/tutorials/deploy-scalable-kafka-zookeeper-cluster-kubernetes.
I'm able to create Kafka topics and to publish and consume messages on them through kubectl
commands. Since I installed Kafka through the Helm chart, kafka.kafka.svc.cluster.local
is the DNS name within the cluster.
helm install kafka bitnami/kafka --set zookeeper.enabled=false --set replicaCount=3 --set externalZookeeper.servers=zookeeper.kafka.svc.cluster.local -n kafka
I have tried multiple approaches, but I'm not able to access this Kafka cluster from outside the cluster. I'm trying to publish messages to a Kafka topic through sample producer code in IntelliJ, but the bootstrap server kafka.kafka.svc.cluster.local
is not reachable.
That's an internal CoreDNS record, resolvable only inside the cluster. You'll need to expose each broker externally, e.g. through per-broker NodePort services or an optional TCP LoadBalancer that directs traffic into the cluster, along with an appropriate NetworkPolicy. Search the chart's configuration for "external" - https://github.com/bitnami/charts/tree/master/bitnami/kafka#traffic-exposure-parameters
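As a sketch, the chart's `externalAccess.*` parameters can enable this. The exact parameter names and values below (including the `domain` setting and the NodePort numbers) are assumptions based on the chart's traffic-exposure options and may differ between chart versions, so verify them against the link above:

```shell
# Sketch only: externalAccess.* names and the 3000x ports are illustrative;
# check the Bitnami kafka chart README for your chart version.
helm upgrade kafka bitnami/kafka -n kafka \
  --set zookeeper.enabled=false \
  --set externalZookeeper.servers=zookeeper.kafka.svc.cluster.local \
  --set replicaCount=3 \
  --set externalAccess.enabled=true \
  --set externalAccess.service.type=NodePort \
  --set externalAccess.service.nodePorts='{30001,30002,30003}' \
  --set externalAccess.service.domain="$(minikube ip)"
```

Pinning one NodePort per broker and advertising the minikube node's IP means each broker gets a stable, externally reachable address without a cloud LoadBalancer.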
Kafka is not unique in this way, so I suggest learning more about accessing Kubernetes services from outside the cluster. Or switch back to Docker Compose for simply testing a local Kafka environment with containers.
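For example, once external listeners are enabled, you can inspect what the chart actually created; the chart typically adds one external Service per broker, and the NodePort column plus the minikube node IP together form the address reachable from your host (output shape will vary with your setup):

```shell
# List the Services in the kafka namespace; look for the per-broker
# external Services and note their NodePorts.
kubectl get svc -n kafka

# The node IP is the host half of the external bootstrap address.
minikube ip
```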
Note that the advertised listeners setting of each broker pod would need to return that broker's individual, external address to the client, since Kafka clients bootstrap from one broker and then connect to the addresses the brokers advertise.
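As a quick connectivity check from the host, assuming broker 0 is exposed on NodePort 30001 of the minikube node (a hypothetical port; substitute whatever was actually assigned in your cluster):

```shell
# Hypothetical port 30001: replace with the NodePort assigned to broker 0.
# If the advertised listeners are set correctly, this console producer
# can reach the cluster from outside minikube.
kafka-console-producer.sh \
  --bootstrap-server "$(minikube ip):30001" \
  --topic test
```

The same `<minikube-ip>:<node-port>` value is what the IntelliJ producer should use for `bootstrap.servers` instead of kafka.kafka.svc.cluster.local.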