I am creating a Kafka topic as follows:
kafka-topics --create --zookeeper xx.xxx.xx:2181 --replication-factor 2 --partitions 200 --topic test6 --config retention.ms=900000
and then I produce messages from Go using the following library:
"gopkg.in/confluentinc/confluent-kafka-go.v1/kafka"
The produce loop looks like this:
for _, message := range bigslice {
    topic := "test6"
    p.Produce(&kafka.Message{
        TopicPartition: kafka.TopicPartition{Topic: &topic},
        Value:          []byte(message),
    }, nil)
}
The problem is that I've sent more than 200K messages, but they all land in partition 0.
What could be wrong here?
Messages with the same key are hashed to the same partition, but your messages carry no key, so that is not the cause here. The actual culprit is that `Partition` in `kafka.TopicPartition` is an `int32`, and Go's zero value for an `int32` is 0 — by omitting the field you are explicitly asking for partition 0 on every message. Set `Partition: kafka.PartitionAny` so that librdkafka's partitioner chooses the partition for you:
for _, message := range bigslice {
    topic := "test6"
    p.Produce(&kafka.Message{
        TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
        Value:          []byte(message),
    }, nil)
}