Tags: apache-kafka, kafka-producer-api

Kafka variable event payload size


I am trying to figure out an optimal event size to produce into Kafka. My events may range from 1KB to 20KB, and I wonder whether this variation will be an issue.

It is possible that I could make some producer changes to make them all roughly a similar size, say 1KB-3KB. Would this be an advantage or will Kafka have no issue with the variable event size?

Is there an optimal event size for Kafka, or does that depend on the configured log segment settings?

Thanks.


Solution

  • By default, Kafka accepts messages up to roughly 1MB (the broker's message.max.bytes and the producer's max.request.size both default to about 1MB). These limits can be raised, at the cost of higher network IO and latency for the larger requests.

    That being said, I don't think it really matters if messages are consistently sized or not for the sizes of data that you are talking about.

    If you really want to squeeze your payloads, you can look into more compact serialization formats (e.g. Avro or Protobuf rather than JSON) and enable compression on the producer via the compression.type setting (gzip, snappy, lz4, or zstd).
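
    As a minimal sketch of where those knobs live, the producer settings above are just entries in the configuration passed to the client. The broker address and serializer classes below are placeholder assumptions; only the property names and defaults come from Kafka's documented producer configs.

    ```java
    import java.util.Properties;

    public class ProducerConfigSketch {
        public static Properties buildProps() {
            Properties props = new Properties();
            // Placeholder broker address and serializers -- adjust for your cluster.
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            // Compress batches on the producer; consumers decompress transparently.
            props.put("compression.type", "lz4");
            // Producer-side cap on a single request; default is 1MB (1048576 bytes).
            // Raise this (and the broker's message.max.bytes) only if events exceed it.
            props.put("max.request.size", "1048576");
            return props;
        }

        public static void main(String[] args) {
            Properties props = buildProps();
            System.out.println("compression.type = " + props.getProperty("compression.type"));
            System.out.println("max.request.size = " + props.getProperty("max.request.size"));
        }
    }
    ```

    With 1KB-20KB events, even the largest message sits well under the default 1MB cap, so no limit needs changing; compression mainly helps throughput by shrinking batches on the wire.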