Tags: spring-boot, apache-kafka, spring-kafka

Kafka batch listener: polling a fixed number of records (as many as possible)


I'm using Spring Boot version 1.5.4.RELEASE and Spring Kafka version 1.3.8.RELEASE.

My Kafka consumer does batch processing in chunks of 100. The topic I'm consuming from has 10 partitions, and I have 10 instances of the Kafka consumer.

Is there a way I can enforce getting a fixed batch of 100 records (as many as possible), apart from the last chunk in a particular partition?
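For context, the setup described above boils down to capping each poll at 100 records via `max.poll.records`. A minimal sketch of the consumer properties (the bootstrap server and group id below are placeholders, not from the question):

```java
import java.util.HashMap;
import java.util.Map;

public class BatchConsumerProps {

    // Builds consumer properties for a batch listener that returns
    // at most `batchSize` records per poll(). Note that
    // max.poll.records is only an upper bound: it does NOT force
    // the broker to wait until `batchSize` records are available.
    public static Map<String, Object> batchConsumerProps(int batchSize) {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "batch-group");             // placeholder
        props.put("max.poll.records", batchSize);
        return props;
    }
}
```

This illustrates why small batches can still come back: the cap limits the maximum, while the minimum depends on what data has already arrived at the broker.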


Solution

  • Kafka has no property fetch.min.records.

    The best you can do is simulate it with:

    fetch.min.bytes: The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate which can improve server throughput a bit at the cost of some additional latency.

    and

    fetch.max.wait.ms: The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.

This approach works reasonably well if your records have similar sizes, since the broker counts bytes rather than records.

    By the way, Spring Boot 1.5.x has reached end of life and is no longer supported. The current Boot version is 2.2.3.
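Putting the two properties together: a rough way to simulate `fetch.min.records` is to set `fetch.min.bytes` to the target record count times an estimated average record size, bounded by `fetch.max.wait.ms`. A sketch under that assumption (the 1 KiB average and 500 ms wait are illustrative values, not from the answer):

```java
import java.util.HashMap;
import java.util.Map;

public class FetchTuning {

    // Rough estimate: fetch.min.bytes ~= desired records x average record size.
    // Only an approximation, since Kafka counts bytes, not records.
    public static int estimateFetchMinBytes(int targetRecords, int avgRecordBytes) {
        return targetRecords * avgRecordBytes;
    }

    // Consumer properties asking the broker to wait for roughly
    // `targetRecords` records' worth of bytes, but no longer than
    // `maxWaitMs` milliseconds before answering the fetch.
    public static Map<String, Object> fetchTuningProps(
            int targetRecords, int avgRecordBytes, int maxWaitMs) {
        Map<String, Object> props = new HashMap<>();
        props.put("fetch.min.bytes", estimateFetchMinBytes(targetRecords, avgRecordBytes));
        props.put("fetch.max.wait.ms", maxWaitMs);
        props.put("max.poll.records", targetRecords); // still cap the batch at the target
        return props;
    }
}
```

For example, targeting 100 records of about 1 KiB each gives `fetch.min.bytes` of 102400; if that much data never accumulates, the broker responds after `fetch.max.wait.ms` anyway, which is why the last chunk in a partition can still be smaller than 100.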