The meaning of the consumer property queuedchunks.max is a bit unclear to me.
I can see that each stream in the consumer has a queue whose capacity is defined by queuedchunks.max.
1) But if my consumer starts consuming more topics (within a single stream), will that affect the maximum amount of data this queue is able to hold?
For example: if I set fetchSize = 1000 and queuedchunks.max = 10, does that mean that, no matter how many topics I consume, the queue in memory will never grow beyond 1000 * 10 bytes? (A sketch of the configuration I mean is below the questions.)
2) Is this queue an effective way to collect messages asynchronously before flushing them to disk? Disk I/O is usually slow, so collecting messages in the queue should be better than trying to write each one to disk immediately?
3) How are messages ordered in this queue if I consume from N topics?
Does each queue node (entry) keep messages of only one (single) topic:
[T1], [T2], [T3], [T1], [T2], ...?
Or is it possible that each node keeps messages of different topics?
[T1, T2], [T3, T1, T2], ...?
4) Will each node (entry) be at most fetchSize in size?
5) Is it possible to set fetchSize per topic, or is it a consumer-level property only?
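To make the setup concrete, here is a minimal sketch of the consumer configuration and stream creation I have in mind. It assumes the high-level ZooKeeper-based consumer with 0.8-style property names (in 0.7 the properties are spelled fetch.size and queuedchunks.max, as above); the ZooKeeper address, group id, and topic names are just placeholders:

    import java.util.List;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.KafkaStream;
    import kafka.consumer.Whitelist;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class QueueSizeSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // placeholder ZooKeeper address
            props.put("group.id", "my-group");                // placeholder consumer group
            // 0.7-style names: fetch.size / queuedchunks.max
            // 0.8-style names: fetch.message.max.bytes / queued.max.message.chunks
            props.put("fetch.message.max.bytes", "1000");
            props.put("queued.max.message.chunks", "10");

            ConsumerConnector connector =
                    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            // A single stream that matches several topics (T1, T2, T3).
            // Question 1 above is whether this stream's queue is still bounded by
            // roughly queuedchunks.max * fetchSize = 10 * 1000 bytes,
            // no matter how many topics the whitelist matches.
            List<KafkaStream<byte[], byte[]>> streams =
                    connector.createMessageStreamsByFilter(new Whitelist("T1|T2|T3"), 1);
        }
    }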
Thank you.
1 - No, the size of a consumer stream's queue does not change with the number of topics; the number of streams will remain equal to what you defined when starting the consumer.
2 - Yes; see the sketch at the end of this answer.
3 - Each queue entry is a fetched chunk of the defined maximum size (fetchSize); it can contain messages of only a single partition of a single topic.
So it looks like: [T1-P1][T1-P2][T2-P1]...[Tn-Pk]
There is no ordering between partitions, but all messages of one partition always come in order.
4 - It MAY reach fetchSize in size, but it may not.
5 - No, it is a consumer-level property only.
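To illustrate 2 and 3, here is a minimal sketch (again assuming the high-level consumer's Java API) of a thread that drains one stream and flushes messages to disk in batches. The BatchingWriter class, the batch size, and the file name messages.bin are illustrative choices on my part, not anything Kafka requires:

    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.message.MessageAndMetadata;

    public class BatchingWriter implements Runnable {
        private final KafkaStream<byte[], byte[]> stream;
        private final int batchSize; // e.g. 500 messages per disk write

        public BatchingWriter(KafkaStream<byte[], byte[]> stream, int batchSize) {
            this.stream = stream;
            this.batchSize = batchSize;
        }

        @Override
        public void run() {
            List<byte[]> batch = new ArrayList<byte[]>(batchSize);
            // hasNext() blocks only when the stream's internal queue is empty;
            // the fetcher threads keep refilling it (up to queuedchunks.max chunks)
            // while this thread is busy with slow disk I/O.
            ConsumerIterator<byte[], byte[]> it = stream.iterator();
            while (it.hasNext()) {
                MessageAndMetadata<byte[], byte[]> msg = it.next();
                batch.add(msg.message()); // within one partition these arrive in order
                if (batch.size() >= batchSize) {
                    flush(batch);
                    batch.clear();
                }
            }
            flush(batch); // write whatever is left when the stream ends
        }

        private void flush(List<byte[]> batch) {
            try (BufferedOutputStream out = new BufferedOutputStream(
                    new FileOutputStream("messages.bin", true))) {
                for (byte[] payload : batch) {
                    out.write(payload);
                    out.write('\n');
                }
            } catch (IOException e) {
                throw new RuntimeException("failed to flush batch to disk", e);
            }
        }
    }

The point is simply that fetching and writing are decoupled: the bounded queue keeps accepting chunks in the background, and you pay the disk cost once per batch instead of once per message.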