I'm interested in a version of parEvalMap with a bounded buffer so that it will keep progressing the underlying stream as long as its buffer is not full.
What I mean: considering a stream of values [1,2,3,4,5,6,7,8...] with a parallelism of 4 and a buffer size of 6, processing would continue at full parallelism as long as fewer than 6 completed results are buffered waiting for an earlier, still-running item.
Any idea if there is such a combinator, or what would be a good approach to implement it?
The problem we are trying to solve is processing a stream of messages/items where some items take much longer than average. Plain parEvalMap slows this down: once a slow request is running, parEvalMap will not start processing requests beyond its parallelism bound, even though the next N-1 requests have already returned, because it is waiting for the result of the slow item.
I'd like to be able to save the 2,3,4,..n results in a buffer, continue at maximum parallelism, and emit all buffered results once the slow request completes.
With parEvalMapUnordered maximum parallelism is maintained because results are returned out of order, but with parEvalMap a slow request halts processing. I'd like to lift this restriction by introducing a result buffer.
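The "result buffer" idea can be illustrated with plain Scala, independent of fs2: given completions arriving out of order as (index, result) pairs, hold back anything ahead of the next expected index and emit the contiguous prefix as soon as it is available. `reorder` is an illustrative helper name, not an fs2 API:

```scala
// Sketch: emit results in index order, buffering out-of-order completions
// in a Map keyed by index until the missing earlier result arrives.
def reorder[A](completions: List[(Long, A)]): List[A] = {
  val (emitted, _, _) =
    completions.foldLeft((List.empty[A], 0L, Map.empty[Long, A])) {
      case ((out, next, buf), (idx, a)) =>
        var m   = buf + (idx -> a)      // buffer the new completion
        var n   = next                  // next index we are allowed to emit
        var acc = out
        while (m.contains(n)) {         // drain the contiguous prefix
          acc = m(n) :: acc; m -= n; n += 1
        }
        (acc, n, m)
    }
  emitted.reverse
}

// A slow item 0 completes last; 1, 2, 3 sit in the buffer until it arrives:
// reorder(List(1L -> "b", 2L -> "c", 3L -> "d", 0L -> "a"))
//   == List("a", "b", "c", "d")
```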
I wound up rolling my own, using:
- a Semaphore (to control the buffer size),
- zipWithIndex (to be able to restore the ordering of results),
- parEvalMapUnordered (to execute without stalling on outliers), and
- scanChunks (to collect the results into a HashMap[Long, Result] and emit them in order).

I'm not sure if I can share the code publicly; I'll try to get permission and do so.
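Since the original code isn't available, here is a minimal sketch of the approach described above, assuming fs2 3.x and cats-effect 3. The name `parEvalMapBuffered` is mine, not an fs2 combinator; it bounds in-flight work plus buffered results with the semaphore and restores order in `scanChunks`:

```scala
import cats.effect.IO
import cats.effect.std.Semaphore
import cats.effect.unsafe.implicits.global
import fs2.{Chunk, Stream}

// Sketch: run f with `parallelism` concurrent effects, but keep pulling
// upstream past a slow item as long as at most `bufferSize` results are
// in flight or waiting to be emitted in order. Requires bufferSize >=
// parallelism to actually sustain full parallelism.
def parEvalMapBuffered[A, B](parallelism: Int, bufferSize: Int)(
    f: A => IO[B]
)(in: Stream[IO, A]): Stream[IO, B] =
  Stream.eval(Semaphore[IO](bufferSize.toLong)).flatMap { sem =>
    in.zipWithIndex
      .evalTap(_ => sem.acquire) // block upstream while the buffer is full
      .parEvalMapUnordered(parallelism) { case (a, idx) => f(a).map(idx -> _) }
      // Restore the original order: buffer out-of-order results in a Map
      // keyed by index, emitting the contiguous prefix as it completes.
      .scanChunks((0L, Map.empty[Long, B])) { case ((next, buf), chunk) =>
        var n   = next
        var m   = buf ++ chunk.iterator
        val out = List.newBuilder[B]
        while (m.contains(n)) { out += m(n); m -= n; n += 1 }
        ((n, m), Chunk.seq(out.result()))
      }
      .evalTap(_ => sem.release) // one permit freed per emitted result
  }

// Usage: results come out in input order even if early items are slow.
// parEvalMapBuffered(4, 6)((i: Int) => IO.pure(i * 2))(Stream.range(1, 9))
//   .compile.toList  // IO[List[Int]]
```

One design note: releasing the permit only after emission (rather than after `f` completes) is what makes the semaphore bound the buffered results and not just the concurrent effects.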