activemq-classic · message-queue · prefetch

ActiveMQ prefetchPolicy of 1 causes messages to time out, while 0 appears to work. Misunderstanding of the concept?


We have queues where some messages may take milliseconds to process and others minutes (i.e. both fast and slow messages). A problem we have been seeing is that messages get dropped due to timeout (no consumer available within the TTL) even though there are plenty of consumers available.

For this reason we use jms.prefetchPolicy.all=1 as part of the connection string for all consumers. This value was chosen based on the following guidance from the documentation:

Large prefetch values are recommended for high performance with high message volumes. However, for lower message volumes, where each message takes a long time to process, the prefetch should be set to 1. This ensures that a consumer is only processing one message at a time. Specifying a prefetch limit of zero, however, will cause the consumer to poll for messages, one at a time, instead of the message being pushed to the consumer.
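For reference, the prefetch policy can be set via options on the connection URI used to create the ActiveMQConnectionFactory. A sketch, where the broker host and port are placeholders:

    # Applies the prefetch limit to all consumer types on this connection
    tcp://localhost:61616?jms.prefetchPolicy.all=1

    # Or limit only queue consumers
    tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1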

However, we still see the problem. As a test I instead changed the value to 0, and after running this configuration for about two weeks we have yet to see a dropped message. Previously it would happen several times per day.

Perhaps I'm misunderstanding the documentation, but my end goal was that no message should be handed to a consumer until that consumer is actually available. Does a prefetch value of 1 instead mean that a single message may be dispatched to a consumer's buffer even while it is still processing another?

Specifying a prefetch limit of zero, however, will cause the consumer to poll for messages, one at a time, instead of the message being pushed to the consumer.

Is this necessarily a bad thing? The documentation makes it out to be something to avoid (poll bad, push good). Perhaps polling is the only way this can work, because only the worker/consumer knows when it's ready to process?
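For what it's worth, with a prefetch of 0 a plain JMS consumer can simply block on receive(), and each call pulls exactly one message from the broker once the consumer is free. A minimal sketch, where the broker URL, queue name, and receive timeout are assumptions:

    import javax.jms.Connection;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PollingConsumer {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            // Prefetch 0: the broker pushes nothing; each receive() call
            // asks the broker for exactly one message.
            factory.getPrefetchPolicy().setAll(0);

            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer =
                    session.createConsumer(session.createQueue("WORK.QUEUE"));

            while (true) {
                // Blocks until a message is pulled, or returns null after 30s.
                Message message = consumer.receive(30_000);
                if (message != null) {
                    // process(message) -- may take milliseconds or minutes
                }
            }
        }
    }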

As an alternative solution, perhaps it's bad practice to mix "fast" and "slow" messages on the same queue, but I'd rather not make architectural changes unless necessary.


Solution

  • The documentation is somewhat misleading because using a prefetch of 0 and polling for messages is not a bad thing if it gives you the behavior you actually want.

    Generally speaking, prefetch is a performance optimization to avoid consumers polling the broker since repeated network round-trips to fetch each message can add up over time, especially if the consumer is fast. However, the value is configurable exactly because not all use-cases are the same. If some of your consumers are starving and messages are timing out then by all means lower the prefetch until everything is working as you expect.

    It is usually simpler to segregate fast and slow consumers/messages onto different queues, but it's not required. If the variability is in the consumers themselves, then each consumer can have its own prefetch value (see the sketch below). As long as those values are tuned properly, no consumer should starve and performance should be close to optimal. If the variability is instead in the messages themselves, then you'll have to use a "lowest common denominator" prefetch value, which means performance won't be optimal, but at least no consumer will starve.
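
    As an illustration of the per-consumer approach, ActiveMQ lets you override the prefetch per destination via destination options, so consumers sharing a connection can use different values. A sketch, with hypothetical queue names:

        // Fast consumers: a large prefetch keeps the buffer full for throughput.
        MessageConsumer fast = session.createConsumer(
                session.createQueue("FAST.QUEUE?consumer.prefetchSize=500"));

        // Slow consumers: prefetch 0, so each pulls a message only when free.
        MessageConsumer slow = session.createConsumer(
                session.createQueue("SLOW.QUEUE?consumer.prefetchSize=0"));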