Tags: rabbitmq, queue, timeout, consumer, subscriber

How to redeliver messages that RabbitMQ has already sent to consumers


I created some consumers that connect to a queue in RabbitMQ and fetch a batch of X unacknowledged messages at a time (10, 50, 100) to avoid the overhead of taking them one by one. Sometimes the queue is almost empty and a single consumer ends up holding all the messages. Unfortunately, one of those messages may be slow to process (a third-party web service timed out, for example), and all the other messages have to wait in line until that one finishes, even if they would be faster. Meanwhile, the other consumers sit idle with nothing to do, but they can't take the messages that the first consumer still hasn't processed.

What I'd like is to tell RabbitMQ to deliver a batch of messages to a consumer and, if the consumer doesn't ack them within a period of time, return those messages to the queue so another consumer can take them. Does anybody know if there is a workaround?


Solution

  • Have a look at my answer to this question.

    Consider the following scenario:

    • A queue has thousands of messages sitting in it
    • A single consumer subscribes to the queue with AutoAck=true and no pre-fetch count set

    What is going to happen?

    RabbitMQ's behavior is to deliver an arbitrary number of messages to a client that has no pre-fetch count set. Further, with auto-ack enabled, the pre-fetch count is irrelevant, because messages are acknowledged upon delivery to the consumer.

    Thus, every message in the queue at that point will be delivered to the consumer immediately and the consumer will be inundated with messages. Assuming each message is small, but takes 5 minutes to process, it is entirely possible that this one consumer will be able to drain the entire queue before any other consumers can attach to it. And since AutoAck is turned on, the broker will forget about these messages immediately after delivery.
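
    As a rough sketch of that situation (Python with the pika client is assumed here; the queue name "work", the handle() function, and the 5-minute sleep are made up for illustration), an auto-ack consumer with no pre-fetch limit looks like this:

        import time
        import pika

        # Problematic setup: auto_ack=True and no basic_qos() call, so the broker
        # pushes every available message at once and treats each one as acknowledged
        # the moment it is delivered.
        connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
        channel = connection.channel()
        channel.queue_declare(queue="work", durable=True)

        def handle(body):
            time.sleep(300)  # stand-in for a small message that takes ~5 minutes to process

        def on_message(ch, method, properties, body):
            handle(body)  # if the process crashes here, the message is already gone from the broker

        channel.basic_consume(queue="work", on_message_callback=on_message, auto_ack=True)
        channel.start_consuming()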

    Obviously this is not a good scenario if you'd like to get those messages processed, because they've left the relative safety of the broker and are now sitting in RAM at the consuming endpoint. Let's say an exception is encountered that crashes the consuming endpoint - poof, all the messages are gone.

    I believe what you're seeing is a case where you have AutoAck set to true. When this happens, the first consumer to connect will drain the whole queue if no other consumers connect before it has a chance to do so. Try setting AutoAck to false and choosing a reasonable pre-fetch count (1, perhaps?), and you won't see this behavior continue. A sketch of that setup follows below.
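
    Here is a minimal sketch of that fix, again assuming Python with the pika client and the same made-up queue name and handler: the consumer asks the broker for one unacknowledged message at a time and only acks after the work succeeds, so unprocessed messages stay on (or return to) the queue for other consumers.

        import pika

        connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
        channel = connection.channel()
        channel.queue_declare(queue="work", durable=True)

        # With manual acks, a pre-fetch count of 1 means the broker will not send
        # this consumer another message until the current one is acknowledged.
        channel.basic_qos(prefetch_count=1)

        def handle(body):
            pass  # placeholder for the real (possibly slow) processing

        def on_message(ch, method, properties, body):
            try:
                handle(body)
                ch.basic_ack(delivery_tag=method.delivery_tag)
            except Exception:
                # Put the message back on the queue so another consumer can take it.
                ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

        channel.basic_consume(queue="work", on_message_callback=on_message, auto_ack=False)
        channel.start_consuming()

    With manual acknowledgements, the broker also requeues any unacknowledged messages automatically if the consuming channel or connection closes, so a crashed consumer no longer takes its in-flight messages with it.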