java · jms · activemq-artemis

Limit the number of active consumed messages


Our flow today is: multiple servers send messages to an ActiveMQ queue, and an MDB (consumer) with maxSession=100 polls this queue. The MDB reads each message and delivers it to a third-party REST service.

Some of the third-party REST services limit how many parallel connections our MDB may open to them at once. Because of maxSession=100, we can break this limit if multiple messages in a row are destined for the same third-party REST service.

I have thought of some possible solutions (good and bad):

Reject the message and put it back on the queue with a scheduled delay

This seems like the most straightforward solution: we consume the message, check whether we have reached the parallel-connection threshold for that REST service, and if so, put the message back on the queue with a scheduled delay. But this has obvious issues, such as reading one message potentially many times, and we do not know what the correct delay is.
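For what it's worth, JMS 2.0 can express the delay directly on the producer, and Artemis supports it; a minimal sketch, assuming `context`, `queue`, and `message` come from the application's existing JMS setup:

```java
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.jms.Queue;

public class DelayedRequeue {

    // Put a message back on the queue, but ask the broker to withhold it
    // from consumers for delayMs milliseconds (JMS 2.0 delivery delay).
    static void requeueWithDelay(JMSContext context, Queue queue,
                                 Message message, long delayMs) {
        context.createProducer()
               .setDeliveryDelay(delayMs)
               .send(queue, message);
    }
}
```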

Multiple Queues

We send to more than a thousand different REST services, so having an MDB and a queue for each one is not practical.

Manually consuming messages after browsing the messages

Instead of automatically consuming messages with our @MessageDriven MDB, we could first browse the queue, check which keys the queued messages have, and only consume a message whose key/messageSelector has not reached the threshold. But this seems to require a lot of coding.
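The per-service bookkeeping such gating needs can be sketched in plain Java with a semaphore per service key; `ServiceThrottle` and the key names here are illustrative, not part of any existing API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

/** Tracks at most `limit` concurrent deliveries per REST-service key. */
public class ServiceThrottle {

    private final Map<String, Semaphore> permits = new ConcurrentHashMap<>();
    private final int limit;

    public ServiceThrottle(int limit) {
        this.limit = limit;
    }

    /** Try to reserve a delivery slot; returns false if the service is at its limit. */
    public boolean tryAcquire(String serviceKey) {
        return permits.computeIfAbsent(serviceKey, k -> new Semaphore(limit))
                      .tryAcquire();
    }

    /** Release the slot once the REST call has finished (success or failure). */
    public void release(String serviceKey) {
        Semaphore s = permits.get(serviceKey);
        if (s != null) {
            s.release();
        }
    }
}
```

Note this only works if all sessions share one throttle instance in the same JVM; with multiple consumer nodes the count would have to live somewhere shared.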

Limit the number of active messages based on the messageSelector

I have no idea if this is possible, but it seems like a good solution if it is.


How is this limitation normally handled?


Solution

  • Typically situations like this are handled with redelivery. ActiveMQ Artemis supports a number of different parameters for redelivery related to delay so that you don't end up hammering your back-end REST services.

    I recommend using container-managed transactions in your MDB and if the back-end REST service refuses your request then simply mark the transaction as rollback-only and return from your onMessage. This will immediately free up the MDB to work on another message which may not be subject to any blockage/throttling.
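Under container-managed transactions that pattern is only a few lines; a sketch assuming Java EE annotations, where `forwardToRestService` is a hypothetical helper wrapping the actual REST call:

```java
import javax.annotation.Resource;
import javax.ejb.MessageDriven;
import javax.ejb.MessageDrivenContext;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven
public class RestForwardingMdb implements MessageListener {

    @Resource
    private MessageDrivenContext ctx;

    @Override
    public void onMessage(Message message) {
        try {
            // Hypothetical helper: returns false when the back-end refuses
            // the request (e.g. HTTP 429 Too Many Requests).
            if (!forwardToRestService(message)) {
                // Roll back so the broker redelivers the message later,
                // subject to the configured redelivery delay.
                ctx.setRollbackOnly();
            }
        } catch (Exception e) {
            ctx.setRollbackOnly();
        }
    }

    private boolean forwardToRestService(Message message) {
        // Placeholder for the real REST client call.
        return true;
    }
}
```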

    The problem of "reading one message potentially many many times" probably won't be a real concern in the long run. You can always limit the number of times a message can be read by using max-delivery-attempts and when this is exceeded you can either drop the message or send it to a dead-letter address. If you choose the latter then you can set up alerts, inspect the message via the web console, and replay it if you want. This kind of pattern is very common with messaging.
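Those limits live in the broker's address-settings; a sketch for broker.xml, where the queue match and dead-letter address name are placeholders:

```xml
<!-- broker.xml: redelivery settings for the MDB's queue -->
<address-settings>
   <address-setting match="myQueue">
      <redelivery-delay>5000</redelivery-delay>                        <!-- first retry after 5s -->
      <redelivery-delay-multiplier>2.0</redelivery-delay-multiplier>   <!-- exponential back-off -->
      <max-redelivery-delay>60000</max-redelivery-delay>               <!-- cap back-off at 60s -->
      <max-delivery-attempts>10</max-delivery-attempts>                <!-- then dead-letter -->
      <dead-letter-address>DLA</dead-letter-address>
   </address-setting>
</address-settings>
```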

    Redelivery is conceptually simple, requires very little code, and is generally effective. If this solution doesn't work you can always investigate more complex solutions later. There's no benefit to adding complexity until you know you really need it.

    Alternatively, it's possible that you could use a combination of multiple MDBs with selectors and rate-limited flow-control to target particular kinds of messages that are subject to throttling and then another MDB that catches all the rest. I don't know if this would fit with your use-case, but it's the next most logical option in my opinion.
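That split can be expressed with standard MDB activation config properties; in this sketch the `serviceKey` message property and the destination name are assumptions about how messages might be tagged:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// One MDB dedicated to the throttled service, with a deliberately small
// session pool acting as the concurrency limit.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "myQueue"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "messageSelector",
                              propertyValue = "serviceKey = 'throttledService'"),
    @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "5")
})
public class ThrottledServiceMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // Forward to the rate-limited REST service.
    }
}
```

A companion MDB with the selector `serviceKey <> 'throttledService'` and a larger maxSession would then catch all remaining traffic.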