The scenario (I've simplified things):
This fairly standard pattern is working fine.
The problem: if a user starts 10 jobs in the same minute, and only 10 worker applications are up at that time of day, that end user effectively monopolizes all of the available compute time.
The question: How can I make sure only one job per end user is processed at any given time? (Bonus: some end users, admins for example, must not be throttled.)
Also, I do not want the front-end application to block end users from starting concurrent jobs. I just want each end user's concurrent jobs to wait and be processed one at a time.
The solution?: Should I dynamically create one auto-delete, exclusive queue per end user? If so, how can I tell the worker applications to start consuming from this queue? And how can I ensure that one (and only one) worker consumes from it?
Such a feature is not provided natively by RabbitMQ, but you could implement it in the following way. You will have to use polling, though, which is less efficient than publish/subscribe. You will also have to use ZooKeeper for coordination between the different workers.
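For the coordination part, the idea is that a worker only takes on a job if it can grab a per-user lock in ZooKeeper. A minimal sketch, assuming the Python `kazoo` client and placeholder host, lock path and identifiers:

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")  # placeholder ZooKeeper address
    zk.start()

    def try_acquire_user_lock(user_id, worker_id):
        """Try to grab a non-blocking, per-user lock.

        Returns the lock on success (the caller releases it once the job is
        done), or None if another worker is already processing a job for
        this user.
        """
        lock = zk.Lock("/job-locks/%s" % user_id, identifier=worker_id)
        if lock.acquire(blocking=False):
            return lock
        return None

The lock path layout and helper name are just illustrative; any ZooKeeper recipe that gives you a mutex per end user would do.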
You will create two queues: one high-priority queue (for the admin jobs) and one low-priority queue (for the normal user jobs). The 10 workers will retrieve messages from both queues. Each worker will run an infinite loop (ideally sleeping for a short interval when both queues are empty), in which it attempts to retrieve a message from each queue alternately:
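As a rough sketch of such a loop, assuming the Python `pika` client and placeholder queue names, host and sleep interval:

    import time
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="jobs.high", durable=True)  # admin jobs
    channel.queue_declare(queue="jobs.low", durable=True)   # normal user jobs

    def process(body):
        pass  # placeholder for the actual job execution

    while True:
        idle = True
        # Poll the two queues alternately, high priority first.
        for queue in ("jobs.high", "jobs.low"):
            method, header, body = channel.basic_get(queue=queue, auto_ack=False)
            if method is not None:
                idle = False
                process(body)
                channel.basic_ack(method.delivery_tag)
        if idle:
            time.sleep(1)  # both queues were empty; back off before polling again

This is only the polling skeleton; the per-user check (via the ZooKeeper lock above) would be added before `process(body)` is called, re-queueing the message with `basic_nack` if the lock cannot be acquired.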