We are trying to use python-rq to support the back end of our web application, but pushing new jobs takes a very long time, up to 12 seconds.
The performance hit happens when executing the enqueue_call function, particularly as the number of worker processes connected to the system grows (over 200).
The system works as follows: the front end pushes jobs to the task queue, using the enqueue_call function to pass in job options (such as timeout and ttl) in addition to the actual arguments for the function to be executed.
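For context, the enqueue side looks roughly like this (process_upload and its arguments are hypothetical placeholders; timeout and ttl are the RQ job options mentioned above):

```python
from redis import Redis
from rq import Queue

# Hypothetical task function; stands in for whatever the job executes.
def process_upload(upload_id, user_id):
    ...

redis_conn = Redis(host='localhost', port=6379)
q = Queue('default', connection=redis_conn)

# enqueue_call lets us pass job options (timeout, ttl) separately
# from the arguments destined for the task function itself.
job = q.enqueue_call(
    func=process_upload,
    args=(42, 7),
    timeout=300,  # seconds the job may run before being killed
    ttl=600,      # seconds the job may wait in the queue before expiring
)
```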
The workers follow the pattern provided in the documentation, executing the Worker.work() infinite loop to listen on the queues.
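The worker side is essentially the standard pattern from the RQ docs, something like:

```python
from redis import Redis
from rq import Queue, Worker

redis_conn = Redis(host='localhost', port=6379)

# Each worker process blocks in work(), popping jobs off the listed
# queues; this is the loop every one of the 200+ workers runs.
worker = Worker([Queue('default', connection=redis_conn)], connection=redis_conn)
worker.work()
```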
About the infrastructure:

When we run redis-benchmark on the server hosting the task queue, we get results over 20,000 requests/s on average for most benchmarks.

How can we improve the push performance for new jobs in a situation like this? Is there a better pattern that we should use?
12 seconds? This is insane. Have you considered using Celery?
I've never used python-rq, but from what I see in the docs it is not really suited to large numbers of workers.
A Redis queue is usually based on the BLPOP command, which can work with multiple clients, but who knows how many it can really handle blocking on a single key.
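For illustration, the BLPOP pattern I mean looks roughly like this in plain redis-py (handle is a hypothetical stand-in for job execution, and the key name assumes RQ's rq:queue:<name> convention):

```python
import redis

def handle(payload):
    """Hypothetical job handler; stands in for real job execution."""
    print(payload)

r = redis.Redis(host='localhost', port=6379)

# Many workers can block on the same list key with BLPOP; Redis hands
# each pushed item to exactly one of them, but every worker keeps an
# open blocking connection pinned to that single key.
while True:
    key, payload = r.blpop('rq:queue:default', timeout=0)  # timeout=0 blocks forever
    handle(payload)
```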
So I suggest either switching to Celery or writing your own task distributor for python-rq, which won't be easier than switching.
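If you do switch, a minimal Celery setup reusing the same Redis server as the broker might look like this (process_upload is again a hypothetical task):

```python
from celery import Celery

# Reuse the existing Redis server as the message broker.
app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def process_upload(upload_id, user_id):
    ...

# Enqueueing is a single non-blocking call from the web front end.
process_upload.delay(42, 7)
```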