
python-rq with Flask + uWSGI + Nginx: do I need more uWSGI processes or more rq workers?


I have a server with the above configuration and I am processing long-running tasks, but I have to keep the user updated about the task state, which I do through Firebase. To respond to the client immediately, I enqueue the job in Redis using python-rq.

I am using Flask with uWSGI and Nginx. In the uWSGI config file there is a field for the number of processes. My question is: do I need to start multiple uWSGI processes, or more rq workers?

Does starting more uWSGI workers also create more rq workers?

How would scaling work? My server has 1 vCPU and 2 GB of RAM, and I use AWS auto scaling in production. Should I run more uWSGI workers, and how many rq workers, with only one queue?

I am starting the worker independently. The Flask app imports the Redis connection and enqueues the job.

my startup script

my worker code
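For context, a minimal sketch of the pattern described above (a Flask view that enqueues a job with python-rq and returns immediately). The route, queue name, and the `long_task` module are assumptions for illustration, not the asker's actual code; it assumes a local Redis on the default port.

```python
import redis
from flask import Flask, jsonify
from rq import Queue

from tasks import long_task  # hypothetical module holding the long-running job

app = Flask(__name__)

# One Redis connection per uWSGI worker process; rq queues jobs through it
redis_conn = redis.Redis(host="localhost", port=6379)
q = Queue("default", connection=redis_conn)

@app.route("/process", methods=["POST"])
def process():
    # Enqueue and return immediately; a separate rq worker executes the job
    # and can push progress updates (e.g. to Firebase) from inside long_task.
    job = q.enqueue(long_task, job_timeout=600)
    return jsonify({"job_id": job.get_id()}), 202
```

The view never blocks on the task itself; the client gets the job id right away and the worker process does the heavy lifting.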


Solution

  • It depends on how you run the rq workers. There are two cases:

    1) Running rq workers from inside the app. In that case, increasing the number of workers in the uWSGI settings will automatically spawn num_rq_workers_in_app_conf * num_app_workers_in_uwsgi_conf workers in total.

    2) Running rq workers outside the application, e.g. under supervisord, where you can control the number of rq workers manually, independently of the app.

    In my opinion, running rq workers under supervisord is the better option. It makes debugging each worker easier, and it avoids an issue I've encountered with option 1: workers spawned that way would, after a few weeks, unregister themselves from rq (i.e. appear dead to rq) even though the processes were still running in the background.
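For option 2, a supervisord program section along these lines is typical. The file path, queue name, Redis URL, and worker count are assumptions to adapt, not prescribed values:

```ini
; Illustrative config, e.g. /etc/supervisor/conf.d/rq-worker.conf
[program:rq-worker]
command=rq worker default --url redis://localhost:6379
numprocs=2                                     ; rq worker count, tuned independently of uWSGI
process_name=%(program_name)s-%(process_num)s  ; required when numprocs > 1
autostart=true
autorestart=true                               ; restart workers that die or unregister
stopsignal=TERM                                ; rq finishes the current job on SIGTERM
```

Because the worker count lives here rather than in the uWSGI config, you can scale the web tier and the worker tier separately, which matches the 1 vCPU / 2 GB constraint better than multiplying workers per app process.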