python | flask | queue | python-multiprocessing

Managing calls to rate-limited API from Flask application


I have a Flask application that (among other things) must interact with a rate-limited API (i.e., cannot make more than x requests to the API within some given unit of time). However, the demand the Flask application makes on the API is uneven -- sometimes demand far exceeds what the API will allow, sometimes there's no demand for minutes or hours at a time.

It's fine for calls to the API to occur asynchronously -- there's no need for the Flask application to block and wait for a response.

So I'm wondering how best to implement this.

I'm thinking the best approach would be to have a separate process with a FIFO queue that makes calls at some fixed interval (less than the API's rate limit) -- kind of like the leaky-bucket algorithm.

from time import sleep
from multiprocessing import Queue

q = Queue()

...

# This runs all the time
while True:
    sleep(SOME_TIME)
    if not q.empty():
        item = q.get()
        # ... make the API call with item

But I'm not sure how to set this up and have the Flask application interact with the queue (just pushing new requests on as they occur).

It also seems that Celery (or similar) is overkill.

Should I be looking into python-daemon or creating a subprocess with multiprocessing.Queue? What's the best way to approach this?


Solution

  • I think that Celery is the best solution for your problem. This is exactly what Celery does, and it is widely adopted by the Python community to solve issues like yours.

    It is not overkill, as it's not that hard to set up and configure, and you can read about it in Flask's documentation itself.

    That's about 30 lines of code :)
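A minimal sketch of what that setup might look like -- the broker URL, app name, and `call_api` body are all hypothetical, but `rate_limit` is Celery's built-in per-worker throttle, which addresses the rate-limited API directly:

```python
from celery import Celery

# Hypothetical broker URL; any broker Celery supports (Redis, RabbitMQ, ...) works.
celery = Celery("app", broker="redis://localhost:6379/0")

@celery.task(rate_limit="10/m")  # Celery spaces executions: at most 10 per minute per worker
def call_api(payload):
    # placeholder for the real request to the rate-limited API
    ...

# From a Flask view, queue a call without blocking the request:
# call_api.delay({"some": "payload"})
```

Run a worker alongside the Flask app (e.g. `celery -A app worker`), and bursts of demand simply accumulate in the broker until the rate limit lets them through.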