django, caching, django-rest-framework, throttling

Django Rest Framework throttling rates with cached requests


I'm currently working on an API built with Django REST Framework, and I have to set the throttling rates on a per-User-Group basis.
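
(For context, I expect the per-group rates to end up in a custom throttle roughly like the sketch below; the class name and the group-to-rate mapping are placeholders, not my actual code.)

```python
from rest_framework.throttling import SimpleRateThrottle

class GroupRateThrottle(SimpleRateThrottle):
    """Sketch of a throttle that picks its rate from the user's group."""
    scope = 'group'
    rate = '60/min'  # fallback for anonymous users or unmatched groups
    GROUP_RATES = {'premium': '1000/min', 'basic': '100/min'}  # placeholder groups

    def get_cache_key(self, request, view):
        if request.user and request.user.is_authenticated:
            ident = request.user.pk
        else:
            ident = self.get_ident(request)
        return self.cache_format % {'scope': self.scope, 'ident': ident}

    def allow_request(self, request, view):
        # Swap in the rate for the user's first matching group, then let
        # SimpleRateThrottle do its normal request counting in the cache.
        if request.user and request.user.is_authenticated:
            for name in request.user.groups.values_list('name', flat=True):
                if name in self.GROUP_RATES:
                    self.rate = self.GROUP_RATES[name]
                    break
        self.num_requests, self.duration = self.parse_rate(self.rate)
        return super().allow_request(request, view)
```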

We are currently using memcached with its default configuration as the cache backend, with per-site caching enabled.

While running some simple tests with AnonRateThrottle and UserRateThrottle, I noticed that if a request the user makes is already cached, it doesn't count toward the throttle rates.

The documentation says that throttling is determined before running the main body of the view. I guess that because the request is being served from the cache, the view is never executed, so the throttle is not taken into account.
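
For reference, my throttle configuration for these tests is roughly this (the rates are just throwaway values I used while testing):

```python
# settings.py (test values only)
REST_FRAMEWORK = {
    'DEFAULT_THROTTLE_CLASSES': (
        'rest_framework.throttling.AnonRateThrottle',
        'rest_framework.throttling.UserRateThrottle',
    ),
    'DEFAULT_THROTTLE_RATES': {
        'anon': '10/minute',
        'user': '20/minute',
    },
}
```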

Basically, I want to ask:

  1. Is this what's really happening?
  2. Could there be a way to count the cached requests for throttling purposes? (Pros and cons, if you can.)

One thing I thought of was caching only the database/ORM lookups, so that every request still executes the corresponding view body.
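
Something along these lines, using Django's low-level cache API inside the view so the view body, and therefore the throttle check, still runs on every request (the model, serializer and cache key are made up for the example):

```python
from django.core.cache import cache
from rest_framework import generics

from .models import Article                  # hypothetical model
from .serializers import ArticleSerializer   # hypothetical serializer

class ArticleList(generics.ListAPIView):
    serializer_class = ArticleSerializer

    def get_queryset(self):
        # Only the ORM lookup is cached; DRF still dispatches the view,
        # so the throttle check runs and the request is counted.
        articles = cache.get('article_list')
        if articles is None:
            articles = list(Article.objects.all())
            cache.set('article_list', articles, 60)  # cache for 60 seconds
        return articles
```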

The number of requests exceeding the throttling rate is probably not that large, and because they're cached they don't hurt the performance of the service, so basically I just want to understand how the service behaves in this case.


Solution

  • It depends on how you do your caching.

    If you mean the default Django cache middleware (UpdateCacheMiddleware, FetchFromCacheMiddleware), the requests never reach Django REST Framework and so are never counted against any throttle. So yes, this is what is really happening; see the first sketch below.

    What you could do instead is cache the responses in your views. Since your view methods (classes) still get called by DRF, throttling will be applied. drf-extensions has an example of this; it caches your data before it gets encoded into your output format (JSON, YAML, XML, ...). See the second sketch below.

    In general, you should only cache when you know something is slow. Django's cache middleware invalidates only on a timeout, and real cache invalidation can be hard.
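
With the per-site cache, a hit is answered by FetchFromCacheMiddleware before DRF's dispatch (and its throttle checks) ever runs. A minimal sketch of that setup, assuming a recent Django (older versions use MIDDLEWARE_CLASSES) and arbitrary timeout values:

```python
# settings.py -- Django per-site cache (sketch)
MIDDLEWARE = [
    'django.middleware.cache.UpdateCacheMiddleware',     # must come first
    # ... the rest of your middleware ...
    'django.middleware.cache.FetchFromCacheMiddleware',  # must come last
]

CACHE_MIDDLEWARE_SECONDS = 600   # arbitrary timeout for cached pages
CACHE_MIDDLEWARE_KEY_PREFIX = ''
```

And a minimal sketch of the per-view approach with drf-extensions' cache_response decorator (the view and its data are placeholders; check the package docs for key functions and other options). DRF runs authentication, permission and throttle checks before the handler is called, so the request is counted even when the cached response is returned:

```python
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework_extensions.cache.decorators import cache_response

class CityView(APIView):
    @cache_response(60)  # cache the response data for 60 seconds
    def get(self, request, *args, **kwargs):
        # Throttling has already been checked by APIView.dispatch() at
        # this point, so cached responses still count against the rate.
        cities = ['London', 'Paris']  # placeholder data
        return Response(cities)
```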