Tags: django, django-templates, wagtail, wagtail-streamfield

Conditional Caching when Looping Through Blocks in Django


On our site, we loop through Wagtail blocks on many of our webpages. Some of those blocks contain web forms or dynamic/personalized data, and therefore cannot be cached. We want to give editors maximum flexibility to update webpages without touching the code, while still providing good performance and load times.

In the code below, the field page.body is a Wagtail StreamField in which editors can add any number of content blocks, in any order.
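For context, a simplified sketch of the page model (names are illustrative; disable_cache and uuid are custom attributes we expose on the blocks, not built-in Wagtail properties):

from wagtail import blocks
from wagtail.fields import StreamField
from wagtail.models import Page


class StandardPage(Page):
    # Editors can add any number of these blocks, in any order.
    body = StreamField(
        [
            ("paragraph", blocks.RichTextBlock()),
            ("contact_form", blocks.StructBlock()),  # stand-in for a dynamic form block
        ],
        use_json_field=True,  # needed on recent Wagtail versions; imports assume Wagtail 3+
    )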

Here is the desired solution, with simplified code, which fails:

{% load cache wagtailcore_tags %}
<!-- Start the page cache -->
{% cache timeout url_path %}

{% for block in page.body %}

  <!-- disable the cache before an excluded block -->
  {% if block.disable_cache %}
     {% endcache %} <!-- template engine throws an error at this "endcache" tag -->
  {% endif %}

  {% include_block block %}

  <!-- reenable the cache after an excluded block is rendered -->
  {% if block.disable_cache %}
    {% cache timeout block.uuid %} 
  {% endif %}

{% endfor %}

<!-- end the page cache -->
{% endcache %}

Let's say a page has 15 blocks, with a single block in the middle containing a form that should not be cached. In this case, we would have 1 rendered block in between 2 cached sets of blocks, which results in 2 calls to the cache. The problem is that the template parser won't accept a cache tag that is opened or closed inside a conditional, so this solution fails: the template engine raises an error saying it expected an "endif" tag at the point where the first "endcache" appears.

The alternative would be to cache each block individually like this, which works:

{% load cache wagtailcore_tags %}

{% for block in page.body %}

  <!-- render or cache each block individually -->
  {% if block.disable_cache %}
     {% include_block block %}
  {% else %}
    {% cache timeout block.uuid %}
      {% include_block block %}
    {% endcache %}
  {% endif %}

{% endfor %}

While this second solution works, it results in 1 block being rendered and 14 calls to the cache, one for each individual block. This is obviously much less performant than the first (failing) solution, which would theoretically render 1 block at runtime and make only 2 calls to the cache.

Does anyone have thoughts or experience with reducing the number of cache calls when using the fragment cache with loops or conditionals in Django? Or a potential solution?


Solution

  • Have a look at adv-cache-tag.

    It has a {% nocache %} ... {% endnocache %} Django block tag which you can wrap around your dynamic content.

    It works very well, but bear in mind that when you jump out of the current cached block like this, you don't have access to your local template variables. If you used it inside a {% with %} block, for example, the 'with' variable would not be available inside the nocache block.

    Other than that, you can use all the same syntax as Django's template fragment caching.
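
    As a rough, untested sketch, the page template from the question could then keep a single cached fragment around the whole loop. The adv_cache name in the {% load %} tag is a placeholder (it depends on how the CacheTag library is registered in your project), and this assumes one un-cacheable block per page; because of the local-variable caveat above, the {% nocache %} section looks the dynamic block up again from page, which is still in the context on a cache hit:

    {% load adv_cache wagtailcore_tags %}

    {% cache timeout url_path %}
      {% for block in page.body %}
        {% if block.disable_cache %}
          {% nocache %}
            <!-- Re-rendered on every request. On a cache hit the loop variable
                 "block" no longer exists, so re-derive the dynamic block from
                 page.body, which is available in the request context. -->
            {% for dynamic in page.body %}
              {% if dynamic.disable_cache %}{% include_block dynamic %}{% endif %}
            {% endfor %}
          {% endnocache %}
        {% else %}
          {% include_block block %}
        {% endif %}
      {% endfor %}
    {% endcache %}

    With one dynamic block this keeps the whole page in a single cache entry; if several blocks were un-cacheable, each {% nocache %} section would re-render all of them, so you would need an extra way (the block's uuid, for example) to pick the right one at each position.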

    I would install it from the erudit-django-adv-cache-tag fork. The original project still imports django.utils.http.urlquote, which was deprecated in Django 3.x and has since been removed, and the developer doesn't respond to pull requests.

    Otherwise, you could install the original as an app, and update the following import in tags.py from:

    from django.utils.http import urlquote
    

    to

    from urllib.parse import quote
    

    and update the hash_args method of CacheTag to use quote:

    class CacheTag(object, metaclass=CacheTagMetaClass):
        ...

        def hash_args(self):
            """
            Take all the arguments passed after the fragment name and return a
            hashed version which will be used in the cache key
            """
            return hashlib.md5(
                force_bytes(':'.join([quote(force_bytes(var)) for var in self.vary_on]))
            ).hexdigest()