Tags: database, pagination, complex-event-processing

What pagination schemes can handle rapidly-changing content lists?


Pagination is hard when your content rankings can change quickly, and even harder when those rankings differ per-user. (Let's treat infinite scroll as a type of pagination where the links are invisible.) There are two hard problems: newly-added content at the top, and reranked content.

Let's forget about newly-added content, and accept that you'll have to refresh page 1 to see it. Let's also pretend we're doing a pure ORDER BY position; if you're ordering by something else, you might have to use window functions. Each page shows 4 animals. The table starts out:

+----+----------+-----------+
| id | position |  animal   |
+----+----------+-----------+
|  1 |        1 | Alpacas   |
|  2 |        2 | Bats      |
|  3 |        3 | Cows      |
|  4 |        4 | Dogs      |
|  5 |        5 | Elephants |
|  6 |        6 | Foxes     |
|  7 |        7 | Giraffes  |
|  8 |        8 | Horses    |
+----+----------+-----------+

After we fetch page 1, and before we fetch page 2, a lot of items move around. The DB is now:

+----+----------+-----------+
| id | position |  animal   |
+----+----------+-----------+
|  4 |        1 | Dogs      |
|  2 |        2 | Bats      |
|  1 |        3 | Alpacas   |
|  5 |        4 | Elephants |
|  6 |        5 | Foxes     |
|  7 |        6 | Giraffes  |
|  3 |        7 | Cows      |
|  8 |        8 | Horses    |
+----+----------+-----------+

There are three common approaches:

Offset/limit approach

This is the typical naive approach; in Rails, it's how will_paginate and Kaminari work. If I want to fetch page 2, I'll do

SELECT * FROM animals
ORDER BY animals.position
OFFSET ((:page_num - 1) * :page_size) 
LIMIT :page_size;

which gets rows 5-8. I'll never see Elephants, and I'll see Cows twice.
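
For concreteness, with :page_num = 2 and :page_size = 4 against the reshuffled table above, that query works out to:

SELECT * FROM animals
ORDER BY animals.position
OFFSET 4
LIMIT 4;

-- Returns Foxes, Giraffes, Cows, Horses (new positions 5-8):
-- Elephants slipped into page 1's range and is never seen,
-- while Cows slipped out of it and shows up a second time.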

Last seen ID approach

Reddit takes a different approach. Instead of calculating the first row based on page size, the client tracks the ID of the last item you've seen, like a bookmark. When you hit "next", they start looking from that bookmark onward:

SELECT * FROM animals
WHERE position > (
  SELECT position FROM animals 
  WHERE id = :last_seen_id
) 
ORDER BY position
LIMIT :page_size;

In some cases, this works better than page/offset. But in our case, Dogs, the last-seen post, zoomed right to #1. So the client sends up ?last_seen_id=4, and my page 2 is Bats, Alpacas, Elephants and Foxes. I haven't missed any animals, but I saw Bats and Alpacas twice.
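
Plugging in :last_seen_id = 4 and :page_size = 4 against the reshuffled table, the subquery finds Dogs at its new position 1, so the query reduces to:

SELECT * FROM animals
WHERE position > 1  -- Dogs' current position
ORDER BY position
LIMIT 4;

-- Returns Bats, Alpacas, Elephants, Foxes: nothing is skipped,
-- but Bats and Alpacas were already on page 1.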

Server-side state approach

HackerNews (and our site, right now) solves this with server-side continuations: they store the entire result set for you (or at least several pages in advance?), and the "More" link references that continuation. When I fetch page 2, I ask for "page 2 of my original query". It uses the same offset/limit calculation, but because it runs against the stored result of the original query, I simply don't care that things have since moved around. I see Elephants, Foxes, Giraffes, and Horses. No dups, no missed items.

The downside is that we have to store a lot of state on the server. On HN, that's stored in RAM, and in reality those continuations often expire before you can press the "More" button, forcing you to go all the way back to page 1 to find a valid link. In most applications, you can store that in memcached, or even in the database itself (using your own table, or in Oracle or PostgreSQL, using holdable cursors). Depending on your application, there might be a performance hit; in PostgreSQL, at least, you have to find a way to hit the right database connection again, which requires a lot of sticky-state or some clever back-end routing.
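
For the database-backed variant, here's a minimal sketch using a PostgreSQL holdable cursor; it assumes you can route later "next page" requests back to the same connection, per the caveat above, and the cursor name is just illustrative:

BEGIN;
DECLARE animal_pages CURSOR WITH HOLD FOR
  SELECT * FROM animals ORDER BY position;
COMMIT;  -- WITH HOLD materializes the result set, so it survives the commit

FETCH 4 FROM animal_pages;  -- page 1: Alpacas, Bats, Cows, Dogs
-- rows get reranked in the meantime; the cursor still sees the original ordering
FETCH 4 FROM animal_pages;  -- page 2: Elephants, Foxes, Giraffes, Horses

CLOSE animal_pages;  -- or let it go away when the session ends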

Are these the only three possible approaches? If not, are there computer-science concepts that would give me Google juice to read about this? Are there ways to approximate the continuation approach without storing the entire result set? Long term, there are complex event-streaming/point-in-time systems, where "the result set as of the moment I fetched page 1" is forever derivable. Short of that... ?


Solution

  • We're going with the server-side state approach for now, caching the entire result on the first query so we always return a consistent list. This will work as long as our query already returns all rows; eventually we'll need to use a nearest-neighbor approach, and then this won't work.

    But I think there's a fourth possibility, which scales very well, as long as:

    1. You don't need a guarantee of no duplicates, only a high likelihood
    2. You're okay with missing some content during scrolls, as long as you avoid duplicates

    The solution is a variant of the "last seen ID" solution: Have the client keep not one, but 5 or 10 or 20 bookmarks - few enough that you can store them efficiently. The query ends up looking like:

    SELECT * FROM posts
    WHERE position > (SELECT position FROM posts WHERE id = :bookmark_1)
    AND position > (SELECT position FROM posts WHERE id = :bookmark_2)
    ...
    ORDER BY position
    LIMIT :page_size;
    

    As the number of bookmarks grows, the odds rapidly diminish that you are (a) starting at some point past all n bookmarks but (b) seeing duplicate content anyway because they were all reranked.
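
    To make that concrete against the animals table from the question (a toy version: assume the client kept just two bookmarks, the ids of Cows and Dogs from page 1):

    SELECT * FROM animals
    WHERE position > (SELECT position FROM animals WHERE id = 3)  -- Cows, now at position 7
    AND position > (SELECT position FROM animals WHERE id = 4)    -- Dogs, now at position 1
    ORDER BY position
    LIMIT 4;

    -- Returns only Horses: no duplicates, but Elephants, Foxes and Giraffes
    -- are skipped because Cows was reranked past them, which is exactly the
    -- trade-off point 2 above accepts.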

    If there are holes, or better answers in the future, I'll happily unaccept this answer.