Tags: sql, postgresql, indexing, query-performance

Why is this pg query so slow? How can I make it faster?


This is the query:

(SELECT * 
FROM url 
WHERE domain = 'youtube.com' 
  AND timestamp > NOW() - INTERVAL '24 hours' 
ORDER BY likes DESC LIMIT 10) 
UNION 
(SELECT * 
FROM url 
WHERE domain = 'twitter.com' 
  AND timestamp > NOW() - INTERVAL '24 hours' 
ORDER BY likes DESC LIMIT 10) 
UNION 
(SELECT * 
FROM url 
WHERE domain = 'reddit.com' 
  AND timestamp > NOW() - INTERVAL '24 hours' 
ORDER BY likes DESC LIMIT 10) 
ORDER BY timestamp DESC

This is the EXPLAIN ANALYZE output:

Sort  (cost=20460.17..20460.25 rows=30 width=497) (actual time=5161.013..5161.015 rows=30 loops=1)
  Sort Key: url."timestamp" DESC
  Sort Method: quicksort  Memory: 53kB
  ->  HashAggregate  (cost=20459.14..20459.44 rows=30 width=497) (actual time=5160.709..5160.738 rows=30 loops=1)
        Group Key: url.url, url.domain, url.title, url.views, url.likes, url.dislikes, url.comments, url.shares, url.links_to_url, url."user", url.thumbnail_url, url.is_collection, url.image_url, url.video_url, url.audio_url, url.width, url.height, url.body, url.source, url."timestamp", url.created_at, url.updated_at, url.duration_seconds, url.tags, url.channel
        ->  Append  (cost=0.43..20457.26 rows=30 width=497) (actual time=0.514..5160.073 rows=30 loops=1)
              ->  Limit  (cost=0.43..18150.71 rows=10 width=1177) (actual time=0.513..28.599 rows=10 loops=1)
                    ->  Index Scan Backward using "url-likes-index" on url  (cost=0.43..816763.00 rows=450 width=1177) (actual time=0.511..28.594 rows=10 loops=1)
                          Filter: (((domain)::text = 'youtube.com'::text) AND ("timestamp" > (now() - '24:00:00'::interval)))
                          Rows Removed by Filter: 11106
              ->  Limit  (cost=0.43..859.82 rows=10 width=1177) (actual time=2330.390..5033.214 rows=10 loops=1)
                    ->  Index Scan Backward using "url-likes-index" on url url_1  (cost=0.43..816763.00 rows=9504 width=1177) (actual time=2330.388..5033.200 rows=10 loops=1)
                          Filter: (((domain)::text = 'twitter.com'::text) AND ("timestamp" > (now() - '24:00:00'::interval)))
                          Rows Removed by Filter: 1667422
              ->  Limit  (cost=0.43..1446.28 rows=10 width=1177) (actual time=64.748..98.228 rows=10 loops=1)
                    ->  Index Scan Backward using "url-likes-index" on url url_2  (cost=0.43..816763.00 rows=5649 width=1177) (actual time=64.745..98.220 rows=10 loops=1)
                          Filter: (((domain)::text = 'reddit.com'::text) AND ("timestamp" > (now() - '24:00:00'::interval)))
                          Rows Removed by Filter: 26739
Planning Time: 3.006 ms
Execution Time: 5162.201 ms

And if you're interested in running it yourself, go to this link.

I can see that about 1.7 million twitter rows are being filtered out, but I'm not sure how to avoid that. I also have an index on timestamp, and I was hoping the planner would use it instead of sorting by likes and scanning almost the whole table. Does this mean I need a composite index? Is there a way to make the planner use both existing indexes rather than creating yet another one?
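
For context, the two indexes in play presumably look something like this ("url-likes-index" is the name shown in the plan; the timestamp index name and exact definitions are reconstructions, not taken from the original schema):

CREATE INDEX "url-likes-index" ON url (likes);
CREATE INDEX "url-timestamp-index" ON url ("timestamp");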

P.S. I think I goofed by making the url itself the primary key; it makes the indexes needlessly large.
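
(If you do switch to a surrogate key, a minimal sketch would look like the following. This assumes url_pkey, Postgres's default primary-key constraint name, and ignores any foreign keys pointing at url.url, which would need migrating too:

ALTER TABLE url ADD COLUMN id bigint GENERATED ALWAYS AS IDENTITY;
ALTER TABLE url DROP CONSTRAINT url_pkey;  -- assumed default PK constraint name
ALTER TABLE url ADD PRIMARY KEY (id);
ALTER TABLE url ADD CONSTRAINT url_url_key UNIQUE (url);  -- keep urls unique

)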


Solution

  • I would suggest writing the query like this:

    SELECT ufiltered.*
    FROM (SELECT url.*,
                 ROW_NUMBER() OVER (PARTITION BY domain ORDER BY likes DESC) AS seqnum
          FROM url
          WHERE domain IN ('youtube.com', 'twitter.com', 'reddit.com')
            AND timestamp > NOW() - INTERVAL '24 hours'
         ) AS ufiltered
    WHERE seqnum <= 10
    ORDER BY timestamp DESC
    

    For this, I would recommend an index on url(timestamp, domain, likes).
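
    A sketch of that index (the name here is just illustrative):

    CREATE INDEX url_timestamp_domain_likes_idx
        ON url ("timestamp", domain, likes);

    With timestamp as the leading column, the planner can confine the scan to rows from the last 24 hours instead of walking the whole likes index backward and discarding millions of non-matching rows; domain and likes then ride along in the index entries for the remaining filter and the per-domain ordering.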