I have a slow query (> 1 s). Here is the result of EXPLAIN ANALYZE on that query:
Nested Loop Left Join (cost=0.42..32275.13 rows=36 width=257) (actual time=549.409..1106.044 rows=2 loops=1)
Join Filter: (answer.lt_surveyee_survey_id = lt_surveyee_survey.id)
-> Index Scan using lt_surveyee_survey_id_key on lt_surveyee_survey (cost=0.42..8.44 rows=1 width=64) (actual time=0.108..0.111 rows=1 loops=1)
Index Cond: (id = 'xxxxx'::citext)
-> Seq Scan on answer (cost=0.00..32266.24 rows=36 width=230) (actual time=549.285..1105.910 rows=2 loops=1)
Filter: (lt_surveyee_survey_id = 'xxxxx'::citext)
Rows Removed by Filter: 825315
Planning time: 0.592 ms
Execution time: 1106.124 ms
The xxxxx parts of the output are UUID-like values. I did not build that database, so I have no clue right now. Here is the query:
EXPLAIN ANALYZE SELECT
lt_surveyee_survey.id
-- +Some other fields
FROM lt_surveyee_survey
LEFT JOIN answer ON answer.lt_surveyee_survey_id = lt_surveyee_survey.id
WHERE lt_surveyee_survey.id = 'xxxxx';
Your JOIN is causing the performance drop, according to the EXPLAIN ANALYZE output. You can see that there are two different lookups: the Index Scan on lt_surveyee_survey took about a tenth of a millisecond, while the scan on answer took over a second. The difference is indicated by the beginning of those lines: Index Scan versus Seq Scan, where Seq is short for sequential, meaning the DBMS had to check every row in answer to process the JOIN (note Rows Removed by Filter: 825315). The usual reason for a sequential scan here is a missing index on the column being filtered (answer.lt_surveyee_survey_id in your case).
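Since you didn't build the database, it's worth confirming that the index really is missing. One way is to query pg_indexes, a standard PostgreSQL catalog view that lists each index on a table:

-- List all indexes defined on the answer table
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'answer';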
Adding an index should solve the performance issue.
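A minimal sketch, using the table and column names from your query (the index name is just a suggestion):

-- B-tree index to support equality lookups on the join/filter column
CREATE INDEX answer_lt_surveyee_survey_id_idx
    ON answer (lt_surveyee_survey_id);

If the table is in active use, CREATE INDEX CONCURRENTLY avoids blocking writes while the index is built. Re-run the EXPLAIN ANALYZE afterwards; the Seq Scan on answer should be replaced by an index-based scan and the execution time should drop accordingly.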