
Limit slows down my postgres query


Hi, I have a simple query on a single table that runs pretty fast, but I want to page my results, and the LIMIT slows the SELECT down dramatically. The table contains about 80 million rows. I'm on Postgres 9.2.

Without LIMIT it takes 330 ms and returns 2100 rows:

EXPLAIN SELECT * from interval where username='1228321f131084766f3b0c6e40bc5edc41d4677e' order by time desc

Sort  (cost=156599.71..156622.43 rows=45438 width=108)
  Sort Key: "time"
  ->  Bitmap Heap Scan on "interval"  (cost=1608.05..155896.71 rows=45438 width=108)
        Recheck Cond: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)
        ->  Bitmap Index Scan on interval_username  (cost=0.00..1605.77 rows=45438 width=0)
              Index Cond: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)

EXPLAIN ANALYZE SELECT * from interval where 
username='1228321f131084766f3b0c6e40bc5edc41d4677e' order by time desc

Sort  (cost=156599.71..156622.43 rows=45438 width=108) (actual time=1.734..1.887 rows=2131 loops=1)
  Sort Key: id
  Sort Method: quicksort  Memory: 396kB
  ->  Bitmap Heap Scan on "interval"  (cost=1608.05..155896.71 rows=45438 width=108) (actual time=0.425..0.934 rows=2131 loops=1)
        Recheck Cond: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)
        ->  Bitmap Index Scan on interval_username  (cost=0.00..1605.77 rows=45438 width=0) (actual time=0.402..0.402 rows=2131 loops=1)
              Index Cond: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)
Total runtime: 2.065 ms

With LIMIT it takes several minutes (I never waited for it to finish):

EXPLAIN SELECT * from interval where username='1228321f131084766f3b0c6e40bc5edc41d4677e' order by time desc LIMIT 10

Limit  (cost=0.00..6693.99 rows=10 width=108)
  ->  Index Scan Backward using interval_time on "interval"  (cost=0.00..30416156.03 rows=45438 width=108)
        Filter: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)

Table definition

-- Table: "interval"

-- DROP TABLE "interval";

CREATE TABLE "interval"
(
  uuid character varying(255) NOT NULL,
  deleted boolean NOT NULL,
  id bigint NOT NULL,
  "interval" bigint NOT NULL,
  "time" timestamp without time zone,
  trackerversion character varying(255),
  username character varying(255),
  CONSTRAINT interval_pkey PRIMARY KEY (uuid),
  CONSTRAINT fk_272h71b2gfyov9fwnksyditdd FOREIGN KEY (username)
      REFERENCES appuser (panelistcode) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE CASCADE,
  CONSTRAINT uk_hyi5iws50qif6jwky9xcch3of UNIQUE (id)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE "interval"
  OWNER TO postgres;

-- Index: interval_time

-- DROP INDEX interval_time;

CREATE INDEX interval_time
  ON "interval"
  USING btree
  ("time");

-- Index: interval_username

-- DROP INDEX interval_username;

CREATE INDEX interval_username
  ON "interval"
  USING btree
  (username COLLATE pg_catalog."default");

-- Index: interval_uuid

-- DROP INDEX interval_uuid;

CREATE INDEX interval_uuid
  ON "interval"
  USING btree
  (uuid COLLATE pg_catalog."default");

Further results

SELECT n_distinct FROM pg_stats WHERE tablename='interval' AND attname='username';
n_distinct=1460

SELECT AVG(length) FROM (SELECT username, COUNT(*) AS length FROM interval GROUP BY username) as freq;
45786.022605591910

SELECT COUNT(*) FROM interval WHERE username='1228321f131084766f3b0c6e40bc5edc41d4677e';
2131

Solution

  • The planner expects 45438 rows for username '1228321f131084766f3b0c6e40bc5edc41d4677e', while in reality there are only 2131 such rows, so it thinks it will find the 10 rows you want faster by scanning backward through the interval_time index and filtering on username. With ~45k expected matches it would only have to read a small slice of that index before hitting 10 of them; with the real 2131 matches it ends up crawling through most of the 80-million-row table before it collects 10 hits.
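
    A composite index that matches both the filter and the sort order would also give the planner a fast path regardless of the estimate, since the 10 newest rows for a username can then be read directly off the index (a sketch; the index name is arbitrary):

    CREATE INDEX interval_username_time
      ON "interval" (username, "time" DESC);

    With such an index the planner can descend straight to the given username and walk its entries in time order, stopping after 10 rows.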

    Try increasing the statistics target on the username column and see whether the query plan changes.

    ALTER TABLE interval ALTER COLUMN username SET STATISTICS 100;
    
    ANALYZE interval;
    

    You can try different statistics values, up to the maximum of 10000.
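
    With a larger target and a fresh ANALYZE, re-run the limited query and check whether the row estimate drops toward the real 2131 (a sketch, using an illustrative target of 1000):

    ALTER TABLE "interval" ALTER COLUMN username SET STATISTICS 1000;
    ANALYZE "interval";

    EXPLAIN SELECT * FROM "interval"
    WHERE username='1228321f131084766f3b0c6e40bc5edc41d4677e'
    ORDER BY "time" DESC LIMIT 10;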

    If you are still not satisfied with the plan, and you are sure you can do better than the planner, you can keep PostgreSQL from using a given index by wrapping the indexed expression in an operation that does not change its value.

    For example, instead of ORDER BY time you can use ORDER BY time + '0 seconds'::interval. Any index on the stored value of time is then bypassed for that sort. For integer values you can multiply by 1, and so on.
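
    Applied to the query from the question, the workaround looks like this (a sketch: the added zero-length interval changes nothing about the result, it only stops the sort key from matching the interval_time index):

    SELECT * FROM "interval"
    WHERE username='1228321f131084766f3b0c6e40bc5edc41d4677e'
    ORDER BY "time" + '0 seconds'::interval DESC
    LIMIT 10;

    The planner should then fall back to the bitmap scan on interval_username followed by an explicit sort, i.e. the same fast plan as the unlimited query.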