Suppose I have a table like the one below:
create table recommendation_raw_v2
(
    id                uuid default gen_random_uuid() constraint recommendation_pkey primary key,
    worker_id         uuid                                  not null,
    company_id        uuid                                  not null,
    job_id            uuid                                  not null,
    obsolete          boolean default false                 not null,
    discipline        varchar default ''::character varying not null,
    weekly_pay_amount numeric(12, 4) default 0              not null,
    matching_score    numeric default 0                     not null, -- type assumed; the query and plan below reference it
    job_created_at    timestamptz,                                    -- type assumed; the query below references it
    geog              geography(Point, 4326)                not null
);
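The plan below uses two indexes that are not shown in the DDL; they look roughly like this (the exact definition of worker_id_obsolete_index is inferred from its name and the plan's Index Cond, so treat it as an assumption):
-- GiST index backing the ST_DWithin bounding-box search
create index gist_geog on recommendation_raw_v2 using gist (geog);
-- B-tree index matching the plan's Index Cond (column list assumed)
create index worker_id_obsolete_index on recommendation_raw_v2 (worker_id, obsolete);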
I want to find all job_id values that satisfy some conditions and lie within a 500 km radius of (-118.2436849, 34.0522342). I wrote this query:
select A.weekly_pay_amount
     , ST_Distance(c.x::geography, t.geog::geography) / 1000 as distance
     , A.job_id
from (
select id, job_id, matching_score, weekly_pay_amount, job_created_at
from recommendation_raw_v2
where worker_id='89b9d5c1-3862-4820-887c-0f1b266e6ce8'::uuid
and company_id='9fcf4081-4adb-4aaf-bf86-f4926de332ef'::uuid
and obsolete = false
and weekly_pay_amount >= 500
and discipline='Foo'
) as A
join recommendation_raw_v2 as t on A.id = t.id,
(SELECT ST_SetSRID(ST_MakePoint(-118.2436849, 34.0522342), 4326)) AS c(x)
where ST_DWithin(t.geog::geography, c.x::geography, 500 * 1000)
order by 1 DESC;
I analyzed this query with EXPLAIN ANALYZE:
QUERY PLAN
-------------------------------------------------------------------------------------
Sort (cost=1098.60..1098.61 rows=1 width=41) (actual time=182052.024..182053.516 rows=202 loops=1)
Sort Key: recommendation_raw_v2.matching_score DESC
Sort Method: quicksort Memory: 40kB
-> Hash Join (cost=1033.35..1098.59 rows=1 width=41) (actual time=1055.329..182050.122 rows=202 loops=1)
Hash Cond: (t.id = recommendation_raw_v2.id)
-> Index Scan using gist_geog on recommendation_raw_v2 t (cost=0.67..33.69 rows=2753 width=48) (actual time=564.032..181600.874 rows=1919272 loops=1)
Index Cond: (geog && _st_expand('0101000020E6100000DC018D88988F5DC0CA5D3A9CAF064140'::geography, '500000'::double precision))
Filter: st_dwithin(geog, '0101000020E6100000DC018D88988F5DC0CA5D3A9CAF064140'::geography, '500000'::double precision, true)
Rows Removed by Filter: 1041991
-> Hash (cost=1029.49..1029.49 rows=255 width=49) (actual time=31.253..32.173 rows=310 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 35kB
-> Index Scan using worker_id_obsolete_index on recommendation_raw_v2 (cost=0.56..1029.49 rows=255 width=49) (actual time=1.883..31.102 rows=310 loops=1)
Index Cond: ((worker_id = '89b9d5c1-3862-4820-887c-0f1b266e6ce8'::uuid) AND (obsolete = false))
Filter: ((weekly_pay_amount >= '500'::numeric) AND (company_id = '9fcf4081-4adb-4aaf-bf86-f4926de332ef'::uuid) AND ((discipline)::text = 'Foo'::text))
Rows Removed by Filter: 148
Planning Time: 16.259 ms
Execution Time: 182058.761 ms
I see two parts: a scan on the gist_geog index, which returns 1919272 rows, and a scan on the worker_id_obsolete_index index, which returns 310 rows (the planner estimated 255). PostgreSQL then hash-joins the two parts to yield the final result.
My question is: if I could somehow force PostgreSQL to scan worker_id_obsolete_index first, would my query be faster? And if so, how would you suggest doing that?
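For example, would it help to materialize the filtered subset first, so the distance test runs only over those few hundred rows? A minimal sketch of that idea (PostgreSQL 12+, where AS MATERIALIZED acts as an optimization fence; the column list is taken from the query above):
WITH candidates AS MATERIALIZED (
    -- Selective scalar filters first: ~310 rows according to the plan
    SELECT id, job_id, weekly_pay_amount, geog
    FROM recommendation_raw_v2
    WHERE worker_id = '89b9d5c1-3862-4820-887c-0f1b266e6ce8'::uuid
      AND company_id = '9fcf4081-4adb-4aaf-bf86-f4926de332ef'::uuid
      AND obsolete = false
      AND weekly_pay_amount >= 500
      AND discipline = 'Foo'
)
-- The distance test now runs as a plain filter over the materialized rows
SELECT job_id,
       weekly_pay_amount,
       ST_Distance(geog, ST_SetSRID(ST_MakePoint(-118.2436849, 34.0522342), 4326)::geography) / 1000 AS distance
FROM candidates
WHERE ST_DWithin(geog, ST_SetSRID(ST_MakePoint(-118.2436849, 34.0522342), 4326)::geography, 500 * 1000)
ORDER BY weekly_pay_amount DESC;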
Update — an answer I received:
Forgive me if I am oversimplifying your question, but since you're joining the table with a subset of itself, wouldn't it be less expensive to put everything in a single query and let the planner decide what to do?
SELECT
    job_id,
    weekly_pay_amount,
    ST_Distance(geog,
        ST_SetSRID(ST_MakePoint(-118.2436849, 34.0522342), 4326)::geography) / 1000 AS distance
FROM recommendation_raw_v2
WHERE
    worker_id = '89b9d5c1-3862-4820-887c-0f1b266e6ce8'::uuid AND
    company_id = '9fcf4081-4adb-4aaf-bf86-f4926de332ef'::uuid AND
    obsolete = false AND
    weekly_pay_amount >= 500 AND
    discipline = 'Foo' AND
    ST_DWithin(geog,
        ST_SetSRID(ST_MakePoint(-118.2436849, 34.0522342), 4326)::geography, 500 * 1000);
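If the planner still favors the spatial index after this rewrite, a narrower index tailored to the scalar predicates may tip it toward the selective path on its own. A hypothetical partial index (the name and column order are a suggestion, not from the question):
-- Hypothetical partial index covering the scalar filters, so the planner can
-- fetch the ~310 candidate rows cheaply and apply ST_DWithin as a plain filter
CREATE INDEX recommendation_worker_filter_idx
    ON recommendation_raw_v2 (worker_id, company_id, discipline, weekly_pay_amount)
    WHERE NOT obsolete;
That way the expensive distance test runs only on the rows that survive the cheap filters.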