My problem is represented by the following query:
SELECT b.row_id, b.x, b.y, b.something,
       (SELECT a.x FROM my_table a WHERE a.row_id = (b.row_id - 1), a.something != 42) AS source_x,
       (SELECT a.y FROM my_table a WHERE a.row_id = (b.row_id - 1), a.something != 42) AS source_y
FROM   my_table b
I'm using the same subquery statement twice, to get both source_x and source_y. That's why I'm wondering if it's possible to do it with one subquery only. Once I run this query on my real data (millions of rows), it seems to never finish, taking hours if not days (my connection hangs up before the end).
I am using PostgreSQL 8.4.
@DavidEG posted the best syntax for the query.
However, your problem is definitely not just the query technique. A JOIN instead of two subqueries can speed things up by a factor of two at best, most likely less. That doesn't explain "hours". Even with millions of rows, a decently set up Postgres should finish this simple query in seconds, not hours.
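For reference, a minimal sketch of what such a JOIN rewrite can look like, with the join condition already corrected (see the syntax note below) and assuming row_id is unique:

SELECT b.row_id, b.x, b.y, b.something,
       a.x AS source_x, a.y AS source_y
FROM   my_table b
LEFT   JOIN my_table a ON a.row_id = b.row_id - 1
                      AND a.something != 42;

The LEFT JOIN keeps rows of b without a qualifying predecessor, mirroring the NULL that a correlated scalar subquery would return for them.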
The first thing that stands out is the syntax error in your query:
... WHERE a.row_id = (b.row_id - 1), a.something != 42
AND (or OR) is needed here, not a comma.
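The corrected predicate reads:

... WHERE a.row_id = (b.row_id - 1) AND a.something != 42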
The next thing to check is indexes. If row_id is not the primary key, you may not have an index on it. For optimum performance of this particular query, create a multicolumn index on (row_id, something) like this:
CREATE INDEX my_table_row_id_something_idx ON my_table (row_id, something);
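With an index in place, EXPLAIN ANALYZE tells you whether the planner actually uses it; for example, on the JOIN form of the query:

EXPLAIN ANALYZE
SELECT b.row_id, b.x, b.y, b.something, a.x AS source_x, a.y AS source_y
FROM   my_table b
LEFT   JOIN my_table a ON a.row_id = b.row_id - 1 AND a.something != 42;

Look for an index scan on my_table_row_id_something_idx in the plan instead of a sequential scan on a.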
If the filter excludes the same value every time (something != 42), you can use a partial index instead for an additional speedup:
CREATE INDEX my_table_row_id_something_idx ON my_table (row_id)
WHERE something != 42;
This will only make a substantial difference if 42 is a common value or if something is a bigger column than just an integer. (An index with two integer columns normally occupies the same size on disk as an index with just one, due to data alignment.)
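If you want to verify the size effect on your own data, you can compare the two indexes directly; pg_relation_size() is available in 8.4:

SELECT pg_size_pretty(pg_relation_size('my_table_row_id_something_idx'));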
When performance is an issue, it is always a good idea to check your settings. Standard settings in Postgres use minimal resources in many distributions and are not up to handling "millions of rows".
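The usual suspects are the memory settings in postgresql.conf. As a rough illustration only (sensible values depend entirely on your hardware; these numbers are not a recommendation):

# illustrative starting points in postgresql.conf
shared_buffers = 512MB         # default in 8.4 is a modest 32MB
work_mem = 32MB                # memory per sort / hash operation
effective_cache_size = 2GB     # planner hint for available OS cache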
Depending on your actual version of Postgres, an upgrade to a current version (9.1 at the time of writing) may help a lot.
Ultimately, hardware is always a factor, too. Tuning and optimizing can only get you so far.