mysql, sql, database, postgresql, high-load

Does it make sense to split a huge select query into parts?


Actually, this is a question from an interview with a company that builds a high-load service. For example, we have a table with 1 TB of records and a primary-key b-tree index. We need to select all records with ids in the range from 5000 to 5000000. We cannot block the whole database, and the database is under high load. Does it make sense to split a huge select query into parts like

select * from a where id >= 5000 and id < 10000;
select * from a where id >= 10000 and id < 15000;
...

Please help me compare the behaviour of Postgres and MySQL in this case. Are there any other optimal techniques for selecting all the required records?

Thanks.


Solution

  • There are many unknowns in your question. First of all, what is the table structure? Will this query use any indexes?

    The best way to find out is to run an execution plan and analyze performance (see the EXPLAIN sketch at the end of this answer).

    But trying to retrieve so many rows in one pass does not seem very reasonable. The query will very likely put heavy load on the server, consume a lot of RAM, and probably spill to a temporary file. It could fail or time out. A server-side cursor, sketched below, is one way to avoid materializing the whole result at once.

    And then the result set has to travel across the network, and it could be huge. You have to evaluate the size of the dataset; we cannot guess without insight into the table structure (a rough size estimate is sketched below).

    The big question is: why retrieve so many rows, and what is the ultimate goal? Say you have a GUI application with a datagridview or something like that. You are not going to display 500 million rows at once; that would crash the application. What the user probably wants is to paginate or to search records using some filter. Maybe you'll show a few hundred records at a time at most (see the keyset-pagination sketch below).

    What are you going to do with all those records?
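
    A minimal sketch of checking the plan, assuming the table a and the range from the question. PostgreSQL accepts EXPLAIN (ANALYZE, BUFFERS); MySQL 8.0.18+ accepts EXPLAIN ANALYZE (both actually execute the query, so under heavy load you may want to start with plain EXPLAIN):

        -- PostgreSQL: actual timings and buffer usage (executes the query)
        EXPLAIN (ANALYZE, BUFFERS)
        SELECT * FROM a WHERE id >= 5000 AND id < 5000000;

        -- MySQL 8.0.18+: actual execution statistics; older versions support plain EXPLAIN
        EXPLAIN ANALYZE
        SELECT * FROM a WHERE id >= 5000 AND id < 5000000;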
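
    One way to avoid materializing millions of rows at once is a server-side cursor; a minimal PostgreSQL sketch, assuming the same table a and fetching 5000 rows per round trip (MySQL clients usually get a similar effect by enabling streaming / unbuffered result sets in the driver rather than with a cursor):

        -- Cursors must live inside a transaction
        BEGIN;
        DECLARE big_range CURSOR FOR
            SELECT * FROM a WHERE id >= 5000 AND id < 5000000 ORDER BY id;
        FETCH FORWARD 5000 FROM big_range;  -- repeat until it returns no rows
        CLOSE big_range;
        COMMIT;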
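
    To get a rough idea of how much data would travel over the wire, you can look at the average row width from table statistics; a sketch assuming the hypothetical table a and up-to-date statistics (run ANALYZE / ANALYZE TABLE first):

        -- PostgreSQL: average row width from planner statistics (no scan of the range)
        SELECT pg_table_size('a') / greatest(reltuples, 1) AS avg_row_bytes
        FROM pg_class
        WHERE relname = 'a';

        -- MySQL: a comparable rough figure from table statistics
        SELECT AVG_ROW_LENGTH
        FROM information_schema.TABLES
        WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'a';

    Multiply the average row width by the expected row count (roughly 5 million here) to estimate the result-set size.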
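
    For paging through the range without large OFFSETs, keyset (seek) pagination works the same way on PostgreSQL and MySQL; a minimal sketch, assuming the id column from the question and a hypothetical page size of 5000:

        -- First page of the range
        SELECT * FROM a
        WHERE id >= 5000 AND id < 5000000
        ORDER BY id
        LIMIT 5000;

        -- Next page: seek past the last id seen so far (:last_seen_id is a placeholder)
        SELECT * FROM a
        WHERE id > :last_seen_id AND id < 5000000
        ORDER BY id
        LIMIT 5000;

    Because each page seeks directly into the primary-key index, the cost of a page does not grow with its position, unlike OFFSET-based paging.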