I have an ever-growing database table of ~8 million rows that my application regularly fetches data from. Recently, however, the query has suddenly started locking up the entire system: there are tons of mysqld processes clogging up all CPU cores.
Could it be the ever-increasing size of the table? Or is there something in the query below that could cause it to run for so long, the UNIX_TIMESTAMP call, for example? Here is an excerpt from the slow query log. The query is executed every minute and has always had a query time of around ~7 seconds.
# Query_time: 6.839524 Lock_time: 0.000170 Rows_sent: 277 Rows_examined: 7989334
FROM (
SELECT @row := @row + 1 AS `row`, `timestamp`, `price`
FROM (
SELECT @row := 0
) `derived_1`, `items`
WHERE `price` IS NOT NULL
AND `timestamp` >= UNIX_TIMESTAMP(NOW() - INTERVAL 1 DAY)
AND `currency` = 'EUR'
AND `type` = 'icon'
ORDER BY `timestamp` ASC
) `derived_2`
WHERE `row` % 8 = 0;
It's a bit hard to just try it out, as it's a production environment. I can't reproduce the issue on my development environment either.
If you need any additional information, please let me know!
Thanks a bunch in advance!
Create an index so the DBMS can find the records quickly. This index should contain the columns of the WHERE clause, starting with those where you compare with =.
CREATE INDEX idx ON items (type, currency, timestamp, price);
This even happens to be a covering index, i.e. it contains all the columns used in the query. So the DBMS won't even have to read the table, as it can get all the data from the index itself.
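To confirm the index is actually being used, you can run EXPLAIN on the filtering part of the query (a quick sketch; idx is just the example index name from above, and the column list in the SELECT is assumed from the inner derived table):

```sql
-- With the index in place, the plan should show key = idx, and the Extra
-- column should include "Using index", meaning the covering index is used
-- and no table rows need to be read.
EXPLAIN
SELECT `timestamp`, `price`
FROM `items`
WHERE `price` IS NOT NULL
  AND `timestamp` >= UNIX_TIMESTAMP(NOW() - INTERVAL 1 DAY)
  AND `currency` = 'EUR'
  AND `type` = 'icon';
```

If the plan still shows a full table scan (type = ALL, Rows_examined near the full ~8 million), the optimizer is not picking up the index and the column order or data types are worth rechecking.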