I am using a MAMP server in my office, with a table that stores around 7.5 lakh (750,000) records. When a file is uploaded and its data is already present, we don't throw an error; per our business logic, we simply delete the existing records in the file's time range and re-insert the data from the file. But while deleting, a lock timeout exception is thrown and the file upload fails. How can I prevent this timeout? Can a lock timeout value be set for the delete query, which we keep in a properties file?
Query:
DELETE FROM TABLE_NAME WHERE DATE >= (STARTDATE) AND DATE <= (EndDate);
Say at least 10k records fall within this date range and have to be deleted. The query above throws a lock timeout exception.
It can be very time-consuming to perform a bulk operation like yours on a large table. Why? The server must locate every row matching the date range, and a single large DELETE holds its locks for the entire transaction, which can exceed the lock wait timeout.

How to fix this:

1. Add an index on the DATE column so the server can find the correct rows efficiently.

2. Before deleting, issue

   SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

   That tells the query it can go ahead even if it might read a soon-to-be-DELETEd row.

3. Delete in chunks:

   DELETE FROM TABLE_NAME WHERE DATE >= (STARTDATE) AND DATE <= (EndDate) LIMIT 1000;

   and repeat the delete operation until it processes zero rows. That uses multiple transactions while keeping each one at a reasonable size, so the server does not get choked by them.
This sort of thing can be done most simply in Java with a loop like this:

boolean done = false;
while (!done) {
    // Each call deletes at most LIMIT rows, e.g.
    // "DELETE FROM TABLE_NAME WHERE DATE >= ? AND DATE <= ? LIMIT 1000"
    int rowcount = statement.executeUpdate("delete query with LIMIT clause");
    if (rowcount <= 0) done = true;
}
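To make the chunking logic concrete without needing a live database, here is a minimal, self-contained sketch of the same loop shape, using an in-memory list of dates to stand in for the table. The class and method names (BatchDeleteDemo, deleteInBatches) are hypothetical, not from any library; in production the body of each pass would be a JDBC executeUpdate of the LIMITed DELETE instead of list mutation.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class BatchDeleteDemo {
    // Remove up to `limit` rows per pass whose date falls in [start, end],
    // repeating until a pass removes zero rows -- the same shape as the
    // JDBC loop above. Returns how many non-empty passes ("transactions") ran.
    static int deleteInBatches(List<LocalDate> table, LocalDate start,
                               LocalDate end, int limit) {
        int batches = 0;
        while (true) {
            int removed = 0;
            // One "transaction": delete at most `limit` matching rows.
            for (Iterator<LocalDate> it = table.iterator();
                 it.hasNext() && removed < limit; ) {
                LocalDate d = it.next();
                if (!d.isBefore(start) && !d.isAfter(end)) {
                    it.remove();
                    removed++;
                }
            }
            if (removed == 0) break; // stop when a pass deletes nothing
            batches++;
        }
        return batches;
    }

    public static void main(String[] args) {
        // 25 rows spread over dates 2020-01-01 .. 2020-01-10.
        List<LocalDate> table = new ArrayList<>();
        for (int i = 0; i < 25; i++) {
            table.add(LocalDate.of(2020, 1, 1).plusDays(i % 10));
        }
        int batches = deleteInBatches(table,
                LocalDate.of(2020, 1, 3), LocalDate.of(2020, 1, 6), 10);
        System.out.println("batches=" + batches + " remaining=" + table.size());
    }
}
```

Keeping each pass small is the whole point: each chunk commits quickly and releases its locks, so concurrent readers are never blocked long enough to hit the lock wait timeout.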