Tags: java, mysql, multithreading, jdbc, innodb

How to prevent MySQL InnoDB setting a lock for delete statement through JDBC


I have a multi-threaded client/server system with thousands of clients continuously sending data to the server, which stores it in a specific table. This data is only important for a few days and is deleted afterwards.

The server is written in J2SE, the database is MySQL, and the table uses the InnoDB engine. It contains some millions of entries (and is properly indexed for this usage).

One scheduled thread runs once a day to delete old entries. This thread can take a long time, because the number of rows to delete can be very large (several million). On my specific system, deleting 2.5 million rows takes about 3 minutes.

The inserting threads (and reading threads) get a timeout error telling me

Lock wait timeout exceeded; try restarting transaction

  1. How can I detect that state from my Java code? I would prefer to handle the situation myself instead of just waiting. But the more important point is: how can I prevent that situation in the first place? (See the sketch after this list.)
  2. Could I use

    conn.setTransactionIsolation( Connection.TRANSACTION_READ_UNCOMMITTED )
    

    for the reading threads, so they get their information regardless of whether it is completely up to date (which is absolutely OK for this use case)?

  3. What can I do to my inserting threads to prevent blocking? They purely insert data into the table (primary key is the tuple userid, servertimemillis).
  4. Should I change my deletion thread? It purely deletes rows matching a userid with timestamps greater than specialtimestamp.
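
Regarding question 1: the timeout surfaces in Java as an SQLException whose vendor error code is 1205 (MySQL's ER_LOCK_WAIT_TIMEOUT), so it can be caught and the statement retried. A minimal sketch, assuming a hypothetical table datatable(userid, servertimemillis):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class InsertRetry {

        // MySQL's vendor code for "Lock wait timeout exceeded; try restarting transaction"
        private static final int ER_LOCK_WAIT_TIMEOUT = 1205;

        // Retries the insert a few times instead of failing on the first lock timeout.
        static void insertWithRetry(Connection conn, long userId, long serverTimeMillis)
                throws SQLException, InterruptedException {
            String sql = "INSERT INTO datatable (userid, servertimemillis) VALUES (?, ?)";
            SQLException last = null;
            for (int attempt = 1; attempt <= 3; attempt++) {
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setLong(1, userId);
                    ps.setLong(2, serverTimeMillis);
                    ps.executeUpdate();
                    return;                       // success
                } catch (SQLException e) {
                    if (e.getErrorCode() != ER_LOCK_WAIT_TIMEOUT) {
                        throw e;                  // some other error: propagate
                    }
                    last = e;
                    Thread.sleep(200L * attempt); // back off before retrying
                }
            }
            throw last;                           // still timing out after all retries
        }
    }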

Edit:

Reading the MySQL documentation, I wonder whether I could simply configure the connections used for inserting and deleting rows with

conn.setTransactionIsolation( Connection.TRANSACTION_READ_COMMITTED )

and achieve what I need. The documentation says that UPDATE and DELETE statements that use a unique index with a unique search condition lock only the matching index record, not the gap before it, so rows can still be inserted into that gap. It would be great to hear your experience with this, since I can't simply try it in production, and simulating it in a test environment would be a big effort. (A sketch of the connection setup follows.)
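
For what it's worth, a minimal sketch of that setup (conn and readerConn are placeholders for the writer and reader connections; note the JDBC method is called setTransactionIsolation). One caveat: with binary logging enabled, MySQL 5.1+ requires row-based logging (binlog_format=ROW) for InnoDB writes under READ COMMITTED.

    // Writer connections (insert/delete): the correct JDBC call is
    // setTransactionIsolation; java.sql.Connection has no setIsolationLevel.
    conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

    // Reader threads that can live with slightly stale data (question 2):
    readerConn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);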


Solution

  • You can try replacing the one huge DELETE with multiple shorter DELETE ... LIMIT n statements, with n determined by testing (not so small that it causes many round trips, and not so large that it causes long locks). Since each lock would then be held only for a few milliseconds (or seconds, depending on your n), you could even let the delete thread run continuously, provided it can keep up; again, n can be tuned so that it does. Table partitioning can also help: dropping a partition of expired rows is much cheaper than deleting them row by row. A sketch of the chunked delete follows.
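
A minimal sketch of such a chunked delete loop, assuming the same hypothetical datatable(userid, servertimemillis) and autocommit enabled, so each chunk commits (and releases its locks) on its own:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class ChunkedDelete {

        // Deletes expired rows in small chunks so each statement holds locks only briefly.
        // Table/column names and the chunk size are placeholders to tune by testing.
        static void deleteOldRows(Connection conn, long cutoffMillis)
                throws SQLException, InterruptedException {
            final int chunk = 10000;
            String sql = "DELETE FROM datatable WHERE servertimemillis < ? LIMIT " + chunk;
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, cutoffMillis);
                int deleted;
                do {
                    deleted = ps.executeUpdate(); // with autocommit on, each chunk commits immediately
                    Thread.sleep(50);             // brief pause lets inserts and readers get in between
                } while (deleted == chunk);       // a short chunk means no expired rows are left
            }
        }
    }

Without an ORDER BY, MySQL is free to pick which matching rows each chunk removes; that is fine here, since every expired row is eventually deleted.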