Tags: mysql, locking, innodb, read-write

InnoDB Concurrent Read-Write


I wanted to understand how InnoDB handles multiple simultaneous read/write operations. Consider the following scenario: you are performing a fairly intense batch write. Any read request that comes in will have to wait until the batch write completes. During this time, write requests also arrive for the table, so by the time the batch write is complete, there are multiple read requests and write requests pending. In what order would InnoDB resolve the requests?

In most cases I would prefer getting the most recent result set from the table, so waiting for the write requests would be the way to go; however, that could lead to read-request starvation. All of the write requests are inserts, not updates. Update requests, I believe, acquire a row-level lock, whereas insert requests require a table-level lock.

Can you please explain how this would take place in InnoDB? Thanks


Solution

  • In InnoDB, INSERTs do not take a table lock. (MyISAM is a different story.)

    Roughly speaking, each read or write to an InnoDB table will lock only the row(s) needed. If there is no overlap between the rows of one query and another, then there is no waiting. Some of the issues that make it only "roughly" true:

    • Some locks are "gap" locks. Think of INSERTing a new row between two existing rows.
    • Shared read locks (on the same row) do not block each other.
    • eXclusive locks act somewhat like the blocking you described; the two lock modes are sketched just after this list.
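
    As an illustration, here is a minimal sketch of the two lock modes. The table `t(id INT PRIMARY KEY, val INT)` and the id values are hypothetical, and `LOCK IN SHARE MODE` is the 5.7-era spelling (8.0 also accepts `FOR SHARE`):

    ```sql
    -- Session 1: take a shared (S) lock; other readers are not blocked.
    BEGIN;
    SELECT * FROM t WHERE id = 1 LOCK IN SHARE MODE;

    -- Session 2: a second shared lock on the same row succeeds immediately.
    BEGIN;
    SELECT * FROM t WHERE id = 1 LOCK IN SHARE MODE;

    -- Session 3: an exclusive (X) lock must wait for both S locks to be released.
    BEGIN;
    UPDATE t SET val = val + 1 WHERE id = 1;  -- blocks until sessions 1 and 2 commit
    ```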

    "MVCC" allows multiple transactions to "see" different versions of the same row. Let's say one transaction my be SELECTing one row while another is UPDATEing it. Until both transactions are finished there are physically two versions of that row. Things get cleaned up after both transactions are COMMITed or ROLLBACKed.

    It may be worth noting that all of this row-level locking and checking is costly. If you have dozens of connections pounding away, they will all slow down. In older versions, 4-8 concurrent connections was a practical limit; 5.7 can handle about 64. Still, InnoDB can handle thousands of transactions per second, ultimately limited by slow disk I/O.

    "Batch Inserts" -- If you really mean INSERT into a single table, then it is optimal to create single INSERT statement(s) that contain 100-1000 rows. This decreases the overhead of communication, parsing, optimizing, and transaction overhead. It also increases the risk of colliding with reads, but the speedup (usually) outweighs the collision delay.

    Be sure to check for errors after every InnoDB statement, since deadlocks can occur. Generally they are handled by issuing ROLLBACK and rerunning the whole BEGIN...COMMIT transaction, as sketched below.
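
    A sketch of that retry pattern; the statements inside the transaction are hypothetical, and error 1213 is MySQL's ER_LOCK_DEADLOCK code:

    ```sql
    -- Run the whole unit of work as one transaction.
    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;
    COMMIT;

    -- In application code: if any statement fails with error 1213 (ER_LOCK_DEADLOCK),
    -- issue ROLLBACK and rerun the entire BEGIN...COMMIT block, with a bounded
    -- retry count so a persistent conflict cannot loop forever.
    ```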