Tags: mysql, mariadb, query-optimization, myisam

Slow Update, Delete and Insert Queries under MariaDB


Our server was updated from Ubuntu 16 to Ubuntu 20, which replaced MySQL with MariaDB. Unfortunately, the loading time of the website has become slower; normally MariaDB should be faster than MySQL. I've found that UPDATE commands issued by the website sometimes take about 7 seconds. However, if I run the same UPDATE commands directly against the database via phpMyAdmin, they take only 0.0005 ms.

It seems to me that MariaDB has a problem with UPDATE commands when they occur frequently. This was never a problem with MySQL. Here's an example query:

UPDATE LOW_PRIORITY users
SET user_video_count = user_video_count + 1
WHERE user_id = 12345

The tables use the MyISAM storage engine.

I have no idea what could be the reason. Do you?

Thank you very much.


Solution

  • It may be something as simple as a SELECT searching for something in users at the same time. Note: InnoDB would not suffer from this problem.

    MyISAM necessarily takes a table lock when doing an UPDATE, INSERT, or DELETE (also for ALTER and other DDL statements). If there are a lot of connections doing any mixture of writes and even SELECTs, the locks can cascade for a surprisingly long time.
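
    A quick, generic way to check for table-lock contention (this is a standard status check, not something from your setup) is to watch the table-lock counters; a steadily growing Table_locks_waited relative to Table_locks_immediate means writes are queuing behind the MyISAM lock:

    -- Table_locks_immediate = granted right away; Table_locks_waited = had to wait (contention)
    SHOW GLOBAL STATUS LIKE 'Table_locks%';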

    The real solution, whether in MariaDB or [especially] in MySQL, is to switch to InnoDB.
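
    The conversion itself is one statement; a sketch, assuming the table really is named users as in your query (take a backup first -- the rebuild can take a while on a big table):

    ALTER TABLE users ENGINE=InnoDB;
    -- Verify afterwards; the Engine column should now say InnoDB:
    SHOW TABLE STATUS LIKE 'users';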

    If this is a case of high-volume counting of "likes" or "views", then a partial solution (in either engine) is to put such counters in a separate, parallel table (a sketch follows). This keeps those simple, fast updates from fighting with other actions on the main table. In an extremely high-traffic situation, gathering such increments and applying them in batches is warranted. I don't think your volume needs that radical solution.
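
    A minimal sketch of such a parallel counter table; the names user_counters and video_count are made up for illustration, not taken from your schema:

    CREATE TABLE user_counters (
        user_id     INT UNSIGNED NOT NULL PRIMARY KEY,
        video_count INT UNSIGNED NOT NULL DEFAULT 0
    ) ENGINE=InnoDB;

    -- The hot, tiny update now touches only this small table:
    INSERT INTO user_counters (user_id, video_count)
        VALUES (12345, 1)
    ON DUPLICATE KEY UPDATE video_count = video_count + 1;

    Reads that need the count then query this table directly, or JOIN it to users.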

    MySQL has all-but-eliminated MyISAM. MariaDB may follow suit in a few years.

    To address this:

    the same query in phpMyAdmin is really fast

    The problem is not how you run the query, but what else happens to be going on at the same time.

    (LOW_PRIORITY is a MyISAM-specific kludge that sometimes works.)

    MyISAM does "table locking"; InnoDB does "row locking". Hence, InnoDB can do a lot of "simultaneous" actions on a table, whereas MyISAM becomes serialized as soon as a write occurs.

    More (Now focusing on InnoDB.)

    Some other things that may be involved:

    If two UPDATEs are trying to modify the same row at the same time, one will have to wait (due to the row locking).

    If there is a really large number of things going on, delays can cascade. If 20 connections are actively running at one instant, they each slow the others down. Each connection is given a fair share, but that means they are all slowed down.

    Use SHOW PROCESSLIST to see what is running -- ignore connections in "Sleep". The process with the highest "Time" (other than system threads) is likely to be the instigator of the fracas.

    The slowlog can help in digging deeper. I turn it on with a low enough long_query_time and wait for the 'event' to happen. Then I use pt-query-digest (or mysqldumpslow -s t) to find the slowest queries. With some more effort, one might notice that a lot of queries were "slow" at one instant -- possibly even "point queries" (like UPDATE ... WHERE id = constant) unexpectedly running slower than long_query_time. This indicates too many simultaneous queries and/or some query that is locking rows unexpectedly. (Note: the "timestamp" of a query in the slowlog is when the query ended; subtract Query_time to get the start.) SlowLog
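
    For example, the slowlog can be turned on at runtime like this (the 0.5-second threshold and the file path are only illustrative values -- pick a threshold low enough to catch your 'event'):

    SET GLOBAL slow_query_log = 1;
    SET GLOBAL long_query_time = 0.5;    -- seconds; applies to new connections
    SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';    -- path is illustrative
    -- then analyze the file with: pt-query-digest /var/log/mysql/slow.log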

    More

    innodb_flush_log_at_trx_commit = 2, as you found out, is a good fix when rapidly doing lots of single-query transactions. If the write rate becomes too high even for that fix, then my comments above may become necessary.

    There won't be much performance difference between =2 and =0.

    As for innodb_flush_log_at_timeout: please provide the output of `SHOW GLOBAL STATUS LIKE 'Binlog%commits';`

    As for innodb_lock_wait_timeout... I don't think that changing that will help you. If one of your queries aborts due to that timeout, you should record that it happened and retry the transaction.

    It sounds like you are running with autocommit = ON and not using explicit transactions? That's fine (for non-money activity). There are cases where using a transaction can help performance -- such as artificially batching several queries together to avoid some I/O. The drawback is an increased chance of conflicts with other connections. Still, if you are always checking for errors and rerunning the 'transaction', all should be well.
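
    A sketch of such artificial batching, using made-up ids; if any statement fails with error 1213 (deadlock) or 1205 (lock wait timeout), roll back and rerun the whole block:

    START TRANSACTION;
    UPDATE users SET user_video_count = user_video_count + 1 WHERE user_id = 12345;
    UPDATE users SET user_video_count = user_video_count + 1 WHERE user_id = 23456;
    UPDATE users SET user_video_count = user_video_count + 1 WHERE user_id = 34567;
    COMMIT;   -- one log flush covers all three updates instead of one flush each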

    innodb_flush_log_at_trx_commit

    When that setting is 1, which is probably what you originally had, each UPDATE does an extra write to disk to assure data integrity. If the disk is an HDD (not an SSD), that adds about 10 ms to each UPDATE, hence a ceiling of somewhere around 100 updates/second. There are several ways around it (a config sketch follows the list below):

    • innodb_flush_log_at_trx_commit = 0 or 2, sacrificing some data integrity.
    • Artificially combining several Updates into a single transaction, thereby spreading out the 10ms over multiple queries.
    • Explicitly combining several Updates based on what they are doing and/or which rows they touch. (In really busy systems, this could involve other servers and/or other tables.)
    • Moving the counter to another table (see above) -- this avoids interference from more time-consuming operations on the main table. (I did not hear a clear example of this, but the slowlog might have pointed out such.)
    • Switch to SSD drives -- perhaps 10x increase in capacity of Updates.
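
    For reference, here is how the flush setting is usually changed; with =2 the log is flushed to disk only about once per second, so a power loss or OS crash can cost up to roughly the last second of transactions:

    SET GLOBAL innodb_flush_log_at_trx_commit = 2;   -- takes effect immediately
    -- To make it permanent, put this under [mysqld] in my.cnf:
    --   innodb_flush_log_at_trx_commit = 2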

    I suspect the social media giants do all of the above.