It seems that when a MySQL database has a table with tens of millions of records, backing it up with
mysqldump some_db > some_db.sql
produces a very large INSERT INTO statement. (Is it a single INSERT statement that handles all of the records?)
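For what it's worth, a quick way to check the shape of the dump (assuming the some_db.sql file above, and a table that I'll call big_table here purely for illustration; mysqldump normally writes each extended INSERT on one long line):

# Count how many INSERT statements the dump contains for that table:
grep -c "INSERT INTO \`big_table\`" some_db.sql

# Peek at the start of the first one to see how many rows it packs in:
grep -m 1 "INSERT INTO \`big_table\`" some_db.sql | head -c 500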
When I then reconstruct the DB using
mysql some_db < some_db.sql
the CPU is hardly busy (about 1.8% usage by the mysql process... I don't see a mysqld either?) and the hard disk doesn't seem to be very busy either.
Last time, the whole restore process took 5 hours. Is there a way to make it faster? For example, when doing the mysqldump, can it break the INSERT statement into shorter ones, so that mysql doesn't have to parse such a long line when restoring the DB?
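What I have in mind is something like the following, assuming these options behave the way I think they do (--skip-extended-insert and --max_allowed_packet are documented mysqldump/mysql options, but I haven't measured their effect on restore time):

# One row per INSERT statement (shorter lines to parse, but possibly a slower restore overall):
mysqldump --skip-extended-insert some_db > some_db.sql

# Or keep the default multi-row INSERTs and just allow bigger statements on restore
# (256M is only an illustrative value):
mysql --max_allowed_packet=256M some_db < some_db.sql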
If anything is using the time, it will be mysqld, which is what actually does all of the work. If you're connecting to a remote MySQL server, then mysqld will be on that machine, not your local one.
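If you want to confirm where the work is actually happening while the restore runs, something along these lines should show it (Linux assumed, and mysqld may of course be on the remote machine):

# Watch the server process itself:
top -p "$(pidof mysqld)"

# Or ask the server what it is doing:
mysql -e "SHOW PROCESSLIST"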
The most direct way to speed it up would be to remove all keys and indexes from the table and then create them once the data is loaded. Keeping everything updated across that many inserts is very taxing on the server, and will probably leave you with fragmented indexes anyway. You can expect the index creation at the end of the load to take a while, but it won't be as bad as keeping the indexes up to date while the inserts are taking place.
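As a rough sketch, assuming a MyISAM table that I'll call big_table (the name is just a placeholder):

-- Defer non-unique index maintenance until the load is finished (MyISAM):
ALTER TABLE big_table DISABLE KEYS;
-- ... run the restore / inserts here ...
ALTER TABLE big_table ENABLE KEYS;

-- In the restoring session you can also skip some per-row checks:
SET foreign_key_checks = 0;
SET unique_checks = 0;
-- ... load the data ...
SET unique_checks = 1;
SET foreign_key_checks = 1;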
A better solution would be to stop using mysqldump for that table and switch to LOAD DATA INFILE (with the matching SELECT ... INTO OUTFILE for creating the dump).
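A minimal sketch of that approach, with the table name and file path made up for illustration (note that the server's secure_file_priv setting can restrict where INTO OUTFILE is allowed to write):

-- On the source server: dump one table to a tab-delimited file:
SELECT * INTO OUTFILE '/tmp/big_table.txt' FROM big_table;

-- On the target server: reload it:
LOAD DATA INFILE '/tmp/big_table.txt' INTO TABLE big_table;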
Your absolute best bet would be to just copy the database files instead of trying to do a backup and restore. I think this still only works with MyISAM databases and not InnoDB, but someone else can correct me if things have changed recently.
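Roughly, that looks like this on a typical Linux install (the paths are examples, and the server needs to be stopped, or the tables flushed and locked with FLUSH TABLES WITH READ LOCK, so the copied files are consistent):

# Stop the server so nothing is writing to the files:
mysqladmin shutdown

# MyISAM tables live as per-table .frm/.MYD/.MYI files under the data directory:
cp -a /var/lib/mysql/some_db /backup/some_db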