I have a small MySQL 5.1.70 InnoDB database: 4 tables with roughly 3K, 23K, 500K, and 600K rows. It runs on OpenBSD 5.4.
I'm in the process of upgrading to OpenBSD 5.7, which switches to MariaDB 10.0.16 (which, AFAIK, back-ports MySQL 5.6 features). To that end I built a small staging OpenBSD 5.7/MariaDB server so that I can test the upgrade. I've recreated the database schema to match the production one and ran mysql_upgrade --force just to be safe.
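For context, the schema recreation and the upgrade check were done roughly like this; the commands below are a reconstruction of those steps, not a transcript:

prod    $ mysqldump -u root -p --no-data mydb > schema.sql
staging $ mysql -u root -p -D mydb < schema.sql
staging $ mysql_upgrade -u root -p --force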
Now I'd like to fill it with production data. I dumped the data with:
$ mysqldump -u root -p --single-transaction --no-create-info \
--skip-add-drop-table --skip-extended-insert mydb > data_dump.sql
$ du -h data_dump.sql
116M data_dump.sql
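As a quick sanity check (not part of the original workflow), the dump can be confirmed to consist of plain single-row INSERT statements:

$ file data_dump.sql                       # should report plain text, no binary content
$ grep -c '^INSERT INTO' data_dump.sql     # one INSERT per row, thanks to --skip-extended-insert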
The dump contains only text and numbers, no BLOBs or binaries. The problem is that MariaDB keeps aborting randomly while importing the first table in the dump, which has just ~23K rows.
staging $ df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/wd0a      7.2G    3.5G    3.3G    51%    /
staging $ mysql -u root -p -D mydb < data_dump.sql
Enter password:
ERROR 2013 (HY000) at line 8039: Lost connection to MySQL server during query
The line where it crashes is an ordinary INSERT, no different from the thousands before it. In fact, repeating the import (after truncating the tables and making sure the server starts OK) does NOT crash at the same line; with the same my.cnf I've seen it crash after ~4,000 rows, for example. So I've ruled out the bad-data hypothesis.
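For reference, the statements around the reported line can be inspected directly (the line number varies between runs; 8039 is from the error above):

$ sed -n '8035,8043p' data_dump.sql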
The error log shows the following (from server startup to crash):
160211 12:07:05 mysqld_safe Starting mysqld daemon with databases from /var/mysql
160211 12:07:06 [Note] InnoDB: Using mutexes to ref count buffer pool pages
160211 12:07:06 [Note] InnoDB: The InnoDB memory heap is disabled
160211 12:07:06 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
160211 12:07:06 [Note] InnoDB: GCC builtin __sync_synchronize() is used for memory barrier
160211 12:07:06 [Note] InnoDB: Compressed tables use zlib 1.2.3
160211 12:07:06 [Note] InnoDB: Not using CPU crc32 instructions
160211 12:07:06 [Note] InnoDB: Initializing buffer pool, size = 128.0M
160211 12:07:06 [Note] InnoDB: Completed initialization of buffer pool
160211 12:07:06 [Note] InnoDB: Setting log file /var/mysql/ib_logfile101 size to 32 MB
160211 12:07:09 [Note] InnoDB: Setting log file /var/mysql/ib_logfile1 size to 32 MB
160211 12:07:11 [Note] InnoDB: Renaming log file /var/mysql/ib_logfile101 to /var/mysql/ib_logfile0
160211 12:07:11 [Warning] InnoDB: New log files created, LSN=27985430
160211 12:07:11 [Note] InnoDB: Highest supported file format is Barracuda.
160211 12:07:11 [Note] InnoDB: 128 rollback segment(s) are active.
160211 12:07:12 [Note] InnoDB: Waiting for purge to start
160211 12:07:12 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.22-71.0 started; log sequence number 27985932
160211 12:07:12 [Note] Server socket created on IP: '127.0.0.1'.
160211 12:07:12 [Note] Event Scheduler: Loaded 0 events
160211 12:07:12 [Note] /usr/local/libexec/mysqld: ready for connections.
Version: '10.0.16-MariaDB-log' socket: '/var/run/mysql/mysql.sock' port: 3306 OpenBSD port: mariadb-server-10.0.16v0
2016-02-11 12:09:34 c10b6700 InnoDB: Assertion failure in thread 3238749952 in file trx0purge.cc line 695
InnoDB: Failing assertion: purge_sys->rseg->last_page_no != FIL_NULL
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
Abort trap
160211 12:09:34 mysqld_safe mysqld restarted
160211 12:09:34 [Note] InnoDB: Using mutexes to ref count buffer pool pages
160211 12:09:34 [Note] InnoDB: The InnoDB memory heap is disabled
160211 12:09:34 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
160211 12:09:34 [Note] InnoDB: GCC builtin __sync_synchronize() is used for memory barrier
160211 12:09:34 [Note] InnoDB: Compressed tables use zlib 1.2.3
160211 12:09:34 [Note] InnoDB: Not using CPU crc32 instructions
160211 12:09:34 [Note] InnoDB: Initializing buffer pool, size = 128.0M
160211 12:09:35 [Note] InnoDB: Completed initialization of buffer pool
160211 12:09:35 [Note] InnoDB: Highest supported file format is Barracuda.
160211 12:09:35 [Note] InnoDB: Log scan progressed past the checkpoint lsn 31818497
160211 12:09:35 [Note] InnoDB: Database was not shutdown normally!
160211 12:09:35 [Note] InnoDB: Starting crash recovery.
160211 12:09:35 [Note] InnoDB: Reading tablespace information from the .ibd files...
160211 12:09:35 [Note] InnoDB: Restoring possible half-written data pages
160211 12:09:35 [Note] InnoDB: from the doublewrite buffer...
InnoDB: Doing recovery: scanned up to log sequence number 32140062
160211 12:09:36 [Note] InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percent: 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: Last MySQL binlog file position 0 2314577, file name ./binlog.000079
160211 12:09:36 [Note] InnoDB: 128 rollback segment(s) are active.
160211 12:09:36 [Note] InnoDB: Waiting for purge to start
160211 12:09:36 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.22-71.0 started; log sequence number 32140062
160211 12:09:37 [Note] Recovering after a crash using binlog
160211 12:09:37 [Note] Starting crash recovery...
160211 12:09:37 [Note] Crash recovery finished.
160211 12:09:37 [Note] Server socket created on IP: '127.0.0.1'.
160211 12:09:37 [Note] Event Scheduler: Loaded 0 events
160211 12:09:37 [Note] /usr/local/libexec/mysqld: ready for connections.
Version: '10.0.16-MariaDB-log' socket: '/var/run/mysql/mysql.sock' port: 3306 OpenBSD port: mariadb-server-10.0.16v0
I didn't have any luck googling for the purge_sys->rseg->last_page_no != FIL_NULL failed assertion. Here's the /etc/my.cnf file for MariaDB:
[client]
port = 3306
socket = /var/run/mysql/mysql.sock
[mysqld]
port = 3306
socket = /var/run/mysql/mysql.sock
datadir = /var/mysql
max_allowed_packet = 128M
table_open_cache = 64
sort_buffer_size = 128K
read_buffer_size = 512K
net_buffer_length = 64K
server-id = 1
general-log = 1
general_log_file = query.log
log-error = error.log
slow-query-log = 1
long_query_time = 2
slow_query_log_file = slow_queries.log
bind-address = 127.0.0.1
log-bin = binlog
binlog_format = mixed
max_binlog_size = 512M
key_buffer_size = 2M
read_rnd_buffer_size = 1M
innodb_data_home_dir = /var/mysql/
innodb_log_group_home_dir = /var/mysql/
innodb_data_file_path = ibdata1:10M:autoextend
innodb_buffer_pool_size = 128M
innodb_log_file_size = 32M
innodb_log_buffer_size = 16M
innodb_lock_wait_timeout = 50
innodb_max_dirty_pages_pct = 1
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[mysqlhotcopy]
interactive-timeout
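For completeness, the values the running server actually picked up from this file can be double-checked with, for example:

MariaDB []> SHOW GLOBAL VARIABLES LIKE 'innodb%size%';
MariaDB []> SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';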
Originally some of those parameters were set to lower values (they still are on the production MySQL), but raising them as shown above made little difference:
net_buffer_length was 2K
max_allowed_packet was 1M
innodb_buffer_pool_size was 64M
innodb_log_file_size was 5M
innodb_log_buffer_size was 8M
Also, increasing innodb_buffer_pool_size/innodb_log_file_size to 256M/64M seemed to postpone the crash a little, but didn't help significantly.
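(To confirm such changes actually took effect after a restart, one option is to check the startup lines in the error log; assuming the relative log-error path resolves under the datadir, that file should be /var/mysql/error.log.)

staging $ egrep 'Initializing buffer pool|Setting log file' /var/mysql/error.log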
Short of breaking the import into smaller chunks, I don't know what else to do, and this seems like a pretty small import file to me. The staging server is an old iBook (500 MHz PowerPC) with a non-SSD drive (3.3G free), but I did not notice any disk problems or high memory pressure during the import (top reported 200+ MB of free RAM).
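If it does come down to chunking, one rough sketch (relying on --skip-extended-insert keeping every row on its own line, so splitting on line boundaries never cuts a statement in half) would be:

$ split -l 5000 data_dump.sql chunk_
$ for f in chunk_*; do mysql -u root -p -D mydb < "$f" || break; done

Note that -p prompts for the password once per chunk, and the session-level SET lines at the top of the dump would only apply to the first chunk.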
What my.cnf params could cause this crash?
EDIT: timeout info as requested:
MariaDB []> SHOW GLOBAL VARIABLES LIKE '%timeout';
+-----------------------------+----------+
| Variable_name | Value |
+-----------------------------+----------+
| connect_timeout | 10 |
| delayed_insert_timeout | 300 |
| innodb_flush_log_at_timeout | 1 |
| innodb_lock_wait_timeout | 50 |
| innodb_rollback_on_timeout | OFF |
| interactive_timeout | 28800 |
| lock_wait_timeout | 31536000 |
| net_read_timeout | 30 |
| net_write_timeout | 60 |
| slave_net_timeout | 3600 |
| thread_pool_idle_timeout | 60 |
| wait_timeout | 28800 |
+-----------------------------+----------+
It looks like this might be a bug, perhaps an OpenBSD/PowerPC-related one. It has been reported to the MariaDB developers here: http://mariadb.atlassian.net/browse/MDEV-9569