On an Ubuntu 14.04 terminal using bash, I'm able to dump from the master MySQL db to the slave:
mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 --user=myuser --password=[password] --host=127.0.0.1 mydb | mysql --max_allowed_packet=128M -h 192.168.1.110 -u myuser -p[password] otherdb
But I simultaneously want to redirect the mysqldump output to head -30 > /tmp/pointintime.log so I can get the point-in-time info:
--
-- Position to start replication or point-in-time recovery from
--
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.049868', MASTER_LOG_POS=107;
How to do this? I tried using tee, but this results in SQL syntax errors at the slave import end (as though it's breaking up the output and not sending full statements):
mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 --user=myuser --password=[password] --host=127.0.0.1 mydb | tee >(head -30 >/tmp/pointintime.log) | mysql --max_allowed_packet=128M -h 192.168.1.110 -u myuser -p[password] otherdb
How can I mysqldump-import to the slave MySQL db and simultaneously retrieve the head -30 of the dump? I don't want to dump to a file first because the db is huge; it's more efficient to pipe over the network.
I also tried using multiple tee redirects, but got the same syntax error:
mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 --user=myuser --password=[password] --host=127.0.0.1 mydb | tee >(mysql --max_allowed_packet=128M -h 192.168.1.110 -u myuser -p[password] otherdb) >(head -30 >/tmp/pointintime.log)
ERROR 1064 (42000) at line 47: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near..
You can't do this with tee and head, because head closes its input handle once it has output the requested number of lines, breaking the pipe. Here's what you should see with the command you've written:
mysqldump: Got errno 32 on write
Error 32 is "Broken pipe."
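If you want to double-check what an errno number means on your system, Perl itself will translate it for you (just a sanity check, not part of the fix):

perl -le '$! = 32; print $!'

On Linux that should print "Broken pipe".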
The other error -- I suspect -- is because the broken pipe causes the input handle to mysql to be closed with a partial line of output already written, and the end-of-file it receives when that happens is interpreted as a premature end of statement.
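You can reproduce the effect without involving mysqldump at all; here's a rough sketch (the numbers and path are arbitrary) for a typical Linux box:

seq 1 100000 | tee >(head -30 >/tmp/first30.log) | wc -l

You'll most likely see wc -l report far fewer than 100000 lines, because tee is killed by SIGPIPE the first time it writes to the process substitution after head has exited -- the same mid-stream truncation mysql is choking on.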
You need something in the middle that will not break the pipe. Suggestion: the Swiss Army chainsaw... Perl.
Replace this:
| tee >(head -30 >/tmp/pointintime.log) |
With this:
| perl -p -e 'print STDERR $_ unless $. > 30' 2>/tmp/pointintime.log |
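So, assuming the same credentials and hosts from your question, the whole pipeline becomes:

mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 --user=myuser --password=[password] --host=127.0.0.1 mydb | perl -p -e 'print STDERR $_ unless $. > 30' 2>/tmp/pointintime.log | mysql --max_allowed_packet=128M -h 192.168.1.110 -u myuser -p[password] otherdb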
The -e switch tells Perl that instead of loading a script from a file, the script is inside the quoted string that follows. The -p switch causes this little one-line "program" to be run for each line of STDIN, after which each input line is printed to STDOUT; the current line transiently appears in the variable $_ as it passes through, and $. holds the running line counter. STDERR, of course, is the second output stream, which we catch with 2> and redirect to your log file, and once $. > 30 is true, no more lines are written to your log... but they're all still written to the output.
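You can convince yourself with something harmless before pointing it at a huge dump (the path here is arbitrary):

seq 1 100 | perl -p -e 'print STDERR $_ unless $. > 30' 2>/tmp/pointintime.test | wc -l

wc -l should report 100 -- every line passed through untouched -- while /tmp/pointintime.test holds only the first 30.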
Simple as that.
Piping mysqldump through Perl in this way is completely safe -- it will not manipulate the data in any way; it will write it out exactly as it came in. I routinely do this when I need to modify a backup stream on the fly.
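For example -- not needed for your case, just to illustrate the point -- a filter like this sketch can strip DEFINER clauses from a dump as it streams past (treat the pattern as a starting point, not gospel):

... | perl -pe 's/\sDEFINER=`[^`]*`@`[^`]*`//g' | ...

The same no-temp-file, in-flight approach applies.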