[crontab] mysqldump & gzip commands to properly create a compressed file of a MySQL database using crontab

I am having problems getting a crontab to work. I want to automate a MySQL database backup.

The setup:

  • Debian GNU/Linux 7.3 (wheezy)
  • MySQL Server version: 5.5.33-0+wheezy1(Debian)
  • directories user, backup and backup2 have 755 permissions
  • the user names for the MySQL database and the Debian account are the same

From the shell this command works

mysqldump -u user -p[user_password] [database_name] | gzip > dumpfilename.sql.gz

When I place this in a crontab using crontab -e

* * * * * /usr/bin/mysqldump -u user -pupasswd mydatabase | gzip > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz >/dev/null 2>&1

A file is created every minute in /home/user/backup directory, but has 0 bytes.

However, when I add a second redirection to another directory, backup2, I notice that the proper mysqldump file, duly compressed, is created there. I am unable to figure out what mistake results in a 0-byte file in the first directory and the expected output in the second directory.

* * * * * /usr/bin/mysqldump -u user -pupasswd my-database | gzip > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz >/home/user/backup2/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz 2>&1

I would greatly appreciate an explanation.

Thanks


The answer is


Personally, I created a file.sh (with 755 permissions) in the root directory; that file does the job and is run by crontab.

Crontab code:

10 2 * * * root /root/backupautomatique.sh
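
Note that this line includes a user field (root), which is only valid in /etc/crontab or a file under /etc/cron.d/. In a per-user crontab edited with crontab -e there is no user field, so the equivalent entry (same schedule and script path assumed) would be:

10 2 * * * /root/backupautomatique.sh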

File.sh code:

rm -f /home/mordb-148-251-89-66.sql.gz  # erase the old dump

mysqldump mor | gzip > /home/mordb-148-251-89-66.sql.gz  # what you have done

# send a copy somewhere else in case the sending server crashes, because it is too old, like me ;-)
scp -P2222 /home/mordb-148-251-89-66.sql.gz root@otherip:/home/mordbexternes/mordb-148-251-89-66.sql.gz
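
Note that mysqldump mor runs here without -u or -p, so the credentials have to come from somewhere else. A common way to do that (an assumption about this setup, not something stated above) is a [client] section in root's ~/.my.cnf, which also keeps the password out of the script and the crontab:

[client]
user=backupuser
password=backuppassword

mysqldump and the other MySQL client programs read this file automatically; backupuser and backuppassword are placeholders for your own credentials.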


Besides the solution from m79lkm above, my 2 cents on this topic: do not pipe the result directly into gzip, but first dump it as a .sql file and then gzip it (use && instead of |).

The dump itself will be faster (in what I tested it was twice as fast).

Otherwise your tables will be locked longer, and the downtime/slow responses of your application can bother your users. The mysqldump command takes a lot of resources from your server.

So I would go for "&& gzip" instead of "| gzip"

Important: check for free disk space first with df -h, since you will need more than when piping straight into gzip.

mysqldump -u user -p[user_password] [database_name] > dumpfilename.sql && gzip dumpfilename.sql

This will also result in a single file called dumpfilename.sql.gz.

Furthermore, the option --single-transaction prevents the tables from being locked while still producing a consistent backup, so you might consider using that option. See the mysqldump documentation.

mysqldump --single-transaction -u user -p[user_password] [database_name] > dumpfilename.sql && gzip dumpfilename.sql
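
Put together as a single cron entry, reusing the user, password, and paths from the question (the 02:30 schedule is only an example, and the % signs are escaped as \% because cron treats a bare % specially), this could look like:

30 2 * * * f=/home/user/backup/mydatabase-backup-$(date +\%m\%d_\%Y).sql; /usr/bin/mysqldump --single-transaction -u user -pupasswd mydatabase > "$f" && gzip "$f"

Storing the file name in a variable first avoids calling date twice and keeps the dump and the gzip step pointed at the same file.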

You can use the tee command to redirect output:

/usr/bin/mysqldump -u user -pupasswd my-database | \
tee >(gzip -9 -c > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz)  | \
gzip > /home/user/backup2/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz 2>&1

See the tee documentation.
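
One caveat for cron: the >( ... ) process substitution that tee relies on here is a bash feature, and cron runs commands with /bin/sh by default. Assuming your cron honors the SHELL variable (Debian's does), you can set the shell at the top of the crontab, for example (the 03:00 schedule is just an example):

SHELL=/bin/bash
0 3 * * * /usr/bin/mysqldump -u user -pupasswd my-database | tee >(gzip -9 -c > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz) | gzip > /home/user/backup2/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz

Alternatively, wrap the whole pipeline in bash -c '...'.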


If you need to add a date-time stamp to your backup file name (CentOS 7), use the following:

/usr/bin/mysqldump -u USER -pPASSWD DBNAME | gzip > ~/backups/db.$(date +%F.%H%M%S).sql.gz

This will create a file like: db.2017-11-17.231537.sql.gz
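
If you move that same command into a crontab instead of running it from a shell, remember to escape each % as \%, since cron otherwise treats % as a line separator (the 01:00 schedule below is just an example):

0 1 * * * /usr/bin/mysqldump -u USER -pPASSWD DBNAME | gzip > ~/backups/db.$(date +\%F.\%H\%M\%S).sql.gz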