MySQL Database Backup and Restore: Practical Lessons from Production

MySQL tutorial - IT technology blog

Production database down at 2 AM — a lesson I’d rather forget

Last year I had to deal with a serious incident: a table got corrupted because a write operation was interrupted by a power outage on the server. The painful part wasn’t the database error itself — it was that the team’s most recent backup only had the schema, no data. That night, my lead dev and I spent three hours manually recovering tables from log files.

That’s when I learned: backing up MySQL isn’t just about running a command. You need to understand what you’re backing up, how to restore it, and most importantly — you need to test your restores regularly.

Why MySQL backup is easier to get wrong than you think

Looking back at the incidents I’ve dealt with, MySQL backup failures almost always come down to the same mistakes:

  • Schema-only backups: Running mysqldump with the wrong flags, dumping only the structure without any data
  • No consistency guarantee: Backing up while write transactions are in progress → data is inconsistent across tables
  • Corrupt backup files: Disk fills up mid-dump, process gets killed → incomplete dump file that nobody notices
  • Never testing restores: Backup runs fine but restore fails due to character set mismatches, foreign key issues, or broken triggers
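
The third failure mode, the silently truncated dump, is cheap to guard against. By default mysqldump appends a completion marker as its final line, so a simple check catches incomplete files. A minimal sketch (the function name is mine; the marker is absent if you dump with --skip-comments or --skip-dump-date):

```shell
#!/bin/bash
# Sanity-check that a dump file actually finished: a successful
# mysqldump run ends with a "-- Dump completed on ..." comment line,
# so a truncated file will be missing it
check_dump() {
    local dump_file="$1"
    if tail -n 5 "$dump_file" | grep -q 'Dump completed'; then
        echo "OK: $dump_file looks complete"
    else
        echo "ERROR: $dump_file is missing the completion marker" >&2
        return 1
    fi
}
```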

My production database at the time was MySQL 8.0, around 50GB of data. I tried several approaches before finding a stable workflow. Initially I used plain mysqldump: each backup took nearly 45 minutes, and the server noticeably slowed down the entire time.

MySQL backup methods

1. mysqldump — Simple, sufficient for smaller databases

No additional installation needed — it ships with MySQL. Works well for databases under 10GB or dev/staging environments where zero-downtime backup isn’t a requirement.

# Back up a single database
mysqldump -u root -p mydb > mydb_backup_$(date +%Y%m%d_%H%M%S).sql

# Back up multiple databases at once
mysqldump -u root -p --databases mydb otherdb > multi_backup.sql

# Back up the entire MySQL server
mysqldump -u root -p --all-databases > full_backup.sql

Essential flags for production backups:

# Note: comments can't follow the line-continuation backslash in a
# shell command, so the flags are explained here instead:
#   --single-transaction  consistent snapshot without locking InnoDB tables
#   --routines            include stored procedures and functions
#   --triggers            include triggers
#   --events              include scheduled events
#   --hex-blob            safely encode binary data
mysqldump -u root -p \
  --single-transaction \
  --routines \
  --triggers \
  --events \
  --hex-blob \
  mydb > mydb_backup.sql

--single-transaction is a flag I always enable on production. Instead of locking tables, it opens a consistent-read transaction — other write transactions continue running normally during the backup. Omitting this flag on a database with active write traffic means your backup data will be inconsistent across tables.
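
Since binlog-based point-in-time recovery comes up later in this post, one related flag is worth knowing: mysqldump can record the binlog coordinates that the snapshot corresponds to. A sketch, assuming binary logging is enabled and MySQL 8.0.26 or newer, where the option is named --source-data (older servers spell it --master-data):

```shell
# Record the binlog file/position at the moment the consistent snapshot
# starts, written as a commented-out CHANGE ... statement in the dump
# (--source-data=2; on MySQL < 8.0.26 use --master-data=2)
mysqldump -u root -p \
  --single-transaction \
  --source-data=2 \
  mydb > mydb_backup.sql

# The recorded coordinates sit near the top of the dump
grep -m1 'LOG_POS' mydb_backup.sql
```

Those coordinates tell you exactly which binlog position to start replaying from during point-in-time recovery.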

Restoring from a dump file:

# Restore into an existing database
mysql -u root -p mydb < mydb_backup.sql

# Create a new database and restore into it
mysql -u root -p -e "CREATE DATABASE mydb_restored CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
mysql -u root -p mydb_restored < mydb_backup.sql

2. mysqlpump — Parallel backup, significantly faster

MySQL 5.7.8 introduced mysqlpump with parallel threads — exporting multiple tables simultaneously instead of sequentially. On that same 50GB database, backup time dropped from 45 minutes down to 18 minutes. (Note that mysqlpump has since been deprecated, as of MySQL 8.0.34, so check whether your version still ships it before building a workflow around it.)

# Run 4 parallel threads. Note: mysqlpump's --single-transaction is
# mutually exclusive with parallelism (it requires
# --default-parallelism=0), so a parallel dump trades the
# consistent-snapshot guarantee for speed; run it in a low-write window
mysqlpump -u root -p \
  --default-parallelism=4 \
  mydb > mydb_pump_backup.sql

The output includes a progress indicator — you can see how far along it is instead of staring at a frozen screen. A practical note: when I tested with 8 threads, disk I/O became saturated and it actually ran slower than 4 threads. Benchmark on staging first if you’re unsure about your server’s disk throughput.

3. Percona XtraBackup — Hot backup for serious production workloads

Large database that can’t afford server slowdown during backup? XtraBackup is the answer. Instead of exporting SQL like mysqldump, it copies InnoDB files directly at the physical level — no table locks, no query impact, and it supports incremental backups to save storage space.

# Install on Ubuntu/Debian
apt install percona-xtrabackup-80

# Full backup
xtrabackup --backup \
  --user=root \
  --password=yourpassword \
  --target-dir=/backup/mysql/full/$(date +%Y%m%d)

# Incremental backup (only backs up changes since the last backup)
xtrabackup --backup \
  --user=root \
  --password=yourpassword \
  --target-dir=/backup/mysql/inc/$(date +%Y%m%d) \
  --incremental-basedir=/backup/mysql/full/20250301

Restoring with XtraBackup requires two steps — the prepare step is mandatory and cannot be skipped:

# Step 1: Prepare — apply logs to ensure consistency
xtrabackup --prepare --target-dir=/backup/mysql/full/20250301

# Step 2: Copy data into the MySQL datadir
# (the datadir must be empty, or --copy-back will refuse to run)
systemctl stop mysql
mv /var/lib/mysql /var/lib/mysql.old && mkdir /var/lib/mysql
xtrabackup --copy-back --target-dir=/backup/mysql/full/20250301
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql
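
One thing the full-restore example above doesn't cover: when incrementals are involved, the prepare step changes. This sketch follows Percona's documented procedure; the dates are illustrative (a 20250302 incremental on top of the 20250301 full backup):

```shell
# Prepare the base backup with --apply-log-only, which skips the
# rollback of uncommitted transactions so incrementals can still be
# applied on top of it
xtrabackup --prepare --apply-log-only \
  --target-dir=/backup/mysql/full/20250301

# Apply the last (or only) incremental WITHOUT --apply-log-only
xtrabackup --prepare \
  --target-dir=/backup/mysql/full/20250301 \
  --incremental-dir=/backup/mysql/inc/20250302

# Then stop MySQL and --copy-back as in the full restore above
```

Getting this order wrong (or preparing the base without --apply-log-only) makes the incrementals unusable, which is exactly the kind of thing a monthly restore test catches.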

Automated backup script — what I actually use in production

The script below runs via cron at 2 AM every night. It backs up each database individually instead of using --all-databases — this makes partial restores much easier when needed. Files are compressed immediately, and backups older than 7 days are automatically deleted:

#!/bin/bash
# mysql_backup.sh
set -o pipefail  # so the mysqldump | gzip pipeline reports mysqldump failures

DB_USER="backup_user"
DB_PASS="$(cat /etc/mysql_backup_pass)"  # Don't hardcode the password
BACKUP_DIR="/backup/mysql"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=7
LOG_FILE="/var/log/mysql_backup.log"

echo "[$DATE] Starting MySQL backup..." >> $LOG_FILE

# Back up each database individually (easier to restore than all-databases)
# -N suppresses the header row; the anchored pattern avoids skipping
# user databases that merely contain "sys" in their name
for DB in $(mysql -u"$DB_USER" -p"$DB_PASS" -N -e "SHOW DATABASES;" \
  | grep -Ev "^(information_schema|performance_schema|sys)$"); do

    OUTPUT_FILE="$BACKUP_DIR/${DB}_${DATE}.sql.gz"

    mysqldump -u$DB_USER -p$DB_PASS \
        --single-transaction \
        --routines \
        --triggers \
        "$DB" | gzip -9 > "$OUTPUT_FILE"

    if [ $? -eq 0 ]; then
        echo "[$DATE] OK: $DB → $OUTPUT_FILE" >> $LOG_FILE
    else
        echo "[$DATE] ERROR: Failed to backup $DB" >> $LOG_FILE
        # Add Slack/email alert here if needed
    fi
done

# Delete backups older than RETENTION_DAYS days
find $BACKUP_DIR -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
echo "[$DATE] Cleaned backups older than $RETENTION_DAYS days" >> $LOG_FILE

Add to crontab to run at 2 AM every day:

0 2 * * * /usr/local/bin/mysql_backup.sh
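
One refinement I'd consider: if a dump ever runs longer than expected, the next cron invocation can overlap it. A small wrapper with flock(1) from util-linux prevents that; a sketch (the lock path is an arbitrary choice, not anything MySQL-specific):

```shell
#!/bin/bash
# Skip this run if the previous backup is still holding the lock
LOCKFILE="${LOCKFILE:-/tmp/mysql_backup.lock}"

# Open the lock file on fd 9 and try to acquire the lock without waiting
exec 9>"$LOCKFILE"
if ! flock -n 9; then
    echo "Previous backup still running, skipping this run" >&2
    exit 1
fi

# The lock is held for the lifetime of fd 9; run the real backup here:
# /usr/local/bin/mysql_backup.sh
echo "Lock acquired, safe to run backup"
```

Point the crontab entry at this wrapper instead of calling the backup script directly.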

Testing restores — the mandatory step most people skip

An untested backup is not really a backup. That might sound extreme, but I’ve personally witnessed cases where backups ran successfully for months, then failed with an encoding error during an actual restore — or the dump file was silently truncated and nobody knew. Once a month, I restore to a staging server and verify row counts on the critical tables:

# Decompress and restore into a test database
gunzip -c /backup/mysql/mydb_20250301_020000.sql.gz | \
  mysql -u root -p mydb_test

# Verify row counts on critical tables
mysql -u root -p mydb_test -e "
  SELECT 'users' AS tbl, COUNT(*) AS cnt FROM users
  UNION ALL
  SELECT 'orders', COUNT(*) FROM orders
  UNION ALL
  SELECT 'products', COUNT(*) FROM products;
"
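
Before even attempting the restore, it's worth checking the archive itself: gzip has a built-in integrity test that catches truncated files, which is exactly the "disk filled up mid-dump" failure mode. A minimal sketch (the function name is mine):

```shell
#!/bin/bash
# Reject a truncated or corrupt archive before wasting time on a restore
verify_archive() {
    local archive="$1"
    if gzip -t "$archive" 2>/dev/null; then
        echo "OK: $archive passes the gzip integrity check"
    else
        echo "ERROR: $archive is truncated or corrupt" >&2
        return 1
    fi
}
```

This is cheap enough to run right after every backup, not just during the monthly restore drill.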

The 3-2-1 backup strategy combined with binlog

After a few incidents like that, I decided to do things properly and adopted the 3-2-1 strategy for MySQL:

  • 3 backup copies: Full weekly + incremental daily + continuous binary log
  • 2 storage types: Local disk and cloud (S3, Backblaze B2)
  • 1 offsite copy: At least one copy must be stored somewhere other than the production server

Upload backups to S3 after the script completes:

# Requires AWS CLI installed and credentials configured
aws s3 cp /backup/mysql/mydb_${DATE}.sql.gz \
  s3://your-bucket/mysql-backups/ \
  --storage-class STANDARD_IA   # Infrequent Access — cheaper than Standard
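
Network uploads can be silently truncated too, so I'd pair the upload with a checksum that travels alongside the file. A sketch (the function name and bucket path are illustrative):

```shell
# Write a SHA-256 checksum next to the backup and upload both, so the
# offsite copy can be verified end-to-end after download
upload_with_checksum() {
    local file="$1" dest="$2"
    sha256sum "$file" > "$file.sha256" || return 1
    aws s3 cp "$file"        "$dest" --storage-class STANDARD_IA
    aws s3 cp "$file.sha256" "$dest"
}

# On the restore host, after downloading both files:
# sha256sum -c mydb_20250301_020000.sql.gz.sha256
```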

Binary logs (binlog) are what saved me during that incident. With binlog enabled, you can perform point-in-time recovery — restoring to the exact moment right before the failure occurred:

# Enable binlog in /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
binlog_expire_logs_seconds = 1209600  # 14 days

# Point-in-time recovery: apply binlog from after the backup up to the target time
mysqlbinlog \
  --start-datetime="2025-03-01 02:00:00" \
  --stop-datetime="2025-03-01 14:30:00" \
  /var/log/mysql/mysql-bin.000123 | mysql -u root -p mydb
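
One refinement I'd suggest here: extract to a file and review it before replaying, rather than streaming straight into mysql. A sketch (recovery.sql is just an illustrative name):

```shell
# Extract first, review, then replay. A datetime-based --stop-datetime
# can easily include the very statement (e.g. a bad DROP) you are
# recovering from, so find it and re-extract with --stop-position set
# to just before it
mysqlbinlog \
  --start-datetime="2025-03-01 02:00:00" \
  --stop-datetime="2025-03-01 14:30:00" \
  /var/log/mysql/mysql-bin.000123 > recovery.sql

# With binlog_format=ROW, add --verbose to the mysqlbinlog call above
# to decode row events into readable pseudo-SQL before searching
grep -n 'DROP' recovery.sql

mysql -u root -p mydb < recovery.sql
```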

My current setup for a 50GB production database: XtraBackup full backup every weekend, binlog for continuous intraday recovery, and mysqldump for daily backups of each critical database. Everything syncs to S3 after each run. Three layers of protection — and more importantly, all three have been validated with real restore tests. I sleep a lot better than I did when there was just one silent cron job that nobody ever checked the logs for.
