2 AM and the Question: “Where’s the Backup?”
This happened to me last year. The production database got corrupted after a careless package update. With trembling hands I opened a terminal, ran the command to check the backup — and realized the last backup was… 3 weeks ago. Three weeks of transaction logs, orders, user data — gone. No recovery, no rollback.
That taught me a hard lesson: manual backups are backups that don’t exist. Humans forget; machines don’t. Cron jobs were born to solve exactly this problem.
After 3 years managing more than 10 Linux VPS instances, the most expensive lesson I’ve learned is: always run your script manually before handing it off to cron. Especially with backups — a broken script running at 3 AM can fail silently for an entire month without anyone knowing, until you actually need to restore.
Background & Why Automation Matters
The Manual Backup Trap
Many teams have clear backup procedures on paper: “Back up every Friday evening.” But in practice? The person in charge takes leave → forgets to back up. Tight deadline → skip it. Holidays → nobody remembers.
Cron jobs completely eliminate human dependency from this process. Configure it once, and it runs on schedule, every day, without any reminders.
What to Back Up on a Real Server
- Database: MySQL/PostgreSQL dump — the most critical item; lose this and the data is gone for good
- Config files: /etc/nginx/, /etc/apache2/, application configs
- User data: /var/www/html/uploads, media files
- SSL certificates: /etc/letsencrypt/ — lose these and your website goes down immediately
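Before scripting anything, it helps to check how much data those paths actually hold, so you can size the backup disk and the retention window. A rough check (paths follow the list above; the MySQL data directory is only a loose upper bound, since a compressed dump is usually much smaller):
# Size of the file-based backup targets (adjust to what exists on your server)
du -sh /etc/nginx /etc/letsencrypt /var/www/html/uploads 2>/dev/null
# Raw MySQL data directory; the compressed dump will be considerably smaller
du -sh /var/lib/mysql 2>/dev/null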
Installation
Check if the Cron Daemon Is Running
The cron daemon comes pre-installed on most modern Linux distros — you usually don’t need to install anything extra. Quick check:
# Ubuntu/Debian
systemctl status cron
# RHEL/AlmaLinux/CentOS
systemctl status crond
# If not running, start it:
systemctl enable --now cron # Ubuntu/Debian
systemctl enable --now crond # RHEL-based
Install Required Tools
# Ubuntu/Debian
apt install -y mysql-client postgresql-client rsync gzip tar curl
# AlmaLinux/CentOS
dnf install -y mysql postgresql rsync gzip tar curl
# For cloud backup (S3, Backblaze, Google Drive...)
curl https://rclone.org/install.sh | sudo bash
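rclone needs a remote configured before anything can be pushed offsite. The interactive wizard (rclone config) is the usual route; a non-interactive sketch for an S3-compatible bucket is shown below. The remote name myaws matches the REMOTE_DEST used in the backup script later, while the provider, keys, and region are placeholders to replace with your own:
# Interactive setup (walks through provider, credentials, region):
rclone config
# Or non-interactively for an S3-compatible provider (placeholder values):
rclone config create myaws s3 provider AWS access_key_id YOUR_KEY secret_access_key YOUR_SECRET region us-east-1
# Sanity-check the remote before trusting it with backups:
rclone lsd myaws: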
Detailed Configuration
Understanding Cron Syntax Once and for All
Many people are intimidated by cron because the syntax looks strange. In reality, there are only 5 fields:
* * * * * command_to_run
│ │ │ │ │
│ │ │ │ └── Day of week (0-7, both 0 and 7 are Sunday)
│ │ │ └──── Month (1-12)
│ │ └────── Day of month (1-31)
│ └──────── Hour (0-23)
└────────── Minute (0-59)
Some commonly used patterns:
0 2 * * * # Run at 2:00 AM every day
0 2 * * 0 # Run at 2:00 AM every Sunday
*/30 * * * * # Run every 30 minutes
0 0 1 * * # Run at midnight on the 1st of every month
0 2,14 * * * # Run at 2:00 AM and 2:00 PM every day
Not sure if your expression is correct? Paste it into crontab.guru to test right in your browser — it saves a ton of debugging time.
Create a MySQL User with Backup-Only Permissions
Don’t use the MySQL root user in your backup script. Create a dedicated user with minimal privileges instead; if the script ever leaks, an attacker can only read data, not delete or modify anything:
-- Run in MySQL console
CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'strong_password_here';
GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON myapp_db.*
TO 'backup_user'@'localhost';
FLUSH PRIVILEGES;
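Before wiring the new account into the script, it's worth a quick dry run to confirm it can actually produce a full dump. Depending on the MySQL version, --routines may need an extra privilege (for example SHOW_ROUTINE on recent 8.0 releases), and newer mysqldump builds may complain about tablespaces unless you add --no-tablespaces or grant PROCESS. A manual test, using the same names as above:
# Dry-run dump as backup_user; output is discarded, we only care about errors
mysqldump -u backup_user -p \
  --single-transaction --routines --triggers \
  myapp_db > /dev/null && echo "backup_user can dump myapp_db"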
Writing a Complete Backup Script
The script below is what I’m actually running in production — refined through many painful lessons:
#!/bin/bash
# /usr/local/bin/backup.sh
set -euo pipefail # Exit immediately on error, undefined variable, or pipe failure
# ===== CONFIG =====
BACKUP_DIR="/var/backups/myapp"
MYSQL_USER="backup_user"
MYSQL_PASS="your_password_here"
MYSQL_DB="myapp_db"
REMOTE_DEST="myaws:mybucket/backups" # rclone remote (omit if not using)
KEEP_DAYS=7
DATE=$(date +%Y%m%d_%H%M%S)
LOG_FILE="/var/log/backup.log"
# ===== HELPER =====
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# ===== MAIN =====
mkdir -p "$BACKUP_DIR"
log "=== Backup started: $DATE ==="
# 1. Dump MySQL
log "Dumping MySQL..."
mysqldump \
-u "$MYSQL_USER" \
-p"$MYSQL_PASS" \
--single-transaction \
--routines \
--triggers \
"$MYSQL_DB" | gzip > "$BACKUP_DIR/db_${DATE}.sql.gz"
log "✓ MySQL done: db_${DATE}.sql.gz ($(du -sh "$BACKUP_DIR/db_${DATE}.sql.gz" | cut -f1))"
# 2. Backup config + SSL
log "Backing up configs..."
tar -czf "$BACKUP_DIR/config_${DATE}.tar.gz" \
/etc/nginx/ \
/etc/letsencrypt/ \
/var/www/html/config/ \
2>/dev/null || true # Don't stop if a path doesn't exist
log "✓ Config done"
# 3. Sync uploads
log "Syncing uploads..."
rsync -az --delete \
/var/www/html/uploads/ \
"$BACKUP_DIR/uploads_latest/"
log "✓ Uploads synced"
# 4. Upload to remote (if rclone is installed)
if command -v rclone &>/dev/null; then
log "Uploading to remote..."
rclone copy "$BACKUP_DIR" "$REMOTE_DEST" \
--include "*${DATE}*" \
--log-file "$LOG_FILE"
log "✓ Remote upload done"
fi
# 5. Clean up old backups
log "Removing backups older than ${KEEP_DAYS} days..."
find "$BACKUP_DIR" -name "*.gz" -mtime +$KEEP_DAYS -delete
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +$KEEP_DAYS -delete
log "✓ Cleanup done"
log "=== Backup complete ==="
Set permissions and test manually first:
chmod +x /usr/local/bin/backup.sh
# Always run manually once before adding to cron
/usr/local/bin/backup.sh
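One catch: a manual run uses your full login environment, while cron starts the script with a minimal one (a short PATH, no profile loaded). If you want the test to resemble what cron will actually do, you can run it with a stripped-down environment, something like:
# Approximate cron's minimal environment (PATH and HOME only)
env -i HOME=/root PATH=/usr/bin:/bin /usr/local/bin/backup.sh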
Add to Crontab
sudo crontab -e
Add the following lines:
# Full backup at 2:30 AM every day
# (30-minute offset to avoid colliding with other jobs on the hour)
30 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
# Quick DB backup every 6 hours (DB only, no file backup needed)
0 */6 * * * mysqldump -u backup_user -p'your_pass' myapp_db | gzip > /var/backups/myapp/db_quick_$(date +\%H).sql.gz 2>>/var/log/backup.log
Important note: In crontab, the % character has special meaning (newline), so it must be escaped as \% when used in date commands.
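Also worth flagging: both the script and the quick-backup line above embed the MySQL password in plain text, where it is visible in the script, the crontab, and briefly in the process list. A common alternative is a client option file readable only by root; mysql and mysqldump pick it up automatically, so the -u/-p flags can be dropped. A sketch, reusing the same backup_user:
# Contents of /root/.my.cnf (created by hand, readable only by root):
[client]
user=backup_user
password=your_password_here
# Lock down the permissions:
chmod 600 /root/.my.cnf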
Verification & Monitoring
Confirm the Cron Job Is Running
# View root's crontab
sudo crontab -l
# Check the recently created backup file
ls -lht /var/backups/myapp/ | head -10
# Monitor logs in real time
tail -f /var/log/backup.log
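The log file only proves the script itself ran. To confirm that cron actually fired the job at the expected time, check the cron daemon's own log as well; where it lives depends on the distro:
# Debian/Ubuntu: cron activity usually lands in syslog (if rsyslog is installed)
grep CRON /var/log/syslog | tail -20
# systemd journal works on most modern distros
journalctl -u cron --since "24 hours ago"    # Ubuntu/Debian
journalctl -u crond --since "24 hours ago"   # RHEL/AlmaLinux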
Backup Integrity Verification Script
A completed backup doesn’t mean a good backup — files can be corrupted or truncated mid-write. I add a verification step after each run. A valid DB backup is typically a few MB to several hundred MB depending on database size; if it’s only a few KB, the dump almost certainly failed:
#!/bin/bash
# /usr/local/bin/verify-backup.sh
BACKUP_DIR="/var/backups/myapp"
LATEST_DB=$(ls -t "$BACKUP_DIR"/db_*.sql.gz 2>/dev/null | head -1)
if [ -z "$LATEST_DB" ]; then
echo "CRITICAL: No database backup file found!"
exit 2
fi
# Check if .gz file is valid
if gzip -t "$LATEST_DB" 2>/dev/null; then
echo "OK: Valid backup: $LATEST_DB ($(du -sh "$LATEST_DB" | cut -f1))"
else
echo "CRITICAL: Backup corrupted: $LATEST_DB"
exit 2
fi
# Warn if file is too small — a failed dump usually produces an empty file or just a header
SIZE=$(stat -c%s "$LATEST_DB")
if [ "$SIZE" -lt 10240 ]; then
echo "WARNING: Backup file is only ${SIZE} bytes — dump may have failed"
exit 1
fi
echo "OK: All checks passed"
Alerts on Backup Failure
Even when cron runs on schedule, a job can still fail silently if nobody reads the logs. The simplest approach is to send a Telegram notification at the end of the script, along with the backup size, so any anomalies are easy to spot:
# Add to the end of backup.sh
TELEGRAM_TOKEN="your_bot_token"
TELEGRAM_CHAT_ID="your_chat_id"
BACKUP_SIZE=$(du -sh "$BACKUP_DIR" | cut -f1)
curl -s -X POST \
"https://api.telegram.org/bot${TELEGRAM_TOKEN}/sendMessage" \
-d "chat_id=${TELEGRAM_CHAT_ID}" \
-d "text=✅ Backup $(hostname) completed at $(date '+%H:%M %d/%m/%Y') | Size: ${BACKUP_SIZE}" > /dev/null
Restore Testing — The Most Overlooked Step
A backup you’ve never tested restoring is practically no backup at all. My schedule: restore once a month into a test database, count the tables to confirm the dump is complete.
# Create the target database, then restore into it from the backup file
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS test_restore_db;"
gunzip -c /var/backups/myapp/db_20260301_023000.sql.gz | \
mysql -u root -p test_restore_db
# Confirm the number of tables restored
mysql -u root -p -e \
"SELECT COUNT(*) as table_count FROM information_schema.tables \
WHERE table_schema='test_restore_db';"
Add a monthly cron job for automated restore testing:
# 1st of every month at 3:00 AM
0 3 1 * * /usr/local/bin/test-restore.sh >> /var/log/backup-verify.log 2>&1
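The crontab line above assumes a /usr/local/bin/test-restore.sh that wraps the manual steps. A minimal sketch is below; the script name, the test_restore_db scratch database, and the bare mysql -u root calls (which rely on socket authentication for root, common on Debian-family installs) are assumptions to adapt to your setup:
#!/bin/bash
# /usr/local/bin/test-restore.sh - monthly automated restore test (sketch)
set -euo pipefail

BACKUP_DIR="/var/backups/myapp"
TEST_DB="test_restore_db"

# Pick the newest database dump (|| true so an empty directory doesn't abort the script)
LATEST_DB=$(ls -t "$BACKUP_DIR"/db_*.sql.gz 2>/dev/null | head -1 || true)
if [ -z "$LATEST_DB" ]; then
    echo "CRITICAL: no dump found to test-restore"
    exit 2
fi

# Recreate the scratch database and load the dump into it
mysql -u root -e "DROP DATABASE IF EXISTS ${TEST_DB}; CREATE DATABASE ${TEST_DB};"
gunzip -c "$LATEST_DB" | mysql -u root "$TEST_DB"

# Count restored tables; zero tables means the dump is unusable
TABLES=$(mysql -u root -N -e \
    "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='${TEST_DB}';")
echo "Restored $(basename "$LATEST_DB") into ${TEST_DB}: ${TABLES} tables"

if [ "$TABLES" -eq 0 ]; then
    echo "CRITICAL: restore produced zero tables"
    exit 2
fi

# Drop the scratch database again
mysql -u root -e "DROP DATABASE ${TEST_DB};"
echo "OK: restore test passed"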
Pre-Production Checklist
- ☑ Script has been run manually at least once successfully
- ☑ Log output is clear, timestamped, and easy to read when debugging
- ☑ Backup file integrity is verified after creation
- ☑ Alert is sent on backup failure
- ☑ Rotation in place — old files are deleted automatically to prevent disk from filling up
- ☑ Backup is copied to remote storage (offsite)
- ☑ Restore has been tested at least once
Setting up cron is not the finish line — it’s just the beginning. Monitoring is what determines whether your backups actually work. I once had a script running on schedule but failing silently for 2 weeks because the MySQL password changed without the script being updated. I didn’t find out until I needed to restore. If I’d had Telegram alerts back then, I would have known the same day — not two weeks later.

