MySQL Point-in-Time Recovery (PITR): Restore Your Database to Any Second with Binary Log

MySQL tutorial - IT technology blog

2 AM. A DELETE Without a WHERE Clause.

Your phone rings. Slack explodes. Users are reporting that all orders from the day have vanished. The dev looks back at the query history and spots a familiar statement — missing its most critical piece:

DELETE FROM orders WHERE status = 'pending';
-- What actually ran:
DELETE FROM orders;

I once dealt with database corruption at 3 AM and had to restore from a backup — ever since, I check backups every single day. But backups alone aren’t enough. If your most recent backup was at 1 AM and the incident happened at 2:47 AM, you’ve lost nearly two hours of data. For an e-commerce system handling hundreds of orders per hour, that’s a genuine disaster. If you want a deeper dive into backup strategies, MySQL Database Backup and Restore: Practical Lessons from Production covers the real-world details.

This is exactly when you need Point-in-Time Recovery (PITR) — the ability to rewind your database to any moment in time, down to the second, by combining a full backup with MySQL Binary Log.

What Is Binary Log and Why It Can Save You

Binary Log (binlog) is MySQL’s journal file that records every data-changing operation in chronological order: INSERT, UPDATE, DELETE, CREATE TABLE, DROP TABLE… It does not log SELECT statements since they don’t modify data.

The PITR mechanism is actually straightforward:

  1. Restore from the most recent full backup (your starting point)
  2. Replay the Binary Log from the backup point up to just before the incident occurred (your ending point)

The result: a clean database state with the destructive statement never applied.

Check Whether Binary Log Is Enabled

SHOW VARIABLES LIKE 'log_bin';
-- Variable_name | Value
-- log_bin       | ON

If you see OFF, you need to enable it first — and unfortunately, there’s no recovering what’s already lost if binlog was never recording. On production systems, this isn’t optional — it’s a baseline requirement. A solid starting point is having MySQL properly installed and hardened; see Install and Configure MySQL 8 on Ubuntu: A Detailed A-Z Guide if you’re building from scratch.

Enabling Binary Log on MySQL 8.x

# Add to /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
server-id          = 1
log_bin            = /var/log/mysql/mysql-bin.log
binlog_format      = ROW
binlog_expire_logs_seconds = 604800  # Retain for 7 days
max_binlog_size    = 100M

# Then restart MySQL to apply the changes:
sudo systemctl restart mysql
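The retention value of 604800 is simply one week expressed in seconds; a quick sanity check with plain shell arithmetic (no MySQL needed):

```shell
# binlog_expire_logs_seconds for a 7-day retention window
echo $((7 * 24 * 3600))   # 604800
```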

ROW format is the best choice for PITR — it records exactly which rows were changed, not just the SQL statement. This matters for UPDATE statements that use functions like NOW() or RAND(): STATEMENT format would produce different results on each replay, while ROW format always stores the actual values that were written.

Step-by-Step PITR Walkthrough

Step 1: Pinpoint the Incident in the Binary Log

Your top priority the moment an incident occurs: identify exactly which statement caused the problem and its position in the binlog — before restoring or doing anything else.

# List currently available binlog files (the .index file is plain text)
cat /var/log/mysql/mysql-bin.index

# Or from within MySQL
SHOW BINARY LOGS;
# Read binlog contents, filtered by time range
# (--verbose decodes ROW-format events into readable pseudo-SQL;
#  grep -B5 shows the "# at" position line just above the statement)
mysqlbinlog --no-defaults --verbose \
  --start-datetime="2026-04-01 02:00:00" \
  --stop-datetime="2026-04-01 03:00:00" \
  /var/log/mysql/mysql-bin.000023 | grep -B5 "DELETE FROM"

The output will show something like this:

# at 487219
#260401  2:47:33 server id 1  end_log_pos 487391
### DELETE FROM `mydb`.`orders`

Note down two things: the timestamp (2:47:33) and the position (487219). Position is more precise than timestamp since timestamps can repeat.
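If you need the position in a script rather than by eyeballing the output, awk can pull the number off the `# at` line. A minimal sketch against the sample output above (the here-doc stands in for real mysqlbinlog output):

```shell
# Extract the event start position from mysqlbinlog output.
# The here-doc is a stand-in for: mysqlbinlog --verbose ... | grep -B5 "DELETE FROM"
pos=$(awk '/^# at /{p=$3} /DELETE FROM/{print p; exit}' <<'EOF'
# at 487219
#260401  2:47:33 server id 1  end_log_pos 487391
### DELETE FROM `mydb`.`orders`
EOF
)
echo "$pos"   # 487219
```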

Step 2: Restore the Full Backup

Assuming you have a backup from 1:00 AM (before the incident):

# Restore from mysqldump
mysql -u root -p mydb < backup_20260401_0100.sql

# Or if using Percona XtraBackup
xtrabackup --copy-back --target-dir=/backup/2026-04-01_01-00/

A useful tip when using mysqldump: always include the --master-data=2 flag (MySQL 5.x) or --source-data=2 (MySQL 8.0.26 and later) when dumping. This flag writes the binlog position at dump time into the header of the SQL file, saving you time later:

mysqldump --single-transaction --source-data=2 \
  -u root -p mydb > backup_20260401_0100.sql

# The file header will contain a line like:
-- CHANGE REPLICATION SOURCE TO SOURCE_LOG_FILE='mysql-bin.000023', SOURCE_LOG_POS=321847;
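To grab that position programmatically at restore time, a sed one-liner over the dump header works. A sketch assuming the header format shown above (the sample line stands in for `head -n 50 backup_20260401_0100.sql`):

```shell
# Pull the binlog file and position out of the dump header
line="-- CHANGE REPLICATION SOURCE TO SOURCE_LOG_FILE='mysql-bin.000023', SOURCE_LOG_POS=321847;"

file=$(printf '%s\n' "$line" | sed -n "s/.*SOURCE_LOG_FILE='\([^']*\)'.*/\1/p")
pos=$(printf '%s\n' "$line" | sed -n 's/.*SOURCE_LOG_POS=\([0-9]*\).*/\1/p')

echo "$file"   # mysql-bin.000023
echo "$pos"    # 321847
```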

Step 3: Replay the Binary Log Up to Just Before the Incident

This step requires precision — a wrong position means losing more data. Replay from position 321847 (the backup point) up to just before position 487219 (the destructive DELETE):

mysqlbinlog --no-defaults \
  --start-position=321847 \
  --stop-position=487219 \
  /var/log/mysql/mysql-bin.000023 | mysql -u root -p mydb

If the incident spans multiple binlog files, list them all in a single command. Note that --start-position applies only to the first file and --stop-position only to the last:

mysqlbinlog --no-defaults \
  --start-position=321847 \
  --stop-position=487219 \
  /var/log/mysql/mysql-bin.000022 \
  /var/log/mysql/mysql-bin.000023 | mysql -u root -p mydb
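Rather than piping decoded events straight into mysql, it is safer to decode to a file first (mysqlbinlog ... > replay.sql) and confirm the destructive statement is absent before applying it. A self-contained simulation of that check, using a stand-in file in place of real mysqlbinlog output:

```shell
# In production: mysqlbinlog --no-defaults --start-position=... > replay.sql
# Here a small stand-in file simulates the decoded output.
replay=$(mktemp)
printf 'INSERT INTO orders VALUES (1);\nUPDATE orders SET status = "paid" WHERE id = 1;\n' > "$replay"

# The destructive statement must not be in the file you are about to apply
if grep -q 'DELETE FROM `mydb`.`orders`' "$replay"; then
  echo "ABORT: destructive statement still present"
else
  echo "safe to apply"   # then: mysql -u root -p mydb < replay.sql
fi
rm -f "$replay"
```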

Step 4: Verify the Results

-- Check that orders have been restored
SELECT COUNT(*), MIN(created_at), MAX(created_at) FROM orders;

-- Cross-check with business logs if available
SELECT * FROM orders ORDER BY created_at DESC LIMIT 10;

Skipping a Specific Statement in the Middle

Sometimes you need to replay everything before the bad statement, skip it, then continue replaying what came after. For example, if the bad event spans positions 487219 to 487391 and there are valid transactions after it:

# Replay everything before the bad statement
mysqlbinlog --no-defaults \
  --start-position=321847 \
  --stop-position=487219 \
  /var/log/mysql/mysql-bin.000023 | mysql -u root -p mydb

# Replay everything after the bad statement (skipping 487219 → 487391)
mysqlbinlog --no-defaults \
  --start-position=487391 \
  /var/log/mysql/mysql-bin.000023 | mysql -u root -p mydb

Practical Notes from Production

  • Back up binlogs to a separate location: Binlogs live on the same server as MySQL. If the disk fails, you lose both. Use rsync or mysqlbinlog --read-from-remote-server to stream binlogs to another server in real time.
  • Test PITR regularly: An untested backup is not a backup. Every month, run a full PITR drill on staging — verify the process works before you actually need it.
  • Set a sensible retention period: 7 days is usually sufficient. Too short and you won’t catch issues in time; too long and you waste disk space (large databases can generate several GB of binlogs per day). Monitor with SHOW BINARY LOGS.
  • GTID mode: If you’re using GTID replication, add --skip-gtids when replaying binlogs to avoid transaction ID conflicts. GTID is also essential when setting up MySQL Master-Slave Replication — the two features are deeply intertwined.
# With GTID mode enabled
mysqlbinlog --no-defaults --skip-gtids \
  --start-position=321847 \
  --stop-position=487219 \
  /var/log/mysql/mysql-bin.000023 | mysql -u root -p mydb

Conclusion

PITR isn’t some advanced black art. It’s a survival skill — anyone running MySQL in production needs to know it cold. A full backup is your starting point; Binary Log is every step that follows. Together, they let you rewind to any second before disaster struck. For high-availability setups where you need the database to stay online even during failures, MySQL Group Replication pairs naturally with PITR as a complementary layer of protection.

The next time you set up a new production MySQL instance, make it your first task after installation: enable binlog, configure backups with --source-data=2, and run a PITR test. Don’t wait until 2 AM to learn how.
