Monitor MySQL Performance with Performance Schema and sys Schema: Find Bottlenecks Fast

MySQL tutorial - IT technology blog

When Do You Need Performance Schema?

Back when the users table in my project had a few hundred thousand rows, everything ran fine. But once the data crossed 10 million rows, slow queries started appearing — response times spiked, and MySQL CPU climbed to 80–90%. I opened the Slow Query Log (covered in a separate post), saw the list of slow SQL statements, but still couldn’t tell why they were slow: missing index? Lock contention? I/O? Too many concurrent connections?

That’s when Performance Schema comes in. This built-in MySQL diagnostic tool shows you exactly how long each query takes to run, which threads are waiting on locks, how many times the disk is being read — details that Slow Query Log simply can’t reach.

sys Schema is a wrapper on top of Performance Schema. Instead of manually joining 4–5 tables, sys Schema provides ready-to-use views like sys.statement_analysis and sys.innodb_lock_waits — just query them directly and read the results.

Core Concepts to Understand First

What Is Performance Schema?

It’s a special database inside MySQL (named performance_schema), introduced in MySQL 5.5 and enabled by default since 5.6. MySQL silently records all activity into it: how long each query ran, what each thread is waiting for, which tables are read most frequently.

The data is stored entirely in RAM — queries are extremely fast, but everything is lost when MySQL restarts.
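You can also query performance_schema directly. As a minimal sketch, here is the raw per-digest statement table that the sys views in this post are built on (timer columns are in picoseconds):

```sql
-- Raw statement statistics, aggregated by normalized query digest
SELECT
    DIGEST_TEXT,
    COUNT_STAR AS exec_count,
    ROUND(SUM_TIMER_WAIT / 1e12, 2) AS total_latency_sec  -- picoseconds to seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 5;
```

sys Schema does this unit conversion and digest formatting for you, which is exactly why it exists.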

What Is sys Schema?

sys Schema (the database named sys) has shipped with MySQL since 5.7.7. It doesn’t collect new data on its own — it simply reads from performance_schema and information_schema and presents it through more readable views. For example, instead of joining 4–5 tables in performance_schema, you just run SELECT * FROM sys.statements_with_runtimes_in_95th_percentile.

Check Whether Performance Schema Is Enabled

-- Check status
SHOW VARIABLES LIKE 'performance_schema';

-- Expected result:
-- +--------------------+-------+
-- | Variable_name      | Value |
-- +--------------------+-------+
-- | performance_schema | ON    |
-- +--------------------+-------+

If the result shows OFF, add the following line to /etc/mysql/my.cnf and restart MySQL:

[mysqld]
performance_schema = ON

Hands-On: Finding Bottlenecks Step by Step

Step 1 — Identify Which Queries Consume the Most Resources

This is the first query I run whenever I start debugging:

-- Top 10 slowest queries (using sys Schema)
SELECT 
    query,
    exec_count,
    total_latency,
    avg_latency,
    rows_sent_avg,
    rows_examined_avg
FROM sys.statement_analysis
ORDER BY total_latency DESC
LIMIT 10;

Pay attention to the rows_examined_avg column: if this number is significantly larger than rows_sent_avg, MySQL is scanning far more rows than it returns — usually a sign of a missing or poorly matched index.

Real-world example: I once saw a query with rows_examined_avg = 9,800,000 but rows_sent_avg = 12. A full table scan across 10 million rows just to return 12 records. Adding a composite index fixed it entirely — a scenario covered in detail in optimizing MySQL query performance with index and EXPLAIN.
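To surface these cases directly, you can compute the examined-to-sent ratio yourself. A sketch against the same sys view, where the 10,000-row threshold is an arbitrary starting point you should tune:

```sql
-- Statements that examine far more rows than they return
SELECT
    query,
    rows_examined_avg,
    rows_sent_avg,
    ROUND(rows_examined_avg / NULLIF(rows_sent_avg, 0)) AS scan_ratio
FROM sys.statement_analysis
WHERE rows_examined_avg > 10000
ORDER BY scan_ratio DESC
LIMIT 10;
```

The NULLIF guard avoids a division-by-zero warning for statements that return no rows.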

Step 2 — Detect Lock Waits (Blocked Queries)

Lock waits are silent killers — the query itself runs fast, but it has to queue up waiting for a lock held by another transaction. Slow Query Log won’t catch this.

-- View threads currently being blocked
-- (these information_schema tables exist in MySQL 5.7 but were removed in 8.0)
SELECT 
    r.trx_id AS waiting_trx_id,
    r.trx_mysql_thread_id AS waiting_thread,
    r.trx_query AS waiting_query,
    b.trx_id AS blocking_trx_id,
    b.trx_mysql_thread_id AS blocking_thread,
    b.trx_query AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;

Or use the more concise sys Schema version, which works on both MySQL 5.7 and 8.0:

-- Current lock waits, showing the waiting and blocking statements
SELECT * FROM sys.innodb_lock_waits\G

If you see one thread blocking many others, check whether it’s a transaction that hasn’t been COMMITted yet. The most common cause: an application opens a transaction, calls an external API that takes 3–5 seconds, then COMMITs — the lock is held the entire time. When this escalates into a full MySQL deadlock, the impact on production can be severe.
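If production is blocked right now and you cannot wait for the application to commit, you can terminate the offending connection. The thread id below is a placeholder; use the blocking_thread value from the query above:

```sql
-- Kill the blocking connection; its open transaction is rolled back
-- and the lock is released. 12345 is the blocking thread id.
KILL 12345;
```

Treat this as a last resort: the real fix is moving the slow external call outside the transaction.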

Step 3 — Analyze I/O: Which Tables Are Read or Written Most?

-- Top tables by I/O latency
SELECT 
    table_schema,
    table_name,
    total_latency,
    io_read_requests,
    io_read_latency,
    io_write_requests,
    io_write_latency
FROM sys.schema_table_statistics
ORDER BY total_latency DESC
LIMIT 10;

Any table with an unusually high io_read_latency compared to the rest is a candidate to revisit for index and caching strategy improvements.
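To drill one level deeper into a hot table, sys also tracks activity per index. A sketch, with 'mydb' and 'users' as placeholder names:

```sql
-- Per-index read/write activity for one table
SELECT
    index_name,
    rows_selected,
    select_latency,
    rows_inserted,
    rows_updated
FROM sys.schema_index_statistics
WHERE table_schema = 'mydb'
  AND table_name = 'users'
ORDER BY rows_selected DESC;
```

An index with heavy writes but near-zero rows_selected is a candidate for the unused-index check in the next step.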

Step 4 — Check for Unused Indexes

This is a view I run after every optimization cycle — hunting down redundant indexes that consume RAM without ever being used:

-- Indexes never used since the last MySQL restart
SELECT *
FROM sys.schema_unused_indexes
WHERE object_schema NOT IN ('performance_schema', 'sys', 'information_schema', 'mysql');

Don’t rush to drop them immediately. Wait at least a few days to make sure no periodic queries use them (for example, a monthly report that runs every 30 days). Invisible indexes, covered in a separate post on functional and invisible indexes, pair well with this cleanup workflow.
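On MySQL 8.0+ there is a safer middle step: make the index invisible so the optimizer stops using it while it stays on disk. Table and index names below are placeholders:

```sql
-- Hide the index from the optimizer without dropping it
ALTER TABLE orders ALTER INDEX idx_orders_status INVISIBLE;

-- Instant rollback if some periodic query slows down:
-- ALTER TABLE orders ALTER INDEX idx_orders_status VISIBLE;

-- Only after a safe observation window, drop it for real:
-- ALTER TABLE orders DROP INDEX idx_orders_status;
```

Making an index visible again is a metadata-only change, while recreating a dropped index on a large table can take hours.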

Step 5 — See Which Users or Hosts Are Generating the Most Load

-- Statistics by user
SELECT 
    user,
    statements,
    statement_latency,
    statement_avg_latency,
    table_scans,
    file_ios
FROM sys.user_summary
ORDER BY statement_latency DESC;

This is especially useful when multiple services share a single MySQL server — you can pinpoint exactly which service is causing abnormal load instead of searching blindly.
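The same breakdown exists per client host, which helps when several services connect with the same MySQL user:

```sql
-- Statistics by client host
SELECT
    host,
    statements,
    statement_latency,
    table_scans
FROM sys.host_summary
ORDER BY statement_latency DESC;
```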

Step 6 — Reset Statistics for a Clean Baseline

After completing an optimization, reset the Performance Schema data so you can measure from a clean state:

-- Reset all statistics (no MySQL restart needed)
CALL sys.ps_truncate_all_tables(FALSE);

-- Or reset specific types:
TRUNCATE TABLE performance_schema.events_statements_summary_by_digest;
TRUNCATE TABLE performance_schema.table_io_waits_summary_by_table;

A Practical Diagnostic Workflow

When I get a report that “MySQL is slow,” I typically follow this sequence:

  1. Run sys.statement_analysis — find top queries by total_latency and rows_examined_avg
  2. Run sys.innodb_lock_waits — check for lock contention
  3. Run sys.schema_table_statistics — see which tables have high I/O
  4. EXPLAIN the suspicious query — confirm whether an index is being used
  5. Fix (add index / optimize query / narrow transaction scope)
  6. Reset statistics → monitor for another 24–48 hours

Step 4 is non-negotiable: Performance Schema tells you what is slow, but EXPLAIN tells you why it’s slow at the execution plan level.
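For step 4, a sketch of what that confirmation looks like. The table and columns are hypothetical, and EXPLAIN FORMAT=TREE requires MySQL 8.0.16+ (older versions use the classic tabular EXPLAIN):

```sql
-- "Table scan on users" in the output means no index was used;
-- "Index lookup" or "Index range scan" means one was.
EXPLAIN FORMAT=TREE
SELECT * FROM users
WHERE status = 'active' AND created_at > '2024-01-01';
```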

Conclusion

Performance Schema and sys Schema don’t replace Slow Query Log — they complement it. Slow Query Log is quick to enable and easy to read right away. Performance Schema goes deeper: lock waits, I/O breakdown, index usage, and per-user load.

Performance Schema should stay enabled in production at all times. Real-world overhead is under 5% according to Oracle’s benchmarks — nothing to worry about. More importantly, when an incident occurs, you already have historical data instead of having to reproduce the problem from scratch.

Start simple: open sys.statement_analysis and look at the rows_examined_avg column. That number immediately reflects the health of your MySQL — which queries are over-scanning, and which tables are being abused.
