Is Your Application Slow, and You Suspect the Database is the Culprit?
This is a situation most developers have faced. Users complain that a certain feature is “as slow as a turtle.” After checking the application code and finding nothing unusual, all suspicion falls on the database. But how do you know exactly which SQL query is causing the bottleneck? Instead of guessing, MySQL provides an extremely powerful tool: the Slow Query Log.
This tool records all the “roadblocks”—SQL commands that exceed the execution time threshold you set. Finding them is the first and most crucial step in optimizing database performance.
Mastering the Core Concepts
What is the MySQL Slow Query Log?
Simply put, the Slow Query Log is a text file in which the MySQL server records information about SQL queries whose execution time exceeds the long_query_time value (in seconds). It doesn't just record the statement itself: each entry also contains valuable details such as the execution time and the number of rows examined, helping you diagnose the problem more accurately.
When Should You Enable the Slow Query Log?
Continuous logging, especially on high-traffic systems, consumes I/O resources and can slightly affect performance. Therefore, there are two common usage scenarios:
- Development and testing environments: Always on. This helps you detect problematic queries right from the development stage.
- Production environment: Only enable it when slow performance is reported, or enable it periodically (e.g., a few hours per week) for a system “health check.” After finding and fixing the issue, you should turn it off to save resources.
A Detailed A-to-Z Practical Guide
Let’s get to the main part: how to configure, read the log, and optimize.
1. Configuring the Slow Query Log
First, connect to your MySQL server and check the current settings.
Check the Status
Run the following command in your MySQL client:
SHOW VARIABLES LIKE '%slow_query_log%';
You will see a result similar to this:
+---------------------+----------------------------------------+
| Variable_name       | Value                                  |
+---------------------+----------------------------------------+
| slow_query_log      | OFF                                    |
| slow_query_log_file | /var/lib/mysql/your-hostname-slow.log  |
+---------------------+----------------------------------------+
Next, check the default time threshold:
SHOW VARIABLES LIKE 'long_query_time';
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |
+-----------------+-----------+
In the example above, the log is disabled (OFF) and the time threshold is 10 seconds. This is a very high threshold. A query that takes 9 seconds to load a product page is unacceptable, but it won’t be logged with this configuration. We need to lower it.
Enable the Slow Query Log (Temporarily)
This method is very convenient for quick diagnostics because it doesn't require a server restart. However, the settings are lost when MySQL restarts. Also note that SET GLOBAL long_query_time only applies to connections opened after the change; existing sessions keep their previous value.
SET GLOBAL slow_query_log = 'ON';
-- Set the threshold to 1 second, a reasonable value to start with
SET GLOBAL long_query_time = 1;
Permanent Configuration in the my.cnf File
To make the settings effective after every restart, you need to edit the MySQL configuration file. The location of this file may vary depending on the operating system:
- Ubuntu/Debian: /etc/mysql/my.cnf or /etc/mysql/mysql.conf.d/mysqld.cnf
- CentOS/RHEL: /etc/my.cnf
Open the configuration file and add (or edit) the following lines under the [mysqld] section:
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1
# (Optional but very useful) Log queries that don't use indexes
log_queries_not_using_indexes = 1
Important note: Ensure the /var/log/mysql/ directory exists and the mysql user has write permissions for it. After saving the file, you need to restart the MySQL service to apply the changes:
sudo systemctl restart mysql
2. Analyzing the Log File
After the system has been running for a while, the mysql-slow.log file will start recording slow queries. You can view it directly or use a tool to analyze it.
Reading the Log File Directly
An entry in the log file will have a structure like this:
# Time: 2023-10-27T10:15:30.123456Z
# User@Host: root[root] @ localhost [] Id: 123
# Query_time: 2.123 Lock_time: 0.000 Rows_sent: 5 Rows_examined: 50000
SET timestamp=1698398130;
SELECT p.post_title, u.display_name
FROM wp_posts p
JOIN wp_users u ON p.post_author = u.ID
WHERE p.post_status = 'publish' AND u.user_email LIKE '%@gmail.com';
Key information to pay attention to:
- Query_time: The query execution time (2.123 seconds).
- Rows_examined: The number of rows the database had to scan (50,000 rows). This is an extremely important metric. The higher the ratio between Rows_examined and Rows_sent (the number of rows returned), the less efficient the query is. Imagine having to read 50,000 lines in an Excel file just to find 5 lines that meet the criteria.
- The specific SQL statement that caused the delay.
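If you want to scan a log programmatically rather than by eye, the header of each entry is easy to parse. Below is a minimal Python sketch (the regular expression and field names assume the entry format shown above) that extracts Query_time, Rows_sent, and Rows_examined and computes the examined/sent ratio:

```python
import re

# Minimal sketch of a slow-log entry parser. The regex assumes the
# "# Query_time: ... Rows_sent: ... Rows_examined: ..." header format
# shown in the example entry above.
ENTRY_RE = re.compile(
    r"# Query_time: (?P<qt>[\d.]+)\s+Lock_time: [\d.]+\s+"
    r"Rows_sent: (?P<sent>\d+)\s+Rows_examined: (?P<examined>\d+)"
)

def parse_entry(text):
    """Return (query_time, rows_sent, rows_examined), or None if no header found."""
    m = ENTRY_RE.search(text)
    if not m:
        return None
    return float(m.group("qt")), int(m.group("sent")), int(m.group("examined"))

sample = """# Time: 2023-10-27T10:15:30.123456Z
# User@Host: root[root] @ localhost []  Id: 123
# Query_time: 2.123  Lock_time: 0.000 Rows_sent: 5  Rows_examined: 50000
SET timestamp=1698398130;
SELECT 1;"""

qt, sent, examined = parse_entry(sample)
# A high examined/sent ratio is the red flag described above.
print(f"time={qt}s  examined/sent ratio={examined / max(sent, 1):.0f}")
```

For the sample entry this reports a ratio of 10,000: the server read ten thousand rows for every row it returned.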
Using the `mysqldumpslow` Tool
When the log file grows large, reading it manually becomes impractical. `mysqldumpslow` is a utility bundled with MySQL that aggregates and sorts slow queries for you.
A few useful commands:
# List the top 10 queries that took the most time to execute
sudo mysqldumpslow -s t -t 10 /var/log/mysql/mysql-slow.log
# List the top 10 queries that scanned the most rows
sudo mysqldumpslow -s r -t 10 /var/log/mysql/mysql-slow.log
# List the top 10 queries that were run most frequently (causing repeated slowness)
sudo mysqldumpslow -s c -t 10 /var/log/mysql/mysql-slow.log
The output of `mysqldumpslow` will group similar statements, replacing specific values with ‘S’ (string) and ‘N’ (number). This helps you identify which *type* of query is the main problem, rather than individual queries.
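The grouping idea is simple enough to sketch yourself. The following Python snippet (a rough approximation, not the actual `mysqldumpslow` implementation) normalizes literals the same way, S for strings and N for numbers, so structurally identical queries collapse into one fingerprint that can be counted:

```python
import re
from collections import Counter

def fingerprint(sql):
    """Collapse a query to its structural shape, mimicking mysqldumpslow's grouping."""
    sql = re.sub(r"'[^']*'", "'S'", sql)   # quoted string literals -> 'S'
    sql = re.sub(r"\b\d+\b", "N", sql)     # numeric literals -> N
    return re.sub(r"\s+", " ", sql).strip()

# Hypothetical queries pulled from a slow log:
queries = [
    "SELECT * FROM products WHERE id = 42",
    "SELECT * FROM products WHERE id = 7",
    "SELECT * FROM users WHERE email = 'a@b.com'",
]

counts = Counter(fingerprint(q) for q in queries)
for fp, n in counts.most_common():
    print(n, fp)
```

The two product lookups collapse into a single `SELECT * FROM products WHERE id = N` group with a count of 2, which is exactly how the tool surfaces the *type* of query that dominates the log.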
3. Optimizing Slow Queries
After identifying the “culprit” from the slow log, the next step is to “dissect” it with the EXPLAIN command.
Analyzing with EXPLAIN
Suppose we find the following statement in the log:
SELECT * FROM products WHERE category_name = 'Phones';
Place the EXPLAIN keyword in front of it and run it in the MySQL client:
EXPLAIN SELECT * FROM products WHERE category_name = 'Phones';
MySQL will return its execution plan. Pay special attention to the type column. If the value is ALL, that's a very bad sign: it means MySQL is performing a full table scan. The rows column shows the estimated number of rows MySQL must examine, which in this case is usually very large.
Adding an Index to Speed Things Up
When you see a full table scan on a WHERE clause, the most effective solution is usually to add an index to the column being used for filtering.
CREATE INDEX idx_products_category_name ON products(category_name);
Here’s a real-world example. On a 50GB production database, a product search query took up to 4.5 seconds because it had to scan millions of records. After analyzing the slow log and adding an appropriate composite index, the execution time dropped to under 200 milliseconds.
Re-checking with EXPLAIN
After adding the index, run the EXPLAIN command again. You will likely see a significant improvement: the type column changes to ref, the key column displays the name of the index you just created, and most importantly, the value in the rows column decreases considerably. This proves the query is now using the index to go directly to the required data, instead of having to “scan” the entire table.
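The full-scan-versus-index difference is easy to reproduce locally. The sketch below uses SQLite (Python's stdlib, not MySQL, so the plan output differs from MySQL's EXPLAIN, but the principle is identical): before the index the plan is a SCAN of the whole table; after it, a SEARCH using the index.

```python
import sqlite3

# Illustration only: SQLite's EXPLAIN QUERY PLAN plays the role of
# MySQL's EXPLAIN here. Table and index names mirror the article's example.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, category_name TEXT)")
con.executemany("INSERT INTO products (category_name) VALUES (?)",
                [("Phones",), ("Laptops",)] * 100)

def plan(sql):
    """Return the query plan details as one string."""
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT * FROM products WHERE category_name = 'Phones'"
before = plan(q)   # expect a full table SCAN here

con.execute("CREATE INDEX idx_products_category_name ON products(category_name)")
after = plan(q)    # expect a SEARCH using the new index

print("before:", before)
print("after: ", after)
```

The same workflow applies to MySQL: run EXPLAIN, add the index, run EXPLAIN again, and confirm the key column now names your index.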
Conclusion
Database optimization isn’t magic; it’s a methodical process. The Slow Query Log helps you turn a vague problem (“the app is slow”) into a concrete list of tasks. The standard procedure is usually: Enable the log to collect data -> Use mysqldumpslow to analyze and find the “heaviest” queries -> Use EXPLAIN to understand the cause -> Apply optimization techniques like adding indexes. This is a fundamental and essential skill that any developer working with databases should master.
