MySQL Performance Tuning: Boosting Speed with Buffer Pool and Thread Pool in Practice

MySQL tutorial - IT technology blog

When High-End Hardware Meets Sluggish MySQL Performance

A common irony: A server with 64GB RAM and 16 CPU cores, yet MySQL keeps throwing ‘Too many connections’ errors. When checking with htop, RAM is mostly free, but Disk I/O is “red-lining.” This is the consequence of running MySQL with default configurations—originally designed for low-spec machines to ensure they can start up anywhere.

The system I manage runs MySQL 8.0 with 50GB of data. After applying the following tweaks, query response times improved significantly. Moreover, the database remained stable even when traffic tripled. Don’t upgrade hardware blindly. Learn how MySQL manages memory to make the most of your existing resources.

1. InnoDB Buffer Pool – The Core of Processing Speed

If you’re only allowed to modify one parameter, choose innodb_buffer_pool_size. This is where InnoDB stores table data and indexes. Our goal is to move as much data as possible from the disk into RAM for instant retrieval.

What is the optimal configuration?

Don’t blindly follow the 80% RAM rule for the Buffer Pool. This formula only holds if the server runs MySQL exclusively. If it also hosts Nginx or Redis, allocating too much RAM forces the operating system to swap: memory pages spill to disk, and performance plummets.

Based on practical experience, I usually allocate as follows:

  • Dedicated DB Server: 70% – 75% of total RAM.
  • Multi-service Server: 40% – 50% of total RAM.

For example, a dedicated 16GB RAM server should be set to around 12GB:

[mysqld]
innodb_buffer_pool_size = 12G
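
Since MySQL 5.7, innodb_buffer_pool_size is also dynamic, so you can try the new size at runtime before persisting it to my.cnf (the resize happens in chunks and may take a while on a busy server):

SET GLOBAL innodb_buffer_pool_size = 12884901888; -- 12GB, in bytes
SHOW STATUS LIKE 'Innodb_buffer_pool_resize_status';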

To check if your configuration is correct, run this command:

SHOW ENGINE INNODB STATUS\G

Pay attention to the Buffer pool hit rate line. If it reaches 999/1000, it means 99.9% of queries found data in RAM. At this point, latency will drop from tens of milliseconds to under 1ms.
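If you prefer exact numbers over reading the status dump, the same ratio can be computed from two counters (Innodb_buffer_pool_reads counts only the reads that had to go to disk):

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- hit rate = 1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests)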

2. Splitting the Buffer Pool to Reduce Contention

With a Buffer Pool over 8GB and multi-core CPUs, using a single instance can easily lead to “mutex contention.” Processing threads must queue up to access memory. Splitting the Buffer Pool allows MySQL to handle parallel processing more efficiently.

# Split 12GB Buffer Pool into 8 instances, 1.5GB each
innodb_buffer_pool_instances = 8

Each instance should have at least 1GB to achieve the best results.
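To see how evenly pages spread across the instances, query information_schema, which returns one row per instance:

SELECT POOL_ID, POOL_SIZE, FREE_BUFFERS, DATABASE_PAGES
FROM information_schema.INNODB_BUFFER_POOL_STATS;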

3. Handling Thousands of Connections with Thread Pool

By default, MySQL creates one thread per connection. Imagine 1,000 users rushing in simultaneously: the CPU exhausts itself scheduling and context-switching between those threads instead of actually processing queries.

Don’t increase max_connections recklessly

Many people set max_connections = 5000 to avoid errors. But every connection consumes memory for its own buffers, so if RAM runs out, the OS can kill the MySQL process outright. Instead, use a Thread Pool to reuse existing threads.

Note: The MySQL Community Edition does not come with a built-in Thread Pool. You can install Percona Server or use ProxySQL in front to manage connections more effectively.
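
If you go the ProxySQL route, registering a backend is done through its admin interface; a minimal sketch (the hostname, port, and hostgroup here are placeholders for your own topology):

-- Connect to the ProxySQL admin interface (port 6032 by default), then:
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, '127.0.0.1', 3306);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;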

max_connections = 500

Keep in mind that Max_used_connections is a read-only status counter, not a parameter you can set in my.cnf. Monitor it and keep its peak roughly 20% below the limit.
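
To check how close the server gets to the ceiling (Max_used_connections is a high-water mark since the last restart):

SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';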

4. Accelerating Write Speed with Redo Log

The innodb_log_file_size parameter determines the size of the Redo Log. This is where changes are recorded before being officially saved to the data file. If this file is too small, MySQL must pause processing to clean the logs (checkpointing), causing periodic “lag.”

For e-commerce systems with high write volumes, I usually set it to 1GB or more. Note that from MySQL 8.0.30 onward, innodb_log_file_size and innodb_log_files_in_group are deprecated in favor of a single innodb_redo_log_capacity parameter.

innodb_log_file_size = 1G
innodb_log_files_in_group = 2
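
To size the redo log from real workload data rather than guesswork, sample how fast InnoDB writes to it; a common rule of thumb is to make the log large enough to hold about an hour of redo:

-- Run once, wait 60 seconds, run again; the difference is redo bytes per minute
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';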

5. Trading Safety for Write Speed

The innodb_flush_log_at_trx_commit parameter allows you to customize data safety levels:

  • Value 1 (the default): safest. The log is written and flushed to disk on every commit, at the cost of heavy Disk I/O.
  • Value 2: the best balance. The log is written to the OS cache at each commit and flushed to disk about once per second. Write speed can increase 3-5 times, but an OS crash or power failure can lose up to roughly one second of transactions.
  • Value 0: fastest and least safe. The log is written and flushed only once per second, so even a MySQL crash alone can lose transactions.

In typical production environments, where losing up to a second of writes on a crash is acceptable, I choose value 2 to relieve pressure on SSD drives. For payment or financial data, keep the default of 1.

innodb_flush_log_at_trx_commit = 2

6. Optimizing Disk Writes with O_DIRECT

On Linux, use O_DIRECT to let MySQL bypass the operating system’s page cache for data files. Data is written directly to disk, avoiding RAM wasted on storing the same pages twice (double buffering). Note that innodb_flush_method is not dynamic: changing it requires a restart.

innodb_flush_method = O_DIRECT

Safe Deployment Process

Never overwrite the my.cnf file and restart immediately. Follow these 3 steps to ensure safety:

  1. Backup Configuration: Always have a backup file for quick recovery in case of issues.
  2. Runtime Changes: Use the SET GLOBAL command for parameters that support hot-swapping to test the impact first.
  3. Check Error Log: Monitor /var/log/mysql/error.log to detect memory allocation errors early.
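
For step 2, MySQL 8.0 even lets you persist a runtime change directly from SQL, so a value you have tested survives a restart (only dynamic parameters qualify; innodb_buffer_pool_instances and innodb_flush_method still require a restart):

SET GLOBAL innodb_flush_log_at_trx_commit = 2; -- test the impact first
SET PERSIST max_connections = 500; -- written to mysqld-auto.cnf in the data directory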

Summary

MySQL optimization is a journey of continuous observation and adjustment. As data grows, today’s configuration might not be suitable tomorrow. Prioritize stability over pushing hardware to its absolute limits. If you’ve tuned everything and it’s still slow, it’s time to review your indexes or consider Read Replicas.
