Quick Test: Limiting 500MB RAM in 3 Minutes
In a hurry and want to see results immediately? Let’s try capping a process at a maximum of 500MB of RAM. On modern Linux distributions like Ubuntu 22.04+, Fedora, or AlmaLinux 9, cgroups v2 is mounted as a single unified hierarchy at /sys/fs/cgroup, and everything is controlled through plain files in that directory.
First, create a new management group (e.g., han-che-app):
sudo mkdir /sys/fs/cgroup/han-che-app
Setting a 500MB RAM threshold for this group is incredibly simple:
echo "500M" | sudo tee /sys/fs/cgroup/han-che-app/memory.max
To apply it, simply write the process PID to the cgroup.procs file. A more professional approach is to use systemd-run to launch a Python script with pre-defined limits:
systemd-run --user --scope -p MemoryMax=500M python3 crawler_data.py
Now your script can run freely without the worry of it suddenly “devouring” all the server’s RAM.
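For completeness, the manual cgroup.procs route mentioned above can be sketched end to end (a sketch: sleep stands in for your real workload, and han-che-app is the example group from above):

```shell
# Create the group and set the limit (same as above)
sudo mkdir -p /sys/fs/cgroup/han-che-app
echo "500M" | sudo tee /sys/fs/cgroup/han-che-app/memory.max

# Start the workload, then move its PID into the group
sleep 300 &
echo $! | sudo tee /sys/fs/cgroup/han-che-app/cgroup.procs
```

Writing a PID to cgroup.procs migrates that process (and nothing else) into the group; children it spawns afterwards inherit the membership.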
Cgroups v2: The Next-Generation Resource Management Framework
Cgroups (Control Groups) is a core Linux kernel feature that helps allocate resources such as CPU, RAM, bandwidth, or I/O to groups of processes. Unlike the complexity of version 1, cgroups v2 unifies everything into a single hierarchy. It operates using a very transparent directory structure.
In practice, with older projects running on CentOS 7, optimizing cgroups v1 often gave sysadmins headaches due to its fragmented structure. Moving to newer kernels makes management much easier. Every folder you create in /sys/fs/cgroup acts as a safe “green zone,” preventing resource-hungry processes from affecting the entire system.
Why Use cgroups When You Have Docker?
Many people think that Docker with its --memory or --cpus parameters is sufficient. However, in production environments, we don’t always containerize everything. Legacy applications, background cronjob scripts, or core databases often run directly on VMs to maximize performance.
Using cgroups v2 directly offers several immediate benefits:
- Completely eliminates Docker’s overhead (however small).
- Strictly control system services (Nginx, MySQL) running directly on the OS.
- Master the core mechanisms that Docker and Kubernetes themselves operate on underneath.
Configuring Key Parameters for Your Application
1. Tightening CPU (CPU Quota)
In the world of cgroups v2, CPU is orchestrated via the cpu.max file. This file takes two values, both in microseconds: the quota (how much CPU time the group may consume per period) and the period itself (defaulting to 100ms, i.e. 100000).
Suppose you want a process to use a maximum of 20% of a single CPU core:
# Use 20ms in every 100ms period
echo "20000 100000" | sudo tee /sys/fs/cgroup/han-che-app/cpu.max
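If you prefer deriving the quota from a target percentage instead of hard-coding it, the arithmetic is simple (a sketch: 20% of one core against the default 100ms period):

```shell
PERCENT=20
PERIOD=100000                       # 100ms period, in microseconds
QUOTA=$((PERCENT * PERIOD / 100))   # 20000us of CPU time per period
echo "$QUOTA $PERIOD"               # prints: 20000 100000
# Then apply it:
# echo "$QUOTA $PERIOD" | sudo tee /sys/fs/cgroup/han-che-app/cpu.max
```

Values above 100% of the period (e.g. "200000 100000") are also valid and mean "up to two full cores".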
2. Managing RAM (Memory Limit)
There are two thresholds you should pay special attention to in order to avoid system hangs:
- memory.high: The throttle threshold. When usage crosses it, the kernel aggressively reclaims memory from the group and slows it down, but does not kill anything immediately.
- memory.max: The hard limit. Exceeding it triggers the kernel’s OOM Killer, which terminates a process in the group on the spot.
echo "1G" | sudo tee /sys/fs/cgroup/han-che-app/memory.high
# The kernel parser rejects fractional sizes like "1.5G", so use 1536M
echo "1536M" | sudo tee /sys/fs/cgroup/han-che-app/memory.max
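Note that both files read back in plain bytes (or the literal string max when unset), so a quick conversion helps when eyeballing usage (numfmt ships with GNU coreutils):

```shell
# Limits and current usage, all reported in bytes
cat /sys/fs/cgroup/han-che-app/memory.high
cat /sys/fs/cgroup/han-che-app/memory.max
cat /sys/fs/cgroup/han-che-app/memory.current

# Convert a human-readable size to bytes for comparison
numfmt --from=iec 1G    # prints: 1073741824
```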
3. Limiting Disk I/O Speed
This feature is extremely useful for backup scripts or large file compression. You wouldn’t want a backup process to saturate the entire disk bandwidth, making your website unable to retrieve data.
First, use lsblk to identify the disk ID (major:minor). If the disk is 8:0 and you want to limit the read speed to 10MB/s:
echo "8:0 rbps=10485760" | sudo tee /sys/fs/cgroup/han-che-app/io.max
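The rbps value is plain bytes per second, and the major:minor pair can be read straight from lsblk (a sketch; sda is the assumed device name):

```shell
# 10 MB/s expressed in bytes per second
echo $((10 * 1024 * 1024))      # prints: 10485760

# major:minor numbers for the target disk
lsblk -no MAJ:MIN /dev/sda      # e.g. 8:0
```

io.max also accepts wbps (write bytes/s), riops, and wiops on the same line if you need to cap writes or IOPS as well.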
Professional Implementation with Systemd
Manually interacting with files in /sys/fs/cgroup is only suitable for quick debugging: the settings vanish on reboot and are easy to drift out of sync. For production environments, the standard approach is to declare the limits directly in the systemd service file.
Open your service file (e.g., /etc/systemd/system/my-app.service) and add the following lines under the [Service] section:
[Service]
ExecStart=/usr/bin/python3 /opt/my-app/main.py
# 50% CPU limit
CPUQuota=50%
# 1GB hard RAM limit, 800MB soft limit
MemoryMax=1G
MemoryHigh=800M
# 10MB/s IO read bandwidth limit
IOReadBandwidthMax=/dev/sda 10M
Finally, just run systemctl daemon-reload and restart the service. The system will handle the rest cleanly.
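Concretely, the reload-and-restart sequence looks like this (my-app is the example unit above; the systemctl show properties let you confirm which limits systemd actually applied):

```shell
sudo systemctl daemon-reload
sudo systemctl restart my-app.service

# Confirm the applied limits
systemctl show my-app.service -p CPUQuotaPerSecUSec -p MemoryMax -p MemoryHigh
```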
Real-world Experience: Don’t Let Numbers Deceive You
After numerous troubleshooting sessions, I’ve gathered a few valuable lessons when working with cgroups v2:
- Always Have a Buffer: If a Java application needs 1GB of RAM to start, never set memory.max to exactly 1GB. Add a 20-30% buffer for unexpected overhead.
- Prioritize memory.high: Give the application a chance to release memory itself instead of having the kernel “behead” it with the OOM Killer.
- Monitor memory.events: This file contains invaluable information. It tells you exactly how many times the application almost crashed due to memory shortages.
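Reading those counters is a one-liner (assuming the han-che-app group from earlier; the awk filter pulls out just the oom_kill count):

```shell
# Full event counters: low, high, max, oom, oom_kill
cat /sys/fs/cgroup/han-che-app/memory.events

# How many processes in this group were OOM-killed?
awk '$1 == "oom_kill" {print $2}' /sys/fs/cgroup/han-che-app/memory.events
```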
I once encountered a tricky case with a Java application running in a cgroup. Even though the heap was capped at 1GB, the app kept getting killed: the JVM consumed an additional 200MB for off-heap memory and thread stacks, pushing total usage past the cgroup limit. After checking memory.events and seeing the oom_kill counter climb, I realized the issue and adjusted the limits accordingly.
Cgroups v2 is not difficult once you understand your application’s characteristics. Hopefully, this article provides you with a powerful tool to manage stubborn processes on your servers!

