When Do You Actually Need htop and iotop?
There was one evening when the server suddenly slowed to a crawl — response time jumped from 200ms to 3 seconds, and CPU load average spiked to 8.0 on a 4-core machine. I opened top, watched the screen constantly jumping around, and couldn't figure out which process was eating resources. After switching to htop, I spotted the suspects immediately: a backup cron job running rsync concurrently with a MySQL dump. iotop then confirmed they were pushing disk I/O to 100%.
Ever since installing these two tools on an Ubuntu 22.04 / 4GB RAM server, incident response time dropped from 30 minutes of guessing to under 5 minutes. Not an exaggeration — it’s just using the right tool for the job: htop for CPU/RAM by process, iotop for disk I/O by process.
This article goes straight to practical use — no copying from man pages.
Installing htop and iotop
On Ubuntu/Debian
sudo apt update
sudo apt install htop iotop -y
On CentOS/AlmaLinux/RHEL
sudo dnf install htop iotop -y
# Or if using yum (old CentOS 7):
sudo yum install htop iotop -y
Verify the Installed Versions
htop --version
# htop 3.2.2
iotop --version
# iotop 0.6
Note: iotop requires kernel support for CONFIG_TASK_IO_ACCOUNTING, which most modern distros ship enabled. If it reports “kernel not configured for task I/O accounting”, check whether delay accounting is enabled; on kernels 5.14 and newer it is off by default and can be turned on with `sudo sysctl kernel.task_delayacct=1`. Another option is iotop-c, a C rewrite that is more actively maintained than the original Python iotop.
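You can check up front whether your kernel has this support. A quick sketch (the config file path is distro-dependent, and the `task_delayacct` sysctl only exists on kernels 5.14 and newer):

```shell
# Look for task I/O accounting support in the running kernel's config
# (path varies by distro; some kernels expose /proc/config.gz instead)
cfg="/boot/config-$(uname -r)"
if [ -r "$cfg" ]; then
  grep TASK_IO_ACCOUNTING "$cfg" || echo "CONFIG_TASK_IO_ACCOUNTING not set"
else
  echo "no readable kernel config at $cfg"
fi

# On kernels 5.14+, delay accounting (which feeds iotop's IO> column) is
# disabled by default and can be enabled at runtime:
# sudo sysctl kernel.task_delayacct=1
```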
Configuring and Using htop in Detail
Reading the htop Screen
Run htop with sudo to see processes from all users — without sudo you’ll only see your own:
sudo htop
The layout is divided into 3 clear sections:
- Header (top): CPU bars, memory, swap, load average, uptime
- Process list (middle): List of processes with metric columns
- Footer (bottom): Keyboard shortcuts
The CPU bars: each bar represents one core (or thread). In the default color scheme, green = normal user processes, red = kernel time, blue = low-priority (niced) processes. When the red portion dominates, the kernel is busy, usually with system calls or interrupt handling; I/O wait shows up separately (grey) if you enable the detailed CPU meter in Setup.
Key Columns to Watch
- PID: Process ID
- USER: Process owner
- PRI/NI: Priority and Nice value (NI ranges from -20 to 19; lower = higher priority)
- VIRT/RES/SHR: Virtual/Resident/Shared memory. RES is the number that actually matters — this is the physical RAM the process is consuming
- S: State — R (running), S (sleeping), D (disk wait — process is waiting for disk, possible I/O bottleneck)
- CPU%: CPU usage percentage for this process
- MEM%: Percentage of total RAM used
- TIME+: Total CPU time consumed
- COMMAND: Command name
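The D state deserves a quick aside: a pile-up of D-state processes is usually the first visible symptom of an I/O bottleneck. You can list them outside htop too; `wchan` shows the kernel function each process is blocked in:

```shell
# Header plus any processes currently in uninterruptible (disk) sleep
ps -eo pid,stat,wchan:20,comm | awk 'NR == 1 || $2 ~ /^D/'
```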
Practical Keyboard Shortcuts
| Key | Function |
|---|---|
| F6 or > | Select the column to sort by (defaults to CPU%) |
| F4 or \ | Filter processes by name |
| F5 | Switch to tree view, which clearly shows parent/child processes |
| F9 | Kill a process (select the signal) |
| Space | Tag a process (to kill multiple processes at once) |
| u | Filter by user |
| H | Show/hide user threads |
| t | Toggle tree view |
Customizing Displayed Columns
Press F2 (Setup) → Columns → add or remove columns as needed. I usually add IO_READ_RATE and IO_WRITE_RATE to see disk activity directly in the process list — no need to open iotop separately. Configuration is saved at ~/.config/htop/htoprc.
Example ~/.config/htop/htoprc with I/O columns enabled:
# View current configuration
cat ~/.config/htop/htoprc
# Important section in htoprc
fields=0 48 17 18 38 39 40 2 46 47 49 1
# 48 = IO_READ_RATE, 49 = IO_WRITE_RATE
Configuring and Using iotop
Basic iotop Usage
iotop requires root to read I/O accounting for all processes:
sudo iotop
Results appear immediately: read/write speed per process, with the header showing total system throughput.
Practical Options
# Show only processes with active I/O (skip idle processes)
sudo iotop -o
# Batch mode — text output, useful for logging or grepping
sudo iotop -b -o -n 5
# -b: batch mode, -n 5: take 5 snapshots then exit
# View only processes from a specific user
sudo iotop -u www-data
# Combined: only + batch + 2-second interval
sudo iotop -o -b -d 2 -n 10
Reading iotop Metrics
- DISK READ / DISK WRITE: Current I/O speed for the process (KB/s, MB/s)
- SWAPIN: Percentage of time the process is swapping — if > 0, RAM is already running short
- IO>: Percentage of time the process is blocked waiting for I/O — this is the most important metric for detecting bottlenecks
- Total DISK READ / WRITE (first line): Total system throughput
- Actual DISK READ / WRITE (second line): Actual throughput to hardware (after kernel buffer)
When Actual DISK READ is significantly higher than Total DISK READ, the kernel is doing readahead — that’s normal. When a process’s IO> is consistently above 50%, that process is experiencing an I/O bottleneck.
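When SWAPIN is non-zero for a process, you can confirm how much of it actually sits in swap straight from /proc (the current shell's PID, `$$`, stands in as a placeholder here):

```shell
# VmSwap in /proc/<pid>/status is the process's swapped-out memory
grep VmSwap "/proc/$$/status"
```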
Practical Monitoring and Diagnostics
Scenario 1: Finding the CPU Hog
# Open htop, sorted by CPU% (default)
sudo htop
# Press F6 → select PERCENT_CPU → Enter
# The most CPU-intensive processes will rise to the top
Is php-fpm or mysqld consistently consuming > 80% CPU? Pull up the slow query log immediately — there’s usually a hidden full-scan query lurking in there.
Scenario 2: Investigating a Memory Leak
# Sort by RES (resident memory)
# In htop press F6 → select M_RESIDENT
# Or press 'm' to toggle sort by memory
If RES keeps climbing and never comes back down — that’s a memory leak, no further diagnosis needed. I once watched a Node.js app climb from 200MB to 3GB over 6 hours — sorting by memory in htop caught it immediately.
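To turn that one-off observation into evidence, a tiny sampling loop works well. This is just a sketch (`res_watch` is an illustrative name, and the defaults are arbitrary):

```shell
# Sample a PID's resident memory (RES) repeatedly; a steady climb across
# samples is the leak signature described above.
res_watch() {
  local pid=$1 samples=${2:-10} interval=${3:-60}
  for _ in $(seq 1 "$samples"); do
    # ps reports RSS in KB; convert to MB for readability
    ps -o rss= -p "$pid" | awk -v t="$(date +%H:%M:%S)" \
      '{printf "%s RES=%.1f MB\n", t, $1 / 1024}'
    sleep "$interval"
  done
}
```

For example, `res_watch 1234 30 60` samples PID 1234 every minute for half an hour.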
Scenario 3: Diagnosing Server Lag with iotop
# Run iotop in only mode to see processes with active I/O
sudo iotop -o
# Meanwhile in another terminal, check load average
watch -n 1 'cat /proc/loadavg'
High load average but low CPU% in htop? The culprit is almost certainly I/O. Run iotop -o — processes blocked waiting for disk will appear at the top of the list. If you’re managing multiple panes at once, tmux makes it easy to keep both iotop and watch visible side by side.
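A quick way to tie the two signals together: count how many processes are sitting in D state, since each one inflates the load average without using any CPU:

```shell
# Count uninterruptible-sleep processes alongside the load average
ps -eo stat= | awk '$1 ~ /^D/ { n++ } END { print n + 0, "processes in D state" }'
cat /proc/loadavg
```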
Scenario 4: Logging I/O Activity for Later Analysis
# Log iotop output for 60 seconds (30 snapshots, every 2 seconds)
sudo iotop -b -o -d 2 -n 30 > /tmp/iotop_$(date +%Y%m%d_%H%M%S).log 2>&1 &
# Review the log afterwards
grep -v '^$' /tmp/iotop_*.log | head -50
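To condense those logs, an awk filter can pull out just the total-write figures. The field positions below match iotop 0.6's header wording (`Total DISK WRITE :`); other versions may print it slightly differently:

```shell
# Print "<value> <unit>" for every Total DISK WRITE header in the logs
extract_writes() {
  awk '/Total DISK WRITE/ {
    for (i = 1; i <= NF; i++)
      if ($i == "WRITE") { print $(i + 2), $(i + 3); break }  # skip the ":" token
  }' "$@"
}
# Usage: extract_writes /tmp/iotop_*.log | sort -rn | head -5
# (sort -rn compares the numbers only, so this assumes one unit throughout)
```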
Simple Script to Alert on High I/O
#!/bin/bash
# /usr/local/bin/check_io.sh
# Root's cron: */5 * * * * /usr/local/bin/check_io.sh
THRESHOLD_MB=50
LOG=/var/log/io_alert.log
# iotop 0.6 prints one header line per snapshot:
#   "Total DISK READ : <v> <unit> | Total DISK WRITE : <v> <unit>"
# Grab the value and unit that follow "WRITE :" (positions may differ
# in other iotop versions)
read -r RATE UNIT < <(iotop -b -o -n 1 2>/dev/null | \
    awk '/Total DISK WRITE/ { for (i = 1; i <= NF; i++)
        if ($i == "WRITE") { print $(i+2), $(i+3); exit } }')
[ -z "$RATE" ] && exit 0
# Normalize to MB/s (iotop switches units between B/s, K/s, M/s, G/s)
RATE_MB=$(awk -v r="$RATE" -v u="$UNIT" 'BEGIN {
    if (u == "B/s") r /= 1048576
    else if (u == "K/s") r /= 1024
    else if (u == "G/s") r *= 1024
    printf "%.2f", r }')
if awk -v r="$RATE_MB" -v t="$THRESHOLD_MB" 'BEGIN { exit !(r > t) }'; then
    echo "$(date '+%Y-%m-%d %H:%M:%S') ALERT: Disk write ${RATE_MB} MB/s" >> "$LOG"
    # Capture the top writers for context
    iotop -b -o -n 1 2>/dev/null | head -15 >> "$LOG"
fi
Combining with vmstat for a Complete Picture
# vmstat outputs metrics every 2 seconds
vmstat 2
# Column 'b': number of processes blocked waiting for I/O
# Column 'wa': I/O wait % (if consistently > 20%, disk is the bottleneck)
# Column 'si/so': swap in/out (if consistently > 0, RAM is insufficient)
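Column positions shift between procps versions (newer ones add `st` and `gu`), so it is safer to pick the `wa` column by its header name. A small filter, with an illustrative name:

```shell
# Read vmstat's header (line 2) to locate 'wa', then print it per sample
iowait_col() {
  awk 'NR == 2 { for (i = 1; i <= NF; i++) if ($i == "wa") c = i }
       NR > 2  { print $c }'
}
# Usage: vmstat 2 5 | iowait_col
```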
My go-to workflow during an incident: vmstat 2 for the overview → if wa is high, open iotop -o to find the process → if CPU load is high, htop sorted by CPU → if memory is low, sort by RES. Three steps and you’ve identified the problem.
Something I learned after months of doing this: don’t just look at instantaneous numbers. htop and iotop are just snapshots — real bottlenecks tend to appear during peak traffic hours, not when you’re actively watching. Combine them with sar or Netdata for historical data — htop/iotop then become your drill-down tools once you know when the problem occurred.

