Last week my team dealt with an annoying incident: the application was running unusually slow, developers complained about database timeouts, the DBA insisted queries were fast. After two hours of digging through logs and profiling, the culprit turned out to be bandwidth congestion between the app server and DB server during evening peak hours. The frustrating part was that if I’d just run iperf3 from the start, we would have found it in 5 minutes.
Since that day, iperf3 has become my first diagnostic step whenever I suspect a network issue. In this post, I’ll compare the available network measurement tools and show you how to use iperf3 properly for the most common real-world scenarios.
Network Performance Tools — Which One to Use and When?
speedtest-cli — Internet Connection Speed Only
speedtest-cli connects to Ookla’s servers to measure download/upload speed. It’s useful when you need to verify whether your ISP is delivering the promised bandwidth, but it’s completely useless when you need to measure internal bandwidth between two servers in the same datacenter or VPC. It measures over the Internet, not through your internal network.
netperf — Classic but Aging
netperf dates back to the 90s, supports TCP/UDP measurement, and provides detailed metrics. The problem is the project is rarely maintained, the documentation is outdated, and the syntax is harder to remember. Some modern distros don’t include it in their official repositories. I’ve tried it a few times but never found a compelling reason to use it over iperf3.
iperf3 — The Standard Choice for DevOps
My go-to tool is iperf3, maintained by ESnet — the research network of the U.S. Department of Energy — so the codebase is quite solid. TCP and UDP support, parallel streams, reverse mode, JSON output for automation — it has everything. Available in most distro repositories. Install and use immediately, no configuration needed.
iperf3: Strengths and Things to Know Upfront
Advantages:
- Measures actual point-to-point bandwidth between two servers — independent of the Internet
- UDP mode for detecting packet loss and jitter — TCP automatically adjusts speed and hides problems
- Parallel streams simulate multiple simultaneous connections like real traffic
- JSON output for integration into monitoring scripts
- Reverse mode tests bandwidth in both directions
- Lightweight, no complex configuration required
Known Limitations:
- Must be installed and running on both ends (server + client)
- Port 5201 must be open on the server-side firewall
- Cannot measure outbound Internet connection speed
- iperf3 is not backward-compatible with iperf2 (completely different protocol)
Choosing the Right Tool for Each Scenario
Before installing anything, identify what you actually need to measure:
- Is your ISP delivering the promised speed? → speedtest-cli
- Bandwidth between two internal servers? → iperf3 TCP mode
- Is there packet loss or jitter in the network? → iperf3 UDP mode
- InfiniBand/RDMA network (HPC environment)? → qperf
- Need a quick test without installing tools? → curl + file download from an internal server (less accurate, but convenient)
Installing iperf3 on Linux
Most distros include iperf3 in their official repositories, ready to use after installation:
# Ubuntu / Debian
sudo apt update && sudo apt install iperf3
# CentOS / RHEL / Rocky Linux
sudo dnf install iperf3
# Arch Linux
sudo pacman -S iperf3
Verify the version after installation:
iperf3 --version
# iperf 3.14 (cJSON 1.7.15)
Basic Usage — Server and Client
The model is simple: one end runs in server mode to listen for connections, the other end is the client that initiates the test. No background daemon needed — start the server when you need it, stop it when done.
On the server machine (receiving connections):
# Run in foreground, listen on port 5201
iperf3 -s
# Run in background and write log
iperf3 -s -D --logfile /var/log/iperf3.log
On the client machine (initiating the test):
# Basic test: TCP, 10 seconds, 1 stream
iperf3 -c 192.168.1.100
# Longer test — 30 seconds
iperf3 -c 192.168.1.100 -t 30
# Transfer exactly 5GB then stop (instead of using time)
iperf3 -c 192.168.1.100 -n 5G
Sample output from a healthy Gigabit link (roughly 940 Mbit/s is the practical ceiling once TCP/IP header overhead is subtracted):
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 112 MBytes 940 Mbits/sec
[ 5] 1.00-2.00 sec 111 MBytes 933 Mbits/sec
...
[ 5] 0.00-10.01 sec 1.09 GBytes 935 Mbits/sec sender
[ 5] 0.00-10.01 sec 1.09 GBytes 934 Mbits/sec receiver
Commonly Used Advanced Options
Parallel Streams — Simulating Real Traffic
By default, iperf3 uses a single TCP stream. In practice, applications open many simultaneous connections. Use -P to test with multiple streams:
# 4 parallel streams
iperf3 -c 192.168.1.100 -P 4
You'll typically see higher aggregate bandwidth with multiple streams, since each stream gets its own TCP congestion window; a single window often cannot fill a high-latency link on its own.
Reverse Mode — Measuring Bandwidth in the Opposite Direction
# Server sends data to Client (download from client's perspective)
iperf3 -c 192.168.1.100 -R
JSON Output — Integration with Automation
iperf3 -c 192.168.1.100 -J > /tmp/result.json
# Extract bitrate from JSON
iperf3 -c 192.168.1.100 -J | python3 -c \
"import sys,json; d=json.load(sys.stdin); print(d['end']['sum_received']['bits_per_second']/1e6, 'Mbps')"
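When you need more than a one-liner, it helps to pull several fields at once. A sketch parsing the -J document — the `sample` string below is a hand-trimmed mock of iperf3's JSON layout, cut down to only the keys used here:

```python
import json

def summarize(result_json):
    """Extract sender/receiver bitrate (Mbit/s) from iperf3 -J output."""
    d = json.loads(result_json)
    end = d["end"]
    return {
        "sent_mbps": end["sum_sent"]["bits_per_second"] / 1e6,
        "received_mbps": end["sum_received"]["bits_per_second"] / 1e6,
    }

# Hand-trimmed mock of iperf3's TCP JSON structure:
sample = '''{"end": {"sum_sent": {"bits_per_second": 941000000.0},
             "sum_received": {"bits_per_second": 934000000.0}}}'''
print(summarize(sample))  # {'sent_mbps': 941.0, 'received_mbps': 934.0}
```

Comparing sum_sent against sum_received is a quick sanity check: a large gap between the two on TCP usually means retransmissions in between.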
UDP Mode — Detecting Packet Loss and Jitter
TCP is good at hiding network problems. When packet loss occurs, it automatically retransmits and throttles bandwidth down — you only see low throughput without understanding why. UDP has no such mechanism: lost packets are reported directly in the output. This is the required mode when you need to measure actual packet loss and jitter.
# UDP test with target bandwidth 1Gbps
iperf3 -c 192.168.1.100 -u -b 1G
# Test with 100Mbps bandwidth (for 1G link to avoid overload)
iperf3 -c 192.168.1.100 -u -b 100M -t 30
UDP output includes two additional important columns:
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-10.00 1.16 GBytes 998 Mbits/sec 0.052 ms 847/885802 (0.096%)
- Jitter: latency variation (ms). Ideal is <1ms on LAN, <10ms over WAN
- Lost/Total: packet loss rate. Should be 0% on a healthy LAN. Investigate immediately if >0.1%
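These rules of thumb are easy to encode for scripted checks. A small triage helper as a sketch — the function name is mine, the cutoffs come straight from the thresholds above:

```python
def judge_udp_result(jitter_ms, lost_percent, link="lan"):
    """Flag UDP test results against the rule-of-thumb thresholds above."""
    problems = []
    jitter_limit = 1.0 if link == "lan" else 10.0  # <1 ms LAN, <10 ms WAN
    if jitter_ms >= jitter_limit:
        problems.append(f"jitter {jitter_ms} ms exceeds {jitter_limit} ms")
    if lost_percent > 0.1:  # investigate immediately above 0.1%
        problems.append(f"packet loss {lost_percent}% above 0.1%")
    return problems or ["ok"]

print(judge_udp_result(0.052, 0.096))  # ['ok']
print(judge_udp_result(0.3, 2.3))      # ['packet loss 2.3% above 0.1%']
```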
Diagnosing Intermittent Packet Loss — Real-World Experience
My toughest network debugging experience was when intermittent packet loss only occurred during peak hours (9–11 AM and 3–5 PM). I ran iperf3 in the evening — completely normal, 0% packet loss, 940 Mbps. The next morning — still fine.
I then wrote a script to run continuously and log results to catch the exact moment:
#!/bin/bash
# Run an iperf3 UDP test every 5 minutes and log the loss rate
SERVER="192.168.1.100"
LOGFILE="/tmp/iperf3_monitor.log"

for i in $(seq 1 288); do   # 288 iterations = 24 hours
    TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
    RESULT=$(iperf3 -c "$SERVER" -u -b 1G -t 30 -J 2>/dev/null)
    LOSS=$(echo "$RESULT" | python3 -c \
        "import sys,json; d=json.load(sys.stdin); print(d['end']['sum']['lost_percent'])" 2>/dev/null)
    echo "$TIMESTAMP packet_loss=${LOSS}%" >> "$LOGFILE"
    sleep 270   # 270 s rest + 30 s test = 300 s (5 minutes)
done
The results showed packet loss reaching 2.3% during peak hours, while off-peak was 0%. The root cause turned out to be a faulty SFP module on the uplink switch that struggled under high load. Replacing the SFP fixed everything.
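Spotting that kind of time-of-day pattern by eyeballing the log is tedious; a few lines of Python can average the loss per hour instead. A sketch that assumes the exact `TIMESTAMP packet_loss=X%` format the script writes (the sample lines are made up for illustration):

```python
from collections import defaultdict

def loss_by_hour(lines):
    """Average packet loss per hour-of-day from 'Y-m-d H:M:S packet_loss=X%' lines."""
    buckets = defaultdict(list)
    for line in lines:
        date, clock, field = line.split()
        hour = int(clock.split(":")[0])
        buckets[hour].append(float(field.split("=")[1].rstrip("%")))
    return {h: sum(v) / len(v) for h, v in sorted(buckets.items())}

sample = [
    "2024-05-13 09:05:00 packet_loss=2.3%",
    "2024-05-13 09:35:00 packet_loss=1.9%",
    "2024-05-13 13:05:00 packet_loss=0.0%",
]
print(loss_by_hour(sample))  # hour 9 averages ~2.1%, hour 13 is clean
```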
Opening the Firewall for iperf3
Before testing, remember to open port 5201 on the server:
# UFW (Ubuntu/Debian)
sudo ufw allow 5201/tcp
sudo ufw allow 5201/udp
# firewalld (CentOS/RHEL/Rocky)
sudo firewall-cmd --permanent --add-port=5201/tcp
sudo firewall-cmd --permanent --add-port=5201/udp
sudo firewall-cmd --reload
If you need to use a custom port (e.g., running multiple instances or avoiding conflicts):
# Server listens on port 9001
iperf3 -s -p 9001
# Client connects to port 9001
iperf3 -c 192.168.1.100 -p 9001
Quick Checklist When Suspecting a Network Issue
- Run a basic TCP test — verify overall bandwidth meets expectations
- Run a UDP test at 80% of link capacity — check for packet loss and jitter
- Run with -P 4 (parallel streams) — some issues only appear with multiple connections
- Run reverse mode with -R — bidirectional bandwidth may differ with asymmetric routing
- If the issue is intermittent, use a time-based monitoring script like the example above
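The first four checklist steps can be wrapped in a small runner. A sketch that only assembles the iperf3 command lines for each step (feed them to subprocess.run in practice; the default link speed and helper name are my assumptions, with UDP at 80% of capacity per the checklist):

```python
def checklist_commands(server, link_mbps=1000):
    """Build the iperf3 command lines for the four checklist steps."""
    udp_rate = int(link_mbps * 0.8)  # UDP at 80% of link capacity
    return [
        ["iperf3", "-c", server],                              # 1. basic TCP
        ["iperf3", "-c", server, "-u", "-b", f"{udp_rate}M"],  # 2. UDP loss/jitter
        ["iperf3", "-c", server, "-P", "4"],                   # 3. parallel streams
        ["iperf3", "-c", server, "-R"],                        # 4. reverse mode
    ]

for cmd in checklist_commands("192.168.1.100"):
    print(" ".join(cmd))
```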
Combining iperf3 with ss/netstat is enough to isolate most network issues on Linux. Bandwidth bottlenecks, hidden packet loss, asymmetric routing: each problem type has its own testing approach. The key isn't memorizing every flag, but knowing what question you're asking before you run the test.

