How to Monitor Network Bandwidth in Linux: Essential Tools and Real-Time Techniques
Want to keep your servers fast and predictable? This practical guide to Linux bandwidth monitoring walks you through core metrics, real-time tools, and hands-on techniques to detect spikes, diagnose bottlenecks, and plan capacity with confidence.
Monitoring network bandwidth on Linux is a critical task for site administrators, developers, and businesses that depend on predictable, high-performance connectivity. Whether you’re troubleshooting a slow service, enforcing bandwidth limits, or planning capacity for growth, having the right tools and techniques in place enables you to observe traffic in real time, collect historical metrics, and make informed decisions. This article walks through the core principles, practical tools, real-time approaches, and selection guidance so you can build a robust bandwidth monitoring strategy on Linux servers.
Why accurate bandwidth monitoring matters
Bandwidth is not just about raw throughput numbers; it reflects how applications, users, and network devices behave under load. Accurate monitoring lets you:
- Detect anomalies such as sudden traffic spikes, distributed attacks, or runaway processes consuming all available capacity.
- Diagnose bottlenecks at the interface, host, or application layer instead of guessing.
- Plan capacity based on historical utilization patterns to avoid over- or under-provisioning.
- Enforce policies by identifying heavy consumers and applying QoS or traffic shaping.
Core principles and metrics for Linux bandwidth monitoring
Before choosing tools, understand what metrics and signals are useful:
- Interface throughput: bytes/sec or bits/sec transmitted and received per network interface (e.g., eth0, ens3).
- Packet rates: packets/sec, useful for spotting small-packet floods such as those generated by DDoS attacks.
- Errors and drops: indicate NIC, driver, or link issues.
- Connections and per-process usage: which PIDs or sockets produce the traffic.
- Latency and retransmits: increased retransmits or RTTs can point to congestion or packet loss.
Linux exposes low-level counters via the kernel (e.g., /proc/net/dev, netlink, ethtool) and higher-level socket information via tools like ss. Effective monitoring combines both periodic counters and packet-level inspection.
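As a quick illustration of the counter-based approach, you can sample /proc/net/dev twice and difference the cumulative byte counts to estimate throughput. This is a minimal sketch; the interface name eth0 is an assumption, so substitute your own (e.g. ens3):

```shell
#!/bin/sh
# Estimate throughput by sampling the kernel's cumulative RX/TX byte
# counters in /proc/net/dev one second apart.
# Assumed interface name: eth0 (adjust for your system).
iface=eth0

# Field 1 is "<iface>:", field 2 is RX bytes, field 10 is TX bytes.
sample() { awk -v i="$iface:" '$1 == i {print $2, $10}' /proc/net/dev; }

set -- $(sample); rx1=$1 tx1=$2
sleep 1
set -- $(sample); rx2=$1 tx2=$2

echo "RX: $((rx2 - rx1)) bytes/sec  TX: $((tx2 - tx1)) bytes/sec"
```

This is essentially what vnStat and node_exporter do under the hood, just with persistence and nicer reporting on top.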
Essential command-line tools and how to use them
vnStat — lightweight, historical monitoring
vnStat is ideal for low-overhead tracking: it reads kernel counters periodically and stores historical usage in a local database. Install and start the daemon, then view daily, monthly, or hourly statistics. Example usage:
vnstat -i eth0
Use vnStat when you need long-term bandwidth reports without continuous packet capture or high CPU usage.
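For reference, the common vnStat report views look like this (flags as documented in the vnStat man page; the interface name eth0 is an assumption):

```shell
vnstat -i eth0 -h       # hourly breakdown
vnstat -i eth0 -d       # daily totals
vnstat -i eth0 -m       # monthly totals
vnstat -i eth0 -l       # live rate display, sampled from kernel counters
vnstat -i eth0 --json   # machine-readable output for scripts and dashboards
```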
iftop and iptraf-ng — real-time interface-level insights
iftop shows real-time traffic between hosts on an interface, displaying bandwidth per connection. Launch with:
sudo iftop -i eth0
For text-based interactive per-interface summaries, iptraf-ng provides connection lists, protocol breakdown, and per-host statistics. These tools are great for on-the-fly troubleshooting.
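A few invocation options worth knowing, sketched under the assumption of an interface named eth0 (check the iftop and iptraf-ng man pages for your versions):

```shell
# iftop: -P shows ports, -B reports bytes instead of bits,
# -f restricts counting with a BPF-style filter.
sudo iftop -i eth0 -P -B -f 'port 443'

# iptraf-ng: jump straight into the IP traffic monitor for one interface.
sudo iptraf-ng -i eth0
```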
nethogs — per-process bandwidth
nethogs maps network traffic to processes and PIDs. This is extremely helpful when you suspect a specific application is saturating the link:
sudo nethogs eth0
Note: nethogs uses packet capture and may add moderate CPU overhead on busy servers.
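Two options help on loaded systems (flags per the nethogs man page; the interface name is an assumption):

```shell
# Slow the refresh to every 5 seconds to reduce capture overhead:
sudo nethogs -d 5 eth0

# Trace mode prints one summary per refresh to stdout, handy for logging:
sudo nethogs -t eth0
```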
bmon — visual bandwidth and interface statistics
bmon provides a curses-based interface with graphs and per-interface statistics, including rate history. It’s useful for quick visual trends directly in the terminal.
ss, netstat, and lsof — socket and connection diagnostics
To inspect open connections, use ss -tunap or netstat -tunap. Combine with PID mapping (e.g., lsof -i) to determine which sockets a process holds. These tools complement bandwidth counters by revealing connection endpoints and states.
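A common pattern is summarizing established connections by remote host to find top talkers. A sketch, assuming the column layout ss produces when a state filter suppresses its State column (peer address in the 4th field):

```shell
# Established TCP sockets with their owning processes:
sudo ss -tnp state established

# Count connections per remote host (strip the :port suffix first):
ss -tn state established \
  | awk 'NR > 1 {sub(/:[0-9]+$/, "", $4); print $4}' \
  | sort | uniq -c | sort -rn | head
```

The counts won't tell you bytes transferred, but an unexpected host with hundreds of connections is a strong lead to follow up with nethogs or tcpdump.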
tcpdump and tshark — packet-level inspection
When you need deep analysis, capture packets with tcpdump or analyze them live with tshark. Example to capture traffic to or from a host:
sudo tcpdump -i eth0 host 1.2.3.4 -w capture.pcap
Use BPF filters to reduce captured traffic and minimize overhead. Packet captures enable protocol-level inspection, latency measurements, and precise throughput calculations.
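Some filter patterns that keep captures small and targeted (interface name and addresses are placeholders):

```shell
# Only HTTPS traffic, stop after 1000 packets, no name resolution:
sudo tcpdump -i eth0 -nn -c 1000 'tcp port 443' -w https.pcap

# Keep disk usage bounded: rotate through five 100 MB capture files:
sudo tcpdump -i eth0 -nn -C 100 -W 5 -w rotating.pcap 'host 1.2.3.4'

# Offline analysis of a saved capture, e.g. only SYN/RST packets:
tcpdump -r capture.pcap -nn 'tcp[tcpflags] & (tcp-syn|tcp-rst) != 0'
```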
iperf3 — active throughput testing
iperf3 is the de facto standard tool for controlled throughput tests between two endpoints. Run an iperf3 server on one host and a client on another to measure achievable TCP or UDP bandwidth:
iperf3 -s (server)
iperf3 -c server_ip -P 8 -t 30 (client, parallel streams)
This is essential when validating network performance independent of production traffic.
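Beyond the basic TCP test, a few variants are often useful (server_ip is a placeholder):

```shell
# UDP test at a 200 Mbit/s target rate; reports loss and jitter:
iperf3 -c server_ip -u -b 200M -t 30

# Reverse mode: the server sends, so you measure download, not upload:
iperf3 -c server_ip -R -t 30

# JSON output for scripted comparisons over time:
iperf3 -c server_ip --json > result.json
```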
tc — shaping and scheduling
Linux’s traffic control utility, tc, paired with queueing disciplines (qdiscs) such as pfifo_fast, fq_codel, and HTB, lets you shape, rate-limit, or prioritize traffic. Example to limit egress on eth0 to 50 Mbit/s (the burst must be large enough for the configured rate, roughly rate/HZ at minimum):
tc qdisc add dev eth0 root tbf rate 50mbit burst 128kb latency 400ms
Combine monitoring with tc to enforce policies discovered during analysis.
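Once a shaper is in place, verify and clean up like this (interface name assumed):

```shell
# Inspect the active qdisc and its counters (sent bytes, drops, overlimits):
tc -s qdisc show dev eth0

# Remove the shaper and fall back to the default qdisc when done:
sudo tc qdisc del dev eth0 root
```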
Advanced monitoring: agents, metrics and dashboards
For continuous monitoring and alerting at scale, use agent-based exporters and monitoring stacks:
- Prometheus + node_exporter: node_exporter exposes network interface counters (bytes, packets, errors) as Prometheus metrics. Scrape, store, and query with PromQL for alerts.
- Grafana: visualize time-series data with customizable dashboards for per-interface throughput, top talkers, and utilization percentiles.
- Netdata: an all-in-one agent that provides real-time charts and alerts with minimal setup — useful for interactive diagnostics.
- ntopng: flow-based analysis and per-host statistics with DPI capabilities; useful when you need richer traffic context (protocols, applications).
These integrations allow long-term trend analysis, SLA monitoring, and automated alerting when utilization crosses thresholds.
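As a quick sanity check of a Prometheus-based setup, confirm node_exporter is serving interface counters (default port 9100 is an assumption; adjust if you changed it):

```shell
# Verify the per-interface counters are being exported:
curl -s http://localhost:9100/metrics | grep '^node_network_receive_bytes_total'

# A typical PromQL expression for received bits/sec over 5 minutes:
#   rate(node_network_receive_bytes_total{device="eth0"}[5m]) * 8
```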
Real-time techniques and best practices
To get accurate, actionable real-time insights, follow these practical tips:
- Monitor at the right point: measure traffic at the interface closest to the resource you care about (e.g., public NIC for inbound/outbound Internet, bridge interface for VMs).
- Combine counters and packet captures: use counters for efficiency and packet captures for root-cause analysis when counters indicate anomalies.
- Use sampling or filters: when capturing on busy links, apply BPF filters or sample packets to avoid overwhelming the host.
- Watch for counter wrap: older counters can wrap on very busy interfaces; modern tools handle 64-bit counters, but validate readings when you see anomalies.
- Correlate metrics: align network metrics with CPU, disk I/O, and application logs to identify whether network is truly the limiting factor.
- Alert on rates, not totals: set alerts on utilization %, packets/sec spikes, or sudden change rates to reduce noise.
Comparing tools: pros and cons
Choosing the right tool depends on use case:
- vnStat: pros — low overhead, good for historical reports; cons — no per-process detail, not packet-aware.
- iftop / bmon / iptraf-ng: pros — quick, real-time visuals; cons — interactive, not suitable for automated long-term storage.
- nethogs: pros — per-process mapping; cons — higher overhead on busy systems, limited historical data.
- tcpdump / tshark: pros — detailed packet-level analysis; cons — high resource use and storage needs for captures.
- Prometheus + Grafana: pros — scalable, alerting and historical analysis; cons — setup complexity and storage considerations.
- Netdata / ntopng: pros — rich UI and near real-time insights; cons — additional agents and resource footprint.
Selecting the right monitoring approach
Match your environment and goals to a monitoring strategy:
- For small VPS or single-server setups: combine vnStat for historical usage with iftop or nethogs for on-demand troubleshooting.
- For production clusters and business-critical services: deploy Prometheus + node_exporter for metrics collection, Grafana for dashboards, and Netdata for drill-down diagnostics.
- For security-sensitive environments: integrate flow analysis (ntopng) and packet captures (tcpdump) selectively with retention policies, and alert on unusual flows.
When running in cloud or VPS environments, monitor both the VM-level interfaces and the hypervisor/network fabric if available from the provider to get a complete picture.
Practical examples and quick-check checklist
When you suspect bandwidth issues, follow this checklist:
- Check interface counters: cat /proc/net/dev or vnstat -i eth0.
- View real-time flows: sudo iftop -i eth0 or sudo nethogs.
- Inspect connections and sockets: ss -tunap.
- Capture suspicious traffic: sudo tcpdump -i eth0 host 203.0.113.5 -s 0 -w suspect.pcap.
- Run an active throughput test: iperf3 -c other_host -P 4 -t 20.
- Enforce temporary limits: use tc qdisc commands to shape traffic while you remediate.
Summary
Monitoring network bandwidth on Linux requires a mix of lightweight counters, real-time interactive tools, and scalable metric collection for long-term visibility. Use vnStat or Prometheus for history, iftop and nethogs for real-time per-connection or per-process insights, and tcpdump/tshark for packet-level root-cause analysis. Combine these tools with traffic control (tc) to act on findings, and visualize metrics with Grafana for trend-based capacity planning.
For production deployments, particularly on VPS platforms, pairing robust monitoring with reliable hosting avoids surprises. If you’re considering cloud or VPS options in the USA, check out the available plans and network options at USA VPS from VPS.DO to match infrastructure to your monitoring and performance needs.