KVM-based microVMM for the Volt platform:

- Sub-second VM boot times
- Minimal memory footprint
- Landlock LSM + seccomp security
- Virtio device support
- Custom kernel management

Copyright (c) Armored Gates LLC. All rights reserved. Licensed under AGPSL v5.0.
# Volt Network Benchmarks
Comprehensive benchmark suite for comparing network backend performance in Volt VMs.
## Quick Start

```sh
# Install dependencies (run once on each test machine)
./setup.sh

# Run full benchmark suite
./run-all.sh <server-ip> <backend-name>

# Or run individual tests
./throughput.sh <server-ip> <backend-name>
./latency.sh <server-ip> <backend-name>
./pps.sh <server-ip> <backend-name>
```
## Test Architecture

```
┌─────────────────┐          ┌─────────────────┐
│   Client VM     │          │   Server VM     │
│  (runs tests)   │◄────────►│ (runs servers)  │
│                 │          │                 │
│ ./throughput.sh │          │ iperf3 -s       │
│ ./latency.sh    │          │ sockperf sr     │
│ ./pps.sh        │          │ netserver       │
└─────────────────┘          └─────────────────┘
```
## Backends Tested

| Backend | Description | Expected Performance |
|---|---|---|
| virtio | Pure virtio-net (QEMU userspace) | Baseline |
| vhost-net | vhost-net kernel acceleration | ~2-3x throughput |
| macvtap | Direct host NIC passthrough | Near line-rate |
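The three backends differ mainly in where the packet datapath runs. As a rough illustration, here is how they would map onto QEMU-style netdev flags; Volt's actual launch syntax is not shown in this README, so treat the flag strings below as assumptions, not Volt's interface:

```sh
# Illustrative mapping from backend name to QEMU-style netdev flags.
# This is a sketch of the general technique, not Volt's real CLI.
netdev_flags() {
  case "$1" in
    # virtio-net with every packet copied through QEMU userspace
    virtio)    echo "-netdev tap,id=net0 -device virtio-net-pci,netdev=net0" ;;
    # same guest device model, but the datapath moves into the vhost-net kernel module
    vhost-net) echo "-netdev tap,id=net0,vhost=on -device virtio-net-pci,netdev=net0" ;;
    # macvtap: the VMM is handed an already-open fd for /dev/tapN bound to the host NIC
    macvtap)   echo "-netdev tap,id=net0,fd=3 -device virtio-net-pci,netdev=net0" ;;
    *)         echo "unknown backend: $1" >&2; return 1 ;;
  esac
}
```

For example, `netdev_flags vhost-net` prints the vhost-accelerated variant; the only difference from plain virtio is `vhost=on`, which is why the device-visible behavior is identical and only throughput changes.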
## Running Benchmarks

### Prerequisites

- Two VMs with network connectivity
- Root/sudo access on both
- Firewall rules allowing the test traffic
### Server Setup

On the server VM, start the test servers:

```sh
# iperf3 server (TCP/UDP throughput)
iperf3 -s -D

# sockperf server (latency)
sockperf sr --daemonize

# netperf server (PPS)
netserver
```
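Before kicking off the client, it is worth confirming the servers are actually reachable. A minimal sketch, assuming each tool's default port (iperf3 5201, sockperf 11111, netperf control 12865) and bash's `/dev/tcp` redirection; note a TCP probe will only see services listening on TCP, so a UDP-only sockperf server would not answer it:

```sh
SERVER_IP="${SERVER_IP:-127.0.0.1}"   # set to the server VM's address

# Return 0 if a TCP connection to host:port succeeds within 1 second.
port_open() {
  timeout 1 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Assumed default ports; adjust if the servers were started with -p/--port.
for spec in iperf3:5201 sockperf:11111 netserver:12865; do
  name="${spec%%:*}" port="${spec##*:}"
  if port_open "$SERVER_IP" "$port"; then
    echo "$name: listening on $port"
  else
    echo "$name: NOT reachable on $port" >&2
  fi
done
```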
### Client Tests

```sh
# Test with virtio backend
./run-all.sh 192.168.1.100 virtio

# Test with vhost-net backend
./run-all.sh 192.168.1.100 vhost-net

# Test with macvtap backend
./run-all.sh 192.168.1.100 macvtap
```
## Comparison

After running tests with all backends:

```sh
./compare.sh results/
```
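`compare.sh` itself is not shown here, but the heart of any comparison is pulling one number out of each result file. A sketch for the throughput files, assuming iperf3 was run with `--json` (its output repeats `bits_per_second`, with the end-of-test summary last) and that `jq` may not be installed; the helper name is illustrative:

```sh
# Extract the final throughput figure (Gbps) from an iperf3 --json result.
# Keeps the last "bits_per_second" in the file, i.e. the end-of-test summary.
gbps_from_iperf3_json() {
  awk -F': ' '
    /"bits_per_second"/ { v = $2 }
    END { sub(/,.*/, "", v); printf "%.2f\n", v / 1e9 }
  ' "$1"
}

# Usage: gbps_from_iperf3_json results/virtio/2024-01-15_143022/throughput.json
```

Grep-style JSON parsing like this is fragile; with `jq` available, `jq .end.sum_received.bits_per_second` would be the more robust route.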
## Output

Results are saved to `results/<backend>/<timestamp>/`:

```
results/
├── virtio/
│   └── 2024-01-15_143022/
│       ├── throughput.json
│       ├── latency.txt
│       └── pps.txt
├── vhost-net/
│   └── ...
└── macvtap/
    └── ...
```
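The per-run directory in the tree above follows a sortable timestamp pattern, so later runs always sort after earlier ones. A sketch of how a script could create it; the variable names are illustrative, not necessarily what `run-all.sh` uses:

```sh
backend="virtio"   # illustrative; run-all.sh presumably takes this as its second argument

# Sortable timestamp matching the layout above: YYYY-MM-DD_HHMMSS
run_dir="results/${backend}/$(date +%Y-%m-%d_%H%M%S)"
mkdir -p "$run_dir"
echo "$run_dir"
```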
## Test Details

### Throughput Tests (throughput.sh)

| Test | Tool | Command | Metric |
|---|---|---|---|
| TCP Single | iperf3 | `-c <ip> -t 30` | Gbps |
| TCP Multi-8 | iperf3 | `-c <ip> -P 8 -t 30` | Gbps |
| UDP Max | iperf3 | `-c <ip> -u -b 0 -t 30` | Gbps, Loss % |
### Latency Tests (latency.sh)

| Test | Tool | Command | Metric |
|---|---|---|---|
| ICMP Ping | ping | `-c 1000 -i 0.01` | avg/p50/p95/p99 µs |
| TCP Latency | sockperf | `pp -i <ip> -t 30` | avg/p50/p95/p99 µs |
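`ping` itself only reports min/avg/max/mdev, so the percentile columns above imply post-processing of the per-packet times. A sketch of how `latency.sh` might derive them (nearest-rank percentiles, milliseconds converted to µs); the function name is an assumption:

```sh
# Reduce raw `ping` output to nearest-rank percentiles, reported in µs.
# Reads stdin, e.g.: ping -c 1000 -i 0.01 <ip> | ping_percentiles
ping_percentiles() {
  grep -o 'time=[0-9.]*' | cut -d= -f2 | sort -n | awk '
    # nearest-rank index for an integer percentile: ceil(pct * NR / 100)
    function rank(pct,  i) { i = int((pct * NR + 99) / 100); return (i < 1) ? 1 : i }
    { v[NR] = $1 }
    END {
      if (NR == 0) exit 1
      printf "p50=%.0fus p95=%.0fus p99=%.0fus\n",
             v[rank(50)] * 1000, v[rank(95)] * 1000, v[rank(99)] * 1000
    }'
}
```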
### PPS Tests (pps.sh)

| Test | Tool | Command | Metric |
|---|---|---|---|
| 64-byte UDP | iperf3 | `-u -l 64 -b 0` | packets/sec |
| TCP RR | netperf | `TCP_RR -l 30` | trans/sec |
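For the UDP PPS figure, iperf3's `--json` summary already carries a total packet count and the elapsed duration, so packets/sec falls out as a ratio. A sketch, with the same caveat as any grep-style JSON parsing (the quoted-key match deliberately skips `"lost_packets"`):

```sh
# Derive packets/sec from an iperf3 --json UDP result: the end-of-test
# summary carries total "packets" and elapsed "seconds"; pps is their ratio.
pps_from_iperf3_json() {
  awk -F': ' '
    /"packets"/ { p = $2 }
    /"seconds"/ { s = $2 }
    END { sub(/,.*/, "", p); sub(/,.*/, "", s)
          if (s > 0) printf "%.0f\n", p / s }
  ' "$1"
}

# Usage: pps_from_iperf3_json results/virtio/2024-01-15_143022/throughput.json
```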
## Interpreting Results

### What to Look For

- Throughput: vhost-net should deliver roughly 2-3x virtio; macvtap should approach line-rate
- Latency: macvtap lowest, vhost-net in the middle, virtio highest
- PPS: the best indicator of per-packet CPU overhead

### Red Flags

- TCP throughput < 1 Gbps on a 10G link → check that NIC offloads are enabled
- Latency p99 > 10x p50 → jitter; check CPU frequency scaling (see Troubleshooting)
- UDP loss > 1% → buffer tuning needed
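For the first red flag, `ethtool -k <iface>` lists each offload feature with its current state. A sketch that filters that output down to the disabled ones; the interface name and the feature subset matched here are assumptions:

```sh
# List NIC offload features that are currently off.
# Feed it the output of `ethtool -k <iface>`.
disabled_offloads() {
  awk -F': ' '/segmentation-offload|receive-offload|checksumming/ && $2 ~ /^off/ {
    gsub(/^[ \t]+/, "", $1); print $1
  }'
}

# Usage: ethtool -k eth0 | disabled_offloads
```

If TSO/GSO/GRO show up here, single-stream TCP throughput will suffer badly, because every segment is then built and processed at MTU size.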
## Troubleshooting

### iperf3 connection refused

```sh
# Ensure the server is running and listening
ss -tlnp | grep 5201
```

### sockperf not found

```sh
# Rebuild with dependencies
./setup.sh
```

### Inconsistent results

```sh
# Disable CPU frequency scaling
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```