Volt VMM (Neutron Stardust): source-available under AGPSL v5.0
KVM-based microVMM for the Volt platform:
- Sub-second VM boot times
- Minimal memory footprint
- Landlock LSM + seccomp security
- Virtio device support
- Custom kernel management

Copyright (c) Armored Gates LLC. All rights reserved. Licensed under AGPSL v5.0
158
benchmarks/README.md
Normal file
@@ -0,0 +1,158 @@
# Volt Network Benchmarks

Comprehensive benchmark suite for comparing network backend performance in Volt VMs.

## Quick Start

```bash
# Install dependencies (run once on each test machine)
./setup.sh

# Run full benchmark suite
./run-all.sh <server-ip> <backend-name>

# Or run individual tests
./throughput.sh <server-ip> <backend-name>
./latency.sh <server-ip> <backend-name>
./pps.sh <server-ip> <backend-name>
```

## Test Architecture

```
┌─────────────────┐         ┌─────────────────┐
│   Client VM     │         │   Server VM     │
│  (runs tests)   │◄───────►│  (runs servers) │
│                 │         │                 │
│  ./throughput.sh│         │  iperf3 -s      │
│  ./latency.sh   │         │  sockperf sr    │
│  ./pps.sh       │         │  netserver      │
└─────────────────┘         └─────────────────┘
```

## Backends Tested

| Backend | Description | Expected Performance |
|---------|-------------|----------------------|
| `virtio` | Pure virtio-net (QEMU userspace) | Baseline |
| `vhost-net` | vhost-net kernel acceleration | ~2-3x throughput |
| `macvtap` | Direct host NIC passthrough | Near line-rate |
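
Which backend a VM actually gets depends on host-side configuration, so it is worth sanity-checking before attributing numbers to a backend. A hedged sketch of such a check — `backend_hint` is a hypothetical helper, not part of the suite; on a real host you would feed it the output of `lsmod`:

```bash
#!/usr/bin/env bash
# Hypothetical helper: infer from `lsmod` output whether vhost-net
# kernel acceleration is available. Pure string inspection, so it can
# be exercised without a live VM.
backend_hint() {
    local lsmod_out="$1"
    if grep -q '^vhost_net' <<< "$lsmod_out"; then
        echo "vhost-net"
    else
        echo "virtio (userspace)"
    fi
}

# On a real host: backend_hint "$(lsmod)"
```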

## Running Benchmarks

### Prerequisites

1. Two VMs with network connectivity
2. Root/sudo access on both
3. Firewall rules allowing test traffic

### Server Setup

On the server VM, start the test servers:

```bash
# iperf3 server (TCP/UDP throughput)
iperf3 -s -D

# sockperf server (latency)
sockperf sr --daemonize

# netperf server (PPS)
netserver
```

### Client Tests

```bash
# Test with virtio backend
./run-all.sh 192.168.1.100 virtio

# Test with vhost-net backend
./run-all.sh 192.168.1.100 vhost-net

# Test with macvtap backend
./run-all.sh 192.168.1.100 macvtap
```

### Comparison

After running tests with all backends:

```bash
./compare.sh results/
```

## Output

Results are saved to `results/<backend>/<timestamp>/`:

```
results/
├── virtio/
│   └── 2024-01-15_143022/
│       ├── throughput.json
│       ├── latency.txt
│       └── pps.txt
├── vhost-net/
│   └── ...
└── macvtap/
    └── ...
```

## Test Details

### Throughput Tests (`throughput.sh`)

| Test | Tool | Command | Metric |
|------|------|---------|--------|
| TCP Single | iperf3 | `-c <ip> -t 30` | Gbps |
| TCP Multi-8 | iperf3 | `-c <ip> -P 8 -t 30` | Gbps |
| UDP Max | iperf3 | `-c <ip> -u -b 0 -t 30` | Gbps, Loss% |

### Latency Tests (`latency.sh`)

| Test | Tool | Command | Metric |
|------|------|---------|--------|
| ICMP Ping | ping | `-c 1000 -i 0.01` | avg/p50/p95/p99 µs |
| TCP Latency | sockperf | `pp -i <ip> -t 30` | avg/p50/p95/p99 µs |
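
The percentile columns are derived by the scripts with a nearest-rank method over the raw samples: sort numerically, then take the ceil(n·p/100)-th smallest value. A minimal standalone sketch of that calculation (the `percentile` helper name is illustrative):

```bash
#!/usr/bin/env bash
# Nearest-rank percentile over a file of newline-separated samples,
# mirroring the approach in latency.sh. `percentile` is an
# illustrative helper, not part of the suite.
percentile() {
    local p="$1" file="$2"
    local n idx
    n=$(wc -l < "$file")
    idx=$(( (n * p + 99) / 100 ))   # integer ceiling of n*p/100
    [ "$idx" -lt 1 ] && idx=1       # clamp for tiny sample sets
    sort -n "$file" | sed -n "${idx}p"
}
```

For example, over five samples the P50 index is ceil(5·50/100) = 3, i.e. the median.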

### PPS Tests (`pps.sh`)

| Test | Tool | Command | Metric |
|------|------|---------|--------|
| 64-byte UDP | iperf3 | `-u -l 64 -b 0` | packets/sec |
| TCP RR | netperf | `-t TCP_RR -l 30` | trans/sec |
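
iperf3 does not report packets/sec directly; the scripts derive it from the total packet count in the JSON output divided by the test duration. A minimal sketch of that arithmetic (`pps_from_packets` is an illustrative name):

```bash
#!/usr/bin/env bash
# Sketch of the PPS arithmetic used by pps.sh: total packets sent,
# divided by test duration, gives packets/sec. Illustrative helper,
# not part of the suite.
pps_from_packets() {
    local packets="$1" duration="$2"
    echo $(( packets / duration ))
}

pps_from_packets 4500000 30   # 4.5M packets over a 30 s run → 150000 pps
```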

## Interpreting Results

### What to Look For

1. **Throughput**: vhost-net should be 2-3x virtio, macvtap near line-rate
2. **Latency**: macvtap lowest, vhost-net middle, virtio highest
3. **PPS**: Best indicator of CPU overhead per packet

### Red Flags

- TCP throughput < 1 Gbps on 10G link → Check offloading
- Latency P99 > 10x P50 → Indicates jitter issues
- UDP loss > 1% → Buffer tuning needed
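
The jitter red flag above can be checked mechanically against a latency summary. A minimal sketch, assuming P50/P99 values in µs (`flag_jitter` is an illustrative name, not part of the suite):

```bash
#!/usr/bin/env bash
# Illustrative encoding of the "P99 > 10x P50" jitter red flag.
# awk handles the floating-point comparison.
flag_jitter() {
    awk -v p50="$1" -v p99="$2" \
        'BEGIN { if (p99 > 10 * p50) print "jitter"; else print "ok" }'
}

flag_jitter 52.1 890.4   # P99 ≈ 17x P50 → prints "jitter"
```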

## Troubleshooting

### iperf3 connection refused

```bash
# Ensure server is running
ss -tlnp | grep 5201
```

### sockperf not found

```bash
# Rebuild with dependencies
./setup.sh
```

### Inconsistent results

```bash
# Disable CPU frequency scaling
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```
236
benchmarks/compare.sh
Executable file
@@ -0,0 +1,236 @@
#!/bin/bash
# Volt Network Benchmark - Backend Comparison
# Generates a side-by-side comparison of all backends

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
RESULTS_BASE="${1:-${SCRIPT_DIR}/results}"

echo "╔══════════════════════════════════════════════════════════════╗"
echo "║                Volt Backend Comparison Report                ║"
echo "╚══════════════════════════════════════════════════════════════╝"
echo ""
echo "Results directory: $RESULTS_BASE"
echo "Generated: $(date)"
echo ""

# Find all backends with results
BACKENDS=()
for dir in "${RESULTS_BASE}"/*/; do
    if [ -d "$dir" ]; then
        backend=$(basename "$dir")
        BACKENDS+=("$backend")
    fi
done

if [ ${#BACKENDS[@]} -eq 0 ]; then
    echo "ERROR: No results found in $RESULTS_BASE"
    echo "Run benchmarks first with: ./run-all.sh <server-ip> <backend-name>"
    exit 1
fi

echo "Found backends: ${BACKENDS[*]}"
echo ""

# Get the latest result directory for a backend
get_latest_result() {
    local backend="$1"
    ls -td "${RESULTS_BASE}/${backend}"/*/ 2>/dev/null | head -1
}

# Extract a metric from a JSON file, with a fallback default
get_json_metric() {
    local file="$1"
    local path="$2"
    local default="${3:-N/A}"

    if [ -f "$file" ] && command -v jq &> /dev/null; then
        result=$(jq -r "$path // \"$default\"" "$file" 2>/dev/null)
        echo "${result:-$default}"
    else
        echo "$default"
    fi
}

# Format bits/sec as Gbps
format_gbps() {
    local bps="$1"
    if [ "$bps" = "N/A" ] || [ -z "$bps" ] || [ "$bps" = "0" ]; then
        echo "N/A"
    else
        printf "%.2f" $(echo "$bps / 1000000000" | bc -l 2>/dev/null || echo "0")
    fi
}

# Collect data for comparison
declare -A TCP_SINGLE TCP_MULTI UDP_MAX ICMP_P50 ICMP_P99 PPS_64

for backend in "${BACKENDS[@]}"; do
    result_dir=$(get_latest_result "$backend")
    if [ -z "$result_dir" ]; then
        continue
    fi

    # Throughput
    tcp_single_bps=$(get_json_metric "${result_dir}/tcp-single.json" '.end.sum_sent.bits_per_second')
    TCP_SINGLE[$backend]=$(format_gbps "$tcp_single_bps")

    tcp_multi_bps=$(get_json_metric "${result_dir}/tcp-multi-8.json" '.end.sum_sent.bits_per_second')
    TCP_MULTI[$backend]=$(format_gbps "$tcp_multi_bps")

    udp_max_bps=$(get_json_metric "${result_dir}/udp-max.json" '.end.sum.bits_per_second')
    UDP_MAX[$backend]=$(format_gbps "$udp_max_bps")

    # Latency
    # Clear variables from any previously sourced summary so one backend's
    # numbers cannot leak into the next if a file is missing a key
    unset ICMP_P50_US ICMP_P99_US
    if [ -f "${result_dir}/ping-summary.env" ]; then
        source "${result_dir}/ping-summary.env"
        ICMP_P50[$backend]="${ICMP_P50_US:-N/A}"
        ICMP_P99[$backend]="${ICMP_P99_US:-N/A}"
    else
        ICMP_P50[$backend]="N/A"
        ICMP_P99[$backend]="N/A"
    fi

    # PPS
    if [ -f "${result_dir}/udp-64byte.json" ]; then
        packets=$(get_json_metric "${result_dir}/udp-64byte.json" '.end.sum.packets')
        # Assume 30s duration if not specified
        if [ "$packets" != "N/A" ] && [ -n "$packets" ]; then
            pps=$(echo "$packets / 30" | bc 2>/dev/null || echo "N/A")
            PPS_64[$backend]="$pps"
        else
            PPS_64[$backend]="N/A"
        fi
    else
        PPS_64[$backend]="N/A"
    fi
done

# Print comparison tables
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "  THROUGHPUT COMPARISON (Gbps)"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

# Header
printf "%-15s" "Backend"
printf "%15s" "TCP Single"
printf "%15s" "TCP Multi-8"
printf "%15s" "UDP Max"
echo ""

printf "%-15s" "-------"
printf "%15s" "----------"
printf "%15s" "-----------"
printf "%15s" "-------"
echo ""

for backend in "${BACKENDS[@]}"; do
    printf "%-15s" "$backend"
    printf "%15s" "${TCP_SINGLE[$backend]:-N/A}"
    printf "%15s" "${TCP_MULTI[$backend]:-N/A}"
    printf "%15s" "${UDP_MAX[$backend]:-N/A}"
    echo ""
done

echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "  LATENCY COMPARISON (µs)"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

printf "%-15s" "Backend"
printf "%15s" "ICMP P50"
printf "%15s" "ICMP P99"
echo ""

printf "%-15s" "-------"
printf "%15s" "--------"
printf "%15s" "--------"
echo ""

for backend in "${BACKENDS[@]}"; do
    printf "%-15s" "$backend"
    printf "%15s" "${ICMP_P50[$backend]:-N/A}"
    printf "%15s" "${ICMP_P99[$backend]:-N/A}"
    echo ""
done

echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "  PPS COMPARISON (packets/sec)"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

printf "%-15s" "Backend"
printf "%15s" "64-byte UDP"
echo ""

printf "%-15s" "-------"
printf "%15s" "-----------"
echo ""

for backend in "${BACKENDS[@]}"; do
    printf "%-15s" "$backend"
    printf "%15s" "${PPS_64[$backend]:-N/A}"
    echo ""
done

# Generate markdown report
REPORT_FILE="${RESULTS_BASE}/COMPARISON.md"
{
    echo "# Volt Backend Comparison"
    echo ""
    echo "Generated: $(date)"
    echo ""
    echo "## Throughput (Gbps)"
    echo ""
    echo "| Backend | TCP Single | TCP Multi-8 | UDP Max |"
    echo "|---------|------------|-------------|---------|"
    for backend in "${BACKENDS[@]}"; do
        echo "| $backend | ${TCP_SINGLE[$backend]:-N/A} | ${TCP_MULTI[$backend]:-N/A} | ${UDP_MAX[$backend]:-N/A} |"
    done
    echo ""
    echo "## Latency (µs)"
    echo ""
    echo "| Backend | ICMP P50 | ICMP P99 |"
    echo "|---------|----------|----------|"
    for backend in "${BACKENDS[@]}"; do
        echo "| $backend | ${ICMP_P50[$backend]:-N/A} | ${ICMP_P99[$backend]:-N/A} |"
    done
    echo ""
    echo "## Packets Per Second"
    echo ""
    echo "| Backend | 64-byte UDP PPS |"
    echo "|---------|-----------------|"
    for backend in "${BACKENDS[@]}"; do
        echo "| $backend | ${PPS_64[$backend]:-N/A} |"
    done
    echo ""
    echo "## Analysis"
    echo ""
    echo "### Expected Performance Hierarchy"
    echo ""
    echo "1. **macvtap** - Direct host NIC passthrough, near line-rate"
    echo "2. **vhost-net** - Kernel datapath, 2-3x virtio throughput"
    echo "3. **virtio** - QEMU userspace, baseline performance"
    echo ""
    echo "### Key Observations"
    echo ""
    echo "- TCP Multi-stream shows aggregate bandwidth capability"
    echo "- P99 latency reveals worst-case jitter"
    echo "- 64-byte PPS shows raw packet processing overhead"
    echo ""
} > "$REPORT_FILE"

echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Comparison report saved to: $REPORT_FILE"
echo ""
echo "Performance Hierarchy (expected):"
echo "  macvtap > vhost-net > virtio"
echo ""
echo "Key insight: If vhost-net isn't 2-3x faster than virtio,"
echo "check that the vhost_net kernel module is loaded and in use."
208
benchmarks/latency.sh
Executable file
@@ -0,0 +1,208 @@
#!/bin/bash
# Volt Network Benchmark - Latency Tests
# Tests ICMP and TCP latency with percentile analysis

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Parse arguments
SERVER_IP="${1:?Usage: $0 <server-ip> [backend-name] [count] [duration]}"
BACKEND="${2:-unknown}"
PING_COUNT="${3:-1000}"
SOCKPERF_DURATION="${4:-30}"

# Setup results directory (reuse run-all.sh's shared timestamp when set,
# so a full suite run lands in a single directory)
TIMESTAMP="${BENCHMARK_TIMESTAMP:-$(date +%Y-%m-%d_%H%M%S)}"
RESULTS_DIR="${SCRIPT_DIR}/results/${BACKEND}/${TIMESTAMP}"
mkdir -p "$RESULTS_DIR"

echo "=== Volt Latency Benchmark ==="
echo "Server: $SERVER_IP"
echo "Backend: $BACKEND"
echo "Ping count: $PING_COUNT"
echo "Results: $RESULTS_DIR"
echo ""

# Calculate min/avg/p50/p95/p99/max from a file of raw samples
calc_percentiles() {
    local file="$1"
    local count=$(wc -l < "$file")

    if [ "$count" -eq 0 ]; then
        # Six fields, matching the "min avg p50 p95 p99 max" output below
        echo "N/A N/A N/A N/A N/A N/A"
        return
    fi

    # Sort numerically
    sort -n "$file" > "${file}.sorted"

    # Nearest-rank indices (1-indexed for sed)
    local p50_idx=$(( (count * 50 + 99) / 100 ))
    local p95_idx=$(( (count * 95 + 99) / 100 ))
    local p99_idx=$(( (count * 99 + 99) / 100 ))

    # Ensure indices are at least 1
    [ "$p50_idx" -lt 1 ] && p50_idx=1
    [ "$p95_idx" -lt 1 ] && p95_idx=1
    [ "$p99_idx" -lt 1 ] && p99_idx=1

    local min=$(head -1 "${file}.sorted")
    local max=$(tail -1 "${file}.sorted")
    local p50=$(sed -n "${p50_idx}p" "${file}.sorted")
    local p95=$(sed -n "${p95_idx}p" "${file}.sorted")
    local p99=$(sed -n "${p99_idx}p" "${file}.sorted")

    # Calculate average
    local avg=$(awk '{sum+=$1} END {printf "%.3f", sum/NR}' "${file}.sorted")

    rm -f "${file}.sorted"

    echo "$min $avg $p50 $p95 $p99 $max"
}

# ICMP Ping Test
echo "[$(date +%H:%M:%S)] Running ICMP ping test (${PING_COUNT} packets)..."
PING_RAW="${RESULTS_DIR}/ping-raw.txt"
PING_LATENCIES="${RESULTS_DIR}/ping-latencies.txt"

if ping -c "$PING_COUNT" -i 0.01 "$SERVER_IP" > "$PING_RAW" 2>&1; then
    # Extract latency values (time=X.XX ms)
    grep -oP 'time=\K[0-9.]+' "$PING_RAW" > "$PING_LATENCIES"

    # Convert to microseconds for consistency
    awk '{print $1 * 1000}' "$PING_LATENCIES" > "${PING_LATENCIES}.us"
    mv "${PING_LATENCIES}.us" "$PING_LATENCIES"

    read min avg p50 p95 p99 max <<< $(calc_percentiles "$PING_LATENCIES")

    echo "  ICMP Ping Results (µs):"
    printf "    Min: %10.1f\n" "$min"
    printf "    Avg: %10.1f\n" "$avg"
    printf "    P50: %10.1f\n" "$p50"
    printf "    P95: %10.1f\n" "$p95"
    printf "    P99: %10.1f\n" "$p99"
    printf "    Max: %10.1f\n" "$max"

    # Save summary
    {
        echo "ICMP_MIN_US=$min"
        echo "ICMP_AVG_US=$avg"
        echo "ICMP_P50_US=$p50"
        echo "ICMP_P95_US=$p95"
        echo "ICMP_P99_US=$p99"
        echo "ICMP_MAX_US=$max"
    } > "${RESULTS_DIR}/ping-summary.env"
else
    echo "  → FAILED (check if ICMP is allowed)"
fi

echo ""

# TCP Latency with sockperf (ping-pong mode)
echo "[$(date +%H:%M:%S)] Running TCP latency test (sockperf pp, ${SOCKPERF_DURATION}s)..."

# Check if the sockperf server is reachable
if timeout 5 bash -c "echo > /dev/tcp/$SERVER_IP/11111" 2>/dev/null; then
    SOCKPERF_RAW="${RESULTS_DIR}/sockperf-raw.txt"
    SOCKPERF_LATENCIES="${RESULTS_DIR}/sockperf-latencies.txt"

    # Run sockperf in ping-pong mode
    if sockperf pp -i "$SERVER_IP" -t "$SOCKPERF_DURATION" --full-log "$SOCKPERF_RAW" > "${RESULTS_DIR}/sockperf-output.txt" 2>&1; then
        # Extract latency values from full log (if available)
        if [ -f "$SOCKPERF_RAW" ]; then
            # sockperf full-log format: txTime, rxTime, latency (nsec)
            awk '{print $3/1000}' "$SOCKPERF_RAW" > "$SOCKPERF_LATENCIES"
        else
            # Parse from summary output
            grep -oP 'latency=\K[0-9.]+' "${RESULTS_DIR}/sockperf-output.txt" > "$SOCKPERF_LATENCIES" 2>/dev/null || true
        fi

        if [ -s "$SOCKPERF_LATENCIES" ]; then
            read min avg p50 p95 p99 max <<< $(calc_percentiles "$SOCKPERF_LATENCIES")

            echo "  TCP Latency Results (µs):"
            printf "    Min: %10.1f\n" "$min"
            printf "    Avg: %10.1f\n" "$avg"
            printf "    P50: %10.1f\n" "$p50"
            printf "    P95: %10.1f\n" "$p95"
            printf "    P99: %10.1f\n" "$p99"
            printf "    Max: %10.1f\n" "$max"

            {
                echo "TCP_MIN_US=$min"
                echo "TCP_AVG_US=$avg"
                echo "TCP_P50_US=$p50"
                echo "TCP_P95_US=$p95"
                echo "TCP_P99_US=$p99"
                echo "TCP_MAX_US=$max"
            } > "${RESULTS_DIR}/sockperf-summary.env"
        else
            # Parse summary from sockperf output
            echo "  → Parsing summary output..."
            grep -E "(avg|percentile|latency)" "${RESULTS_DIR}/sockperf-output.txt" || true
        fi
    else
        echo "  → FAILED"
    fi
else
    echo "  → SKIPPED (sockperf server not running on $SERVER_IP:11111)"
    echo "  → Run 'sockperf sr' on the server"
fi

echo ""

# UDP Latency with sockperf
echo "[$(date +%H:%M:%S)] Running UDP latency test (sockperf under-load, ${SOCKPERF_DURATION}s)..."

# UDP is connectionless, so there is no reliable reachability probe;
# run the test and let sockperf report any failure
SOCKPERF_UDP_RAW="${RESULTS_DIR}/sockperf-udp-raw.txt"

if sockperf under-load -i "$SERVER_IP" -t "$SOCKPERF_DURATION" --full-log "$SOCKPERF_UDP_RAW" > "${RESULTS_DIR}/sockperf-udp-output.txt" 2>&1; then
    echo "  → Complete"
    # Parse percentiles from sockperf output
    grep -E "(percentile|avg-latency)" "${RESULTS_DIR}/sockperf-udp-output.txt" | head -10
else
    echo "  → FAILED or server not running"
fi

# Generate overall summary
echo ""
echo "=== Latency Summary ==="
SUMMARY_FILE="${RESULTS_DIR}/latency-summary.txt"
{
    echo "Volt Latency Benchmark Results"
    echo "===================================="
    echo "Backend: $BACKEND"
    echo "Server: $SERVER_IP"
    echo "Date: $(date)"
    echo ""

    if [ -f "${RESULTS_DIR}/ping-summary.env" ]; then
        echo "ICMP Ping Latency (µs):"
        source "${RESULTS_DIR}/ping-summary.env"
        printf "  %-8s %10.1f\n" "Min:" "$ICMP_MIN_US"
        printf "  %-8s %10.1f\n" "Avg:" "$ICMP_AVG_US"
        printf "  %-8s %10.1f\n" "P50:" "$ICMP_P50_US"
        printf "  %-8s %10.1f\n" "P95:" "$ICMP_P95_US"
        printf "  %-8s %10.1f\n" "P99:" "$ICMP_P99_US"
        printf "  %-8s %10.1f\n" "Max:" "$ICMP_MAX_US"
        echo ""
    fi

    if [ -f "${RESULTS_DIR}/sockperf-summary.env" ]; then
        echo "TCP Latency (µs):"
        source "${RESULTS_DIR}/sockperf-summary.env"
        printf "  %-8s %10.1f\n" "Min:" "$TCP_MIN_US"
        printf "  %-8s %10.1f\n" "Avg:" "$TCP_AVG_US"
        printf "  %-8s %10.1f\n" "P50:" "$TCP_P50_US"
        printf "  %-8s %10.1f\n" "P95:" "$TCP_P95_US"
        printf "  %-8s %10.1f\n" "P99:" "$TCP_P99_US"
        printf "  %-8s %10.1f\n" "Max:" "$TCP_MAX_US"
    fi
} | tee "$SUMMARY_FILE"

echo ""
echo "Full results saved to: $RESULTS_DIR"
173
benchmarks/pps.sh
Executable file
@@ -0,0 +1,173 @@
#!/bin/bash
# Volt Network Benchmark - Packets Per Second Tests
# Tests small packet performance (best indicator of CPU overhead)

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Parse arguments
SERVER_IP="${1:?Usage: $0 <server-ip> [backend-name] [duration]}"
BACKEND="${2:-unknown}"
DURATION="${3:-30}"

# Setup results directory (reuse run-all.sh's shared timestamp when set)
TIMESTAMP="${BENCHMARK_TIMESTAMP:-$(date +%Y-%m-%d_%H%M%S)}"
RESULTS_DIR="${SCRIPT_DIR}/results/${BACKEND}/${TIMESTAMP}"
mkdir -p "$RESULTS_DIR"

echo "=== Volt PPS Benchmark ==="
echo "Server: $SERVER_IP"
echo "Backend: $BACKEND"
echo "Duration: ${DURATION}s per test"
echo "Results: $RESULTS_DIR"
echo ""
echo "Note: Small packet tests show virtualization overhead best"
echo ""

# Format large numbers (e.g. 1234567 -> 1.23M)
format_number() {
    local num="$1"
    if [ -z "$num" ] || [ "$num" = "N/A" ]; then
        echo "N/A"
    elif (( $(echo "$num >= 1000000" | bc -l 2>/dev/null || echo 0) )); then
        printf "%.2fM" $(echo "$num / 1000000" | bc -l)
    elif (( $(echo "$num >= 1000" | bc -l 2>/dev/null || echo 0) )); then
        printf "%.2fK" $(echo "$num / 1000" | bc -l)
    else
        printf "%.0f" "$num"
    fi
}

# UDP Small Packet Tests with iperf3
echo "--- UDP Small Packet Tests (iperf3) ---"
echo ""

for pkt_size in 64 128 256 512; do
    echo "[$(date +%H:%M:%S)] Testing ${pkt_size}-byte UDP packets..."

    output_file="${RESULTS_DIR}/udp-${pkt_size}byte.json"

    # -l sets the UDP payload size; each packet also carries 28 bytes of IP+UDP headers
    # -b 0 = unlimited bandwidth (find max PPS)
    if iperf3 -c "$SERVER_IP" -u -l "$pkt_size" -b 0 -t "$DURATION" -J > "$output_file" 2>&1; then
        if command -v jq &> /dev/null && [ -f "$output_file" ]; then
            packets=$(jq -r '.end.sum.packets // 0' "$output_file" 2>/dev/null)
            pps=$(echo "scale=0; $packets / $DURATION" | bc 2>/dev/null || echo "N/A")
            bps=$(jq -r '.end.sum.bits_per_second // 0' "$output_file" 2>/dev/null)
            mbps=$(echo "scale=2; $bps / 1000000" | bc 2>/dev/null || echo "N/A")
            loss=$(jq -r '.end.sum.lost_percent // 0' "$output_file" 2>/dev/null)

            printf "  %4d bytes: %12s pps (%s Mbps, loss: %.2f%%)\n" \
                "$pkt_size" "$(format_number $pps)" "$mbps" "$loss"
        else
            echo "  ${pkt_size} bytes: Complete (see JSON)"
        fi
    else
        echo "  ${pkt_size} bytes: FAILED"
    fi

    sleep 2
done

echo ""

# TCP Request/Response with netperf (best for measuring transaction rate)
echo "--- TCP Transaction Tests (netperf) ---"
echo ""

if command -v netperf &> /dev/null; then
    # TCP_RR - Request/Response (simulates real application traffic)
    echo "[$(date +%H:%M:%S)] Running TCP_RR (request/response)..."
    output_file="${RESULTS_DIR}/tcp-rr.txt"

    if netperf -H "$SERVER_IP" -l "$DURATION" -t TCP_RR > "$output_file" 2>&1; then
        # Extract transactions per second
        tps=$(tail -1 "$output_file" | awk '{print $NF}')
        echo "  TCP_RR: $(format_number $tps) trans/sec"
        echo "TCP_RR_TPS=$tps" > "${RESULTS_DIR}/tcp-rr.env"
    else
        echo "  TCP_RR: FAILED (is netserver running?)"
    fi

    sleep 2

    # TCP_CRR - Connect/Request/Response (includes connection setup overhead)
    echo "[$(date +%H:%M:%S)] Running TCP_CRR (connect/request/response)..."
    output_file="${RESULTS_DIR}/tcp-crr.txt"

    if netperf -H "$SERVER_IP" -l "$DURATION" -t TCP_CRR > "$output_file" 2>&1; then
        tps=$(tail -1 "$output_file" | awk '{print $NF}')
        echo "  TCP_CRR: $(format_number $tps) trans/sec"
        echo "TCP_CRR_TPS=$tps" > "${RESULTS_DIR}/tcp-crr.env"
    else
        echo "  TCP_CRR: FAILED"
    fi

    sleep 2

    # UDP_RR - UDP Request/Response
    echo "[$(date +%H:%M:%S)] Running UDP_RR (request/response)..."
    output_file="${RESULTS_DIR}/udp-rr.txt"

    if netperf -H "$SERVER_IP" -l "$DURATION" -t UDP_RR > "$output_file" 2>&1; then
        tps=$(tail -1 "$output_file" | awk '{print $NF}')
        echo "  UDP_RR: $(format_number $tps) trans/sec"
        echo "UDP_RR_TPS=$tps" > "${RESULTS_DIR}/udp-rr.env"
    else
        echo "  UDP_RR: FAILED"
    fi
else
    echo "netperf not installed - skipping transaction tests"
    echo "Run ./setup.sh to install"
fi

echo ""

# Generate summary
echo "=== PPS Summary ==="
SUMMARY_FILE="${RESULTS_DIR}/pps-summary.txt"
{
    echo "Volt PPS Benchmark Results"
    echo "================================"
    echo "Backend: $BACKEND"
    echo "Server: $SERVER_IP"
    echo "Date: $(date)"
    echo "Duration: ${DURATION}s per test"
    echo ""
    echo "UDP Packet Rates:"
    echo "-----------------"

    for pkt_size in 64 128 256 512; do
        json_file="${RESULTS_DIR}/udp-${pkt_size}byte.json"
        if [ -f "$json_file" ] && command -v jq &> /dev/null; then
            packets=$(jq -r '.end.sum.packets // 0' "$json_file" 2>/dev/null)
            pps=$(echo "scale=0; $packets / $DURATION" | bc 2>/dev/null || echo "N/A")
            loss=$(jq -r '.end.sum.lost_percent // 0' "$json_file" 2>/dev/null)
            printf "  %4d bytes: %12s pps (loss: %.2f%%)\n" "$pkt_size" "$(format_number $pps)" "$loss"
        fi
    done

    echo ""
    echo "Transaction Rates:"
    echo "------------------"

    for test in tcp-rr tcp-crr udp-rr; do
        env_file="${RESULTS_DIR}/${test}.env"
        if [ -f "$env_file" ]; then
            source "$env_file"
            case "$test" in
                tcp-rr)  val="$TCP_RR_TPS" ;;
                tcp-crr) val="$TCP_CRR_TPS" ;;
                udp-rr)  val="$UDP_RR_TPS" ;;
            esac
            printf "  %-10s %12s trans/sec\n" "${test}:" "$(format_number $val)"
        fi
    done
} | tee "$SUMMARY_FILE"

echo ""
echo "Full results saved to: $RESULTS_DIR"
echo ""
echo "Key Insight: 64-byte PPS shows raw packet processing overhead."
echo "Higher PPS = lower virtualization overhead = better performance."
163
benchmarks/results-template.md
Normal file
@@ -0,0 +1,163 @@
# Volt Network Benchmark Results

## Test Environment

| Parameter | Value |
|-----------|-------|
| Date | YYYY-MM-DD |
| Host CPU | Intel Xeon E-2288G @ 3.70GHz |
| Host RAM | 64GB DDR4-2666 |
| Host NIC | Intel X710 10GbE |
| Host Kernel | 6.1.0-xx-amd64 |
| VM vCPUs | 4 |
| VM RAM | 8GB |
| Guest Kernel | 6.1.0-xx-amd64 |
| QEMU Version | 8.x.x |

## Test Configuration

- Duration: 30 seconds per test
- Ping count: 1000 packets
- iperf3 parallel streams: 8 (multi-stream tests)

---

## Results

### Throughput (Gbps)

| Test | virtio | vhost-net | macvtap |
|------|--------|-----------|---------|
| TCP Single Stream | | | |
| TCP Multi-8 Stream | | | |
| UDP Maximum | | | |
| TCP Reverse | | | |

### Latency (microseconds)

| Metric | virtio | vhost-net | macvtap |
|--------|--------|-----------|---------|
| ICMP P50 | | | |
| ICMP P95 | | | |
| ICMP P99 | | | |
| TCP P50 | | | |
| TCP P99 | | | |

### Packets Per Second

| Packet Size | virtio | vhost-net | macvtap |
|-------------|--------|-----------|---------|
| 64 bytes | | | |
| 128 bytes | | | |
| 256 bytes | | | |
| 512 bytes | | | |

### Transaction Rates (trans/sec)

| Test | virtio | vhost-net | macvtap |
|------|--------|-----------|---------|
| TCP_RR | | | |
| TCP_CRR | | | |
| UDP_RR | | | |

---

## Analysis

### Throughput Analysis

**TCP Single Stream:**
- virtio: X Gbps (baseline)
- vhost-net: X Gbps (Y% improvement)
- macvtap: X Gbps (Y% improvement)

**Key Finding:** [Describe the performance differences]

### Latency Analysis

**P99 Latency:**
- virtio: X µs
- vhost-net: X µs
- macvtap: X µs

**Jitter (P99/P50 ratio):**
- virtio: X.Xx
- vhost-net: X.Xx
- macvtap: X.Xx

**Key Finding:** [Describe latency characteristics]

### PPS Analysis

**64-byte Packets (best overhead indicator):**
- virtio: X pps
- vhost-net: X pps (Y% improvement)
- macvtap: X pps (Y% improvement)

**Key Finding:** [Describe per-packet overhead differences]

---

## Conclusions

### Performance Hierarchy

1. **macvtap** - Best for:
   - Maximum throughput requirements
   - Lowest latency needs
   - When the host NIC can be dedicated

2. **vhost-net** - Best for:
   - Multi-tenant environments
   - Good balance of performance and flexibility
   - Standard production workloads

3. **virtio** - Best for:
   - Development/testing
   - Maximum portability
   - When performance is not critical

### Recommendations

For Volt production VMs:

- Default: `vhost-net` (best balance)
- High-performance option: `macvtap` (when applicable)
- Compatibility fallback: `virtio`

### Anomalies or Issues

[Document any unexpected results, test failures, or areas needing investigation]

---

## Raw Data

Full test results available in:

- `results/virtio/TIMESTAMP/`
- `results/vhost-net/TIMESTAMP/`
- `results/macvtap/TIMESTAMP/`

---

## Reproducibility

To reproduce these results:

```bash
# On server VM
iperf3 -s -D
sockperf sr --daemonize
netserver

# On client VM (for each backend)
./run-all.sh <server-ip> virtio
./run-all.sh <server-ip> vhost-net
./run-all.sh <server-ip> macvtap

# Generate comparison
./compare.sh results/
```

---

*Report generated by Volt Benchmark Suite*
222
benchmarks/run-all.sh
Executable file
@@ -0,0 +1,222 @@
|
||||
#!/bin/bash
# Volt Network Benchmark - Full Suite Runner
# Runs all benchmarks and generates a comprehensive report

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Parse arguments
SERVER_IP="${1:?Usage: $0 <server-ip> [backend-name] [duration]}"
BACKEND="${2:-unknown}"
DURATION="${3:-30}"

# Create a shared timestamp for this run (exported so the individual
# test scripts can write into the same results directory)
BENCHMARK_TIMESTAMP=$(date +%Y-%m-%d_%H%M%S)
export BENCHMARK_TIMESTAMP
RESULTS_DIR="${SCRIPT_DIR}/results/${BACKEND}/${BENCHMARK_TIMESTAMP}"
mkdir -p "$RESULTS_DIR"

echo "╔══════════════════════════════════════════════════════════════╗"
echo "║              Volt Network Benchmark Suite                    ║"
echo "╚══════════════════════════════════════════════════════════════╝"
echo ""
echo "Configuration:"
echo "  Server:   $SERVER_IP"
echo "  Backend:  $BACKEND"
echo "  Duration: ${DURATION}s per test"
echo "  Results:  $RESULTS_DIR"
echo "  Started:  $(date)"
echo ""

# Record system information
echo "=== Recording System Info ==="
{
    echo "Volt Network Benchmark"
    echo "==========================="
    echo "Date: $(date)"
    echo "Backend: $BACKEND"
    echo "Server: $SERVER_IP"
    echo ""
    echo "--- Client System ---"
    echo "Hostname: $(hostname)"
    echo "Kernel: $(uname -r)"
    echo "CPU: $(grep 'model name' /proc/cpuinfo | head -1 | cut -d: -f2 | xargs)"
    echo "Cores: $(nproc)"
    echo ""
    echo "--- Network Interfaces ---"
    ip addr show 2>/dev/null || ifconfig
    echo ""
    echo "--- Network Stats Before ---"
    head -10 /proc/net/dev 2>/dev/null
} > "${RESULTS_DIR}/system-info.txt"

# Pre-flight checks
echo "=== Pre-flight Checks ==="
echo ""

check_server() {
    local port=$1
    local name=$2
    if timeout 3 bash -c "echo > /dev/tcp/$SERVER_IP/$port" 2>/dev/null; then
        echo "  ✓ $name ($SERVER_IP:$port)"
        return 0
    else
        echo "  ✗ $name ($SERVER_IP:$port) - not responding"
        return 1
    fi
}

IPERF_OK=0
SOCKPERF_OK=0
NETPERF_OK=0

check_server 5201 "iperf3" && IPERF_OK=1
check_server 11111 "sockperf" && SOCKPERF_OK=1
check_server 12865 "netperf" && NETPERF_OK=1

echo ""

if [ $IPERF_OK -eq 0 ]; then
    echo "ERROR: iperf3 server required but not running"
    echo "Start with: iperf3 -s"
    exit 1
fi

# Run benchmarks
echo "╔══════════════════════════════════════════════════════════════╗"
echo "║                    Running Benchmarks                        ║"
echo "╚══════════════════════════════════════════════════════════════╝"
echo ""

# Throughput tests
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "PHASE 1: Throughput Tests"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
"${SCRIPT_DIR}/throughput.sh" "$SERVER_IP" "$BACKEND" "$DURATION" 2>&1 | tee "${RESULTS_DIR}/throughput-log.txt"

echo ""
sleep 5

# Latency tests
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "PHASE 2: Latency Tests"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
"${SCRIPT_DIR}/latency.sh" "$SERVER_IP" "$BACKEND" 1000 "$DURATION" 2>&1 | tee "${RESULTS_DIR}/latency-log.txt"

echo ""
sleep 5

# PPS tests
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "PHASE 3: Packets Per Second Tests"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
"${SCRIPT_DIR}/pps.sh" "$SERVER_IP" "$BACKEND" "$DURATION" 2>&1 | tee "${RESULTS_DIR}/pps-log.txt"

# Collect all results into unified directory
echo ""
echo "=== Consolidating Results ==="

# Fold in any results the individual test scripts wrote under their own
# timestamps (a no-op when they honor BENCHMARK_TIMESTAMP)
nested_dir="${SCRIPT_DIR}/results/${BACKEND}"
if [ -d "$nested_dir" ]; then
    # Most recent subdirectory from this run
    latest=$(ls -td "${nested_dir}"/*/ 2>/dev/null | head -1)
    if [ -n "$latest" ] && [ "$latest" != "${RESULTS_DIR}/" ]; then
        cp -r "$latest"/* "$RESULTS_DIR/" 2>/dev/null || true
    fi
fi

# Generate final report
echo ""
echo "╔══════════════════════════════════════════════════════════════╗"
echo "║                        Final Report                          ║"
echo "╚══════════════════════════════════════════════════════════════╝"

REPORT_FILE="${RESULTS_DIR}/REPORT.md"
{
    echo "# Volt Network Benchmark Report"
    echo ""
    echo "## Configuration"
    echo ""
    echo "| Parameter | Value |"
    echo "|-----------|-------|"
    echo "| Backend | $BACKEND |"
    echo "| Server | $SERVER_IP |"
    echo "| Duration | ${DURATION}s per test |"
    echo "| Date | $(date) |"
    echo "| Hostname | $(hostname) |"
    echo ""
    echo "## Results Summary"
    echo ""

    # Throughput
    echo "### Throughput"
    echo ""
    echo "| Test | Result |"
    echo "|------|--------|"

    for json_file in "${RESULTS_DIR}"/tcp-*.json "${RESULTS_DIR}"/udp-*.json; do
        if [ -f "$json_file" ] && command -v jq &> /dev/null; then
            test_name=$(basename "$json_file" .json)
            # UDP results report under .end.sum; TCP under .end.sum_sent
            if [[ "$test_name" == udp-* ]]; then
                bps=$(jq -r '.end.sum.bits_per_second // 0' "$json_file" 2>/dev/null)
            else
                bps=$(jq -r '.end.sum_sent.bits_per_second // 0' "$json_file" 2>/dev/null)
            fi
            gbps=$(echo "scale=2; $bps / 1000000000" | bc 2>/dev/null || echo "N/A")
            echo "| $test_name | ${gbps} Gbps |"
        fi
    done 2>/dev/null

    echo ""

    # Latency
    echo "### Latency"
    echo ""
    if [ -f "${RESULTS_DIR}/ping-summary.env" ]; then
        source "${RESULTS_DIR}/ping-summary.env"
        echo "| Metric | ICMP (µs) |"
        echo "|--------|-----------|"
        echo "| P50 | $ICMP_P50_US |"
        echo "| P95 | $ICMP_P95_US |"
        echo "| P99 | $ICMP_P99_US |"
    fi

    echo ""

    # PPS
    echo "### Packets Per Second"
    echo ""
    echo "| Packet Size | PPS |"
    echo "|-------------|-----|"

    for pkt_size in 64 128 256 512; do
        json_file="${RESULTS_DIR}/udp-${pkt_size}byte.json"
        if [ -f "$json_file" ] && command -v jq &> /dev/null; then
            packets=$(jq -r '.end.sum.packets // 0' "$json_file" 2>/dev/null)
            pps=$(echo "scale=0; $packets / $DURATION" | bc 2>/dev/null || echo "N/A")
            echo "| ${pkt_size} bytes | $pps |"
        fi
    done 2>/dev/null

    echo ""
    echo "## Files"
    echo ""
    echo '```'
    ls -la "$RESULTS_DIR"
    echo '```'

} > "$REPORT_FILE"

cat "$REPORT_FILE"

echo ""
echo "╔══════════════════════════════════════════════════════════════╗"
echo "║                    Benchmark Complete                        ║"
echo "╚══════════════════════════════════════════════════════════════╝"
echo ""
echo "Results saved to: $RESULTS_DIR"
echo "Report: ${REPORT_FILE}"
echo "Completed: $(date)"
132
benchmarks/setup.sh
Executable file
@@ -0,0 +1,132 @@
#!/bin/bash
# Volt Network Benchmark - Dependency Setup
# Run on both client and server VMs

set -e

echo "=== Volt Network Benchmark Setup ==="
echo ""

# Detect package manager and the build dependencies needed for
# source builds (git is required for cloning)
if command -v apt-get &> /dev/null; then
    PKG_MGR="apt"
    INSTALL_CMD="sudo apt-get install -y"
    UPDATE_CMD="sudo apt-get update"
    BUILD_DEPS="git build-essential autoconf automake libtool"
elif command -v dnf &> /dev/null; then
    PKG_MGR="dnf"
    INSTALL_CMD="sudo dnf install -y"
    UPDATE_CMD="sudo dnf check-update || true"
    BUILD_DEPS="git gcc gcc-c++ make autoconf automake libtool"
elif command -v yum &> /dev/null; then
    PKG_MGR="yum"
    INSTALL_CMD="sudo yum install -y"
    UPDATE_CMD="sudo yum check-update || true"
    BUILD_DEPS="git gcc gcc-c++ make autoconf automake libtool"
else
    echo "ERROR: Unsupported package manager (need apt, dnf, or yum)"
    exit 1
fi
echo "Detected package manager: $PKG_MGR"

# Build a tool from its git repository when it is not packaged
build_from_source() {
    local name="$1"
    local repo="$2"
    echo "$name not in repos, building from source..."
    $INSTALL_CMD $BUILD_DEPS
    cd /tmp
    git clone "$repo" "$name"
    cd "$name"
    ./autogen.sh
    ./configure
    make
    sudo make install
    cd -
}

echo "[1/5] Updating package cache..."
$UPDATE_CMD

echo ""
echo "[2/5] Installing iperf3..."
$INSTALL_CMD iperf3

echo ""
echo "[3/5] Installing netperf..."
$INSTALL_CMD netperf || build_from_source netperf https://github.com/HewlettPackard/netperf.git

echo ""
echo "[4/5] Installing sockperf..."
$INSTALL_CMD sockperf 2>/dev/null || build_from_source sockperf https://github.com/Mellanox/sockperf.git

echo ""
echo "[5/5] Installing additional utilities..."
$INSTALL_CMD jq bc ethtool 2>/dev/null || true

echo ""
echo "=== Verifying Installation ==="
echo ""

check_tool() {
    if command -v "$1" &> /dev/null; then
        echo "✓ $1: $(command -v "$1")"
    else
        echo "✗ $1: NOT FOUND"
        return 1
    fi
}

FAILED=0
check_tool iperf3 || FAILED=1
check_tool netperf || FAILED=1
check_tool netserver || FAILED=1
check_tool sockperf || FAILED=1
check_tool jq || echo "  (jq optional, JSON parsing may fail)"
check_tool bc || echo "  (bc optional, calculations may fail)"

echo ""
if [ $FAILED -eq 0 ]; then
    echo "=== Setup Complete ==="
    echo ""
    echo "To start servers (run on server VM):"
    echo "  iperf3 -s -D"
    echo "  sockperf sr --daemonize"
    echo "  netserver"
else
    echo "=== Setup Incomplete ==="
    echo "Some tools failed to install. Check errors above."
    exit 1
fi
139
benchmarks/throughput.sh
Executable file
@@ -0,0 +1,139 @@
#!/bin/bash
# Volt Network Benchmark - Throughput Tests
# Tests TCP/UDP throughput using iperf3

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Parse arguments
SERVER_IP="${1:?Usage: $0 <server-ip> [backend-name] [duration]}"
BACKEND="${2:-unknown}"
DURATION="${3:-30}"

# Setup results directory (reuse the suite-wide timestamp when run via run-all.sh)
TIMESTAMP="${BENCHMARK_TIMESTAMP:-$(date +%Y-%m-%d_%H%M%S)}"
RESULTS_DIR="${SCRIPT_DIR}/results/${BACKEND}/${TIMESTAMP}"
mkdir -p "$RESULTS_DIR"

echo "=== Volt Throughput Benchmark ==="
echo "Server: $SERVER_IP"
echo "Backend: $BACKEND"
echo "Duration: ${DURATION}s per test"
echo "Results: $RESULTS_DIR"
echo ""

# Run a single iperf3 test and record its JSON output
run_iperf3() {
    local test_name="$1"
    local extra_args="$2"
    local output_file="${RESULTS_DIR}/${test_name}.json"

    echo "[$(date +%H:%M:%S)] Running: $test_name"

    # $extra_args is intentionally unquoted so it word-splits into flags
    if iperf3 -c "$SERVER_IP" -t "$DURATION" $extra_args -J > "$output_file" 2>&1; then
        # Extract key metrics
        if [ -f "$output_file" ] && command -v jq &> /dev/null; then
            local bps gbps
            bps=$(jq -r '.end.sum_sent.bits_per_second // .end.sum.bits_per_second // 0' "$output_file" 2>/dev/null)
            gbps=$(echo "scale=2; $bps / 1000000000" | bc 2>/dev/null || echo "N/A")
            echo "  → ${gbps} Gbps"
        else
            echo "  → Complete (see JSON for results)"
        fi
    else
        echo "  → FAILED"
        return 1
    fi
}

# Verify connectivity
echo "[$(date +%H:%M:%S)] Verifying connectivity to $SERVER_IP:5201..."
if ! timeout 5 bash -c "echo > /dev/tcp/$SERVER_IP/5201" 2>/dev/null; then
    echo "ERROR: Cannot connect to iperf3 server at $SERVER_IP:5201"
    echo "Ensure 'iperf3 -s' is running on the server"
    exit 1
fi
echo "  → Connected"
echo ""

# Record system info
{
    echo "=== System Info ==="
    echo "Date: $(date)"
    echo "Kernel: $(uname -r)"
    echo "Backend: $BACKEND"
    ip addr show 2>/dev/null | grep -E "inet |mtu" || true
    echo ""
} > "${RESULTS_DIR}/system-info.txt"

# TCP Tests
echo "--- TCP Throughput Tests ---"
echo ""

# Single stream TCP
run_iperf3 "tcp-single" ""

# Wait between tests
sleep 2

# Multi-stream TCP (8 parallel)
run_iperf3 "tcp-multi-8" "-P 8"

sleep 2

# Reverse direction (download)
run_iperf3 "tcp-reverse" "-R"

sleep 2

# UDP Tests
echo ""
echo "--- UDP Throughput Tests ---"
echo ""

# UDP maximum bandwidth (let iperf3 find the limit)
run_iperf3 "udp-max" "-u -b 0"

sleep 2

# UDP at specific rates for comparison
for rate in 1G 5G 10G; do
    run_iperf3 "udp-${rate}" "-u -b ${rate}"
    sleep 2
done

# Generate summary
echo ""
echo "=== Summary ==="
SUMMARY_FILE="${RESULTS_DIR}/throughput-summary.txt"
{
    echo "Volt Throughput Benchmark Results"
    echo "======================================"
    echo "Backend: $BACKEND"
    echo "Server: $SERVER_IP"
    echo "Date: $(date)"
    echo "Duration: ${DURATION}s per test"
    echo ""
    echo "Results:"
    echo "--------"

    for json_file in "${RESULTS_DIR}"/*.json; do
        if [ -f "$json_file" ] && command -v jq &> /dev/null; then
            test_name=$(basename "$json_file" .json)

            # Extract metrics based on test type (UDP reports under .end.sum)
            if [[ "$test_name" == udp-* ]]; then
                bps=$(jq -r '.end.sum.bits_per_second // 0' "$json_file" 2>/dev/null)
                loss=$(jq -r '.end.sum.lost_percent // 0' "$json_file" 2>/dev/null)
                gbps=$(echo "scale=2; $bps / 1000000000" | bc 2>/dev/null || echo "N/A")
                printf "%-20s %8s Gbps (loss: %.2f%%)\n" "$test_name:" "$gbps" "$loss"
            else
                bps=$(jq -r '.end.sum_sent.bits_per_second // 0' "$json_file" 2>/dev/null)
                gbps=$(echo "scale=2; $bps / 1000000000" | bc 2>/dev/null || echo "N/A")
                printf "%-20s %8s Gbps\n" "$test_name:" "$gbps"
            fi
        fi
    done
} | tee "$SUMMARY_FILE"

echo ""
echo "Full results saved to: $RESULTS_DIR"
echo "JSON files available for detailed analysis"