Files
volt-vmm/docs/volt-vs-firecracker-report.md
Karl Clinger 40ed108dd5 Volt VMM (Neutron Stardust): source-available under AGPSL v5.0
KVM-based microVMM for the Volt platform:
- Sub-second VM boot times
- Minimal memory footprint
- Landlock LSM + seccomp security
- Virtio device support
- Custom kernel management

Copyright (c) Armored Gates LLC. All rights reserved.
Licensed under AGPSL v5.0
2026-03-21 01:04:35 -05:00

16 KiB
Raw Permalink Blame History

Volt vs Firecracker: Consolidated Comparison Report

Date: 2026-03-08
Volt: v0.1.0 (pre-release)
Firecracker: v1.14.2 (stable)
Test Host: Intel Xeon Silver 4210R @ 2.40GHz, Linux 6.1.0-42-amd64
Kernel: Linux 4.14.174 (vmlinux ELF, 21MB) — same binary for both VMMs


1. Executive Summary

Volt is a promising early-stage microVMM that matches Firecracker's proven architecture in the fundamentals — KVM-based, Rust-written, virtio-mmio transport — while offering unique advantages in developer experience (CLI-first), planned Landlock-based unprivileged sandboxing, and content-addressed storage (Stellarium). However, Volt's VMM init time (~89ms) is comparable to Firecracker's (~80ms), while its total boot time is ~35% slower (1,723ms vs 1,127ms) due to kernel-level differences in i8042 handling. Memory overhead tells the real story: Volt uses only 6.6MB VMM overhead vs Firecracker's ~50MB, a 7.5× advantage. The critical blocker for production is the security gap — no seccomp, no capability dropping, no sandboxing — all of which are well-understood problems with clear 1-2 week implementation paths.


2. Performance Comparison

2.1 Boot Time

Both VMMs tested with identical kernel (vmlinux-4.14.174), 128MB RAM, 1 vCPU, no rootfs, default boot args (console=ttyS0 reboot=k panic=1 pci=off):

Metric Volt Firecracker Delta Winner
Cold boot to panic (median) 1,723 ms 1,127 ms +596 ms (+53%) 🏆 Firecracker
VMM init time (median) 110 ms¹ ~80 ms² +30 ms (+38%) 🏆 Firecracker
VMM init (TRACE-level) 88.9 ms
Kernel internal boot 1,413 ms 912 ms +501 ms 🏆 Firecracker
Boot spread (consistency) 51 ms (2.9%) 31 ms (2.7%) Comparable

¹ Measured via external polling; true init from TRACE logs is 88.9ms
² Measured from process start to InstanceStart API return

Why Firecracker boots faster overall: Firecracker's kernel reports ~912ms boot time vs Volt's ~1,413ms for the same kernel binary. The 500ms difference is likely explained by the i8042 keyboard controller timeout behavior — Firecracker implements a minimal i8042 device that responds to probes, while Volt doesn't implement i8042 at all, causing the kernel to wait for probe timeouts. With i8042.noaux i8042.nokbd boot args, Firecracker drops to 351ms total (138ms kernel time). Volt would likely see a similar reduction with these flags.

VMM-only overhead is comparable: Stripping out kernel boot time, both VMMs initialize in ~80-90ms — remarkably close for codebases of such different maturity levels.

Firecracker Optimized Boot (i8042 disabled)

Metric Firecracker (default) Firecracker (no i8042)
Wall clock (median) 1,127 ms 351 ms
Kernel internal 912 ms 138 ms

2.2 Binary Size

Metric Volt Firecracker Notes
Binary size 3.10 MB (3,258,448 B) 3.44 MB (3,436,512 B) Volt 5% smaller
Stripped 3.10 MB (no change) Not stripped Volt already stripped in release
Linking Dynamic (libc, libm, libgcc_s) Static-pie (self-contained) Firecracker is more portable

Volt's smaller binary is notable given that it includes Tokio + Axum. However, Firecracker includes musl libc statically and is fully self-contained — a significant operational advantage.

2.3 Memory Overhead

RSS measured during VM execution with guest kernel booted:

Guest Memory Volt RSS Firecracker RSS Volt Overhead Firecracker Overhead
128 MB 135 MB 50-52 MB 6.6 MB ~50 MB
256 MB 263 MB 56-57 MB 6.6 MB ~54 MB
512 MB 522 MB 60-61 MB 10.5 MB ~58 MB
1 GB 1,031 MB 6.5 MB
Metric Volt Firecracker Winner
VMM base overhead ~6.6 MB ~50 MB 🏆 Volt (7.5×)
Pre-boot RSS 3.3 MB
Scaling per +128MB ~0 MB ~4 MB 🏆 Volt

This is Volt's standout metric. The ~6.6MB overhead vs Firecracker's ~50MB means at scale (thousands of microVMs), Volt saves ~43MB per instance. For 1,000 VMs, that's ~42GB of host memory saved.

The difference is likely because Firecracker's guest kernel touches more pages during boot (THP allocates in 2MB chunks, inflating RSS), while Volt's memory mapping strategy results in less early-boot page faulting. This deserves deeper investigation to confirm it's a real architectural advantage vs measurement artifact.

2.4 VMM Startup Breakdown

Phase Volt (ms) Firecracker (ms) Notes
Process start → ready 0.1 8 FC starts API socket
CPUID configuration 29.8 Included in InstanceStart for FC
Memory allocation 42.1 Included in InstanceStart for FC
Kernel loading 16.0 13 PUT /boot-source for FC
Machine config 9 PUT /machine-config for FC
VM create + vCPU setup 0.9 44-74 InstanceStart for FC
Total VMM init 88.9 ~80 Comparable

3. Security Comparison

3.1 Security Layer Stack

Layer Volt Firecracker
KVM hardware isolation
CPUID filtering (46 entries, strips VMX/SMX/TSX/MPX) (+ CPU templates T2/C3/V1N1)
seccomp-bpf Not implemented (~50 syscall allowlist)
Capability dropping Not implemented All caps dropped
Filesystem isolation 📋 Landlock planned Jailer (chroot + pivot_root)
Namespace isolation (PID/Net) (via Jailer)
Cgroup resource limits (CPU, memory, IO)
CPU templates (5 templates for migration safety)

3.2 Security Posture Assessment

Volt Firecracker
Production-ready? No Yes
Multi-tenant safe? No Yes
VMM escape impact Full user-level access to host Limited to ~50 syscalls in chroot jail
Privilege required User with /dev/kvm access Root for jailer setup, then drops everything

Bottom line: Volt's CPUID filtering is functionally equivalent to Firecracker's, but everything above KVM-level isolation is missing. A VMM escape in Volt gives the attacker full access to the host user's filesystem and all syscalls. This is the #1 blocker for any production deployment.

3.3 Volt's Landlock Advantage (When Implemented)

Volt's planned Landlock-first approach has a genuine architectural advantage:

Aspect Volt (planned) Firecracker
Root required? No Yes (for jailer)
Setup binary None (in-process) Separate jailer binary
Mechanism Landlock restrict_self() chroot + pivot_root + namespaces
Kernel requirement 5.13+ Any Linux with namespaces

4. Feature Comparison

Feature Volt Firecracker
Core
KVM-based, Rust
x86_64
aarch64
Multi-vCPU (1-255) (1-32)
Boot
vmlinux (ELF64)
bzImage
Linux boot protocol
PVH boot
Devices
virtio-blk (+ rate limiting, io_uring)
virtio-net 🔨 Disabled (TAP, rate-limited)
virtio-vsock
virtio-balloon
Serial console (8250)
i8042 (keyboard/reset) (minimal)
vhost-net (kernel offload) 🔨 Code exists
Networking
TAP backend
macvtap 🔨 Code exists
MMDS (metadata service)
Storage
Raw disk images
Content-addressed (Stellarium) 🔨 Separate crate
io_uring backend
Security
CPUID filtering
CPU templates
seccomp-bpf
Jailer / sandboxing (Landlock planned)
Capability dropping
Cgroup integration
Operations
CLI boot (single command) (API only)
REST API (Unix socket) (Axum) (custom HTTP)
Snapshot/Restore
Live migration
Hot-plug (drives)
Prometheus metrics (basic) (comprehensive)
Structured logging (tracing)
JSON config file
OpenAPI spec

Legend: Production-ready | 🔨 Code exists, not integrated | 📋 Planned | Not present


5. Architecture Comparison

5.1 Key Architectural Differences

Aspect Volt Firecracker
Launch model CLI-first, optional API API-only (no CLI config)
Async runtime Tokio (full) None (raw epoll)
HTTP stack Axum + Hyper + Tower Custom HTTP parser
Serial handling Inline in vCPU exit loop Separate device with epoll
IO model Mixed (sync IO + Tokio) Pure synchronous epoll
Dependencies ~285 crates ~200-250 crates
Codebase ~18K lines Rust ~70K lines Rust
Test coverage ~1K lines (unit only) ~30K+ lines (unit + integration + perf)
Memory abstraction Custom GuestMemoryManager vm-memory crate (shared ecosystem)
Kernel loader Custom hand-written ELF/bzImage parser linux-loader crate

5.2 Threading Model

Component Volt Firecracker
Main thread Event loop + API Event loop + serial + devices
API thread Tokio runtime fc_api (custom HTTP)
vCPU threads 1 per vCPU 1 per vCPU (fc_vcpu_N)
Total (1 vCPU) 2+ (Tokio spawns workers) 3

5.3 Page Table Setup

Feature Volt Firecracker
Identity mapping 0 → 4GB (2MB pages) 0 → 1GB (2MB pages)
High kernel mapping (0xFFFFFFFF80000000+)
PML4 address 0x1000 0x9000
Coverage More thorough Minimal (kernel builds its own)

Volt's more thorough page table setup is technically superior but has no measurable performance impact since the kernel rebuilds page tables early in boot.


6. Volt Strengths

Where Volt Wins Today

  1. Memory efficiency (7.5× less overhead) — 6.6MB vs 50MB VMM overhead. At scale, this saves ~43MB per VM instance. For 10,000 VMs, that's ~420GB of host RAM.

  2. Smaller binary (5% smaller) — 3.10MB vs 3.44MB, despite including Tokio. Removing Tokio could push this further.

  3. Developer experience — Single-command CLI boot vs multi-step API configuration. Dramatically faster iteration for development and testing.

  4. Comparable VMM init time — ~89ms vs ~80ms. The VMM itself is nearly as fast despite being 4× less code.

Where Volt Could Win (With Completion)

  1. Unprivileged operation (Landlock) — No root required, no jailer binary. Enables deployment on developer laptops, edge devices, and rootless environments.

  2. Content-addressed storage (Stellarium) — Instant VM cloning, deduplication, efficient multi-image management. No equivalent in Firecracker.

  3. vhost-net / macvtap networking — Kernel-offloaded packet processing could deliver significantly higher network throughput than Firecracker's userspace virtio-net.

  4. systemd-networkd integration — Simplified network setup on modern Linux without manual bridge/TAP configuration.


7. Volt Gaps

🔴 Critical (Blocks Production Use)

Gap Impact Estimated Effort
No seccomp filter VMM escape → full syscall access 2-3 days
No capability dropping Process retains all user capabilities 1 day
virtio-net disabled VMs cannot network 3-5 days
No integration tests No confidence in boot-to-userspace 1-2 weeks
No i8042 device ~500ms boot penalty (kernel probe timeout) 1-2 days

🟡 Important (Blocks Feature Parity)

Gap Impact Estimated Effort
No Landlock sandboxing No filesystem isolation 2-3 days
No snapshot/restore No fast resume, no migration 2-3 weeks
No vsock No host-guest communication channel 1-2 weeks
No rate limiting Can't throttle noisy neighbors 1 week
No CPU templates Can't normalize across hardware 1-2 weeks
No aarch64 x86 only 2-4 weeks

🟢 Differentiators (Completion Opportunities)

Gap Impact Estimated Effort
Stellarium integration CAS storage not wired to virtio-blk 1-2 weeks
vhost-net completion Kernel-offloaded networking 1-2 weeks
macvtap completion Direct NIC attachment 1 week
io_uring block backend Higher IOPS 1-2 weeks
Tokio removal Smaller binary, deterministic latency 1-2 weeks

8. Recommendations

Prioritized Development Roadmap

Phase 1: Security Hardening (1-2 weeks)

Goal: Make Volt safe for single-tenant use

  1. Add seccomp-bpf filter — Allowlist ~50 syscalls. Use Firecracker's list as reference. (2-3 days)
  2. Drop capabilities — Call prctl(PR_SET_NO_NEW_PRIVS) and drop all caps after KVM/TAP setup. (1 day)
  3. Implement Landlock sandboxing — Restrict to kernel path, disk images, /dev/kvm, /dev/net/tun, API socket. (2-3 days)
  4. Add minimal i8042 device — Respond to keyboard controller probes to eliminate ~500ms boot penalty. (1-2 days)

Phase 2: Networking & Devices (2-3 weeks)

Goal: Boot a VM with working network

  1. Fix and integrate virtio-net — Wire TAP backend into vCPU IO exit handler. (3-5 days)
  2. Complete vhost-net — Kernel-offloaded networking for throughput advantage over Firecracker. (1-2 weeks)
  3. Integration tests — Automated boot-to-userspace, network connectivity, block IO tests. (1-2 weeks)

Phase 3: Operational Features (3-4 weeks)

Goal: Feature parity for orchestration use cases

  1. Snapshot/Restore — State save/load for fast resume and migration. (2-3 weeks)
  2. vsock — Host-guest communication for orchestration agents. (1-2 weeks)
  3. Rate limiting — IO throttling for multi-tenant fairness. (1 week)

Phase 4: Differentiation (4-6 weeks)

Goal: Surpass Firecracker in unique areas

  1. Stellarium integration — Wire CAS into virtio-blk for instant cloning and dedup. (1-2 weeks)
  2. CPU templates — Normalize CPUID across hardware for migration safety. (1-2 weeks)
  3. Remove Tokio — Replace with raw epoll for smaller binary and deterministic behavior. (1-2 weeks)
  4. macvtap completion — Direct NIC attachment without bridges. (1 week)

Quick Wins (< 1 day each)

  • Add i8042.noaux i8042.nokbd to default boot args (instant ~500ms boot improvement)
  • Drop capabilities after setup (prctl one-liner)
  • Add --no-default-features to Tokio to reduce binary size
  • Benchmark with hugepages enabled (echo 256 > /proc/sys/vm/nr_hugepages)

9. Raw Data

Individual detailed reports:

Report Path Size
Volt Benchmarks benchmark-volt-vmm.md 9.4 KB
Firecracker Benchmarks benchmark-firecracker.md 15.2 KB
Architecture & Security Comparison comparison-architecture.md 28.1 KB
Firecracker Test Results (earlier) firecracker-test-results.md 5.7 KB
Firecracker Comparison (earlier) firecracker-comparison.md 12.5 KB

Report generated: 2026-03-08 — Consolidated from benchmark and architecture analysis by three parallel agents