KVM-based microVMM for the Volt platform:

- Sub-second VM boot times
- Minimal memory footprint
- Landlock LSM + seccomp security
- Virtio device support
- Custom kernel management

Copyright (c) Armored Gates LLC. All rights reserved. Licensed under AGPSL v5.0.
# Volt VMM Benchmark Results
Date: 2026-03-08
Version: Volt v0.1.0
Host: Intel Xeon Silver 4210R @ 2.40GHz (2 sockets × 10 cores, 40 threads)
Host Kernel: Linux 6.1.0-42-amd64 (Debian)
Methodology: 10 iterations per test, measuring wall-clock time from process start to kernel panic (no rootfs).
Kernel: Linux 4.14.174 (vmlinux ELF format).
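The measurement loop can be sketched in Rust: spawn the VMM, wait for it to exit, and record wall-clock time. The Volt binary path and flags mentioned in the comments are hypothetical, not Volt's actual CLI.

```rust
use std::process::Command;
use std::time::Instant;

/// Wall-clock time from process spawn to exit, in milliseconds.
fn measure_ms(program: &str, args: &[&str]) -> u128 {
    let start = Instant::now();
    // The guest panics by design (no rootfs), so any exit status is accepted.
    let _ = Command::new(program).args(args).status().expect("spawn failed");
    start.elapsed().as_millis()
}

fn main() {
    // Demo with a no-op command; the real benchmark invoked the Volt binary,
    // e.g. measure_ms("./volt", &["--kernel", "vmlinux-4.14"]) (hypothetical flags).
    let ms = measure_ms("true", &[]);
    println!("cold boot: {} ms", ms);
}
```

Each configuration was run 10 times this way, with statistics computed over the samples.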
## Summary
| Metric | Value |
|---|---|
| Binary size | 3.10 MB (3,258,448 bytes) |
| Binary size (stripped) | 3.10 MB (3,258,440 bytes) |
| Cold boot to kernel panic (median) | 1,723 ms |
| VMM init time (median) | 110 ms |
| VMM init time (min) | 95 ms |
| Memory overhead (RSS - guest) | ~6.6 MB |
| Startup breakdown (first log → VM running) | 88.8 ms |
| Kernel boot time (internal) | ~1.41 s |
| Dynamic dependencies | libc, libm, libgcc_s |
## 1. Binary Size
| Metric | Size |
|---|---|
| Release binary | 3,258,448 bytes (3.10 MB) |
| Stripped binary | 3,258,440 bytes (3.10 MB) |
| Format | ELF 64-bit LSB PIE executable, dynamically linked |
Dynamic dependencies:
- `libc.so.6`
- `libm.so.6`
- `libgcc_s.so.1`
- `linux-vdso.so.1`
- `ld-linux-x86-64.so.2`
Note: Binary is already stripped in release profile (only 8 bytes difference).
## 2. Cold Boot Time (Process Start → Kernel Panic)
Full end-to-end time from process launch to kernel panic detection. This includes VMM initialization, kernel loading, and the Linux kernel's full boot sequence (which ends with a panic because no rootfs is provided).
### vmlinux-4.14 (128M RAM)
| Iteration | Time (ms) |
|---|---|
| 1 | 1,750 |
| 2 | 1,732 |
| 3 | 1,699 |
| 4 | 1,704 |
| 5 | 1,730 |
| 6 | 1,736 |
| 7 | 1,717 |
| 8 | 1,714 |
| 9 | 1,747 |
| 10 | 1,703 |
| Stat | Value |
|---|---|
| Minimum | 1,699 ms |
| Maximum | 1,750 ms |
| Median | 1,723 ms |
| Average | 1,723 ms |
| Spread | 51 ms (2.9%) |
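The statistics above can be reproduced with a short Rust sketch over the ten samples, using integer median and average to match the rounding in the table:

```rust
/// Summary stats over boot-time samples: (min, max, median, average, spread).
/// Median and average use integer division, matching the table's rounding.
fn stats(samples: &[u64]) -> (u64, u64, u64, u64, u64) {
    let mut s = samples.to_vec();
    s.sort_unstable();
    let n = s.len();
    let min = s[0];
    let max = s[n - 1];
    // Integer median of the two middle values (even-sized sample).
    let median = (s[n / 2 - 1] + s[n / 2]) / 2;
    let avg = s.iter().sum::<u64>() / n as u64;
    (min, max, median, avg, max - min)
}

fn main() {
    // The vmlinux-4.14 run from the iteration table above, in ms.
    let runs = [1750, 1732, 1699, 1704, 1730, 1736, 1717, 1714, 1747, 1703];
    let (min, max, median, avg, spread) = stats(&runs);
    println!("min={min} max={max} median={median} avg={avg} spread={spread}");
}
```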
### vmlinux-firecracker-official (128M RAM)
Same kernel binary, different symlink path.
| Iteration | Time (ms) |
|---|---|
| 1 | 1,717 |
| 2 | 1,707 |
| 3 | 1,734 |
| 4 | 1,736 |
| 5 | 1,710 |
| 6 | 1,720 |
| 7 | 1,729 |
| 8 | 1,742 |
| 9 | 1,714 |
| 10 | 1,726 |
| Stat | Value |
|---|---|
| Minimum | 1,707 ms |
| Maximum | 1,742 ms |
| Median | 1,723 ms |
| Average | 1,723 ms |
Both kernel files are identical (21,441,304 bytes each). Results are consistent.
## 3. VMM Init Time (Process Start → "VM is running")
This measures only the VMM's own initialization overhead, before any guest code executes. Includes KVM setup, memory allocation, CPUID configuration, kernel loading, vCPU creation, and register setup.
| Iteration | Time (ms) |
|---|---|
| 1 | 100 |
| 2 | 95 |
| 3 | 112 |
| 4 | 114 |
| 5 | 121 |
| 6 | 116 |
| 7 | 105 |
| 8 | 108 |
| 9 | 99 |
| 10 | 112 |
| Stat | Value |
|---|---|
| Minimum | 95 ms |
| Maximum | 121 ms |
| Median | 110 ms |
Note: Measurement uses `date +%s%N` and polls for "VM is running" in the output, which adds ~5-10 ms of polling overhead. True VMM init time from TRACE logs is ~89 ms.
## 4. Startup Breakdown (TRACE-level Timing)
Detailed timing from TRACE-level logs, showing each VMM initialization phase:
| Δ from start (ms) | Phase |
|---|---|
| +0.000 | Program start (Volt VMM v0.1.0) |
| +0.124 | KVM initialized (API v12, max 1024 vCPUs) |
| +0.138 | Creating virtual machine |
| +29.945 | CPUID configured (46 entries) |
| +72.049 | Guest memory allocated (128 MB, anonymous mmap) |
| +72.234 | VM created |
| +72.255 | Loading kernel |
| +88.276 | Kernel loaded (ELF vmlinux at 0x100000, entry 0x1000000) |
| +88.284 | Serial console initialized (0x3f8) |
| +88.288 | Creating vCPU |
| +88.717 | vCPU 0 configured (64-bit long mode) |
| +88.804 | Starting VM |
| +88.814 | VM running |
| +88.926 | vCPU 0 enters KVM_RUN |
### Phase Durations
| Phase | Duration (ms) | % of Total |
|---|---|---|
| Program init → KVM init | 0.1 | 0.1% |
| KVM init → CPUID config | 29.8 | 33.5% |
| CPUID config → Memory alloc | 42.1 | 47.4% |
| Memory alloc → VM create | 0.2 | 0.2% |
| Kernel loading | 16.0 | 18.0% |
| Device init + vCPU setup | 0.6 | 0.7% |
| Total VMM init | 88.9 | 100% |
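The phase durations are simple deltas between consecutive TRACE marks. A minimal sketch (phase labels abbreviated from the table above):

```rust
/// Convert absolute TRACE marks (ms from program start) into per-phase durations.
fn durations(marks: &[(f64, &str)]) -> Vec<(f64, String)> {
    marks
        .windows(2)
        .map(|w| (w[1].0 - w[0].0, format!("{} -> {}", w[0].1, w[1].1)))
        .collect()
}

fn main() {
    // Abbreviated marks from the TRACE timing table.
    let marks = [
        (0.000, "start"),
        (0.124, "KVM init"),
        (29.945, "CPUID config"),
        (72.049, "memory alloc"),
        (72.234, "VM create"),
        (88.276, "kernel loaded"),
        (88.926, "KVM_RUN"),
    ];
    for (ms, phase) in durations(&marks) {
        println!("{phase}: {ms:.1} ms");
    }
}
```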
### Key Observations

- CPUID configuration takes ~30 ms — calls `KVM_GET_SUPPORTED_CPUID` and filters 46 entries
- Memory allocation takes ~42 ms — `mmap` of 128 MB anonymous memory + `KVM_SET_USER_MEMORY_REGION`
- Kernel loading takes ~16 ms — parsing the 21 MB ELF binary + page table setup
- vCPU setup is fast — under 1 ms including MSR configuration and register setup
## 5. Memory Overhead
Measured RSS 2 seconds after VM start (guest kernel booted and running).
| Guest Memory | RSS (kB) | VmSize (kB) | VmPeak (kB) | Overhead (kB) | Overhead (MB) |
|---|---|---|---|---|---|
| 128 MB | 137,848 | 2,909,504 | 2,909,504 | 6,776 | 6.6 |
| 256 MB | 268,900 | 3,040,576 | 3,106,100 | 6,756 | 6.6 |
| 512 MB | 535,000 | 3,302,720 | 3,368,244 | 10,712 | 10.5 |
| 1 GB | 1,055,244 | 3,827,008 | 3,892,532 | 6,668 | 6.5 |
Overhead = RSS − Guest Memory Size
| Stat | Value |
|---|---|
| Typical VMM overhead | ~6.6 MB |
| Overhead components | Binary code/data, KVM structures, kernel image in-memory, page tables, serial buffer |
Note: The 512MB case shows slightly higher overhead (10.5 MB). This may be due to kernel memory allocation patterns or measurement timing. The consistent ~6.6 MB for 128M/256M/1G suggests the true VMM overhead is approximately 6.6 MB.
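The overhead column follows directly from the formula above; a small Rust check against the table's numbers:

```rust
/// VMM overhead = measured RSS minus the guest's configured memory, in kB
/// (as in the section 5 table).
fn overhead_kb(rss_kb: u64, guest_mb: u64) -> u64 {
    rss_kb - guest_mb * 1024
}

fn main() {
    // (guest size in MB, measured RSS in kB) pairs from the table.
    for (guest_mb, rss_kb) in [(128, 137_848), (256, 268_900), (512, 535_000), (1024, 1_055_244)] {
        let kb = overhead_kb(rss_kb, guest_mb);
        println!("{guest_mb} MB guest: overhead {kb} kB (~{:.1} MB)", kb as f64 / 1024.0);
    }
}
```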
## 6. Kernel Internal Boot Time
Time from first kernel log message to kernel panic (measured from kernel's own timestamps in serial output):
| Metric | Value |
|---|---|
| First kernel message | [0.000000] Linux version 4.14.174 |
| Kernel panic | [1.413470] VFS: Unable to mount root fs |
| Kernel boot time | ~1.41 seconds |
This is the kernel's own view of boot time. The remaining ~0.3 s of the 1.72 s total is:

- VMM init: ~89 ms
- Kernel rebooting after panic: ~1 s (configured with `panic=1`)
- Process teardown: small

Actual cold boot to a usable kernel: ~89 ms (VMM) + ~1.41 s (kernel) ≈ 1.5 s total.
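The kernel-internal boot time is derived from the bracketed timestamps in the serial output; a minimal Rust parser, using sample lines from the table above:

```rust
/// Extract the kernel's own timestamp (seconds since kernel start) from a
/// serial console line like "[    1.413470] VFS: Unable to mount root fs".
fn kernel_timestamp(line: &str) -> Option<f64> {
    let start = line.find('[')? + 1;
    let end = line.find(']')?;
    line[start..end].trim().parse().ok()
}

fn main() {
    let first = "[    0.000000] Linux version 4.14.174";
    let last = "[    1.413470] VFS: Unable to mount root fs";
    let boot_s = kernel_timestamp(last).unwrap() - kernel_timestamp(first).unwrap();
    println!("kernel boot time: {boot_s:.2} s");
}
```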
## 7. CPUID Configuration

Volt configures 46 CPUID entries for the guest vCPU.

### Strategy

- Starts from `KVM_GET_SUPPORTED_CPUID` (host capabilities)
- Filters out features not suitable for guests:
  - Removed from leaf 0x1 ECX: DTES64, MONITOR/MWAIT, DS_CPL, VMX, SMX, EIST, TM2, PDCM
  - Added to leaf 0x1 ECX: HYPERVISOR bit (signals to the guest that it is virtualized)
  - Removed from leaf 0x1 EDX: MCE, MCA, ACPI thermal, HTT (single vCPU)
  - Removed from leaf 0x7 EBX: HLE, RTM (TSX), RDT_M, RDT_A, MPX
  - Removed from leaf 0x7 ECX: PKU, OSPKE, LA57
- Cleared leaves: 0x6 (thermal), 0xA (perf monitoring)
- Preserved: all SSE/AVX/AVX-512, AES, XSAVE, POPCNT, RDRAND, RDSEED, FSGSBASE, etc.
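As one concrete instance of this filtering, leaf 0x1 ECX can be sketched with plain bit masks. Bit positions follow the Intel SDM; the host value below is hypothetical, chosen with VMX set and HYPERVISOR clear so the result matches the guest ECX value reported in section 7.

```rust
// CPUID leaf 0x1 ECX bit positions (Intel SDM).
const DTES64: u32 = 1 << 2;
const MONITOR: u32 = 1 << 3; // MONITOR/MWAIT
const DS_CPL: u32 = 1 << 4;
const VMX: u32 = 1 << 5;
const SMX: u32 = 1 << 6;
const EIST: u32 = 1 << 7;
const TM2: u32 = 1 << 8;
const PDCM: u32 = 1 << 15;
const HYPERVISOR: u32 = 1 << 31;

/// Strip guest-unsuitable features from the host's leaf 0x1 ECX and
/// advertise the hypervisor bit.
fn filter_leaf1_ecx(host_ecx: u32) -> u32 {
    let removed = DTES64 | MONITOR | DS_CPL | VMX | SMX | EIST | TM2 | PDCM;
    (host_ecx & !removed) | HYPERVISOR
}

fn main() {
    // Hypothetical host value: VMX set, HYPERVISOR clear.
    let host = 0x76fa3223u32;
    println!("guest ECX = {:#010x}", filter_leaf1_ecx(host));
}
```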
### Key CPUID Values (from TRACE)
| Leaf | Register | Value | Notes |
|---|---|---|---|
| 0x0 | EAX | 22 | Max standard leaf |
| 0x0 | EBX/EDX/ECX | GenuineIntel | Host vendor passthrough |
| 0x1 | ECX | 0xf6fa3203 | SSE3, SSSE3, SSE4.1/4.2, AVX, AES, XSAVE, POPCNT, HYPERVISOR |
| 0x1 | EDX | 0x0f8bbb7f | FPU, TSC, MSR, PAE, CX8, APIC, SEP, PGE, CMOV, PAT, CLFLUSH, MMX, FXSR, SSE, SSE2 |
| 0x7 | EBX | 0xd19f27eb | FSGSBASE, BMI1, AVX2, SMEP, BMI2, ERMS, INVPCID, RDSEED, ADX, SMAP, CLFLUSHOPT, CLWB, AVX-512(F/DQ/CD/BW/VL) |
| 0x7 | EDX | 0xac000400 | SPEC_CTRL, STIBP, ARCH_CAP, SSBD |
| 0x80000001 | ECX | 0x00000121 | LAHF_LM, ABM, PREFETCHW |
| 0x80000001 | EDX | — | SYSCALL ✓, NX ✓, LM ✓, RDTSCP, 1GB pages |
| 0x40000000 | — | KVMKVMKVM | KVM hypervisor signature |
### Features Exposed to Guest
- Compute: SSE through SSE4.2, AVX, AVX2, AVX-512 (F/DQ/CD/BW/VL/VNNI), FMA, AES-NI, SHA
- Memory: SMEP, SMAP, CLFLUSHOPT, CLWB, INVPCID, PCID
- Security: IBRS, IBPB, STIBP, SSBD, ARCH_CAPABILITIES, NX
- Misc: RDRAND, RDSEED, XSAVE/XSAVEC/XSAVES, TSC (invariant), RDTSCP
## 8. Test Environment
| Component | Details |
|---|---|
| Host CPU | Intel Xeon Silver 4210R @ 2.40GHz (Cascade Lake) |
| Host RAM | Available (no contention during tests) |
| Host OS | Debian, Linux 6.1.0-42-amd64 |
| KVM | API version 12, max 1024 vCPUs |
| Guest kernel | Linux 4.14.174 (vmlinux ELF, 21 MB) |
| Guest config | 1 vCPU, variable RAM, no rootfs, `console=ttyS0 reboot=k panic=1 pci=off` |
| Volt | v0.1.0, release build, dynamically linked |
| Rust | nightly (`cargo build --release`) |
## Notes

- Boot time is dominated by the kernel (~1.41 s kernel vs ~89 ms VMM). VMM overhead is <6% of total boot time.
- Memory overhead is minimal at ~6.6 MB regardless of guest memory size.
- Binary is already stripped in the release profile — `strip` saves only 8 bytes.
- CPUID filtering is comprehensive — removes dangerous features (VMX, TSX, MPX) while preserving compute-heavy features (AVX-512, AES-NI).
- Hugepages not tested — the host has no hugepages allocated (`HugePages_Total=0`). The `--hugepages` flag is available but untestable here.
- Both kernels are identical — `vmlinux-4.14` and `vmlinux-firecracker-official.bin` are the same file (same size, same boot times).