Complete infrastructure platform CLI: container runtime (systemd-nspawn), VoltVisor VMs (Neutron Stardust / QEMU), Stellarium CAS (content-addressed storage), ORAS Registry, GitOps integration, Landlock LSM security, Compose orchestration, and mesh networking.

Copyright (c) Armored Gates LLC. All rights reserved. Licensed under AGPSL v5.0
Volt Architecture
Volt is a unified platform management CLI built on three engines:
- Voltainer — Container engine (`systemd-nspawn`)
- Voltvisor — Virtual machine engine (KVM/QEMU)
- Stellarium — Content-addressed storage (CAS)
This document describes how they work internally and how they integrate with the host system.
Design Philosophy
systemd-Native
Volt works with systemd, not against it. Every workload is a systemd unit:
- Containers are `systemd-nspawn` machines managed via `volt-container@<name>.service`
- VMs are QEMU processes managed via `volt-vm@<name>.service`
- Tasks are systemd `timer` + `service` pairs
- All logging flows through the systemd journal
This gives Volt free cgroup integration, dependency management, process tracking, and socket activation.
One Binary
The volt binary at /usr/local/bin/volt handles everything. It communicates with the volt daemon (voltd) over a Unix socket at /var/run/volt/volt.sock. For read-only operations like volt ps, volt top, and volt service list, the CLI can query systemd directly without the daemon.
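The daemon protocol itself is not documented here, so the following is purely a hypothetical sketch of what a CLI-to-daemon round trip over the Unix socket could look like, assuming a newline-delimited JSON request/response framing (an assumption, not the documented wire format):

```python
import json
import socket

SOCKET_PATH = "/var/run/volt/volt.sock"

def encode_request(action: str, **params) -> bytes:
    """Frame a request as one JSON object terminated by a newline (assumed framing)."""
    return (json.dumps({"action": action, "params": params}) + "\n").encode()

def query_daemon(action: str, **params) -> dict:
    """Send one request to voltd and read one JSON reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCKET_PATH)
        s.sendall(encode_request(action, **params))
        return json.loads(s.makefile().readline())
```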
Human-Readable Everything
Every workload has a human-assigned name. volt ps shows names, not hex IDs. Status columns use natural language (running, stopped, failed), not codes.
Voltainer — Container Engine
How Containers Work
Voltainer containers are systemd-nspawn machines. When you create a container:
- Image resolution: Volt locates the rootfs directory under `/var/lib/volt/images/`
- Rootfs copy: The image rootfs is copied (or overlaid) to `/var/lib/volt/containers/<name>/rootfs/`
- Unit generation: A systemd unit file is generated at `/var/lib/volt/units/volt-container@<name>.service`
- Network setup: A veth pair is created, one end in the container namespace, the other attached to the specified bridge (default: `volt0`)
- Start: `systemctl start volt-container@<name>.service` launches `systemd-nspawn` with the appropriate flags
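A generated unit might look roughly like this. This is an illustrative sketch only; the exact flags Volt passes to `systemd-nspawn` are not documented here, and the options shown are standard `systemd-nspawn` ones chosen for plausibility:

```ini
# /var/lib/volt/units/volt-container@web.service (illustrative sketch)
[Unit]
Description=Volt container %i
After=volt-network.service

[Service]
Type=notify
ExecStart=/usr/bin/systemd-nspawn \
    --machine=%i \
    --directory=/var/lib/volt/containers/%i/rootfs \
    --network-bridge=volt0 \
    --boot
# mixed: SIGTERM to the container's PID 1, SIGKILL to stragglers
KillMode=mixed

[Install]
WantedBy=multi-user.target
```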
Container Lifecycle
create → stopped → start → running → stop → stopped → delete
                     ↑                          │
                     └────────── restart ───────┘
State transitions are all mediated through systemd. volt container stop is systemctl stop. volt container start is systemctl start. This means systemd handles process cleanup, cgroup teardown, and signal delivery.
Container Isolation
Each container gets:
- Mount namespace: Own rootfs, bind mounts for volumes
- PID namespace: PID 1 is the container init
- Network namespace: Own network stack, connected via veth to bridge
- UTS namespace: Own hostname
- IPC namespace: Isolated IPC
- cgroup v2: Resource limits (CPU, memory, I/O) enforced via cgroup controllers
Containers share the host kernel. They are not VMs — there is no hypervisor overhead.
Container Storage
/var/lib/volt/containers/<name>/
├── rootfs/ # Container filesystem
├── config.json # Container configuration (image, resources, network, etc.)
└── state.json # Runtime state (PID, IP, start time, etc.)
Volumes are bind-mounted into the container rootfs at start time.
Resource Limits
Resource limits map directly to cgroup v2 controllers:
| Volt Flag | cgroup v2 File | Meaning |
|---|---|---|
| `--memory 1G` | `memory.max` | Memory limit |
| `--cpu 200` | `cpu.max` | CPU quota (percentage × 100) |
Limits can be updated on a running container via volt container update, which writes directly to the cgroup filesystem.
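The flag-to-file translation can be sketched as follows. This assumes `--memory` accepts K/M/G suffixes, that `--cpu` is a percentage, and that `cpu.max` uses the standard "quota period" microsecond semantics; the cgroup path shown is derived from the unit naming described later in this document:

```python
# Illustrative sketch of mapping Volt resource flags onto cgroup v2 files.
CGROUP_ROOT = "/sys/fs/cgroup/system.slice"

UNITS = {"K": 1024, "M": 1024**2, "G": 1024**3}

def memory_max(flag: str) -> str:
    """'1G' -> the byte count written to memory.max."""
    if flag[-1] in UNITS:
        return str(int(flag[:-1]) * UNITS[flag[-1]])
    return flag  # already a plain byte count

def cpu_max(percent: int, period_us: int = 100_000) -> str:
    """200 (i.e. --cpu 200, two full cores) -> 'quota period' for cpu.max."""
    return f"{percent * period_us // 100} {period_us}"

def apply_limits(name: str, memory: str, cpu: int) -> None:
    """Write limits directly into a running container's cgroup."""
    base = f"{CGROUP_ROOT}/volt-container@{name}.service"
    with open(f"{base}/memory.max", "w") as f:
        f.write(memory_max(memory))
    with open(f"{base}/cpu.max", "w") as f:
        f.write(cpu_max(cpu))
```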
Voltvisor — VM Engine
How VMs Work
Voltvisor manages KVM/QEMU virtual machines. When you create a VM:
- Image resolution: The base image is located or pulled
- Disk creation: A qcow2 disk is created at `/var/lib/volt/vms/<name>/disk.qcow2`
- Kernel selection: The appropriate kernel is selected from `/var/lib/volt/kernels/` based on the `--kernel` profile
- Unit generation: A systemd unit is generated at `/var/lib/volt/units/volt-vm@<name>.service`
- Start: `systemctl start volt-vm@<name>.service` launches QEMU with appropriate flags
Kernel Profiles
Voltvisor supports multiple kernel profiles:
| Profile | Description |
|---|---|
| `server` | Default. Optimized for server workloads. |
| `desktop` | Includes graphics drivers, input support for VDI. |
| `rt` | Real-time kernel for latency-sensitive workloads. |
| `minimal` | Stripped-down kernel for maximum density. |
| `dev` | Debug-enabled kernel with extra tracing. |
VM Storage
/var/lib/volt/vms/<name>/
├── disk.qcow2 # Primary disk image
├── config.json # VM configuration
├── state.json # Runtime state
└── snapshots/ # VM snapshots
└── <snap-name>.qcow2
VM Networking
VMs connect to volt bridges via TAP interfaces. The TAP device is created when the VM starts and attached to the specified bridge. From the network's perspective, a VM on volt0 and a container on volt0 are peers — they communicate at L2.
VM Performance Tuning
Voltvisor supports hardware-level tuning:
- CPU pinning: Pin vCPUs to physical CPUs via `volt tune cpu pin`
- Hugepages: Use 2M or 1G hugepages via `volt tune memory hugepages`
- I/O scheduling: Set per-device I/O scheduler via `volt tune io scheduler`
- NUMA awareness: Pin to specific NUMA nodes
Stellarium — Content-Addressed Storage
How CAS Works
Stellarium is the storage backend shared by Voltainer and Voltvisor. Files are stored by their content hash (BLAKE3), enabling:
- Deduplication: Identical files across images are stored once
- Integrity verification: Every object can be verified against its hash
- Efficient transfer: Only missing objects need to be pulled
CAS Layout
/var/lib/volt/cas/
├── objects/ # Content-addressed objects (hash → data)
│ ├── ab/ # First two chars of hash for fanout
│ │ ├── ab1234...
│ │ └── ab5678...
│ └── cd/
│ └── cd9012...
├── refs/ # Named references to object trees
│ ├── images/
│ └── manifests/
└── tmp/ # Temporary staging area
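The addressing scheme above can be sketched in a few lines. BLAKE3 is not in the Python standard library, so this sketch substitutes `hashlib.blake2b` purely for illustration; the layout logic is what matters:

```python
import hashlib
from pathlib import Path

CAS_ROOT = Path("/var/lib/volt/cas")

def object_path(root: Path, digest: str) -> Path:
    """Fan out by the first two hex chars: ab1234... -> objects/ab/ab1234..."""
    return root / "objects" / digest[:2] / digest

def put(root: Path, data: bytes) -> str:
    """Store data under its content hash; identical content dedupes to one file."""
    digest = hashlib.blake2b(data).hexdigest()  # stand-in for BLAKE3
    path = object_path(root, digest)
    path.parent.mkdir(parents=True, exist_ok=True)
    if not path.exists():  # deduplication: write each object at most once
        path.write_bytes(data)
    return digest

def get(root: Path, digest: str) -> bytes:
    """Read an object back and verify it against its own address."""
    data = object_path(root, digest).read_bytes()
    assert hashlib.blake2b(data).hexdigest() == digest  # integrity check
    return data
```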
CAS Operations
# Check store health
volt cas status
# Verify all objects
volt cas verify
# Garbage collect unreferenced objects
volt cas gc --dry-run
volt cas gc
# Build CAS objects from a directory
volt cas build /path/to/rootfs
# Deduplication analysis
volt cas dedup
Image to CAS Flow
When an image is pulled:
- The rootfs is downloaded/built (e.g., via debootstrap)
- Each file is hashed and stored as a CAS object
- A manifest is created mapping paths to hashes
- The manifest is stored as a ref under `/var/lib/volt/cas/refs/`
When a container is created from that image, files are assembled from CAS objects into the container rootfs.
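The manifest step boils down to walking the rootfs and recording path → hash. This is a hedged sketch of that mapping (the real manifest format is not shown in this document; `blake2b` again stands in for BLAKE3):

```python
import hashlib
from pathlib import Path

def build_manifest(rootfs: Path) -> dict[str, str]:
    """Map each regular file's relative path to its content hash."""
    return {
        str(p.relative_to(rootfs)): hashlib.blake2b(p.read_bytes()).hexdigest()
        for p in sorted(rootfs.rglob("*"))
        if p.is_file()
    }
```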
Filesystem Layout
Configuration
/etc/volt/
├── config.yaml # Main configuration file
├── compose/ # System-level Constellation definitions
└── profiles/ # Custom tuning profiles
Persistent Data
/var/lib/volt/
├── containers/ # Container rootfs and metadata
├── vms/ # VM disks and state
├── kernels/ # VM kernels
├── images/ # Downloaded/built images
├── volumes/ # Named persistent volumes
├── cas/ # Stellarium CAS object store
├── networks/ # Network configuration
├── units/ # Generated systemd unit files
└── backups/ # System backups
Runtime State
/var/run/volt/
├── volt.sock # Daemon Unix socket
├── volt.pid # Daemon PID file
└── locks/ # Lock files for concurrent operations
Cache (Safe to Delete)
/var/cache/volt/
├── cas/ # CAS object cache
├── images/ # Image layer cache
└── dns/ # DNS resolution cache
Logs
/var/log/volt/
├── daemon.log # Daemon operational log
└── audit.log # Audit trail of all state-changing operations
systemd Integration
Unit Templates
Volt uses systemd template units to manage workloads:
| Unit | Description |
|---|---|
| `volt.service` | Main volt daemon |
| `volt.socket` | Socket activation for daemon |
| `volt-network.service` | Network bridge management |
| `volt-dns.service` | Internal DNS resolver |
| `volt-container@<name>.service` | Per-container unit |
| `volt-vm@<name>.service` | Per-VM unit |
| `volt-task-<name>.timer` | Per-task timer |
| `volt-task-<name>.service` | Per-task service |
Journal Integration
All workload logs flow through the systemd journal. volt logs queries the journal with appropriate filters:
- Container logs: `_SYSTEMD_UNIT=volt-container@<name>.service`
- VM logs: `_SYSTEMD_UNIT=volt-vm@<name>.service`
- Service logs: `_SYSTEMD_UNIT=<name>.service`
- Task logs: `_SYSTEMD_UNIT=volt-task-<name>.service`
cgroup v2
Volt relies on cgroup v2 for resource accounting and limits. The cgroup hierarchy:
/sys/fs/cgroup/
└── system.slice/
├── volt-container@web.service/ # Container cgroup
├── volt-vm@db-primary.service/ # VM cgroup
└── nginx.service/ # Service cgroup
This is where volt top reads CPU, memory, and I/O metrics from.
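Those reads can be sketched as direct file access under the hierarchy above. The file names (`memory.current`, `cpu.stat`, `usage_usec`) are the standard cgroup v2 interface; how `volt top` actually aggregates them is an assumption:

```python
from pathlib import Path

CGROUP_BASE = "/sys/fs/cgroup/system.slice"

def parse_cpu_stat(text: str) -> int:
    """cpu.stat is 'key value' lines; usage_usec is total CPU time in microseconds."""
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if key == "usage_usec":
            return int(value)
    return 0

def read_metrics(unit: str, base: str = CGROUP_BASE) -> dict:
    """Read current memory and cumulative CPU usage for one unit's cgroup."""
    cg = Path(base) / unit
    return {
        "memory_bytes": int((cg / "memory.current").read_text()),
        "cpu_usec": parse_cpu_stat((cg / "cpu.stat").read_text()),
    }
```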
ORAS Registry
Volt includes a built-in OCI Distribution Spec compliant container registry. The registry is backed entirely by Stellarium CAS — there is no separate storage engine.
CAS Mapping
The key insight: an OCI blob digest IS a CAS address. When a client pushes a blob with digest sha256:abc123..., that blob is stored directly as a CAS object at /var/lib/volt/cas/objects/ab/abc123.... No translation, no indirection.
OCI Client Volt Registry Stellarium CAS
───────── ───────────── ──────────────
PUT /v2/myapp/blobs/uploads/... ─→ Receive blob ─→ Store as CAS object
Content: <binary data> Compute sha256 digest objects/ab/abc123...
←──────────────────────────────────────────────────────────────
201 Created Index digest→repo
Location: sha256:abc123... in refs/registry/
Manifests are stored as CAS objects too, with an additional index mapping repository:tag → digest under /var/lib/volt/cas/refs/registry/.
Deduplication
Because all storage is CAS-backed, deduplication is automatic and cross-system:
- Two repositories sharing the same layer → stored once
- A registry blob matching a local container image layer → stored once
- A snapshot and a registry artifact sharing files → stored once
Architecture
┌────────────────────┐
│ OCI Client │ (oras, helm, podman, skopeo, etc.)
│ (push / pull) │
└────────┬───────────┘
│ HTTP/HTTPS (OCI Distribution Spec)
┌────────┴───────────┐
│ Registry Server │ volt registry serve --port 5000
│ (Go net/http) │
│ │
│ ┌──────────────┐ │
│ │ Tag Index │ │ refs/registry/<repo>/<tag> → digest
│ │ Manifest DB │ │ refs/registry/<repo>/manifests/<digest>
│ └──────────────┘ │
│ │
│ ┌──────────────┐ │
│ │ Auth Layer │ │ HMAC-SHA256 bearer tokens
│ │ │ │ Anonymous pull (configurable)
│ └──────────────┘ │
└────────┬───────────┘
│ Direct read/write
┌────────┴───────────┐
│ Stellarium CAS │ objects/ (content-addressed by sha256)
│ /var/lib/volt/cas │
└────────────────────┘
See Registry for usage documentation.
GitOps Pipeline
Volt's built-in GitOps system links Git repositories to workloads for automated deployment.
Pipeline Architecture
┌──────────────┐ ┌──────────────────────────┐ ┌──────────────┐
│ Git Provider │ │ Volt GitOps Server │ │ Workloads │
│ │ │ │ │ │
│ GitHub ─────┼──────┼→ POST /hooks/github │ │ │
│ GitLab ─────┼──────┼→ POST /hooks/gitlab │ │ │
│ Bitbucket ──┼──────┼→ POST /hooks/bitbucket │ │ │
│ │ │ │ │ │
│ SVN ────────┼──────┼→ Polling (configurable) │ │ │
└──────────────┘ │ │ │ │
│ ┌─────────────────────┐ │ │ │
│ │ Pipeline Manager │ │ │ │
│ │ │ │ │ │
│ │ 1. Validate webhook │ │ │ │
│ │ 2. Clone/pull repo │─┼──┐ │ │
│ │ 3. Detect Voltfile │ │ │ │ │
│ │ 4. Deploy workload │─┼──┼──→│ container │
│ │ 5. Log result │ │ │ │ vm │
│ └─────────────────────┘ │ │ │ service │
│ │ │ └──────────────┘
│ ┌─────────────────────┐ │ │
│ │ Deploy History │ │ │
│ │ (JSON log) │ │ │ ┌──────────────┐
│ └─────────────────────┘ │ └──→│ Git Cache │
└──────────────────────────┘ │ /var/lib/ │
│ volt/gitops/ │
└──────────────┘
Webhook Flow
- Git provider sends a push event to the webhook endpoint
- The GitOps server validates the HMAC signature against the pipeline's configured secret
- The event is matched to a pipeline by repository URL and branch
- The repository is cloned (or pulled if cached) to `/var/lib/volt/gitops/<pipeline>/`
- Volt scans the repo root for `volt-manifest.yaml`, `Voltfile`, or `volt-compose.yaml`
- The workload is created or updated according to the manifest
- The result is logged to the pipeline's deploy history
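The signature check in step 2 can be sketched as follows, assuming the GitHub-style convention of an HMAC-SHA256 hex digest prefixed with `sha256=` in the signature header (an assumption; the exact header Volt reads is not shown here):

```python
import hashlib
import hmac

def valid_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Recompute the HMAC over the raw request body and compare in constant time.

    Assumes a GitHub-style header value: 'sha256=<hex HMAC-SHA256(secret, body)>'.
    """
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking where the strings first differ
    return hmac.compare_digest(expected, signature_header)
```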
SVN Polling
For SVN repositories, a polling goroutine checks for revision changes at the configured interval (default: 60s). When a new revision is detected, the same clone→detect→deploy flow is triggered.
See GitOps for usage documentation.
Ingress Proxy
Volt includes a built-in reverse proxy for routing external HTTP/HTTPS traffic to workloads.
Architecture
┌─────────────────┐
│ Internet │
│ (HTTP/HTTPS) │
└────────┬────────┘
│
┌────────┴────────┐
│ Ingress Proxy │ volt ingress serve
│ │ Ports: 80 (HTTP), 443 (HTTPS)
│ ┌───────────┐ │
│ │ Router │ │ Hostname + path prefix matching
│ │ │ │ Route: app.example.com → web:8080
│ │ │ │ Route: api.example.com/v1 → api:3000
│ └─────┬─────┘ │
│ │ │
│ ┌─────┴─────┐ │
│ │ TLS │ │ Auto: ACME (Let's Encrypt)
│ │ Terminator│ │ Manual: user-provided certs
│ │ │ │ Passthrough: forward TLS to backend
│ └───────────┘ │
│ │
│ ┌───────────┐ │
│ │ Health │ │ Backend health checks
│ │ Checker │ │ Automatic failover
│ └───────────┘ │
└────────┬────────┘
│ Reverse proxy to backends
┌────────┴────────┐
│ Workloads │
│ web:8080 │
│ api:3000 │
│ static:80 │
└─────────────────┘
Route Resolution
Routes are matched in order of specificity:
- Exact hostname + longest path prefix
- Exact hostname (no path)
- Wildcard hostname + longest path prefix
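That ordering can be sketched as a two-part sort key of (host match rank, prefix length). This is a simplified illustration; the real router presumably also handles ports and path normalization:

```python
def match_route(routes: list[tuple[str, str, str]], host: str, path: str):
    """routes: (hostname_pattern, path_prefix, backend).

    Preference order: exact hostname + longest path prefix, then exact hostname,
    then wildcard hostname ('*.example.com') + longest path prefix.
    """
    def host_rank(pattern: str) -> int:
        if pattern == host:
            return 2  # exact hostname
        if pattern.startswith("*.") and host.endswith(pattern[1:]):
            return 1  # wildcard hostname
        return 0      # no match

    best, best_key = None, (0, -1)
    for pattern, prefix, backend in routes:
        rank = host_rank(pattern)
        if rank and path.startswith(prefix):
            key = (rank, len(prefix))  # exact host beats wildcard; then longest prefix
            if key > best_key:
                best, best_key = backend, key
    return best
```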
TLS Modes
| Mode | Description |
|---|---|
| `auto` | Automatic certificate provisioning via ACME (Let's Encrypt). Volt handles certificate issuance, renewal, and storage. |
| `manual` | User-provided certificate and key files. |
| `passthrough` | TLS is forwarded to the backend without termination. |
Hot Reload
Routes can be updated without proxy restart:
volt ingress reload
The reload is zero-downtime — existing connections are drained while new connections use the updated routes.
See Networking — Ingress Proxy for usage documentation.
License Tier Feature Matrix
| Feature | Free | Pro |
|---|---|---|
| Containers (Voltainer) | ✓ | ✓ |
| VMs (Voltvisor) | ✓ | ✓ |
| Services & Tasks | ✓ | ✓ |
| Networking & Firewall | ✓ | ✓ |
| Stellarium CAS | ✓ | ✓ |
| Compose / Constellations | ✓ | ✓ |
| Snapshots | ✓ | ✓ |
| Bundles | ✓ | ✓ |
| ORAS Registry (pull) | ✓ | ✓ |
| Ingress Proxy | ✓ | ✓ |
| GitOps Pipelines | ✓ | ✓ |
| ORAS Registry (push) | — | ✓ |
| CDN Integration | — | ✓ |
| Deploy (rolling/canary) | — | ✓ |
| RBAC | — | ✓ |
| Cluster Multi-Node | — | ✓ |
| Audit Log Signing | — | ✓ |
| Priority Support | — | ✓ |
Networking Architecture
Bridge Topology
┌─────────────────────────────┐
│ Host Network │
│ (eth0, wlan0, etc.) │
└─────────────┬───────────────┘
│ NAT / routing
┌─────────────┴───────────────┐
│ volt0 (bridge) │
│ 10.0.0.1/24 │
├──────┬──────┬──────┬─────────┤
│ veth │ veth │ tap │ veth │
│ ↓ │ ↓ │ ↓ │ ↓ │
│ web │ api │ db │ cache │
│(con) │(con) │(vm) │(con) │
└──────┴──────┴──────┴─────────┘
- Containers connect via veth pairs — one end in the container namespace, one on the bridge
- VMs connect via TAP interfaces — the TAP device is on the bridge, passed to QEMU
- Both are L2 peers on the same bridge, so they communicate directly
DNS Resolution
Volt runs an internal DNS resolver (volt-dns.service) that provides name resolution for all workloads. When container api needs to reach VM db, it resolves db to its bridge IP via the internal DNS.
Firewall
Firewall rules are implemented via nftables. Volt manages a dedicated nftables table (volt) with chains for:
- Input filtering (host-bound traffic)
- Forward filtering (inter-workload traffic)
- NAT (port forwarding, SNAT for outbound)
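A minimal sketch of what such a ruleset could look like, in nftables syntax. The chain contents here are illustrative assumptions; only the dedicated `volt` table and the three chain roles come from this document:

```
table inet volt {
    chain input {
        type filter hook input priority 0; policy accept;
        # example: host-bound traffic filtering
        tcp dport 5000 accept comment "registry"
    }
    chain forward {
        type filter hook forward priority 0; policy accept;
        # example: allow inter-workload traffic on the bridge
        iifname "volt0" oifname "volt0" accept
    }
    chain postrouting {
        type nat hook postrouting priority srcnat;
        # example: SNAT for outbound workload traffic
        ip saddr 10.0.0.0/24 oifname != "volt0" masquerade
    }
}
```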
See networking.md for full details.
Security Model
Privilege Levels
| Operation | Required | Method |
|---|---|---|
| Container lifecycle | root or `volt` group | polkit |
| VM lifecycle | root or `volt` + `kvm` groups | polkit |
| Service creation | root | sudo |
| Network/firewall | root | polkit |
| `volt ps`, `volt top`, `volt logs` | any user | read-only |
| `volt config show` | any user | read-only |
Audit Trail
All state-changing operations are logged to /var/log/volt/audit.log in JSON format:
{
"timestamp": "2025-07-12T14:23:01.123Z",
"user": "karl",
"uid": 1000,
"action": "container.create",
"resource": "web",
"result": "success"
}
Exit Codes
| Code | Name | Description |
|---|---|---|
| 0 | `OK` | Success |
| 1 | `ERR_GENERAL` | Unspecified error |
| 2 | `ERR_USAGE` | Invalid arguments |
| 3 | `ERR_NOT_FOUND` | Resource not found |
| 4 | `ERR_ALREADY_EXISTS` | Resource already exists |
| 5 | `ERR_PERMISSION` | Permission denied |
| 6 | `ERR_DAEMON` | Daemon unreachable |
| 7 | `ERR_TIMEOUT` | Operation timed out |
| 8 | `ERR_NETWORK` | Network error |
| 9 | `ERR_CONFLICT` | Conflicting state |
| 10 | `ERR_DEPENDENCY` | Missing dependency |
| 11 | `ERR_RESOURCE` | Insufficient resources |
| 12 | `ERR_INVALID_CONFIG` | Invalid configuration |
| 13 | `ERR_INTERRUPTED` | Interrupted by signal |
Environment Variables
| Variable | Description | Default |
|---|---|---|
| `VOLT_CONFIG` | Config file path | `/etc/volt/config.yaml` |
| `VOLT_COLOR` | Color mode: `auto`, `always`, `never` | `auto` |
| `VOLT_OUTPUT` | Default output format | `table` |
| `VOLT_DEBUG` | Enable debug output | `false` |
| `VOLT_HOST` | Daemon socket path | `/var/run/volt/volt.sock` |
| `VOLT_CONTEXT` | Named context (multi-cluster) | `default` |
| `VOLT_COMPOSE_FILE` | Default Constellation file path | `volt-compose.yaml` |
| `EDITOR` | Editor for `volt service edit`, `volt config edit` | `vi` |
Signal Handling
| Signal | Behavior |
|---|---|
| `SIGTERM` | Graceful shutdown — drain, save state, stop workloads in order |
| `SIGINT` | Same as `SIGTERM` |
| `SIGHUP` | Reload configuration |
| `SIGUSR1` | Dump goroutine stacks to log |
| `SIGUSR2` | Trigger log rotation |