Volt
Containers on Linux without the Docker tax. One binary, no daemon, no runtime, no bullshit.
Volt uses systemd-nspawn — the container runtime that's been sitting in your kernel since 2013 — and wraps it with sane defaults, Landlock security, and a CLI that doesn't require a CS degree to operate.
curl -fsSL https://get.armoredgate.com/volt | sh
volt run -it ubuntu:24.04 bash
That's it. No service to start. No socket to configure. No 400MB daemon eating RAM in the background.
Why
Docker solved a real problem in 2013. Then it accumulated complexity for a decade. Today, running a container requires a privileged daemon, a storage driver, a networking plugin, and a prayer. Podman dropped the daemon but kept everything else.
We wanted something different:
- No daemon. Volt talks to systemd directly. Your init system already manages services — why add another one?
- Landlock by default. Every container gets filesystem access control from the first syscall. Not opt-in. Not "add this flag." Default.
- ~50ms cold starts. Because systemd-nspawn doesn't need to set up overlay filesystems or negotiate with a storage driver.
- One binary. volt is a statically linked Go binary. It depends on systemd and a Linux kernel. That's the list.
What works today
# The basics
volt run -d --name web nginx:alpine
volt ps
volt logs web
volt exec web sh
volt stop web
volt rm web
# Images
volt pull node:20-alpine
volt images
volt rmi old-image:v1
# Port forwarding
volt run -d --name api --port 3000:3000 my-api:latest
# Compose (multi-container stacks)
volt compose up # reads volt-compose.yaml
volt compose status
volt compose down
# Networking
volt network create backend --subnet 10.50.0.0/24
volt run -d --name db --network backend postgres:16
# Storage
volt storage status
volt storage gc # clean up unused layers
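The --port flag above takes a HOST:CONTAINER pair. As an illustration of the shape of that flag (not Volt's actual parser; PortMapping and parsePort are hypothetical names), a minimal Go sketch:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// PortMapping is a hypothetical representation of a --port HOST:CONTAINER pair.
type PortMapping struct {
	Host, Container uint16
}

// parsePort parses a "3000:3000"-style flag value.
// A bare "3000" maps the same port on both sides.
func parsePort(s string) (PortMapping, error) {
	host, container, found := strings.Cut(s, ":")
	if !found {
		container = host
	}
	h, err := strconv.ParseUint(host, 10, 16)
	if err != nil {
		return PortMapping{}, fmt.Errorf("bad host port %q: %w", host, err)
	}
	c, err := strconv.ParseUint(container, 10, 16)
	if err != nil {
		return PortMapping{}, fmt.Errorf("bad container port %q: %w", container, err)
	}
	return PortMapping{Host: uint16(h), Container: uint16(c)}, nil
}

func main() {
	m, _ := parsePort("3000:8080")
	fmt.Println(m.Host, m.Container) // 3000 8080
}
```

Ports are validated as 16-bit unsigned integers, so out-of-range values fail at parse time instead of at container start.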
What's coming
- VoltVisor — VM management (KVM + QEMU) under the same CLI. Run Windows guests, cross-arch emulation, and microVMs alongside your containers. Early builds working internally.
- Stellarium CAS — Content-addressed storage backend. Deduplicates identical layers across images. Working prototype, not yet integrated into the main CLI.
- GitOps deploys — volt deploy --from git@... for declarative infrastructure.
These aren't vaporware — they're in active development with internal builds. But they're not ready for other people to use yet, so they're not in the install.
Compose format
name: my-stack
services:
web:
image: nginx:alpine
ports: ["8080:80"]
api:
image: node:20
memory: 512M
db:
image: postgres:16
memory: 1G
volumes: ["pgdata:/var/lib/postgresql/data"]
If you've used Docker Compose, this will look familiar. That's intentional — migration shouldn't require learning a new config language.
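The memory values in the compose example ("512M", "1G") follow the familiar size-suffix convention. A minimal sketch of how such a value could be turned into bytes, assuming binary (1024-based) multipliers; parseMemory is a hypothetical helper, not Volt's actual implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMemory converts compose-style sizes like "512M" or "1G" into bytes.
// Hypothetical sketch: assumes K/M/G suffixes with binary multipliers,
// and treats a bare number as raw bytes.
func parseMemory(s string) (int64, error) {
	mult := int64(1)
	switch {
	case strings.HasSuffix(s, "K"):
		mult, s = 1<<10, strings.TrimSuffix(s, "K")
	case strings.HasSuffix(s, "M"):
		mult, s = 1<<20, strings.TrimSuffix(s, "M")
	case strings.HasSuffix(s, "G"):
		mult, s = 1<<30, strings.TrimSuffix(s, "G")
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("bad memory size %q: %w", s, err)
	}
	return n * mult, nil
}

func main() {
	b, _ := parseMemory("512M")
	fmt.Println(b) // 536870912
}
```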
Security model
Volt's security isn't a feature you enable. It's structural.
Every container runs with:
- Landlock LSM — path-based filesystem access control
- seccomp-bpf — syscall allowlist (78 syscalls, everything else denied)
- Cgroups v2 — memory, CPU, and I/O limits
- Linux namespaces — PID, net, mount, user, UTS, IPC isolation
No --privileged. No --cap-add SYS_ADMIN. If you need those, you probably need a VM, and VoltVisor will handle that when it's ready.
Migrating from Docker
# If you have Docker Compose files, they mostly work:
volt compose -f docker-compose.yaml up
# Pull the same images:
volt pull docker.io/library/nginx:alpine
# Same muscle memory:
volt ps # like docker ps
volt logs -f container-name # like docker logs -f
volt exec -it name bash # like docker exec -it
The CLI is intentionally similar to Docker's where it makes sense. We're not being different for the sake of it.
Building from source
git clone https://git.armoredgate.com/ArmoredGate/volt.git
cd volt
make build
sudo make install
Requires Go 1.22+ and a Linux system with systemd.
Project status
Volt is in active development. The container runtime (Voltainer) is stable and used internally in production. The VM layer (VoltVisor) and content-addressed storage (Stellarium) are in development.
If you find bugs, open an issue on git.armoredgate.com/ArmoredGate/volt or reach out at support.armoredgate.com.
License
Source-available under the Armored Gate Public Source License (AGPSL) v5.0.
Copyright 2026 Armored Gate LLC.