Volt CLI: source-available under AGPSL v5.0

Complete infrastructure platform CLI: container runtime (systemd-nspawn), VoltVisor VMs (Neutron Stardust / QEMU), Stellarium CAS (content-addressed storage), ORAS Registry, GitOps integration, Landlock LSM security, Compose orchestration, mesh networking.

Copyright (c) Armored Gates LLC. All rights reserved. Licensed under AGPSL v5.0.

docs/architecture.md (new file, 601 lines)

# Volt Architecture

Volt is a unified platform management CLI built on three engines:

- **Voltainer** — Container engine (`systemd-nspawn`)
- **Voltvisor** — Virtual machine engine (KVM/QEMU)
- **Stellarium** — Content-addressed storage (CAS)

This document describes how they work internally and how they integrate with the host system.

## Design Philosophy

### systemd-Native

Volt works **with** systemd, not against it. Every workload is a systemd unit:

- Containers are `systemd-nspawn` machines managed via `volt-container@<name>.service`
- VMs are QEMU processes managed via `volt-vm@<name>.service`
- Tasks are systemd `timer` + `service` unit pairs
- All logging flows through the systemd journal

This gives Volt cgroup integration, dependency management, process tracking, and socket activation for free.

### One Binary

The `volt` binary at `/usr/local/bin/volt` handles everything. It communicates with the volt daemon (`voltd`) over a Unix socket at `/var/run/volt/volt.sock`. For read-only operations such as `volt ps`, `volt top`, and `volt service list`, the CLI can query systemd directly without the daemon.

### Human-Readable Everything

Every workload has a human-assigned name. `volt ps` shows names, not hex IDs. Status columns use natural language (`running`, `stopped`, `failed`), not codes.

## Voltainer — Container Engine

### How Containers Work

Voltainer containers are `systemd-nspawn` machines. When you create a container:

1. **Image resolution**: Volt locates the rootfs directory under `/var/lib/volt/images/`
2. **Rootfs copy**: The image rootfs is copied (or overlaid) to `/var/lib/volt/containers/<name>/rootfs/`
3. **Unit generation**: A systemd unit file is generated at `/var/lib/volt/units/volt-container@<name>.service`
4. **Network setup**: A veth pair is created, with one end in the container namespace and the other attached to the specified bridge (default: `volt0`)
5. **Start**: `systemctl start volt-container@<name>.service` launches `systemd-nspawn` with the appropriate flags
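The generated unit in step 3 is a plain template instance. A minimal sketch of what it might look like — the unit name and rootfs path come from this document, but the specific `systemd-nspawn` flags are illustrative assumptions, not the actual generated output:

```ini
# /var/lib/volt/units/volt-container@web.service (illustrative sketch)
[Unit]
Description=Volt container: %i
After=network.target volt-network.service

[Service]
# Boot the container from its Volt-managed rootfs; flags are assumptions.
ExecStart=/usr/bin/systemd-nspawn --machine=%i \
    --directory=/var/lib/volt/containers/%i/rootfs \
    --network-bridge=volt0 --boot
KillMode=mixed
Restart=on-failure
```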
### Container Lifecycle

```
create → stopped → start → running → stop → stopped → delete
                             ↑                  │
                             └──── restart ─────┘
```

State transitions are all mediated through systemd. `volt container stop` is `systemctl stop`; `volt container start` is `systemctl start`. This means systemd handles process cleanup, cgroup teardown, and signal delivery.

### Container Isolation

Each container gets:

- **Mount namespace**: Own rootfs, bind mounts for volumes
- **PID namespace**: PID 1 is the container init
- **Network namespace**: Own network stack, connected via veth to a bridge
- **UTS namespace**: Own hostname
- **IPC namespace**: Isolated IPC
- **cgroup v2**: Resource limits (CPU, memory, I/O) enforced via cgroup controllers

Containers share the host kernel. They are not VMs — there is no hypervisor overhead.

### Container Storage

```
/var/lib/volt/containers/<name>/
├── rootfs/          # Container filesystem
├── config.json      # Container configuration (image, resources, network, etc.)
└── state.json       # Runtime state (PID, IP, start time, etc.)
```

Volumes are bind-mounted into the container rootfs at start time.

### Resource Limits

Resource limits map directly to cgroup v2 controllers:

| Volt Flag | cgroup v2 Control File | Effect |
|-----------|------------------------|--------|
| `--memory 1G` | `memory.max` | Memory limit |
| `--cpu 200` | `cpu.max` | CPU quota (percentage × 100) |

Limits can be updated on a running container via `volt container update`, which writes directly to the cgroup filesystem.
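The `--cpu` mapping can be sketched concretely. The `cpu.max` file holds `<quota> <period>` in microseconds; the 100 ms period below is an assumed default, and the helper name `cpuMax` is illustrative, not from the Volt source:

```go
package main

import "fmt"

const periodUsec = 100000 // 100 ms scheduling period (assumed default)

// cpuMax converts a --cpu percentage (e.g. 200 = two full cores)
// into the "<quota> <period>" string written to cpu.max.
func cpuMax(percent int) string {
	quota := percent * periodUsec / 100
	return fmt.Sprintf("%d %d", quota, periodUsec)
}

func main() {
	fmt.Println(cpuMax(200)) // 200000 100000
}
```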
## Voltvisor — VM Engine

### How VMs Work

Voltvisor manages KVM/QEMU virtual machines. When you create a VM:

1. **Image resolution**: The base image is located or pulled
2. **Disk creation**: A qcow2 disk is created at `/var/lib/volt/vms/<name>/disk.qcow2`
3. **Kernel selection**: The appropriate kernel is selected from `/var/lib/volt/kernels/` based on the `--kernel` profile
4. **Unit generation**: A systemd unit is generated at `/var/lib/volt/units/volt-vm@<name>.service`
5. **Start**: `systemctl start volt-vm@<name>.service` launches QEMU with the appropriate flags

### Kernel Profiles

Voltvisor supports multiple kernel profiles:

| Profile | Description |
|---------|-------------|
| `server` | Default. Optimized for server workloads. |
| `desktop` | Includes graphics drivers and input support for VDI. |
| `rt` | Real-time kernel for latency-sensitive workloads. |
| `minimal` | Stripped-down kernel for maximum density. |
| `dev` | Debug-enabled kernel with extra tracing. |

### VM Storage

```
/var/lib/volt/vms/<name>/
├── disk.qcow2       # Primary disk image
├── config.json      # VM configuration
├── state.json       # Runtime state
└── snapshots/       # VM snapshots
    └── <snap-name>.qcow2
```

### VM Networking

VMs connect to volt bridges via TAP interfaces. The TAP device is created when the VM starts and attached to the specified bridge. From the network's perspective, a VM on `volt0` and a container on `volt0` are peers — they communicate at L2.

### VM Performance Tuning

Voltvisor supports hardware-level tuning:

- **CPU pinning**: Pin vCPUs to physical CPUs via `volt tune cpu pin`
- **Hugepages**: Use 2M or 1G hugepages via `volt tune memory hugepages`
- **I/O scheduling**: Set the per-device I/O scheduler via `volt tune io scheduler`
- **NUMA awareness**: Pin to specific NUMA nodes

## Stellarium — Content-Addressed Storage

### How CAS Works

Stellarium is the storage backend shared by Voltainer and Voltvisor. Files are stored by their content hash (BLAKE3), enabling:

- **Deduplication**: Identical files across images are stored once
- **Integrity verification**: Every object can be verified against its hash
- **Efficient transfer**: Only missing objects need to be pulled

### CAS Layout

```
/var/lib/volt/cas/
├── objects/           # Content-addressed objects (hash → data)
│   ├── ab/            # First two chars of hash for fanout
│   │   ├── ab1234...
│   │   └── ab5678...
│   └── cd/
│       └── cd9012...
├── refs/              # Named references to object trees
│   ├── images/
│   └── manifests/
└── tmp/               # Temporary staging area
```
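The two-character fanout maps a hash directly to an on-disk path. A minimal sketch of that scheme (the helper name `objectPath` is illustrative, not from the Volt source):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// objectPath returns the on-disk location of a CAS object, using the
// first two hex characters of the hash as a fanout directory so that
// objects/ never becomes one enormous flat directory.
func objectPath(casRoot, hash string) string {
	return filepath.Join(casRoot, "objects", hash[:2], hash)
}

func main() {
	fmt.Println(objectPath("/var/lib/volt/cas", "ab1234deadbeef"))
	// /var/lib/volt/cas/objects/ab/ab1234deadbeef
}
```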
### CAS Operations

```bash
# Check store health
volt cas status

# Verify all objects
volt cas verify

# Garbage-collect unreferenced objects
volt cas gc --dry-run
volt cas gc

# Build CAS objects from a directory
volt cas build /path/to/rootfs

# Deduplication analysis
volt cas dedup
```

### Image to CAS Flow

When an image is pulled:

1. The rootfs is downloaded or built (e.g., via debootstrap)
2. Each file is hashed and stored as a CAS object
3. A manifest is created mapping paths to hashes
4. The manifest is stored as a ref under `/var/lib/volt/cas/refs/`

When a container is created from that image, files are assembled from CAS objects into the container rootfs.
## Filesystem Layout

### Configuration

```
/etc/volt/
├── config.yaml      # Main configuration file
├── compose/         # System-level Constellation definitions
└── profiles/        # Custom tuning profiles
```

### Persistent Data

```
/var/lib/volt/
├── containers/      # Container rootfs and metadata
├── vms/             # VM disks and state
├── kernels/         # VM kernels
├── images/          # Downloaded/built images
├── volumes/         # Named persistent volumes
├── cas/             # Stellarium CAS object store
├── networks/        # Network configuration
├── units/           # Generated systemd unit files
└── backups/         # System backups
```

### Runtime State

```
/var/run/volt/
├── volt.sock        # Daemon Unix socket
├── volt.pid         # Daemon PID file
└── locks/           # Lock files for concurrent operations
```

### Cache (Safe to Delete)

```
/var/cache/volt/
├── cas/             # CAS object cache
├── images/          # Image layer cache
└── dns/             # DNS resolution cache
```

### Logs

```
/var/log/volt/
├── daemon.log       # Daemon operational log
└── audit.log        # Audit trail of all state-changing operations
```
## systemd Integration
|
||||
|
||||
### Unit Templates
|
||||
|
||||
Volt uses systemd template units to manage workloads:
|
||||
|
||||
| Unit | Description |
|
||||
|------|-------------|
|
||||
| `volt.service` | Main volt daemon |
|
||||
| `volt.socket` | Socket activation for daemon |
|
||||
| `volt-network.service` | Network bridge management |
|
||||
| `volt-dns.service` | Internal DNS resolver |
|
||||
| `volt-container@<name>.service` | Per-container unit |
|
||||
| `volt-vm@<name>.service` | Per-VM unit |
|
||||
| `volt-task-<name>.timer` | Per-task timer |
|
||||
| `volt-task-<name>.service` | Per-task service |
|
||||
|
||||
### Journal Integration
|
||||
|
||||
All workload logs flow through the systemd journal. `volt logs` queries the journal with appropriate filters:
|
||||
|
||||
- Container logs: `_SYSTEMD_UNIT=volt-container@<name>.service`
|
||||
- VM logs: `_SYSTEMD_UNIT=volt-vm@<name>.service`
|
||||
- Service logs: `_SYSTEMD_UNIT=<name>.service`
|
||||
- Task logs: `_SYSTEMD_UNIT=volt-task-<name>.service`
|
||||
|
||||
### cgroup v2
|
||||
|
||||
Volt relies on cgroup v2 for resource accounting and limits. The cgroup hierarchy:
|
||||
|
||||
```
|
||||
/sys/fs/cgroup/
|
||||
└── system.slice/
|
||||
├── volt-container@web.service/ # Container cgroup
|
||||
├── volt-vm@db-primary.service/ # VM cgroup
|
||||
└── nginx.service/ # Service cgroup
|
||||
```
|
||||
|
||||
This is where `volt top` reads CPU, memory, and I/O metrics from.
|
||||
|
||||
## ORAS Registry
|
||||
|
||||
Volt includes a built-in OCI Distribution Spec compliant container registry. The registry is backed entirely by Stellarium CAS — there is no separate storage engine.
|
||||
|
||||
### CAS Mapping
|
||||
|
||||
The key insight: **an OCI blob digest IS a CAS address**. When a client pushes a blob with digest `sha256:abc123...`, that blob is stored directly as a CAS object at `/var/lib/volt/cas/objects/ab/abc123...`. No translation, no indirection.
|
||||
|
||||
```
|
||||
OCI Client Volt Registry Stellarium CAS
|
||||
───────── ───────────── ──────────────
|
||||
PUT /v2/myapp/blobs/uploads/... ─→ Receive blob ─→ Store as CAS object
|
||||
Content: <binary data> Compute sha256 digest objects/ab/abc123...
|
||||
←──────────────────────────────────────────────────────────────
|
||||
201 Created Index digest→repo
|
||||
Location: sha256:abc123... in refs/registry/
|
||||
```
|
||||
|
||||
Manifests are stored as CAS objects too, with an additional index mapping `repository:tag → digest` under `/var/lib/volt/cas/refs/registry/`.
|
||||
|
||||
### Deduplication
|
||||
|
||||
Because all storage is CAS-backed, deduplication is automatic and cross-system:
|
||||
|
||||
- Two repositories sharing the same layer → stored once
|
||||
- A registry blob matching a local container image layer → stored once
|
||||
- A snapshot and a registry artifact sharing files → stored once
|
||||
|
||||
### Architecture
|
||||
|
||||
```
|
||||
┌────────────────────┐
|
||||
│ OCI Client │ (oras, helm, podman, skopeo, etc.)
|
||||
│ (push / pull) │
|
||||
└────────┬───────────┘
|
||||
│ HTTP/HTTPS (OCI Distribution Spec)
|
||||
┌────────┴───────────┐
|
||||
│ Registry Server │ volt registry serve --port 5000
|
||||
│ (Go net/http) │
|
||||
│ │
|
||||
│ ┌──────────────┐ │
|
||||
│ │ Tag Index │ │ refs/registry/<repo>/<tag> → digest
|
||||
│ │ Manifest DB │ │ refs/registry/<repo>/manifests/<digest>
|
||||
│ └──────────────┘ │
|
||||
│ │
|
||||
│ ┌──────────────┐ │
|
||||
│ │ Auth Layer │ │ HMAC-SHA256 bearer tokens
|
||||
│ │ │ │ Anonymous pull (configurable)
|
||||
│ └──────────────┘ │
|
||||
└────────┬───────────┘
|
||||
│ Direct read/write
|
||||
┌────────┴───────────┐
|
||||
│ Stellarium CAS │ objects/ (content-addressed by sha256)
|
||||
│ /var/lib/volt/cas │
|
||||
└────────────────────┘
|
||||
```
|
||||
|
||||
See [Registry](registry.md) for usage documentation.
|
||||
|
||||
---
|
||||
|
||||
## GitOps Pipeline
|
||||
|
||||
Volt's built-in GitOps system links Git repositories to workloads for automated deployment.
|
||||
|
||||
### Pipeline Architecture
|
||||
|
||||
```
|
||||
┌──────────────┐ ┌──────────────────────────┐ ┌──────────────┐
|
||||
│ Git Provider │ │ Volt GitOps Server │ │ Workloads │
|
||||
│ │ │ │ │ │
|
||||
│ GitHub ─────┼──────┼→ POST /hooks/github │ │ │
|
||||
│ GitLab ─────┼──────┼→ POST /hooks/gitlab │ │ │
|
||||
│ Bitbucket ──┼──────┼→ POST /hooks/bitbucket │ │ │
|
||||
│ │ │ │ │ │
|
||||
│ SVN ────────┼──────┼→ Polling (configurable) │ │ │
|
||||
└──────────────┘ │ │ │ │
|
||||
│ ┌─────────────────────┐ │ │ │
|
||||
│ │ Pipeline Manager │ │ │ │
|
||||
│ │ │ │ │ │
|
||||
│ │ 1. Validate webhook │ │ │ │
|
||||
│ │ 2. Clone/pull repo │─┼──┐ │ │
|
||||
│ │ 3. Detect Voltfile │ │ │ │ │
|
||||
│ │ 4. Deploy workload │─┼──┼──→│ container │
|
||||
│ │ 5. Log result │ │ │ │ vm │
|
||||
│ └─────────────────────┘ │ │ │ service │
|
||||
│ │ │ └──────────────┘
|
||||
│ ┌─────────────────────┐ │ │
|
||||
│ │ Deploy History │ │ │
|
||||
│ │ (JSON log) │ │ │ ┌──────────────┐
|
||||
│ └─────────────────────┘ │ └──→│ Git Cache │
|
||||
└──────────────────────────┘ │ /var/lib/ │
|
||||
│ volt/gitops/ │
|
||||
└──────────────┘
|
||||
```
|
||||
|
||||
### Webhook Flow
|
||||
|
||||
1. Git provider sends a push event to the webhook endpoint
|
||||
2. The GitOps server validates the HMAC signature against the pipeline's configured secret
|
||||
3. The event is matched to a pipeline by repository URL and branch
|
||||
4. The repository is cloned (or pulled if cached) to `/var/lib/volt/gitops/<pipeline>/`
|
||||
5. Volt scans the repo root for `volt-manifest.yaml`, `Voltfile`, or `volt-compose.yaml`
|
||||
6. The workload is created or updated according to the manifest
|
||||
7. The result is logged to the pipeline's deploy history
|
||||
### SVN Polling

For SVN repositories, a polling goroutine checks for revision changes at the configured interval (default: 60s). When a new revision is detected, the same clone → detect → deploy flow is triggered.
See [GitOps](gitops.md) for usage documentation.

---

## Ingress Proxy

Volt includes a built-in reverse proxy for routing external HTTP/HTTPS traffic to workloads.

### Architecture

```
┌─────────────────┐
│    Internet     │
│  (HTTP/HTTPS)   │
└────────┬────────┘
         │
┌────────┴────────┐
│  Ingress Proxy  │  volt ingress serve
│                 │  Ports: 80 (HTTP), 443 (HTTPS)
│  ┌───────────┐  │
│  │  Router   │  │  Hostname + path prefix matching
│  │           │  │  Route: app.example.com → web:8080
│  │           │  │  Route: api.example.com/v1 → api:3000
│  └─────┬─────┘  │
│        │        │
│  ┌─────┴─────┐  │
│  │   TLS     │  │  Auto: ACME (Let's Encrypt)
│  │ Terminator│  │  Manual: user-provided certs
│  │           │  │  Passthrough: forward TLS to backend
│  └───────────┘  │
│                 │
│  ┌───────────┐  │
│  │  Health   │  │  Backend health checks
│  │  Checker  │  │  Automatic failover
│  └───────────┘  │
└────────┬────────┘
         │ Reverse proxy to backends
┌────────┴────────┐
│    Workloads    │
│    web:8080     │
│    api:3000     │
│    static:80    │
└─────────────────┘
```

### Route Resolution

Routes are matched in order of specificity:

1. Exact hostname + longest path prefix
2. Exact hostname (no path)
3. Wildcard hostname + longest path prefix
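That ordering can be sketched as: an exact host always beats a wildcard, and within the winning host class the longest matching path prefix wins. The `route` struct and `match` helper below are illustrative; the real router would also handle ports and wildcard depth:

```go
package main

import (
	"fmt"
	"strings"
)

type route struct {
	host, pathPrefix, backend string // host may be a wildcard like "*.example.com"
}

// match picks the most specific route: exact hostname beats wildcard,
// and within a hostname the longest matching path prefix wins.
func match(routes []route, host, path string) (string, bool) {
	best, bestLen, bestExact := "", -1, false
	for _, r := range routes {
		exact := r.host == host
		wild := strings.HasPrefix(r.host, "*.") &&
			strings.HasSuffix(host, strings.TrimPrefix(r.host, "*"))
		if (!exact && !wild) || !strings.HasPrefix(path, r.pathPrefix) {
			continue
		}
		l := len(r.pathPrefix)
		if (exact && !bestExact) || (exact == bestExact && l > bestLen) {
			best, bestLen, bestExact = r.backend, l, exact
		}
	}
	return best, bestLen >= 0
}

func main() {
	routes := []route{
		{"api.example.com", "/v1", "api:3000"},
		{"api.example.com", "/", "web:8080"},
		{"*.example.com", "/", "static:80"},
	}
	b, _ := match(routes, "api.example.com", "/v1/users")
	fmt.Println(b) // api:3000
}
```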
### TLS Modes

| Mode | Description |
|------|-------------|
| `auto` | Automatic certificate provisioning via ACME (Let's Encrypt). Volt handles certificate issuance, renewal, and storage. |
| `manual` | User-provided certificate and key files. |
| `passthrough` | TLS is forwarded to the backend without termination. |

### Hot Reload

Routes can be updated without a proxy restart:

```bash
volt ingress reload
```

The reload is zero-downtime — existing connections are drained while new connections use the updated routes.

See [Networking — Ingress Proxy](networking.md#ingress-proxy) for usage documentation.

---

## License Tier Feature Matrix

| Feature | Free | Pro |
|---------|------|-----|
| Containers (Voltainer) | ✓ | ✓ |
| VMs (Voltvisor) | ✓ | ✓ |
| Services & Tasks | ✓ | ✓ |
| Networking & Firewall | ✓ | ✓ |
| Stellarium CAS | ✓ | ✓ |
| Compose / Constellations | ✓ | ✓ |
| Snapshots | ✓ | ✓ |
| Bundles | ✓ | ✓ |
| ORAS Registry (pull) | ✓ | ✓ |
| Ingress Proxy | ✓ | ✓ |
| GitOps Pipelines | ✓ | ✓ |
| ORAS Registry (push) | — | ✓ |
| CDN Integration | — | ✓ |
| Deploy (rolling/canary) | — | ✓ |
| RBAC | — | ✓ |
| Cluster Multi-Node | — | ✓ |
| Audit Log Signing | — | ✓ |
| Priority Support | — | ✓ |

---
## Networking Architecture
|
||||
|
||||
### Bridge Topology
|
||||
|
||||
```
|
||||
┌─────────────────────────────┐
|
||||
│ Host Network │
|
||||
│ (eth0, wlan0, etc.) │
|
||||
└─────────────┬───────────────┘
|
||||
│ NAT / routing
|
||||
┌─────────────┴───────────────┐
|
||||
│ volt0 (bridge) │
|
||||
│ 10.0.0.1/24 │
|
||||
├──────┬──────┬──────┬─────────┤
|
||||
│ veth │ veth │ tap │ veth │
|
||||
│ ↓ │ ↓ │ ↓ │ ↓ │
|
||||
│ web │ api │ db │ cache │
|
||||
│(con) │(con) │(vm) │(con) │
|
||||
└──────┴──────┴──────┴─────────┘
|
||||
```
|
||||
|
||||
- Containers connect via **veth pairs** — one end in the container namespace, one on the bridge
|
||||
- VMs connect via **TAP interfaces** — the TAP device is on the bridge, passed to QEMU
|
||||
- Both are L2 peers on the same bridge, so they communicate directly
|
||||
|
||||
### DNS Resolution
|
||||
|
||||
Volt runs an internal DNS resolver (`volt-dns.service`) that provides name resolution for all workloads. When container `api` needs to reach VM `db`, it resolves `db` to its bridge IP via the internal DNS.
|
||||
|
||||
### Firewall
|
||||
|
||||
Firewall rules are implemented via `nftables`. Volt manages a dedicated nftables table (`volt`) with chains for:
|
||||
|
||||
- Input filtering (host-bound traffic)
|
||||
- Forward filtering (inter-workload traffic)
|
||||
- NAT (port forwarding, SNAT for outbound)
|
||||
|
||||
See [networking.md](networking.md) for full details.
|
||||
|
||||
## Security Model
|
||||
|
||||
### Privilege Levels
|
||||
|
||||
| Operation | Required | Method |
|
||||
|-----------|----------|--------|
|
||||
| Container lifecycle | root or `volt` group | polkit |
|
||||
| VM lifecycle | root or `volt` + `kvm` groups | polkit |
|
||||
| Service creation | root | sudo |
|
||||
| Network/firewall | root | polkit |
|
||||
| `volt ps`, `volt top`, `volt logs` | any user | read-only |
|
||||
| `volt config show` | any user | read-only |
|
||||
|
||||
### Audit Trail
|
||||
|
||||
All state-changing operations are logged to `/var/log/volt/audit.log` in JSON format:
|
||||
|
||||
```json
|
||||
{
|
||||
"timestamp": "2025-07-12T14:23:01.123Z",
|
||||
"user": "karl",
|
||||
"uid": 1000,
|
||||
"action": "container.create",
|
||||
"resource": "web",
|
||||
"result": "success"
|
||||
}
|
||||
```
|
||||
|
||||
## Exit Codes
|
||||
|
||||
| Code | Name | Description |
|
||||
|------|------|-------------|
|
||||
| 0 | `OK` | Success |
|
||||
| 1 | `ERR_GENERAL` | Unspecified error |
|
||||
| 2 | `ERR_USAGE` | Invalid arguments |
|
||||
| 3 | `ERR_NOT_FOUND` | Resource not found |
|
||||
| 4 | `ERR_ALREADY_EXISTS` | Resource already exists |
|
||||
| 5 | `ERR_PERMISSION` | Permission denied |
|
||||
| 6 | `ERR_DAEMON` | Daemon unreachable |
|
||||
| 7 | `ERR_TIMEOUT` | Operation timed out |
|
||||
| 8 | `ERR_NETWORK` | Network error |
|
||||
| 9 | `ERR_CONFLICT` | Conflicting state |
|
||||
| 10 | `ERR_DEPENDENCY` | Missing dependency |
|
||||
| 11 | `ERR_RESOURCE` | Insufficient resources |
|
||||
| 12 | `ERR_INVALID_CONFIG` | Invalid configuration |
|
||||
| 13 | `ERR_INTERRUPTED` | Interrupted by signal |
|
||||
|
||||
## Environment Variables
|
||||
|
||||
| Variable | Description | Default |
|
||||
|----------|-------------|---------|
|
||||
| `VOLT_CONFIG` | Config file path | `/etc/volt/config.yaml` |
|
||||
| `VOLT_COLOR` | Color mode: `auto`, `always`, `never` | `auto` |
|
||||
| `VOLT_OUTPUT` | Default output format | `table` |
|
||||
| `VOLT_DEBUG` | Enable debug output | `false` |
|
||||
| `VOLT_HOST` | Daemon socket path | `/var/run/volt/volt.sock` |
|
||||
| `VOLT_CONTEXT` | Named context (multi-cluster) | `default` |
|
||||
| `VOLT_COMPOSE_FILE` | Default Constellation file path | `volt-compose.yaml` |
|
||||
| `EDITOR` | Editor for `volt service edit`, `volt config edit` | `vi` |
|
||||
|
||||
## Signal Handling
|
||||
|
||||
| Signal | Behavior |
|
||||
|--------|----------|
|
||||
| `SIGTERM` | Graceful shutdown — drain, save state, stop workloads in order |
|
||||
| `SIGINT` | Same as SIGTERM |
|
||||
| `SIGHUP` | Reload configuration |
|
||||
| `SIGUSR1` | Dump goroutine stacks to log |
|
||||
| `SIGUSR2` | Trigger log rotation |
|
||||
---

docs/bundles.md (new file, 335 lines)

# Volt Bundles

`volt bundle` manages portable, self-contained application bundles. A bundle packages everything needed to deploy a stack — container images, VM disk images, a Constellation definition, configuration, and lifecycle hooks — into a single `.vbundle` file.

## Quick Start

```bash
# Create a bundle from your Constellation
volt bundle create -o my-stack.vbundle

# Inspect a bundle
volt bundle inspect my-stack.vbundle

# Deploy a bundle
volt bundle import my-stack.vbundle

# Export a running stack as a bundle
volt bundle export my-stack -o my-stack.vbundle
```

## Bundle Format

A `.vbundle` is a ZIP archive with this structure:

```
my-stack.vbundle
├── bundle.json          # Bundle manifest (version, platforms, service inventory, hashes)
├── compose.yaml         # Constellation definition / Voltfile (service topology)
├── images/              # Container/VM images per service
│   ├── web-proxy/
│   │   ├── linux-amd64.tar.gz
│   │   └── linux-arm64.tar.gz
│   ├── api-server/
│   │   └── linux-amd64.tar.gz
│   └── db-primary/
│       └── linux-amd64.qcow2
├── config/              # Per-service configuration overlays (optional)
│   ├── web-proxy/
│   │   └── nginx.conf
│   └── api-server/
│       └── .env.production
├── signatures/          # Cryptographic signatures (optional)
│   └── bundle.sig
└── hooks/               # Lifecycle scripts (optional)
    ├── pre-deploy.sh
    └── post-deploy.sh
```

## Bundle Manifest (`bundle.json`)

The bundle manifest describes the bundle contents, target platforms, and integrity information:

```json
{
  "version": 1,
  "name": "my-stack",
  "bundleVersion": "1.2.0",
  "created": "2025-07-14T15:30:00Z",
  "platforms": [
    { "os": "linux", "arch": "amd64" },
    { "os": "linux", "arch": "arm64" },
    { "os": "android", "arch": "arm64-v8a" }
  ],
  "services": {
    "web-proxy": {
      "type": "container",
      "images": {
        "linux/amd64": {
          "path": "images/web-proxy/linux-amd64.tar.gz",
          "format": "oci",
          "size": 52428800,
          "digest": "blake3:a1b2c3d4..."
        }
      }
    }
  },
  "integrity": {
    "algorithm": "blake3",
    "files": { "compose.yaml": "blake3:1234...", "...": "..." }
  }
}
```

## Multi-Architecture Support

A single bundle can contain images for multiple architectures. During import, Volt selects the right image for the host:

```bash
# Build a multi-arch bundle
volt bundle create --platforms linux/amd64,linux/arm64,android/arm64-v8a -o my-stack.vbundle
```
### Supported Platforms

| OS | Architecture | Notes |
|----|--------------|-------|
| Linux | `amd64` (x86_64) | Primary server platform |
| Linux | `arm64` (aarch64) | Raspberry Pi 4+, ARM servers |
| Linux | `armv7` | Older ARM SBCs |
| Android | `arm64-v8a` | Modern Android devices |
| Android | `armeabi-v7a` | Older 32-bit Android |
| Android | `x86_64` | Emulators, Chromebooks |
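Host matching during import comes down to looking up the manifest's per-platform image map by the host's `os/arch` pair. A sketch under that assumption (the `pickImage` helper is illustrative; in practice Go's `runtime.GOOS`/`runtime.GOARCH` would supply the host values):

```go
package main

import "fmt"

// pickImage selects the image entry whose "os/arch" key matches the
// host; the boolean reports whether the bundle supports this platform.
func pickImage(images map[string]string, hostOS, hostArch string) (string, bool) {
	path, ok := images[hostOS+"/"+hostArch]
	return path, ok
}

func main() {
	images := map[string]string{
		"linux/amd64": "images/web-proxy/linux-amd64.tar.gz",
		"linux/arm64": "images/web-proxy/linux-arm64.tar.gz",
	}
	p, ok := pickImage(images, "linux", "arm64")
	fmt.Println(p, ok) // images/web-proxy/linux-arm64.tar.gz true
}
```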
## Image Formats

| Format | Extension | Type | Description |
|--------|-----------|------|-------------|
| `oci` | `.tar`, `.tar.gz` | Container | OCI/Docker image archive |
| `rootfs` | `.tar.gz` | Container | Plain filesystem tarball |
| `qcow2` | `.qcow2` | VM | QEMU disk image |
| `raw` | `.raw`, `.img` | VM | Raw disk image |

## CAS Integration

Instead of embedding full images, bundles can reference Stellarium CAS hashes for deduplication:

```bash
# Create a bundle with CAS references (smaller, but requires CAS access to deploy)
volt bundle create --cas -o my-stack.vbundle
```

In the bundle manifest, CAS-referenced images have `path: null` and a `casRef` field:

```json
{
  "path": null,
  "format": "oci",
  "digest": "blake3:a1b2c3d4...",
  "casRef": "stellarium://a1b2c3d4..."
}
```

During import, Volt resolves CAS references from the local store or pulls them from remote peers.

## Commands

### `volt bundle create`

Build a bundle from a Voltfile or a running composition.

```bash
# From the Constellation in the current directory
volt bundle create -o my-stack.vbundle

# Multi-platform, signed
volt bundle create \
  --platforms linux/amd64,linux/arm64 \
  --sign --sign-key ~/.config/volt/signing-key \
  -o my-stack.vbundle

# From a running stack
volt bundle create --from-running my-stack -o snapshot.vbundle

# ACE-compatible (for Android deployment)
volt bundle create --format ace --platforms android/arm64-v8a -o my-stack.zip

# Dry run
volt bundle create --dry-run
```
### `volt bundle import`

Deploy a bundle to the local system.

```bash
# Basic import
volt bundle import my-stack.vbundle

# With verification and hooks
volt bundle import --verify --run-hooks prod.vbundle

# With environment overrides
volt bundle import --set DB_PASSWORD=secret --set APP_ENV=staging my-stack.vbundle

# Import without starting
volt bundle import --no-start my-stack.vbundle

# Force-overwrite an existing stack
volt bundle import --force my-stack.vbundle
```

### `volt bundle export`

Export a running composition as a bundle.

```bash
# Export a running stack
volt bundle export my-stack -o my-stack.vbundle

# Include volume data
volt bundle export my-stack --include-volumes -o full-snapshot.vbundle
```

### `volt bundle inspect`

Show bundle contents and metadata.

```bash
$ volt bundle inspect my-stack.vbundle
Bundle: my-stack v1.2.0
Created: 2025-07-14 15:30:00 UTC
Platforms: linux/amd64, linux/arm64
Signed: Yes (ed25519)

Services:
  NAME        TYPE       IMAGES            CONFIG FILES  SIZE
  web-proxy   container  2 (amd64, arm64)  1             95 MB
  api-server  container  1 (amd64)         1             210 MB
  db-primary  vm         1 (amd64)         1             2.1 GB

# Show the full bundle manifest
volt bundle inspect my-stack.vbundle --show-manifest

# JSON output
volt bundle inspect my-stack.vbundle -o json
```

### `volt bundle verify`

Verify signatures and content integrity.

```bash
$ volt bundle verify prod.vbundle
✓ Bundle signature valid (ed25519, signer: karl@armoredgate.com)
✓ Manifest integrity verified (12 files, BLAKE3)
Bundle verification: PASSED

# Deep verify (also check CAS references)
volt bundle verify --deep cas-bundle.vbundle
```

### `volt bundle push` / `volt bundle pull`

Registry operations.

```bash
# Push to a registry
volt bundle push my-stack.vbundle --tag v1.2.0 --tag latest

# Pull from a registry
volt bundle pull my-stack:v1.2.0

# Pull for a specific platform
volt bundle pull my-stack:latest --platform linux/amd64
```

### `volt bundle list`

List locally cached bundles.

```bash
$ volt bundle list
NAME      VERSION  PLATFORMS    SIZE    CREATED           SIGNED
my-stack  1.2.0    amd64,arm64  1.8 GB  2025-07-14 15:30  ✓
dev-env   0.1.0    amd64        450 MB  2025-07-13 10:00  ✗
```
|
||||
## Lifecycle Hooks

Hooks are executable scripts that run at defined points during deployment:

| Hook | Trigger |
|------|---------|
| `validate` | Before deployment — pre-flight checks |
| `pre-deploy` | After extraction, before service start |
| `post-deploy` | After all services are healthy |
| `pre-destroy` | Before services are stopped |
| `post-destroy` | After cleanup |

Hooks are **opt-in** — use `--run-hooks` to enable:

```bash
volt bundle import --run-hooks my-stack.vbundle
```

Review hooks before enabling:

```bash
volt bundle inspect --show-hooks my-stack.vbundle
```

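The deploy-time hook sequence can be sketched as an ordered, fail-fast run. This is an illustrative sketch, not Volt's implementation: the hook names come from the table above, but `run_hooks` and its arguments are hypothetical.

```python
DEPLOY_HOOKS = ["validate", "pre-deploy", "post-deploy"]  # in trigger order

def run_hooks(available, run_hooks_enabled, execute):
    """Run each available deploy hook in order; stop at the first failure."""
    if not run_hooks_enabled:          # hooks are opt-in (--run-hooks)
        return []
    ran = []
    for hook in DEPLOY_HOOKS:
        if hook in available:
            ran.append(hook)
            if not execute(hook):      # a failing hook aborts the deploy
                break
    return ran

# Hooks are skipped entirely unless --run-hooks is passed:
assert run_hooks({"validate"}, False, lambda h: True) == []
# With --run-hooks, hooks run in order and a failure stops the sequence:
assert run_hooks({"validate", "pre-deploy", "post-deploy"}, True,
                 lambda h: h != "pre-deploy") == ["validate", "pre-deploy"]
```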
## Signing & Verification

Bundles support Ed25519 cryptographic signatures for supply chain integrity.

```bash
# Create a signed bundle
volt bundle create --sign --sign-key ~/.config/volt/signing-key -o prod.vbundle

# Verify before deploying
volt bundle import --verify prod.vbundle

# Trust a signing key
volt config set bundle.trusted_keys += "age1z3x..."
```

Every file in a bundle is content-hashed (BLAKE3) and recorded in the bundle manifest's `integrity` field. Verification checks both the signature and all content hashes.

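The content-hash side of verification can be sketched in a few lines. This is illustrative only, not Volt's code: it uses Python's `hashlib.blake2b` as a stand-in for BLAKE3 (which is not in the standard library), and the `files`/`integrity` structures are hypothetical.

```python
import hashlib

def file_digest(data: bytes) -> str:
    # Stand-in for BLAKE3: hash the file bytes, record the hex digest.
    return hashlib.blake2b(data).hexdigest()

def build_integrity(files: dict) -> dict:
    # Map each path in the bundle to its content hash (the manifest's integrity field).
    return {path: file_digest(data) for path, data in files.items()}

def verify_integrity(files: dict, integrity: dict) -> bool:
    # Verification recomputes every hash and compares it to the manifest.
    return files.keys() == integrity.keys() and all(
        file_digest(files[path]) == digest for path, digest in integrity.items()
    )

files = {"compose.yaml": b"version: '1'\n", "config/nginx.conf": b"worker_processes auto;\n"}
integrity = build_integrity(files)
assert verify_integrity(files, integrity)

# Tampering with any file breaks verification.
files["compose.yaml"] = b"version: '2'\n"
assert not verify_integrity(files, integrity)
```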
## ACE Compatibility

Volt bundles are an evolution of the ACE (Android Container Engine) project bundle format. ACE bundles (ZIP files with `compose.json` and an `images/` directory) are imported transparently by `volt bundle import`.

```bash
# Import an ACE bundle directly
volt bundle import legacy-project.zip

# Create an ACE-compatible bundle
volt bundle create --format ace -o project.zip
```

## Configuration Overlays

The `config/` directory contains per-service configuration files applied after image extraction:

```
config/
├── web-proxy/
│   └── nginx.conf          # Overwrites /etc/nginx/nginx.conf in container
└── api-server/
    └── .env.production     # Injected via volume mount
```

Config files support `${VARIABLE}` template expansion, resolved from the Constellation's environment definitions, `env_file` references, or `--set` flags during import.

## Full Specification

See the complete [Volt Bundle Format Specification](/Knowledge/Projects/Volt-Bundle-Spec.md) for:

- Detailed `bundle.json` schema and JSON Schema definition
- Platform/architecture matrix
- CAS reference resolution
- Signature verification flow
- Registry HTTP API
- Error handling and recovery
- Comparison with OCI Image Spec
2438  docs/cli-reference.md  Normal file
File diff suppressed because it is too large

741  docs/compose.md  Normal file

# Voltfile / Constellation Format

A **Constellation** is the definition of how containers, VMs, services, and resources form a coherent system. `volt compose` manages Constellations as declarative multi-service stacks — define containers, VMs, services, tasks, networks, and volumes in a single YAML file and deploy them together.

## File Discovery

`volt compose` looks for Constellation definitions in this order:

1. `-f <path>` flag (explicit)
2. `volt-compose.yaml` in the current directory
3. `volt-compose.yml` in the current directory
4. `Voltfile` in the current directory (YAML format)
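The lookup above is a first-match scan. A minimal sketch of the same logic (illustrative only — `discover` is a hypothetical helper, not part of Volt):

```python
from typing import Optional

# Checked in order; the first match wins.
CANDIDATES = ["volt-compose.yaml", "volt-compose.yml", "Voltfile"]

def discover(files_in_cwd: list, explicit: Optional[str] = None) -> Optional[str]:
    """Return the Constellation file `volt compose` would pick."""
    if explicit:                    # 1. -f <path> always wins
        return explicit
    for name in CANDIDATES:         # 2-4. current-directory candidates
        if name in files_in_cwd:
            return name
    return None

assert discover(["Voltfile", "volt-compose.yml"]) == "volt-compose.yml"
assert discover(["Voltfile"], explicit="prod.yaml") == "prod.yaml"
```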
## Quick Example

```yaml
version: "1"
name: web-stack

containers:
  web:
    image: armoredgate/nginx:1.25
    ports:
      - "80:80"
    networks:
      - frontend
    depends_on:
      api:
        condition: service_started

  api:
    image: armoredgate/node:20
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: "postgresql://app:secret@db:5432/myapp"
    networks:
      - frontend
      - backend

vms:
  db:
    image: armoredgate/ubuntu-24.04
    cpu: 2
    memory: 4G
    networks:
      - backend

networks:
  frontend:
    subnet: 10.20.0.0/24
  backend:
    subnet: 10.30.0.0/24
    internal: true
```

Deploy:

```bash
volt compose up -d     # Create and start in background
volt compose ps        # Check status
volt compose logs -f   # Follow all logs
volt compose down      # Tear down
```

## Top-Level Keys

| Key | Type | Required | Description |
|-----|------|----------|-------------|
| `version` | string | Yes | File format version. Currently `"1"`. |
| `name` | string | No | Stack name. Used as prefix for workload names. |
| `description` | string | No | Human-readable description. |
| `containers` | map | No | Container definitions (Voltainer). |
| `vms` | map | No | VM definitions (Voltvisor). |
| `services` | map | No | systemd service definitions. |
| `tasks` | map | No | Scheduled task definitions. |
| `networks` | map | No | Network definitions. |
| `volumes` | map | No | Volume definitions. |
| `configs` | map | No | Configuration file references. |
| `secrets` | map | No | Secret file references. |

## Container Definition

```yaml
containers:
  <name>:
    image: <string>              # Image name (required)
    build:                       # Build configuration (optional)
      context: <path>            # Build context directory
      file: <path>               # Build spec file
    ports:                       # Port mappings
      - "host:container"
    volumes:                     # Volume mounts
      - host_path:container_path[:ro]
      - volume_name:container_path
    networks:                    # Networks to join
      - network_name
    environment:                 # Environment variables
      KEY: value
    env_file:                    # Load env vars from files
      - .env
    depends_on:                  # Dependencies
      other_service:
        condition: service_started|service_healthy|service_completed_successfully
    restart: no|always|on-failure|unless-stopped
    restart_max_retries: <int>   # Max restart attempts (for on-failure)
    resources:
      cpu: "<number>"            # CPU shares/quota
      memory: <size>             # e.g., 256M, 1G
      memory_swap: <size>        # Swap limit
    healthcheck:
      command: ["cmd", "args"]   # Health check command
      interval: <duration>       # Check interval (e.g., 30s)
      timeout: <duration>        # Check timeout
      retries: <int>             # Retries before unhealthy
      start_period: <duration>   # Grace period on start
    labels:
      key: value
```

### Container Example

```yaml
containers:
  app-server:
    image: armoredgate/node:20
    build:
      context: ./app
      file: build-spec.yaml
    ports:
      - "8080:8080"
    volumes:
      - app-data:/app/data
      - ./config:/app/config:ro
    networks:
      - backend
    environment:
      NODE_ENV: production
      DATABASE_URL: "postgresql://app:${DB_PASSWORD}@db:5432/myapp"
    env_file:
      - .env
      - .env.production
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    restart: on-failure
    restart_max_retries: 5
    resources:
      cpu: "2"
      memory: 1G
      memory_swap: 2G
    healthcheck:
      command: ["curl", "-sf", "http://localhost:8080/health"]
      interval: 15s
      timeout: 3s
      retries: 5
```

## VM Definition

```yaml
vms:
  <name>:
    image: <string>          # Base image (required)
    cpu: <int>               # vCPU count
    memory: <size>           # Memory allocation (e.g., 4G)
    disks:                   # Additional disks
      - name: <string>
        size: <size>
        mount: <path>        # Mount point inside VM
    networks:
      - network_name
    ports:
      - "host:vm"
    provision:               # First-boot scripts
      - name: <string>
        shell: |
          commands to run
    healthcheck:
      command: ["cmd", "args"]
      interval: <duration>
      timeout: <duration>
      retries: <int>
    restart: no|always|on-failure
    tune:                    # Performance tuning
      cpu_pin: [<int>, ...]  # Pin to physical CPUs
      hugepages: <bool>      # Use hugepages
      io_scheduler: <string> # I/O scheduler
```

### VM Example

```yaml
vms:
  db-primary:
    image: armoredgate/ubuntu-24.04
    cpu: 4
    memory: 8G
    disks:
      - name: system
        size: 40G
      - name: pgdata
        size: 200G
        mount: /var/lib/postgresql/data
    networks:
      - backend
    ports:
      - "5432:5432"
    provision:
      - name: install-postgres
        shell: |
          apt-get update && apt-get install -y postgresql-16
          systemctl enable postgresql
    healthcheck:
      command: ["pg_isready", "-U", "postgres"]
      interval: 30s
      timeout: 5s
      retries: 3
    restart: always
    tune:
      cpu_pin: [4, 5, 6, 7]
      hugepages: true
      io_scheduler: none
```

## Service Definition

Define systemd services managed by the Constellation:

```yaml
services:
  <name>:
    unit:
      type: simple|oneshot|forking|notify
      exec: <string>         # Command to run (required)
      user: <string>
      group: <string>
    restart: no|always|on-failure
    networks:
      - network_name
    healthcheck:
      command: ["cmd", "args"]
      interval: <duration>
    resources:
      memory: <size>
    depends_on:
      other_service:
        condition: service_started
```

### Service Example

```yaml
services:
  cache-redis:
    unit:
      type: simple
      exec: "/usr/bin/redis-server /etc/redis/redis.conf"
      user: redis
      group: redis
    restart: always
    networks:
      - backend
    healthcheck:
      command: ["redis-cli", "ping"]
      interval: 10s
    resources:
      memory: 512M
```

## Task Definition

Define scheduled tasks (systemd timers):

```yaml
tasks:
  <name>:
    exec: <string>           # Command to run (required)
    schedule:
      on_calendar: <string>  # systemd calendar syntax
      every: <duration>      # Alternative: interval
    environment:
      KEY: value
    user: <string>
    persistent: <bool>       # Run missed tasks on boot
```

### Task Example

```yaml
tasks:
  db-backup:
    exec: "/usr/local/bin/backup.sh --target db-primary"
    schedule:
      on_calendar: "*-*-* 02:00:00"
    environment:
      BACKUP_DEST: /mnt/backups

  cleanup:
    exec: "/usr/local/bin/cleanup-old-logs.sh"
    schedule:
      every: 6h
```

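Duration values like `6h`, `30s`, or `5min` follow the common suffix convention. A sketch of how such values might map to seconds (the parser and its unit table are illustrative assumptions, not Volt's actual code):

```python
import re

# Suffix → seconds; covers the duration forms used in this document.
UNITS = {"s": 1, "min": 60, "m": 60, "h": 3600, "d": 86400}

def parse_duration(value: str) -> int:
    """Parse a suffixed duration such as '6h' or '30s' into seconds."""
    match = re.fullmatch(r"(\d+)\s*(s|min|m|h|d)", value)
    if not match:
        raise ValueError(f"bad duration: {value!r}")
    number, unit = match.groups()
    return int(number) * UNITS[unit]

assert parse_duration("6h") == 21600
assert parse_duration("30s") == 30
assert parse_duration("5min") == 300
```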
## Network Definition

```yaml
networks:
  <name>:
    driver: bridge     # Network driver (default: bridge)
    subnet: <cidr>     # e.g., 10.20.0.0/24
    internal: <bool>   # If true, no external access
    options:
      mtu: <int>       # MTU (default: 1500)
```

### Network Examples

```yaml
networks:
  # Public-facing network
  frontend:
    driver: bridge
    subnet: 10.20.0.0/24
    options:
      mtu: 9000

  # Internal only — no external access
  backend:
    driver: bridge
    subnet: 10.30.0.0/24
    internal: true
```

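When a Constellation defines several networks, choosing disjoint subnets avoids routing surprises. Python's stdlib `ipaddress` module can sanity-check a set of CIDRs — an illustrative check, not something Volt performs:

```python
from ipaddress import ip_network
from itertools import combinations

def overlapping(cidrs):
    """Return every pair of subnets that overlap."""
    nets = [ip_network(c) for c in cidrs]
    return [(str(a), str(b)) for a, b in combinations(nets, 2) if a.overlaps(b)]

# The frontend/backend subnets from the example above are disjoint:
assert overlapping(["10.20.0.0/24", "10.30.0.0/24"]) == []

# A /16 that contains an existing /24 would be flagged:
assert overlapping(["10.20.0.0/16", "10.20.5.0/24"]) == [("10.20.0.0/16", "10.20.5.0/24")]
```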
## Volume Definition

```yaml
volumes:
  <name>:
    driver: local   # Storage driver
    size: <size>    # Optional size for file-backed volumes
```

### Volume Examples

```yaml
volumes:
  web-static:
    driver: local

  app-data:
    driver: local
    size: 10G

  pgdata:
    driver: local
    size: 200G
```

## Configs and Secrets

```yaml
configs:
  <name>:
    file: <path>   # Path to config file

secrets:
  <name>:
    file: <path>   # Path to secret file
```

### Example

```yaml
configs:
  nginx-conf:
    file: ./config/nginx.conf
  app-env:
    file: ./.env.production

secrets:
  db-password:
    file: ./secrets/db-password.txt
  tls-cert:
    file: ./secrets/server.crt
  tls-key:
    file: ./secrets/server.key
```

## Dependency Conditions

When specifying `depends_on`, the `condition` field controls when the dependent service starts:

| Condition | Description |
|-----------|-------------|
| `service_started` | Dependency has started (default) |
| `service_healthy` | Dependency passes its health check |
| `service_completed_successfully` | Dependency ran and exited with code 0 |

```yaml
depends_on:
  db:
    condition: service_healthy
  migrations:
    condition: service_completed_successfully
  cache:
    condition: service_started
```

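`depends_on` induces a start order: a workload starts only after each of its dependencies has reached the required state. The ordering itself is a topological sort; here is a minimal Kahn-style sketch (illustrative only, ignoring the per-condition semantics):

```python
from collections import deque

def start_order(deps):
    """deps maps each workload to the list of workloads it depends on."""
    pending = {name: set(d) for name, d in deps.items()}
    ready = deque(sorted(n for n, d in pending.items() if not d))
    order = []
    while ready:
        name = ready.popleft()
        order.append(name)
        # Unblock anything that was waiting on this workload.
        for other, remaining in sorted(pending.items()):
            if name in remaining:
                remaining.discard(name)
                if not remaining:
                    ready.append(other)
    if len(order) != len(pending):
        raise ValueError("dependency cycle detected")
    return order

# db and cache have no dependencies; migrations waits on db; api waits on all three.
deps = {"db": [], "cache": [], "migrations": ["db"], "api": ["db", "migrations", "cache"]}
assert start_order(deps) == ["cache", "db", "migrations", "api"]
```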
## Environment Variable Interpolation

The Constellation definition supports shell-style variable interpolation:

```yaml
environment:
  DATABASE_URL: "postgresql://app:${DB_PASSWORD}@db:5432/myapp"
  APP_VERSION: "${APP_VERSION:-latest}"
```

Variables are resolved from:

1. Host environment variables
2. `.env` file in the same directory as the Constellation definition
3. Files specified in `env_file`

Unset variables with no default cause an error.

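The two forms above — `${VAR}` and `${VAR:-default}` — can be sketched with a small resolver. This is an illustrative sketch of the semantics, not Volt's parser:

```python
import re

PATTERN = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

def interpolate(text, variables):
    """Expand ${VAR} and ${VAR:-default}; unset with no default is an error."""
    def replace(match):
        name, default = match.group(1), match.group(2)
        if name in variables:
            return variables[name]
        if default is not None:        # ${VAR:-default}
            return default
        raise KeyError(f"unset variable with no default: {name}")
    return PATTERN.sub(replace, text)

env = {"DB_PASSWORD": "secret"}
assert interpolate("postgresql://app:${DB_PASSWORD}@db:5432/myapp", env) == \
    "postgresql://app:secret@db:5432/myapp"
assert interpolate("${APP_VERSION:-latest}", env) == "latest"
```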
## Compose Commands

### Lifecycle

```bash
# Deploy the Constellation — create and start everything
volt compose up

# Detached mode (background)
volt compose up -d

# Specific Constellation file
volt compose -f production.yaml up -d

# Build images first
volt compose up --build

# Force recreate
volt compose up --force-recreate

# Tear down the Constellation
volt compose down

# Also remove volumes
volt compose down --volumes
```

### Status and Logs

```bash
# Stack status
volt compose ps

# All logs
volt compose logs

# Follow logs
volt compose logs --follow

# Logs for one service
volt compose logs api

# Last 50 lines
volt compose logs --tail 50 api

# Resource usage
volt compose top

# Events
volt compose events
```

### Operations

```bash
# Start existing (without recreating)
volt compose start

# Stop (without removing)
volt compose stop

# Restart
volt compose restart

# Execute command in a service
volt compose exec api -- node --version

# Pull images
volt compose pull

# Build images
volt compose build

# Validate Constellation
volt compose config
```

### Project Naming

```bash
# Override project name
volt compose --project my-project up

# This prefixes all workload names: my-project-web, my-project-api, etc.
```

## Full Example: Production Constellation

```yaml
# volt-compose.yaml — Production Constellation
version: "1"
name: production
description: "Production web application"

containers:
  web-proxy:
    image: armoredgate/nginx:1.25
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - web-static:/usr/share/nginx/html:ro
    networks:
      - frontend
      - backend
    depends_on:
      app-server:
        condition: service_healthy
    restart: always
    resources:
      cpu: "0.5"
      memory: 256M
    healthcheck:
      command: ["curl", "-sf", "http://localhost/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

  app-server:
    image: armoredgate/node:20
    build:
      context: ./app
      file: build-spec.yaml
    environment:
      NODE_ENV: production
      DATABASE_URL: "postgresql://app:${DB_PASSWORD}@db-primary:5432/myapp"
      REDIS_URL: "redis://cache-redis:6379"
    env_file:
      - .env.production
    ports:
      - "8080:8080"
    volumes:
      - app-data:/app/data
    networks:
      - backend
    depends_on:
      db-primary:
        condition: service_healthy
      cache-redis:
        condition: service_started
    restart: on-failure
    restart_max_retries: 5
    resources:
      cpu: "2"
      memory: 1G
    healthcheck:
      command: ["curl", "-sf", "http://localhost:8080/health"]
      interval: 15s
      timeout: 3s
      retries: 5

vms:
  db-primary:
    image: armoredgate/ubuntu-24.04
    cpu: 4
    memory: 8G
    disks:
      - name: system
        size: 40G
      - name: pgdata
        size: 200G
        mount: /var/lib/postgresql/data
    networks:
      - backend
    ports:
      - "5432:5432"
    provision:
      - name: install-postgres
        shell: |
          apt-get update && apt-get install -y postgresql-16
          systemctl enable postgresql
    healthcheck:
      command: ["pg_isready", "-U", "postgres"]
      interval: 30s
      timeout: 5s
      retries: 3
    restart: always
    tune:
      cpu_pin: [4, 5, 6, 7]
      hugepages: true
      io_scheduler: none

services:
  cache-redis:
    unit:
      type: simple
      exec: "/usr/bin/redis-server /etc/redis/redis.conf"
      user: redis
      group: redis
    restart: always
    networks:
      - backend
    healthcheck:
      command: ["redis-cli", "ping"]
      interval: 10s
    resources:
      memory: 512M

  log-shipper:
    unit:
      type: simple
      exec: "/usr/local/bin/vector --config /etc/vector/vector.toml"
    restart: on-failure
    depends_on:
      app-server:
        condition: service_started

tasks:
  db-backup:
    exec: "/usr/local/bin/backup.sh --target db-primary"
    schedule:
      on_calendar: "*-*-* 02:00:00"
    environment:
      BACKUP_DEST: /mnt/backups

  cleanup:
    exec: "/usr/local/bin/cleanup-old-logs.sh"
    schedule:
      every: 6h

networks:
  frontend:
    driver: bridge
    subnet: 10.20.0.0/24
    options:
      mtu: 9000

  backend:
    driver: bridge
    subnet: 10.30.0.0/24
    internal: true

volumes:
  web-static:
    driver: local
  app-data:
    driver: local
    size: 10G

configs:
  nginx-conf:
    file: ./config/nginx.conf

secrets:
  db-password:
    file: ./secrets/db-password.txt
  tls-cert:
    file: ./secrets/server.crt
  tls-key:
    file: ./secrets/server.key
```

## Full Example: Developer Constellation

```yaml
# volt-compose.yaml — Developer Constellation
version: "1"
name: dev-environment

vms:
  dev-box:
    image: armoredgate/fedora-workstation
    cpu: 4
    memory: 8G
    disks:
      - name: system
        size: 80G
    volumes:
      - ~/projects:/home/dev/projects
    networks:
      - devnet
    ports:
      - "2222:22"
      - "3000:3000"
      - "5173:5173"
    provision:
      - name: dev-tools
        shell: |
          dnf install -y git nodejs rust golang
          npm install -g pnpm

containers:
  test-db:
    image: armoredgate/postgres:16
    environment:
      POSTGRES_PASSWORD: devpass
      POSTGRES_DB: myapp_dev
    volumes:
      - test-pgdata:/var/lib/postgresql/data
    networks:
      - devnet
    ports:
      - "5432:5432"

  mailhog:
    image: armoredgate/mailhog:latest
    networks:
      - devnet
    ports:
      - "1025:1025"
      - "8025:8025"

networks:
  devnet:
    subnet: 10.99.0.0/24

volumes:
  test-pgdata:
    driver: local
```
337  docs/getting-started.md  Normal file

# Getting Started with Volt

Volt is the unified Linux platform management CLI by Armored Gates LLC. One binary replaces `systemctl`, `journalctl`, `machinectl`, `ip`, `nft`, `virsh`, and dozens of other tools.

Volt manages three engines:

- **Voltainer** — Containers built on `systemd-nspawn`
- **Voltvisor** — Virtual machines built on KVM/QEMU with the Neutron Stardust VMM
- **Stellarium** — Content-addressed storage (CAS) shared by both engines

Security is enforced via **Landlock LSM** and seccomp-bpf — no heavyweight security modules required.

## Prerequisites

- Linux with systemd (Debian 12+, Ubuntu 22.04+, Fedora 38+, Rocky 9+)
- Root access (or membership in the `volt` group)
- For VMs: KVM support (`/dev/kvm` accessible)
- For containers: `systemd-nspawn` installed (`systemd-container` package)

## Installation

Install Volt with a single command:

```bash
curl https://get.armoredgate.com/volt | sh
```

This downloads the latest Volt binary, places it at `/usr/local/bin/volt`, and creates the required directory structure.

Verify the installation:

```bash
volt --version
```

### Manual Installation

If you prefer to install manually:

```bash
# Download the binary
curl -Lo /usr/local/bin/volt https://releases.armoredgate.com/volt/latest/volt-linux-amd64
chmod +x /usr/local/bin/volt

# Create required directories
sudo mkdir -p /etc/volt
sudo mkdir -p /var/lib/volt/{containers,vms,images,volumes,cas,kernels,units}
sudo mkdir -p /var/run/volt
sudo mkdir -p /var/cache/volt/{cas,images,dns}
sudo mkdir -p /var/log/volt

# Initialize configuration
sudo volt config reset
volt config validate
```

### Start the Daemon

```bash
sudo volt daemon start
volt daemon status
```

## Quick Start

### Pull an Image

```bash
volt image pull nginx:alpine
```

### Create and Start a Container

```bash
# Create a container with port mapping
volt container create nginx:alpine --name my-web -p 8080:80

# Start it
volt start my-web
```

Your web server is now running at `http://localhost:8080`.

### Interact with the Container

```bash
# Open a shell
volt container shell my-web

# Execute a single command
volt container exec my-web -- cat /etc/os-release

# View logs
volt container logs my-web

# Follow logs in real time
volt container logs -f my-web
```

### Copy Files In and Out

```bash
# Copy a config file into the container
volt container cp ./myapp.conf my-web:/etc/myapp.conf

# Copy logs out
volt container cp my-web:/var/log/syslog ./container-syslog.log
```

### Stop and Clean Up

```bash
volt container stop my-web
volt container delete my-web
```

## Key Concepts

### Stellarium CAS

Every image and filesystem in Volt is backed by **Stellarium**, the content-addressed storage engine. Files are stored by their BLAKE3 hash, giving you:

- **Automatic deduplication** — identical files across images are stored once
- **Integrity verification** — every object can be verified against its hash
- **Efficient snapshots** — only changed files produce new CAS blobs

```bash
# Check CAS store health
volt cas status

# Verify integrity
volt cas verify
```

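Content addressing can be illustrated in a few lines: the store keys each blob by its content hash, so a second write of identical bytes is a no-op, and every object can be re-verified against its key. This sketch uses `hashlib.blake2b` as a stand-in for BLAKE3 (which is not in Python's standard library) and is not Stellarium's actual code:

```python
import hashlib

class CasStore:
    def __init__(self):
        self.blobs = {}                           # digest → bytes

    def put(self, data: bytes) -> str:
        digest = hashlib.blake2b(data).hexdigest()
        self.blobs.setdefault(digest, data)       # identical content stored once
        return digest

    def verify(self) -> bool:
        # Every object can be re-hashed and checked against its key.
        return all(hashlib.blake2b(d).hexdigest() == k for k, d in self.blobs.items())

store = CasStore()
a = store.put(b"#!/bin/sh\necho hello\n")
b = store.put(b"#!/bin/sh\necho hello\n")         # same file from a different image
assert a == b and len(store.blobs) == 1           # deduplicated
assert store.verify()
```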
### ORAS Registry

Volt includes a built-in **OCI Distribution Spec compliant registry** backed by Stellarium CAS. Push and pull OCI artifacts using any standard client:

```bash
# Start the registry
volt registry serve --port 5000

# Push artifacts using ORAS or any OCI-compliant tool
oras push localhost:5000/myapp:v1 ./artifact
```

See [Registry](registry.md) for full documentation.

### Landlock Security

All workloads are isolated using **Landlock LSM** (Linux Security Module) combined with seccomp-bpf and cgroups v2. This provides kernel-enforced filesystem access control without requiring complex security profiles.

## The Unified Process View

`volt ps` is the flagship command. It shows every running workload — containers, VMs, and services — in one view:

```bash
volt ps
```

```
NAME         TYPE       STATUS    CPU%   MEM    UPTIME
my-web       container  running   2.3%   256M   1h 15m
db-primary   vm         running   8.7%   4.0G   3d 2h
nginx        service    active    0.1%   32M    12d 6h
```

### Filter by Type

```bash
volt ps containers   # Only containers
volt ps vms          # Only VMs
volt ps services     # Only services
```

### Output Formats

```bash
volt ps -o json   # JSON output for scripting
volt ps -o yaml   # YAML output
volt ps -o wide   # All columns
```

## Managing Services

Volt wraps `systemctl` with a cleaner interface:

```bash
# List running services
volt service list

# Check a specific service
volt service status nginx

# Create a new service without writing unit files
sudo volt service create --name my-app \
  --exec "/usr/local/bin/my-app --port 8080" \
  --user my-app \
  --restart on-failure \
  --enable --start

# View service logs
volt service logs -f my-app
```

## Scheduled Tasks

Replace `crontab` with systemd timers:

```bash
# Run a backup every day at 2 AM
sudo volt task create --name nightly-backup \
  --exec "/usr/local/bin/backup.sh" \
  --calendar "*-*-* 02:00:00" \
  --enable

# Run a health check every 5 minutes
sudo volt task create --name health-check \
  --exec "curl -sf http://localhost:8080/health" \
  --interval 5min \
  --enable
```

## Networking Basics

### View Network Status

```bash
volt net status
volt net bridge list
```

### Create a Network

```bash
sudo volt net create --name backend --subnet 10.30.0.0/24
```

### Connect Workloads

```bash
volt net connect backend web-frontend
volt net connect backend db-primary
```

Workloads on the same network can communicate by name.

## Constellations (Compose Stacks)

Define multi-service Constellations in a `volt-compose.yaml`:

```yaml
version: "1"
name: my-stack

containers:
  web:
    image: armoredgate/nginx:1.25
    ports:
      - "80:80"
    networks:
      - frontend

  api:
    image: armoredgate/node:20
    ports:
      - "8080:8080"
    networks:
      - frontend
      - backend

networks:
  frontend:
    subnet: 10.20.0.0/24
  backend:
    subnet: 10.30.0.0/24
    internal: true
```

Deploy it:

```bash
volt compose up -d
volt compose ps
volt compose logs -f
volt compose down
```

## System Health

```bash
# Platform overview
volt system info

# Health check all subsystems
volt system health

# Back up configuration
sudo volt system backup
```

## Getting Help

Every command has built-in help. Three equivalent ways:

```bash
volt net --help
volt net help
volt help net
```

## Global Flags

These work on every command:

| Flag | Short | Description |
|------|-------|-------------|
| `--help` | `-h` | Show help |
| `--output` | `-o` | Output format: `table`, `json`, `yaml`, `wide` |
| `--quiet` | `-q` | Suppress non-essential output |
| `--debug` | | Enable debug logging |
| `--no-color` | | Disable colored output |
| `--config` | | Config file path (default: `/etc/volt/config.yaml`) |
| `--timeout` | | Command timeout in seconds (default: 30) |

## Next Steps

Now that you have Volt installed and running, explore these areas:

- **[CLI Reference](cli-reference.md)** — Every command documented
- **[Registry](registry.md)** — Host your own OCI-compliant artifact registry
- **[GitOps](gitops.md)** — Automated deployments from Git pushes
- **[Compose](compose.md)** — Constellation / Voltfile format specification
- **[Networking](networking.md)** — Network architecture, ingress proxy, and firewall
- **[Bundles](bundles.md)** — Portable, self-contained application bundles
- **[Architecture](architecture.md)** — How Volt works internally
- **[Troubleshooting](troubleshooting.md)** — Common issues and fixes
333  docs/gitops.md  Normal file

# Volt GitOps

Volt includes built-in GitOps pipelines that automatically deploy workloads when code is pushed to a Git repository. No external CI/CD system required — Volt handles the entire flow from webhook to deployment.

## Overview

A GitOps pipeline links a Git repository branch to a Volt workload. When a push is detected on the tracked branch:

1. **Webhook received** — GitHub, GitLab, or Bitbucket sends a push event (or SVN revision changes are detected via polling)
2. **Validate** — The webhook signature is verified against the configured HMAC secret
3. **Clone** — The repository is cloned (or pulled if already cached)
4. **Detect** — Volt looks for `volt-manifest.yaml` or `Voltfile` in the repo root
5. **Deploy** — The workload is updated according to the manifest
6. **Log** — The result (success or failure) is recorded in the deploy history

```
┌──────────┐   push      ┌──────────────┐   clone    ┌──────────┐   deploy   ┌──────────┐
│  GitHub  │───────────→ │  Volt GitOps │──────────→ │   Repo   │──────────→ │ Workload │
│  GitLab  │   webhook   │    Server    │            │ (cached) │            │          │
│Bitbucket │             │    :9090     │            └──────────┘            └──────────┘
│   SVN    │   polling   │              │
└──────────┘             └──────────────┘
```

## Supported Providers

| Provider | Method | Signature Validation |
|----------|--------|----------------------|
| GitHub | Webhook (`POST /hooks/github`) | HMAC-SHA256 (`X-Hub-Signature-256`) |
| GitLab | Webhook (`POST /hooks/gitlab`) | Secret token (`X-Gitlab-Token`) |
| Bitbucket | Webhook (`POST /hooks/bitbucket`) | HMAC-SHA256 |
| SVN | Polling (configurable interval) | N/A |

## Quick Start

### 1. Create a Pipeline

```bash
volt gitops create \
  --name web-app \
  --repo https://github.com/myorg/myapp \
  --provider github \
  --branch main \
  --workload web \
  --secret my-webhook-secret
```

### 2. Start the Webhook Server

```bash
# Foreground (for testing)
volt gitops serve --port 9090

# Or install as a systemd service (production)
sudo volt gitops install-service
sudo systemctl enable --now volt-gitops.service
```
### 3. Configure Your Git Provider

Add a webhook in your repository settings:

**GitHub:**
- Payload URL: `https://your-server:9090/hooks/github`
- Content type: `application/json`
- Secret: `my-webhook-secret` (must match `--secret`)
- Events: "Just the push event"

**GitLab:**
- URL: `https://your-server:9090/hooks/gitlab`
- Secret token: `my-webhook-secret`
- Trigger: Push events

**Bitbucket:**
- URL: `https://your-server:9090/hooks/bitbucket`
- Events: Repository push

### 4. Push and Deploy

Push to your tracked branch. The pipeline will automatically detect the push, clone the repo, and deploy the workload.

```bash
# Check pipeline status
volt gitops status

# View deploy history
volt gitops logs --name web-app
```
## Creating Pipelines

### GitHub

```bash
volt gitops create \
  --name web-app \
  --repo https://github.com/myorg/myapp \
  --provider github \
  --branch main \
  --workload web \
  --secret my-webhook-secret
```

The `--secret` flag sets the HMAC secret used to validate webhook signatures. This ensures only authentic GitHub push events trigger deployments.

### GitLab

```bash
volt gitops create \
  --name api \
  --repo https://gitlab.com/myorg/api \
  --provider gitlab \
  --branch develop \
  --workload api-svc \
  --secret my-gitlab-secret
```

### Bitbucket

```bash
volt gitops create \
  --name frontend \
  --repo https://bitbucket.org/myorg/frontend \
  --provider bitbucket \
  --branch main \
  --workload frontend-app \
  --secret my-bitbucket-secret
```

### SVN (Polling)

For SVN repositories, Volt polls for revision changes instead of using webhooks:

```bash
volt gitops create \
  --name legacy-app \
  --repo svn://svn.example.com/trunk \
  --provider svn \
  --branch trunk \
  --workload legacy-app \
  --poll-interval 60
```

The `--poll-interval` flag sets how often (in seconds) Volt checks for new SVN revisions. Default: 60 seconds.
## Repository Structure

Volt looks for deployment configuration in the repository root:

```
myapp/
├── volt-manifest.yaml    # Preferred — workload manifest
├── Voltfile              # Alternative — Voltfile format
├── volt-compose.yaml     # Alternative — Constellation definition
├── src/
└── ...
```

The lookup order is:
1. `volt-manifest.yaml`
2. `Voltfile`
3. `volt-compose.yaml`
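The precedence above can be sketched as a small shell function. This is illustrative only, not Volt's actual implementation:

```bash
# Sketch of the detection step, mirroring the lookup order above.
detect_volt_config() {
  # $1: repository root; prints the first deployable config file found
  for f in volt-manifest.yaml Voltfile volt-compose.yaml; do
    if [ -f "$1/$f" ]; then
      printf '%s\n' "$f"
      return 0
    fi
  done
  return 1  # nothing deployable in this repo
}
```

Note that the search stops at the first match: if a repo contains both a `volt-manifest.yaml` and a `Voltfile`, only the manifest is used.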
## Pipeline Management

### List Pipelines

```bash
volt gitops list
volt gitops list -o json
```

### Check Status

```bash
volt gitops status
```

Output:
```
NAME      REPO                            BRANCH   PROVIDER  LAST DEPLOY  STATUS
web-app   https://github.com/myorg/myapp  main     github    2m ago       success
api       https://gitlab.com/myorg/api    develop  gitlab    1h ago       success
legacy    svn://svn.example.com/trunk     trunk    svn       5m ago       failed
```

### Manual Sync

Trigger a deployment manually without waiting for a webhook:

```bash
volt gitops sync --name web-app
```

This is useful for:
- Initial deployment
- Re-deploying after a failed webhook
- Testing the pipeline

### View Deploy History

```bash
volt gitops logs --name web-app
volt gitops logs --name web-app --limit 50
```

Output:
```
TIMESTAMP            COMMIT   BRANCH  STATUS   DURATION  NOTES
2025-07-14 15:30:01  abc1234  main    success  12s       webhook (github)
2025-07-14 14:15:22  def5678  main    success  8s        manual sync
2025-07-14 10:00:03  789abcd  main    failed   3s        Voltfile parse error
```

### Delete a Pipeline

```bash
volt gitops delete --name web-app
```
## Webhook Server

### Foreground Mode

For testing or development:

```bash
volt gitops serve --port 9090
```

### Endpoints

| Method | Path               | Description             |
|--------|--------------------|-------------------------|
| `POST` | `/hooks/github`    | GitHub push webhooks    |
| `POST` | `/hooks/gitlab`    | GitLab push webhooks    |
| `POST` | `/hooks/bitbucket` | Bitbucket push webhooks |
| `GET`  | `/healthz`         | Health check            |
### Production Deployment (systemd)

Install the webhook server as a systemd service for production use:

```bash
# Install the service unit
sudo volt gitops install-service

# Enable and start
sudo systemctl enable --now volt-gitops.service

# Check status
systemctl status volt-gitops.service

# View logs
journalctl -u volt-gitops.service -f
```

The installed service runs the webhook server on port 9090 by default. To customize, edit the service:

```bash
volt service edit volt-gitops
```
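For example, a standard systemd drop-in can override the listen port. This is a hypothetical override — the drop-in path and the `ExecStart` line are assumptions based on the `serve` flags shown above, not the shipped unit file:

```ini
# /etc/systemd/system/volt-gitops.service.d/override.conf (hypothetical)
[Service]
ExecStart=
ExecStart=/usr/local/bin/volt gitops serve --port 8443
```

After editing, apply it with `sudo systemctl daemon-reload && sudo systemctl restart volt-gitops.service`.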
## Security

### Webhook Signature Validation

Always configure a webhook secret (`--secret`) for GitHub and Bitbucket pipelines. Without a secret, any HTTP POST to the webhook endpoint could trigger a deployment.

**GitHub** — Volt validates the `X-Hub-Signature-256` header against the configured HMAC-SHA256 secret.

**GitLab** — Volt validates the `X-Gitlab-Token` header against the configured secret.

**Bitbucket** — Volt validates the HMAC-SHA256 signature.

If signature validation fails, the webhook is rejected with `403 Forbidden` and no deployment occurs.
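You can exercise the validation by hand by computing the signature GitHub would send. A minimal sketch using `openssl` — the payload and secret are stand-ins, and the commented `curl` line assumes the server from the Quick Start is listening on localhost:9090:

```bash
SECRET='my-webhook-secret'
PAYLOAD='{"ref":"refs/heads/main"}'

# HMAC-SHA256 of the raw request body, hex-encoded, prefixed as GitHub does
SIG="sha256=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')"
echo "$SIG"

# curl -X POST http://localhost:9090/hooks/github \
#   -H "Content-Type: application/json" \
#   -H "X-Hub-Signature-256: $SIG" \
#   --data "$PAYLOAD"
```

Sending the same request with a wrong signature should come back `403 Forbidden`.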
### Network Security

In production, place the webhook server behind the Volt ingress proxy with TLS:

```bash
volt ingress create --name gitops-webhook \
  --hostname webhooks.example.com \
  --path /hooks \
  --backend localhost:9090 \
  --tls auto
```

## Troubleshooting
### Webhook Not Triggering

1. Check the webhook server is running:

```bash
volt gitops status
systemctl status volt-gitops.service
```

2. Check the pipeline exists:

```bash
volt gitops list
```

3. Verify the webhook URL is correct in your Git provider settings

4. Check that the webhook secret matches the pipeline's `--secret`

5. Check the deploy logs for errors:

```bash
volt gitops logs --name <pipeline>
```

### Deploy Fails After Webhook

1. Check the deploy logs:

```bash
volt gitops logs --name <pipeline>
```

2. Verify the repo contains a valid `volt-manifest.yaml` or `Voltfile`

3. Try a manual sync to see detailed error output:

```bash
volt gitops sync --name <pipeline>
```

## See Also

- [CLI Reference — GitOps Commands](cli-reference.md#volt-gitops--gitops-pipelines)
- [Architecture — GitOps Pipeline](architecture.md#gitops-pipeline)
- [Compose / Voltfile Format](compose.md)
- [Ingress Proxy](networking.md#ingress-proxy)
278
docs/man/volt.1.md
Normal file
@@ -0,0 +1,278 @@
# VOLT(1) — Unified Linux Platform Management

## NAME

**volt** — unified CLI for managing containers, VMs, services, networking, storage, and more

## SYNOPSIS

**volt** [*command*] [*subcommand*] [*flags*]

**volt** **ps** [*filter*] [*flags*]

**volt** **container** *command* [*name*] [*flags*]

**volt** **vm** *command* [*name*] [*flags*]

**volt** **service** *command* [*name*] [*flags*]

**volt** **net** *command* [*flags*]

**volt** **compose** *command* [*flags*]

## DESCRIPTION

**volt** is a unified Linux platform management CLI that replaces the fragmented toolchain of `systemctl`, `journalctl`, `machinectl`, `ip`, `nft`, `virsh`, and other utilities with a single binary.

It manages three engines:

**Voltainer**
: Container engine built on `systemd-nspawn`(1). Provides OS-level containerization using Linux namespaces, cgroups v2, and systemd service management.

**Voltvisor**
: Virtual machine engine built on KVM/QEMU. Full hypervisor capabilities with support for live migration, snapshots, and hardware passthrough.

**Stellarium**
: Content-addressed storage backend shared by both engines. Provides deduplication, integrity verification, and efficient image storage using BLAKE3 hashing.
## COMMANDS

### Workloads

**container**
: Manage Voltainer containers. Subcommands: create, start, stop, restart, kill, exec, attach, shell, list, inspect, logs, cp, rename, update, export, delete.

**vm**
: Manage Voltvisor virtual machines. Subcommands: create, start, stop, destroy, ssh, exec, attach, list.

**desktop**
: Manage desktop VMs (VDI). Subcommands: create, connect, list.

**service**
: Manage systemd services. Subcommands: create, start, stop, restart, reload, enable, disable, status, list, inspect, show, edit, deps, logs, mask, unmask, template, delete.

**task**
: Manage scheduled tasks (systemd timers). Subcommands: create, list, run, status, logs, enable, disable, edit, delete.

### Infrastructure

**net**
: Manage networking. Subcommands: create, list, inspect, delete, connect, disconnect, status. Subsystems: bridge, firewall, dns, port, policy, vlan.

**volume**
: Manage persistent volumes. Subcommands: create, list, inspect, attach, detach, resize, snapshot, backup, delete.

**image**
: Manage images. Subcommands: list, pull, build, inspect, import, export, tag, push, delete.

**cas**
: Stellarium CAS operations. Subcommands: status, info, build, verify, gc, dedup, pull, push, sync.

### Observability

**ps**
: List all running workloads — containers, VMs, and services — in one unified view.

**logs**
: View logs for any workload. Auto-detects type via the systemd journal.

**top**
: Show real-time CPU, memory, and process counts for all workloads.

**events**
: Stream real-time platform events.

### Composition & Orchestration

**compose**
: Manage declarative multi-service stacks. Subcommands: up, down, start, stop, restart, ps, logs, build, pull, exec, config, top, events.

**cluster**
: Manage cluster nodes. Subcommands: status, node (list, add, drain, remove).

### System

**daemon**
: Manage the volt daemon. Subcommands: start, stop, restart, status, reload, config.

**system**
: Platform information and maintenance. Subcommands: info, health, update, backup, restore, reset.

**config**
: Configuration management. Subcommands: show, get, set, edit, validate, reset.

**tune**
: Performance tuning. Subcommands: show, profile, cpu, memory, io, net, sysctl.
### Shortcuts

**get** *resource*
: List resources by type. Routes to canonical list commands.

**describe** *resource* *name*
: Show detailed resource info. Routes to canonical inspect commands.

**delete** *resource* *name*
: Delete a resource. Routes to canonical delete commands.

**run** *image*
: Quick-start a container from an image.

**ssh** *vm-name*
: SSH into a VM.

**exec** *container* **--** *command*
: Execute a command in a container.

**connect** *desktop*
: Connect to a desktop VM.

**status**
: Platform status overview (alias for **system info**).
## GLOBAL FLAGS

**-h**, **--help**
: Show help for the command.

**-o**, **--output** *format*
: Output format: **table** (default), **json**, **yaml**, **wide**.

**-q**, **--quiet**
: Suppress non-essential output.

**--debug**
: Enable debug logging to stderr.

**--no-color**
: Disable colored output.

**--config** *path*
: Config file path (default: /etc/volt/config.yaml).

**--timeout** *seconds*
: Command timeout in seconds (default: 30).
## FILES

*/usr/local/bin/volt*
: The volt binary.

*/etc/volt/config.yaml*
: Main configuration file.

*/etc/volt/profiles/*
: Custom tuning profiles.

*/var/lib/volt/*
: Persistent data (containers, VMs, images, volumes, CAS store).

*/var/run/volt/volt.sock*
: Daemon Unix socket.

*/var/run/volt/volt.pid*
: Daemon PID file.

*/var/log/volt/daemon.log*
: Daemon log.

*/var/log/volt/audit.log*
: Audit trail of state-changing operations.

*/var/cache/volt/*
: Cache directory (safe to delete).
## ENVIRONMENT

**VOLT_CONFIG**
: Config file path override.

**VOLT_COLOR**
: Color mode: **auto**, **always**, **never**.

**VOLT_OUTPUT**
: Default output format.

**VOLT_DEBUG**
: Enable debug output.

**VOLT_HOST**
: Daemon socket path or remote host.

**VOLT_CONTEXT**
: Named context for multi-cluster operation.

**VOLT_COMPOSE_FILE**
: Default compose file path.

**EDITOR**
: Editor for **volt service edit** and **volt config edit**.
## EXIT CODES

| Code | Description                   |
|------|-------------------------------|
| 0    | Success                       |
| 1    | General error                 |
| 2    | Invalid usage / bad arguments |
| 3    | Resource not found            |
| 4    | Resource already exists       |
| 5    | Permission denied             |
| 6    | Daemon not running            |
| 7    | Timeout                       |
| 8    | Network error                 |
| 9    | Conflicting state             |
| 10   | Dependency error              |
| 11   | Insufficient resources        |
| 12   | Invalid configuration         |
| 13   | Interrupted by signal         |
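Scripts can branch on these codes. A sketch of a wrapper that retries only the transient failures (7, timeout; 8, network) — the retry policy itself is an example, not a volt feature:

```bash
run_volt() {
  # Run a command, retrying once on timeout (7) or network error (8).
  "$@"
  rc=$?
  case $rc in
    7|8)
      echo "transient failure (exit $rc), retrying" >&2
      "$@"
      rc=$?
      ;;
    3)
      echo "resource not found" >&2
      ;;
  esac
  return $rc
}

# run_volt volt container start web
```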
## EXAMPLES

List all running workloads:

    volt ps

Create and start a container:

    volt container create --name web --image ubuntu:24.04 --start

SSH into a VM:

    volt ssh db-primary

Check service status:

    volt service status nginx

View logs:

    volt logs -f web-frontend

Create a scheduled task:

    volt task create --name backup --exec /usr/local/bin/backup.sh --calendar daily --enable

Deploy a compose stack:

    volt compose up -d

Show platform health:

    volt system health

Apply a tuning profile:

    volt tune profile apply web-server
## SEE ALSO

**systemd-nspawn**(1), **systemctl**(1), **journalctl**(1), **qemu-system-x86_64**(1), **nft**(8), **ip**(8)

## VERSION

Volt version 0.2.0

## AUTHORS

Volt Platform — https://armoredgate.com
557
docs/networking.md
Normal file
@@ -0,0 +1,557 @@
# Volt Networking

Volt networking provides a unified interface for all workload connectivity. It is built on Linux bridge interfaces and nftables, supporting containers and VMs on the same L2 network.

## Architecture Overview

```
┌──────────────────────────────┐
│        Host Network          │
│        (eth0, etc.)          │
└──────────────┬───────────────┘
               │ NAT / routing
┌──────────────┴───────────────┐
│        volt0 (bridge)        │
│         10.0.0.1/24          │
├───────┬───────┬───────┬──────┤
│ veth  │ veth  │  tap  │ veth │
│  ↓    │  ↓    │  ↓    │  ↓   │
│ web   │ api   │  db   │cache │
│(con)  │(con)  │ (vm)  │(con) │
└───────┴───────┴───────┴──────┘
```

### Key Concepts

- **Bridges**: Linux bridge interfaces that act as virtual switches
- **veth pairs**: Virtual ethernet pairs connecting containers to bridges
- **TAP interfaces**: Virtual network interfaces connecting VMs to bridges
- **L2 peers**: Containers and VMs on the same bridge communicate directly at Layer 2
## Default Bridge: volt0

When Volt initializes, it creates the `volt0` bridge with a default subnet of `10.0.0.0/24`. All workloads connect here unless assigned to a different network.

The bridge IP (`10.0.0.1`) serves as the default gateway for workloads. NAT rules handle outbound traffic to the host network and beyond.

```bash
# View bridge status
volt net bridge list

# View all network status
volt net status
```
## Creating Networks

### Basic Network

```bash
volt net create --name backend --subnet 10.30.0.0/24
```

This:
1. Creates a Linux bridge named `volt-backend`
2. Assigns `10.30.0.1/24` to the bridge interface
3. Configures NAT for outbound connectivity
4. Updates internal DNS for name resolution
### Internal (Isolated) Network

```bash
volt net create --name internal --subnet 10.50.0.0/24 --no-nat
```

Internal networks have no NAT rules and no outbound connectivity. Workloads on internal networks can only communicate with each other.

### Inspecting Networks

```bash
volt net inspect backend
volt net list
volt net list -o json
```
## Connecting Workloads

### Connect to a Network

```bash
# Connect a container
volt net connect backend api-server

# Connect a VM
volt net connect backend db-primary
```

When connected, the workload gets:
- A veth pair (container) or TAP interface (VM) attached to the bridge
- An IP address from the network's subnet via DHCP or static assignment
- DNS resolution for all other workloads on the same network

### Disconnect

```bash
volt net disconnect api-server
```

### Cross-Type Communication

A key feature of Volt networking: containers and VMs on the same network are L2 peers. There is no translation layer.

```bash
# Both on the "backend" network
volt net connect backend api-server   # container
volt net connect backend db-primary   # VM

# From inside the api-server container:
psql -h db-primary -U app -d myapp    # just works
```

This works because:
- The container's veth and the VM's TAP are both ports on the same bridge
- Frames flow directly between them at L2
- Internal DNS resolves `db-primary` to its bridge IP
## Firewall Rules

Volt's firewall wraps `nftables` with a workload-aware interface. Rules can reference workloads by name.

### Listing Rules

```bash
volt net firewall list
```

### Adding Rules

```bash
# Allow HTTP to a workload
volt net firewall add --name allow-http \
  --source any --dest 10.0.0.5 --port 80,443 --proto tcp --action accept

# Allow DB access from a specific subnet
volt net firewall add --name db-access \
  --source 10.0.0.0/24 --dest 10.30.0.10 --port 5432 --proto tcp --action accept

# Block SSH from everywhere
volt net firewall add --name block-ssh \
  --source any --dest 10.0.0.5 --port 22 --proto tcp --action drop
```

### Deleting Rules

```bash
volt net firewall delete --name allow-http
```

### Flushing All Rules

```bash
volt net firewall flush
```
### How It Works Internally

Volt manages a dedicated nftables table called `volt` with chains for:

| Chain           | Purpose                                  |
|-----------------|------------------------------------------|
| `volt-input`    | Traffic destined for the host            |
| `volt-forward`  | Traffic between workloads (inter-bridge) |
| `volt-nat-pre`  | DNAT rules (port forwarding inbound)     |
| `volt-nat-post` | SNAT rules (masquerade for outbound)     |

Rules added via `volt net firewall add` are inserted into the appropriate chain based on source/destination. The chain is determined automatically — you don't need to know whether traffic is "input" or "forward".
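As a rough illustration, the resulting ruleset has this shape (nft syntax; the specific rules shown are assumptions, not Volt's literal output):

```
table inet volt {
    chain volt-input {
        type filter hook input priority filter;
        ct state established,related accept
        # per-rule accepts/drops for host-bound traffic go here
    }
    chain volt-forward {
        type filter hook forward priority filter;
        iifname "volt0" oifname "volt0" accept   # same-network traffic
        # cross-network traffic is dropped unless a rule allows it
    }
    chain volt-nat-pre {
        type nat hook prerouting priority dstnat;
        # DNAT rules from "volt net port add"
    }
    chain volt-nat-post {
        type nat hook postrouting priority srcnat;
        ip saddr 10.0.0.0/24 oifname != "volt0" masquerade
    }
}
```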
### Default Policy

- **Inbound to host**: deny all (except established connections)
- **Inter-workload (same network)**: allow
- **Inter-workload (different network)**: deny
- **Outbound from workloads**: allow (via NAT)
- **Host access from workloads**: deny by default
## Port Forwarding

Forward host ports to workloads:

### Adding Port Forwards

```bash
# Forward host:80 to container web-frontend:80
volt net port add --host-port 80 --target web-frontend --target-port 80

# Forward host:5432 to VM db-primary:5432
volt net port add --host-port 5432 --target db-primary --target-port 5432
```

### Listing Port Forwards

```bash
volt net port list
```

Output:
```
HOST-PORT  TARGET        TARGET-PORT  PROTO  STATUS
80         web-frontend  80           tcp    active
443        web-frontend  443          tcp    active
5432       db-primary    5432         tcp    active
```

### How It Works

Port forwards create DNAT rules in nftables:
1. Incoming traffic on `host:port` is DNATed to `workload-ip:target-port`
2. Return traffic is tracked by conntrack and SNATed back
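In nft terms, a forward like `--host-port 80 --target web-frontend --target-port 80` becomes roughly one rule in the `volt-nat-pre` chain (illustrative; `10.0.0.5` is an assumed address for `web-frontend`):

```
tcp dport 80 dnat ip to 10.0.0.5:80
```

The reverse path needs no explicit rule: conntrack rewrites the reply packets automatically.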
## DNS Resolution

Volt runs an internal DNS resolver (`volt-dns.service`) that provides automatic name resolution for all workloads.

### How It Works

1. When a workload starts, Volt registers its name and IP in the internal DNS
2. All workloads are configured to use the bridge gateway IP as their DNS server
3. Lookups for workload names resolve to their bridge IPs
4. Unknown queries are forwarded to upstream DNS servers
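Inside a workload, the resulting resolver configuration looks something like this (illustrative; `10.0.0.1` assumes the default `volt0` network):

```
# /etc/resolv.conf inside a workload on volt0
nameserver 10.0.0.1    # bridge gateway; volt-dns answers here
search volt.local
```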
### Upstream DNS

Configured in `/etc/volt/config.yaml`:

```yaml
network:
  dns:
    enabled: true
    upstream:
      - 1.1.1.1
      - 8.8.8.8
    search_domains:
      - volt.local
```
### DNS Management

```bash
# List DNS entries
volt net dns list

# Flush DNS cache
volt net dns flush
```

### Name Resolution Examples

Within any workload on the same network:

```bash
# Resolve by name
ping db-primary                       # resolves to 10.30.0.10
curl http://api-server:8080/health
psql -h db-primary -U app -d myapp
```
## Network Policies

Policies define allowed communication patterns between specific workloads. They provide finer-grained control than firewall rules.

### Creating Policies

```bash
# Only app-server can reach db-primary on port 5432
volt net policy create --name app-to-db \
  --from app-server --to db-primary --port 5432 --action allow
```

### Listing Policies

```bash
volt net policy list
```

### Testing Connectivity

Before deploying, test whether traffic would be allowed:

```bash
# This should succeed
volt net policy test --from app-server --to db-primary --port 5432
# ✓ app-server → db-primary:5432 — ALLOWED (policy: app-to-db)

# This should fail
volt net policy test --from web-frontend --to db-primary --port 5432
# ✗ web-frontend → db-primary:5432 — DENIED
```

### Deleting Policies

```bash
volt net policy delete --name app-to-db
```
## VLANs

### Listing VLANs

```bash
volt net vlan list
```

VLAN management is available for advanced network segmentation. VLANs are created on top of physical interfaces and can be used as bridge uplinks.
## Ingress Proxy

Volt includes a built-in reverse proxy for routing external HTTP/HTTPS traffic to workloads by hostname and path prefix. It supports automatic TLS via ACME (Let's Encrypt), manual certificates, WebSocket passthrough, health checks, and zero-downtime route reloading.

### Creating Routes

Route external traffic to workloads by hostname:

```bash
# Simple HTTP route
volt ingress create --name web \
  --hostname app.example.com \
  --backend web:8080

# Route with path prefix
volt ingress create --name api \
  --hostname api.example.com \
  --path /v1 \
  --backend api:3000

# Route with automatic TLS (Let's Encrypt)
volt ingress create --name secure-web \
  --hostname app.example.com \
  --backend web:8080 \
  --tls auto

# Route with manual TLS certificate
volt ingress create --name cdn \
  --hostname cdn.example.com \
  --backend static:80 \
  --tls manual \
  --cert /etc/certs/cdn.pem \
  --key /etc/certs/cdn.key
```
### TLS Termination

Three TLS modes are available:

| Mode          | Description                                                                 |
|---------------|-----------------------------------------------------------------------------|
| `auto`        | ACME (Let's Encrypt) — automatic certificate issuance, renewal, and storage |
| `manual`      | User-provided certificate and key files                                     |
| `passthrough` | Forward TLS directly to the backend without termination                     |

```bash
# Auto ACME — Volt handles everything
volt ingress create --name web --hostname app.example.com --backend web:8080 --tls auto

# Manual certs
volt ingress create --name web --hostname app.example.com --backend web:8080 \
  --tls manual --cert /etc/certs/app.pem --key /etc/certs/app.key

# TLS passthrough — backend handles TLS
volt ingress create --name web --hostname app.example.com --backend web:443 --tls passthrough
```

For ACME to work, the ingress proxy must be reachable on port 80 from the internet (for HTTP-01 challenges). Ensure your DNS records point to the server running the proxy.
### WebSocket Passthrough

WebSocket connections are passed through automatically. When a client sends an HTTP Upgrade request, the ingress proxy upgrades the connection and proxies frames bidirectionally to the backend. No additional configuration is needed.

### Health Checks

The ingress proxy monitors backend health. If a backend becomes unreachable, it is temporarily removed from the routing table until it recovers. Configure backend timeouts per route:

```bash
volt ingress create --name api --hostname api.example.com \
  --backend api:3000 --timeout 60
```

The `--timeout` flag sets the backend timeout in seconds (default: 30).
### Hot Reload

Update routes without restarting the proxy or dropping active connections:

```bash
volt ingress reload
```

Existing connections are drained gracefully while new connections immediately use the updated routes. This is safe to call from CI/CD pipelines or GitOps workflows.
### Managing Routes

```bash
# List all routes
volt ingress list

# Show proxy status
volt ingress status

# Delete a route
volt ingress delete --name web
```

### Running the Proxy

**Foreground (testing):**
```bash
volt ingress serve
volt ingress serve --http-port 8080 --https-port 8443
```

**Production (systemd):**
```bash
systemctl enable --now volt-ingress.service
```

### Example: Full Ingress Setup

```bash
# Create routes for a web application
volt ingress create --name web \
  --hostname app.example.com \
  --backend web:8080 \
  --tls auto

volt ingress create --name api \
  --hostname api.example.com \
  --path /v1 \
  --backend api:3000 \
  --tls auto

volt ingress create --name ws \
  --hostname ws.example.com \
  --backend realtime:9000 \
  --tls auto

# Start the proxy
systemctl enable --now volt-ingress.service

# Verify
volt ingress list
volt ingress status
```

---
## Bridge Management

### Listing Bridges

```bash
volt net bridge list
```

Output:
```
NAME     SUBNET        MTU   CONNECTED  STATUS
volt0    10.0.0.0/24   1500  8          up
backend  10.30.0.0/24  1500  3          up
```

### Creating a Bridge

```bash
volt net bridge create mybridge --subnet 10.50.0.0/24
```

### Deleting a Bridge

```bash
volt net bridge delete mybridge
```
## Network Configuration

### Config File

Network settings in `/etc/volt/config.yaml`:

```yaml
network:
  default_bridge: volt0
  default_subnet: 10.0.0.0/24
  dns:
    enabled: true
    upstream:
      - 1.1.1.1
      - 8.8.8.8
    search_domains:
      - volt.local
  mtu: 1500
```

### Per-Network Settings in Compose

```yaml
networks:
  frontend:
    driver: bridge
    subnet: 10.20.0.0/24
    options:
      mtu: 9000

  backend:
    driver: bridge
    subnet: 10.30.0.0/24
    internal: true    # No external access
```
|
||||
|
||||
## Network Tuning

For high-throughput workloads, tune network buffer sizes and offloading:

```bash
# Increase buffer sizes
volt tune net buffers --rmem-max 16M --wmem-max 16M

# Show current tuning
volt tune net show
```

Relevant sysctls:

```bash
volt tune sysctl set net.core.somaxconn 65535
volt tune sysctl set net.ipv4.ip_forward 1
volt tune sysctl set net.core.rmem_max 16777216
volt tune sysctl set net.core.wmem_max 16777216
```
## Troubleshooting Network Issues

### Container Can't Reach the Internet

1. Check the bridge exists: `volt net bridge list`
2. Check NAT is configured: `volt net firewall list`
3. Check IP forwarding: `volt tune sysctl get net.ipv4.ip_forward`
4. Verify the container has an IP: `volt container inspect <name>`

### Workloads Can't Reach Each Other

1. Verify both are on the same network: `volt net inspect <network>`
2. Check firewall rules aren't blocking: `volt net firewall list`
3. Check network policies: `volt net policy list`
4. Test connectivity: `volt net policy test --from <src> --to <dst> --port <port>`

### DNS Not Resolving

1. Check the DNS service: `volt net dns list`
2. Flush the DNS cache: `volt net dns flush`
3. Verify upstream DNS: check `network.dns.upstream` in `/etc/volt/config.yaml`

### Port Forward Not Working

1. List active forwards: `volt net port list`
2. Check the target workload is running: `volt ps`
3. Verify the target port is listening inside the workload
4. Check firewall rules aren't blocking inbound traffic

See [troubleshooting.md](troubleshooting.md) for more.
---
# Volt Registry

Volt includes a built-in **OCI Distribution Spec compliant container registry** backed by Stellarium CAS. Any OCI-compliant client — ORAS, Helm, Podman, Buildah, or Skopeo — can push and pull artifacts.

## How It Works

The registry maps OCI concepts directly to Stellarium CAS:

- **Blobs** — The SHA-256 digest from the OCI spec IS the CAS address. No translation layer, no indirection.
- **Manifests** — Stored and indexed alongside the CAS store, referenced by digest and optionally by tag.
- **Tags** — Named pointers to manifest digests, enabling human-readable versioning.

This design means every blob is automatically deduplicated across repositories, verified on every read, and eligible for CAS-wide garbage collection.
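That direct mapping is easy to sketch: the blob's SHA-256 digest is the object name, stored under a two-character fan-out directory (the `/var/lib/volt/cas/objects/ab/abc123...` layout shown in the CAS Integration section below). The fan-out detail is inferred from that path, not a documented contract:

```bash
# Derive the CAS object path for a blob from its OCI digest.
# Assumes the two-character fan-out seen in the CAS Integration diagram.
digest=$(printf 'hello' | sha256sum | cut -d' ' -f1)
echo "sha256:${digest}"
echo "/var/lib/volt/cas/objects/${digest:0:2}/${digest}"
```

Because the digest is the address, two repositories pushing the same blob resolve to the same object path, which is where the deduplication falls out for free.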
## Licensing

| Operation | License Required |
|-----------|------------------|
| Pull (read) | Free — all tiers |
| Push (write) | Pro license required |
## Quick Start

### Start the Registry

```bash
# Start on default port 5000
volt registry serve --port 5000
```

The registry is now available at `http://localhost:5000`.

### Push an Artifact

Use [ORAS](https://oras.land/) or any OCI-compliant client to push artifacts:

```bash
# Push a file as an OCI artifact
oras push localhost:5000/myapp:v1 ./artifact.tar.gz

# Push multiple files
oras push localhost:5000/myapp:v1 ./binary:application/octet-stream ./config.yaml:text/yaml
```

### Pull an Artifact

```bash
# Pull with ORAS
oras pull localhost:5000/myapp:v1

# Pull with any OCI-compliant tool
# The registry speaks standard OCI Distribution Spec
```

### List Repositories

```bash
volt registry list
```

### Check Registry Status

```bash
volt registry status
```
## Authentication

The registry uses bearer tokens for authentication. Generate tokens with `volt registry token`.

### Generate a Pull Token (Read-Only)

```bash
volt registry token
```

### Generate a Push Token (Read-Write)

```bash
volt registry token --push
```

### Custom Expiry

```bash
volt registry token --push --expiry 7d
volt registry token --expiry 1h
```

Tokens are HMAC-SHA256 signed and include an expiration time. Pass the token to clients via the `Authorization: Bearer <token>` header or the client's authentication mechanism.
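For intuition, here is how an HMAC-SHA256 signed token with an expiry works in principle. This is an illustration only: Volt's actual token payload and encoding are internal, and the field layout below is hypothetical.

```bash
# Sign: HMAC the payload (scopes + expiry) with the registry's secret key.
secret="registry-signing-key"                 # hypothetical key
expiry=$(( $(date +%s) + 3600 ))              # valid for one hour
payload="pull,push:${expiry}"
sig=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" -r | cut -d' ' -f1)
token="${payload}:${sig}"

# Verify: recompute the signature and check the expiry has not passed.
check=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" -r | cut -d' ' -f1)
[ "$check" = "$sig" ] && [ "$expiry" -gt "$(date +%s)" ] && echo "token valid"
```

The key property: a token can be verified statelessly (no database lookup), and tampering with either the scopes or the expiry invalidates the signature.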
### Using Tokens with ORAS

```bash
# Generate a push token
TOKEN=$(volt registry token --push)

# Use it with ORAS
oras push --registry-config <(echo '{"auths":{"localhost:5000":{"auth":"'$(echo -n ":$TOKEN" | base64)'"}}}') \
  localhost:5000/myapp:v1 ./artifact
```
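The `auth` value constructed above is just `base64(":<token>")`: an empty username with the token as the password, the usual layout of a registry config `auth` field. A quick check (the token value here is a placeholder):

```bash
TOKEN="example-token"              # placeholder, not a real token
auth=$(printf ':%s' "$TOKEN" | base64)
printf '%s' "$auth" | base64 -d    # recovers ":example-token"
```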
### Anonymous Pull

By default, the registry allows anonymous pull (`--public` is enabled). To require authentication for all operations:

```bash
volt registry serve --port 5000 --public=false
```
## TLS Configuration

For production deployments, enable TLS:

```bash
volt registry serve --port 5000 \
  --tls \
  --cert /etc/volt/certs/registry.pem \
  --key /etc/volt/certs/registry.key
```

With TLS enabled, clients connect via `https://your-host:5000`.

## Read-Only Mode

Run the registry in read-only mode to serve as a pull-only mirror:

```bash
volt registry serve --port 5000 --read-only
```

In this mode, all push operations return `405 Method Not Allowed`.
## Garbage Collection

Over time, unreferenced blobs accumulate as tags are updated or deleted. Use garbage collection to reclaim space.

### Dry Run

See what would be deleted without actually deleting:

```bash
volt registry gc --dry-run
```

### Run GC

```bash
volt registry gc
```

Garbage collection is safe to run while the registry is serving traffic. Blobs that are currently referenced by any manifest or tag will never be collected.

Since registry blobs are stored in Stellarium CAS, you may also want to run `volt cas gc` to clean up CAS objects that are no longer referenced by any registry manifest, image, or snapshot.
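The reachability rule behind GC can be sketched as a toy mark-and-sweep over a throwaway directory (illustrative only, not Volt's implementation):

```bash
# Toy blob store: blobs referenced by any manifest survive the sweep.
store=$(mktemp -d)
mkdir -p "$store/blobs" "$store/manifests"
touch "$store/blobs/aaa" "$store/blobs/bbb" "$store/blobs/ccc"
echo "aaa bbb" > "$store/manifests/myapp-v1"   # manifest references aaa, bbb

# Mark: collect every digest referenced by any manifest.
marked=$(cat "$store/manifests"/* | tr ' ' '\n' | sort -u)

# Sweep: delete blobs not in the marked set.
for blob in "$store/blobs"/*; do
  name=$(basename "$blob")
  echo "$marked" | grep -qx "$name" || rm "$blob"
done

ls "$store/blobs"   # aaa and bbb remain; ccc was collected
```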
## Production Deployment

For production use, run the registry as a systemd service instead of in the foreground:

```bash
# Enable and start the registry service
systemctl enable --now volt-registry.service
```

The systemd service is pre-configured to start the registry on port 5000. To customize the port or TLS settings, edit the service configuration:

```bash
volt service edit volt-registry
```
## CDN Integration (Pro)

Pro license holders can configure CDN integration for globally distributed blob serving. When enabled, pull requests for large blobs are redirected to CDN edge nodes, reducing origin load and improving download speeds for geographically distributed clients.

Configure CDN integration in `/etc/volt/config.yaml`:

```yaml
registry:
  cdn:
    enabled: true
    provider: bunny  # CDN provider
    origin: https://registry.example.com:5000
    pull_zone: volt-registry
```
## CAS Integration

The registry's storage is fully integrated with Stellarium CAS:

```
OCI Blob (sha256:abc123...) ──→ CAS Object (/var/lib/volt/cas/objects/ab/abc123...)
                                      ↑
                        Same object used by:
                          • Container images
                          • VM disk layers
                          • Snapshots
                          • Bundles
```

This means:

- **Zero-copy** — pushing an image that shares layers with existing images stores no new data
- **Cross-system dedup** — a blob shared between a container image and a registry artifact is stored once
- **Unified GC** — `volt cas gc` cleans up unreferenced objects across the entire system
## API Endpoints

The registry implements the [OCI Distribution Spec](https://github.com/opencontainers/distribution-spec/blob/main/spec.md):

| Method | Path | Description |
|--------|------|-------------|
| `GET` | `/v2/` | API version check |
| `GET` | `/v2/_catalog` | List repositories |
| `GET` | `/v2/<name>/tags/list` | List tags |
| `HEAD` | `/v2/<name>/manifests/<ref>` | Check manifest exists |
| `GET` | `/v2/<name>/manifests/<ref>` | Get manifest |
| `PUT` | `/v2/<name>/manifests/<ref>` | Push manifest (Pro) |
| `DELETE` | `/v2/<name>/manifests/<ref>` | Delete manifest (Pro) |
| `HEAD` | `/v2/<name>/blobs/<digest>` | Check blob exists |
| `GET` | `/v2/<name>/blobs/<digest>` | Get blob |
| `POST` | `/v2/<name>/blobs/uploads/` | Start blob upload (Pro) |
| `PATCH` | `/v2/<name>/blobs/uploads/<id>` | Upload blob chunk (Pro) |
| `PUT` | `/v2/<name>/blobs/uploads/<id>` | Complete blob upload (Pro) |
| `DELETE` | `/v2/<name>/blobs/<digest>` | Delete blob (Pro) |

## See Also

- [CLI Reference — Registry Commands](cli-reference.md#volt-registry--oci-container-registry)
- [Architecture — ORAS Registry](architecture.md#oras-registry)
- [Stellarium CAS](architecture.md#stellarium--content-addressed-storage)
---
# Troubleshooting

Common issues and solutions for the Volt Platform.

## Quick Diagnostics

Run these first to understand the state of your system:

```bash
# Platform health check
volt system health

# Platform info
volt system info

# What's running?
volt ps --all

# Daemon status
volt daemon status

# Network status
volt net status
```

---
## Container Issues

### Container Won't Start

**Symptom**: `volt container start <name>` fails or returns an error.

**Check the logs first**:

```bash
volt container logs <name>
volt logs <name>
```

**Common causes**:

1. **Image not found**

   ```
   Error: image "ubuntu:24.04" not found
   ```

   Pull the image first:

   ```bash
   sudo volt image pull ubuntu:24.04
   volt image list
   ```

2. **Name conflict**

   ```
   Error: container "web" already exists
   ```

   Delete the existing container or use a different name:

   ```bash
   volt container delete web
   ```

3. **systemd-nspawn not installed**

   ```
   Error: systemd-nspawn not found
   ```

   Install the systemd-container package:

   ```bash
   # Debian/Ubuntu
   sudo apt install systemd-container

   # Fedora/Rocky
   sudo dnf install systemd-container
   ```

4. **Rootfs directory missing or corrupt**

   ```bash
   ls -la /var/lib/volt/containers/<name>/rootfs/
   ```

   If empty or missing, recreate the container:

   ```bash
   volt container delete <name>
   volt container create --name <name> --image <image> --start
   ```

5. **Resource limits too restrictive**

   Try creating without limits, then add them:

   ```bash
   volt container create --name test --image ubuntu:24.04 --start
   volt container update test --memory 512M
   ```
### Container Starts But Process Exits Immediately

**Check the main process**:

```bash
volt container logs <name>
volt container inspect <name>
```

Common cause: the container has no init process, or the specified command doesn't exist in the image.

```bash
# Try an interactive shell to debug
volt container shell <name>
```

### Can't Exec Into Container

**Symptom**: `volt container exec` fails.

1. **Container not running**:

   ```bash
   volt ps --all | grep <name>
   volt container start <name>
   ```

2. **Shell not available in image**:

   The default shell (`/bin/sh`) might not exist in minimal images. Try an explicit shell:

   ```bash
   volt container exec <name> -- /bin/bash
   volt container exec <name> -- /bin/busybox sh
   ```

### Container Resource Limits Not Working

Verify cgroup v2 is enabled:

```bash
mount | grep cgroup2
# Should show: cgroup2 on /sys/fs/cgroup type cgroup2
```

Check the cgroup settings:

```bash
volt container inspect <name> -o json | grep -i memory
cat /sys/fs/cgroup/system.slice/volt-container@<name>.service/memory.max
```

---
## VM Issues

### VM Won't Start

**Check prerequisites**:

```bash
# KVM available?
ls -la /dev/kvm

# QEMU installed?
which qemu-system-x86_64

# Kernel modules loaded?
lsmod | grep kvm
```

**If `/dev/kvm` doesn't exist**:

```bash
# Load KVM modules
sudo modprobe kvm
sudo modprobe kvm_intel  # or kvm_amd

# Check BIOS: virtualization must be enabled (VT-x / AMD-V)
dmesg | grep -i kvm
```

**If permission denied on `/dev/kvm`**:

```bash
# Add user to kvm group
sudo usermod -aG kvm $USER
# Log out and back in

# Or check group ownership
ls -la /dev/kvm
# Should be: crw-rw---- 1 root kvm
```

### VM Starts But No SSH Access

1. **VM might still be booting**. Wait 30-60 seconds for first boot.

2. **Check the VM has an IP**:

   ```bash
   volt vm list -o wide
   ```

3. **SSH might not be installed/running in the VM**:

   ```bash
   volt vm exec <name> -- systemctl status sshd
   ```

4. **Network connectivity**:

   ```bash
   # From the host, ping the VM's IP
   ping <vm-ip>
   ```

### VM Performance Issues

Apply a tuning profile:

```bash
volt tune profile apply <vm-name> --profile database
```

Or tune individually:

```bash
# Pin CPUs
volt tune cpu pin <vm-name> --cpus 4,5,6,7

# Enable hugepages
volt tune memory hugepages --enable --size 2M --count 4096

# Set I/O scheduler
volt tune io scheduler /dev/sda --scheduler none
```

---
## Service Issues

### Service Won't Start

```bash
# Check status
volt service status <name>

# View logs
volt service logs <name>

# View the unit file for issues
volt service show <name>
```

Common causes:

1. **ExecStart path doesn't exist**:

   ```bash
   which <binary-path>
   ```

2. **User/group doesn't exist**:

   ```bash
   id <service-user>
   # Create if missing
   sudo useradd -r -s /bin/false <service-user>
   ```

3. **Working directory doesn't exist**:

   ```bash
   ls -la <workdir-path>
   sudo mkdir -p <workdir-path>
   ```

4. **Port already in use**:

   ```bash
   ss -tlnp | grep <port>
   ```

### Service Keeps Restarting

Check the restart loop:

```bash
volt service status <name>
volt service logs <name> --tail 50
```

If the service fails immediately on start, systemd may hit the start rate limit. Check:

```bash
# View full systemd status
systemctl status <name>.service
```

Temporarily adjust restart behavior:

```bash
volt service edit <name> --inline "RestartSec=10"
```

### Can't Delete a Service

```bash
# If it says "refusing to delete system unit":
# Volt protects system services. Only user-created services can be deleted.

# If stuck, step through manually:
volt service stop <name>
volt service disable <name>
volt service delete <name>
```

---
## Networking Issues

### No Network Connectivity from Container

1. **Check the bridge exists**:

   ```bash
   volt net bridge list
   ```

   If `volt0` is missing:

   ```bash
   sudo volt net bridge create volt0 --subnet 10.0.0.0/24
   ```

2. **Check IP forwarding**:

   ```bash
   volt tune sysctl get net.ipv4.ip_forward
   # Should be 1. If not:
   sudo volt tune sysctl set net.ipv4.ip_forward 1 --persist
   ```

3. **Check NAT/masquerade rules**:

   ```bash
   sudo nft list ruleset | grep masquerade
   ```

4. **Check the container has an IP**:

   ```bash
   volt container inspect <name>
   ```

### Workloads Can't Resolve Names

1. **Check internal DNS**:

   ```bash
   volt net dns list
   ```

2. **Flush the DNS cache**:

   ```bash
   volt net dns flush
   ```

3. **Check upstream DNS in config**:

   ```bash
   volt config get network.dns.upstream
   ```

### Port Forward Not Working

1. **Verify the forward exists**:

   ```bash
   volt net port list
   ```

2. **Check the target is running and listening**:

   ```bash
   volt ps | grep <target>
   volt container exec <target> -- ss -tlnp
   ```

3. **Check firewall rules**:

   ```bash
   volt net firewall list
   ```

4. **Check for host-level firewall conflicts**:

   ```bash
   sudo nft list ruleset
   sudo iptables -L -n  # if iptables is also in use
   ```
### Firewall Rule Not Taking Effect

1. **List current rules**:

   ```bash
   volt net firewall list
   ```

2. **Rule ordering matters**. More specific rules should come first. If a broad `deny` rule precedes your `accept` rule, traffic will be blocked.

3. **Flush and recreate if confused**:

   ```bash
   volt net firewall flush
   # Re-add rules in the correct order
   ```
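The ordering rule in point 2 is classic first-match semantics, which can be modeled in a few lines of shell (an illustration of the evaluation order, not Volt's actual rule engine):

```bash
# First-match evaluation: the verdict of the FIRST rule whose pattern
# matches the traffic wins; later rules are never reached.
match() {
  traffic="$1"; shift
  for rule in "$@"; do
    pattern="${rule%%=*}"; verdict="${rule##*=}"
    case "$traffic" in
      $pattern) echo "$verdict"; return ;;
    esac
  done
  echo "default-accept"
}

# Broad deny listed first shadows the specific accept:
match "tcp/8080" "tcp/*=deny" "tcp/8080=accept"   # -> deny

# Specific accept listed first works as intended:
match "tcp/8080" "tcp/8080=accept" "tcp/*=deny"   # -> accept
```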
---

## Daemon Issues

### Daemon Not Running

```bash
volt daemon status
# If not running:
sudo volt daemon start
```

Check systemd:

```bash
systemctl status volt.service
journalctl -u volt.service --no-pager -n 50
```

### Daemon Won't Start

1. **Socket in use**:

   ```bash
   ls -la /var/run/volt/volt.sock
   # Remove the stale socket
   sudo rm /var/run/volt/volt.sock
   sudo volt daemon start
   ```

2. **Config file invalid**:

   ```bash
   volt config validate
   ```

3. **Missing directories**:

   ```bash
   sudo mkdir -p /var/lib/volt /var/run/volt /var/log/volt /var/cache/volt /etc/volt
   ```

4. **PID file stale**:

   ```bash
   cat /var/run/volt/volt.pid
   # Check if that PID exists
   ps -p $(cat /var/run/volt/volt.pid)
   # If no process, remove it
   sudo rm /var/run/volt/volt.pid
   sudo volt daemon start
   ```

### Commands Timeout

```bash
# Increase the timeout
volt --timeout 120 <command>

# Or check if the daemon is overloaded
volt daemon status
volt top
```

---
## Permission Issues

### "Permission denied" Errors

Most state-changing operations require root or `volt` group membership:

```bash
# Add user to the volt group
sudo usermod -aG volt $USER
# Log out and back in for the group change to take effect

# Or use sudo
sudo volt container create --name web --image ubuntu:24.04 --start
```

### Read-Only Operations Work, Write Operations Fail

This is expected for non-root users outside the `volt` group. These commands always work:

```bash
volt ps            # Read-only
volt top           # Read-only
volt logs <name>   # Read-only
volt service list  # Read-only
volt config show   # Read-only
```

These require privileges:

```bash
volt container create   # Needs root/volt group
volt service create     # Needs root
volt net firewall add   # Needs root
volt tune sysctl set    # Needs root
```

---
## Storage Issues

### Disk Space Full

```bash
# Check disk usage
volt system info

# Clean up unused images
volt image list
volt image delete <unused-image>

# Clean CAS garbage
volt cas gc --dry-run
volt cas gc

# Clear the cache (safe to delete)
sudo rm -rf /var/cache/volt/*

# Check container sizes
du -sh /var/lib/volt/containers/*/
```

### CAS Integrity Errors

```bash
# Verify the CAS store
volt cas verify

# If corrupted objects are found, re-pull the affected images
volt image delete <affected-image>
volt image pull <image>
```

### Volume Won't Attach

1. **Volume exists?**

   ```bash
   volt volume list
   ```

2. **Already attached?**

   ```bash
   volt volume inspect <name>
   ```

3. **Target workload running?**

   Volumes can typically only be attached to running workloads.

---
## Compose Issues

### `volt compose up` Fails

1. **Validate the compose file**:

   ```bash
   volt compose config
   ```

2. **Missing images**:

   ```bash
   volt compose pull
   ```

3. **Dependency issues**: Check that `depends_on` targets exist in the file and that their conditions can be met.

4. **Network conflicts**: If subnets overlap with existing networks:

   ```bash
   volt net list
   ```

### Environment Variables Not Resolving

```bash
# Check the .env file exists in the same directory as the compose file
cat .env

# Variables must be set in the host environment or the .env file
export DB_PASSWORD=mysecret
volt compose up
```

Undefined variables with no default cause an error. Use default syntax:

```yaml
environment:
  DB_PASSWORD: "${DB_PASSWORD:-defaultpass}"
```
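The `${VAR:-default}` form is standard shell parameter expansion, so you can verify the behavior directly in a shell before relying on it in a compose file:

```bash
# Unset variable: the default after ':-' is substituted.
unset DB_PASSWORD
echo "${DB_PASSWORD:-defaultpass}"   # -> defaultpass

# Set variable: the real value wins.
DB_PASSWORD=mysecret
echo "${DB_PASSWORD:-defaultpass}"   # -> mysecret

# ':-' also substitutes when the variable is set but EMPTY;
# plain '-' substitutes only when the variable is unset.
DB_PASSWORD=
echo "${DB_PASSWORD:-defaultpass}"   # -> defaultpass
echo "${DB_PASSWORD-defaultpass}"    # -> (empty line)
```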
---
|
||||
|
||||
## Exit Codes
|
||||
|
||||
Use exit codes in scripts for error handling:
|
||||
|
||||
| Code | Meaning | Action |
|
||||
|------|---------|--------|
|
||||
| 0 | Success | Continue |
|
||||
| 2 | Bad arguments | Fix command syntax |
|
||||
| 3 | Not found | Resource doesn't exist |
|
||||
| 4 | Already exists | Resource name taken |
|
||||
| 5 | Permission denied | Use sudo or join `volt` group |
|
||||
| 6 | Daemon down | `sudo volt daemon start` |
|
||||
| 7 | Timeout | Retry with `--timeout` |
|
||||
| 9 | Conflict | Resource in wrong state |
|
||||
|
||||
```bash
|
||||
volt container start web
|
||||
case $? in
|
||||
0) echo "Started" ;;
|
||||
3) echo "Container not found" ;;
|
||||
5) echo "Permission denied — try sudo" ;;
|
||||
6) echo "Daemon not running — sudo volt daemon start" ;;
|
||||
9) echo "Already running" ;;
|
||||
*) echo "Error: $?" ;;
|
||||
esac
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Collecting Debug Info

When reporting issues, gather:

```bash
# Version
volt --version

# System info
volt system info -o json

# Health check
volt system health

# Daemon logs
journalctl -u volt.service --no-pager -n 100

# Run the failing command with debug output
volt --debug <failing-command>

# Audit log
tail -50 /var/log/volt/audit.log
```

## Factory Reset

If all else fails, reset Volt to defaults. **This is destructive** — it stops all workloads and removes all configuration.

```bash
sudo volt system reset --confirm
```

After reset, reinitialize:

```bash
sudo volt daemon start
volt system health
```