Complete infrastructure platform CLI: - Container runtime (systemd-nspawn) - VoltVisor VMs (Neutron Stardust / QEMU) - Stellarium CAS (content-addressed storage) - ORAS Registry - GitOps integration - Landlock LSM security - Compose orchestration - Mesh networking Copyright (c) Armored Gates LLC. All rights reserved. Licensed under AGPSL v5.0
# Volt Networking

Volt networking provides a unified interface for all workload connectivity. It is built on Linux bridge interfaces and nftables, supporting containers and VMs on the same L2 network.

## Architecture Overview

```
┌───────────────────────────────┐
│         Host Network          │
│         (eth0, etc.)          │
└──────────────┬────────────────┘
               │ NAT / routing
┌──────────────┴────────────────┐
│        volt0 (bridge)         │
│         10.0.0.1/24           │
├───────┬───────┬───────┬───────┤
│ veth  │ veth  │  tap  │ veth  │
│   ↓   │   ↓   │   ↓   │   ↓   │
│  web  │  api  │  db   │ cache │
│ (con) │ (con) │ (vm)  │ (con) │
└───────┴───────┴───────┴───────┘
```


### Key Concepts

- **Bridges**: Linux bridge interfaces that act as virtual switches
- **veth pairs**: Virtual ethernet pairs connecting containers to bridges
- **TAP interfaces**: Virtual network interfaces connecting VMs to bridges
- **L2 peers**: Containers and VMs on the same bridge communicate directly at Layer 2

## Default Bridge: volt0

When Volt initializes, it creates the `volt0` bridge with a default subnet of `10.0.0.0/24`. All workloads connect here unless assigned to a different network.

The bridge IP (`10.0.0.1`) serves as the default gateway for workloads. NAT rules handle outbound traffic to the host network and beyond.

```bash
# View bridge status
volt net bridge list

# View all network status
volt net status
```


## Creating Networks

### Basic Network

```bash
volt net create --name backend --subnet 10.30.0.0/24
```


This command:

1. Creates a Linux bridge named `volt-backend`
2. Assigns `10.30.0.1/24` to the bridge interface
3. Configures NAT for outbound connectivity
4. Updates internal DNS for name resolution

### Internal (Isolated) Network

```bash
volt net create --name internal --subnet 10.50.0.0/24 --no-nat
```

Internal networks have no NAT rules and no outbound connectivity. Workloads on internal networks can only communicate with each other.

### Inspecting Networks

```bash
volt net inspect backend
volt net list
volt net list -o json
```

## Connecting Workloads

### Connect to a Network

```bash
# Connect a container
volt net connect backend api-server

# Connect a VM
volt net connect backend db-primary
```

When connected, the workload gets:

- A veth pair (container) or TAP interface (VM) attached to the bridge
- An IP address from the network's subnet via DHCP or static assignment
- DNS resolution for all other workloads on the same network

### Disconnect

```bash
volt net disconnect api-server
```


### Cross-Type Communication

A key feature of Volt networking: containers and VMs on the same network are L2 peers. There is no translation layer.

```bash
# Both on "backend" network
volt net connect backend api-server   # container
volt net connect backend db-primary   # VM

# From inside api-server container:
psql -h db-primary -U app -d myapp    # just works
```

This works because:

- The container's veth and the VM's TAP are both ports on the same bridge
- Frames flow directly between them at L2
- Internal DNS resolves `db-primary` to its bridge IP

## Firewall Rules

Volt's firewall wraps `nftables` with a workload-aware interface. Rules can reference workloads by name.

### Listing Rules

```bash
volt net firewall list
```

### Adding Rules

```bash
# Allow HTTP to a workload
volt net firewall add --name allow-http \
  --source any --dest 10.0.0.5 --port 80,443 --proto tcp --action accept

# Allow DB access from specific subnet
volt net firewall add --name db-access \
  --source 10.0.0.0/24 --dest 10.30.0.10 --port 5432 --proto tcp --action accept

# Block SSH from everywhere
volt net firewall add --name block-ssh \
  --source any --dest 10.0.0.5 --port 22 --proto tcp --action drop
```


### Deleting Rules

```bash
volt net firewall delete --name allow-http
```


### Flushing All Rules

```bash
volt net firewall flush
```


### How It Works Internally

Volt manages a dedicated nftables table called `volt` with chains for:

| Chain | Purpose |
|-------|---------|
| `volt-input` | Traffic destined for the host |
| `volt-forward` | Traffic between workloads (inter-bridge) |
| `volt-nat-pre` | DNAT rules (port forwarding inbound) |
| `volt-nat-post` | SNAT rules (masquerade for outbound) |

Rules added via `volt net firewall add` are inserted into the appropriate chain based on source/destination. The chain is determined automatically — you don't need to know whether traffic is "input" or "forward".
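
In raw nft terms, the structure Volt manages is conceptually similar to this hand-written sketch. Chain contents, priorities, and the example addresses are illustrative assumptions, not Volt's exact generated ruleset:

```
# Illustrative approximation of the volt table — not Volt's exact output.
table inet volt {
    chain volt-input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
    }
    chain volt-forward {
        type filter hook forward priority 0; policy drop;
        # a rule like "allow-http" from above would land here:
        ip daddr 10.0.0.5 tcp dport { 80, 443 } accept
    }
    chain volt-nat-pre {
        type nat hook prerouting priority dstnat;
    }
    chain volt-nat-post {
        type nat hook postrouting priority srcnat;
        ip saddr 10.0.0.0/24 oifname != "volt0" masquerade
    }
}
```
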

### Default Policy

- **Inbound to host**: deny all (except established connections)
- **Inter-workload (same network)**: allow
- **Inter-workload (different network)**: deny
- **Outbound from workloads**: allow (via NAT)
- **Host access from workloads**: deny by default

## Port Forwarding

Forward host ports to workloads:

### Adding Port Forwards

```bash
# Forward host:80 to container web-frontend:80
volt net port add --host-port 80 --target web-frontend --target-port 80

# Forward host:5432 to VM db-primary:5432
volt net port add --host-port 5432 --target db-primary --target-port 5432
```


### Listing Port Forwards

```bash
volt net port list
```


Output:

```
HOST-PORT  TARGET        TARGET-PORT  PROTO  STATUS
80         web-frontend  80           tcp    active
443        web-frontend  443          tcp    active
5432       db-primary    5432         tcp    active
```


### How It Works

Port forwards create DNAT rules in nftables:

1. Incoming traffic on `host:port` is DNATed to `workload-ip:target-port`
2. Return traffic is tracked by conntrack and SNATed back

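
For the forward `--host-port 80 --target web-frontend --target-port 80`, the generated rules are conceptually like the following sketch, assuming `web-frontend` has bridge IP `10.0.0.5` (illustrative, not Volt's exact output):

```
# Sketch only — real chain contents are generated by Volt.
chain volt-nat-pre {
    type nat hook prerouting priority dstnat;
    tcp dport 80 dnat ip to 10.0.0.5:80    # rewrite inbound destination
}
chain volt-nat-post {
    type nat hook postrouting priority srcnat;
    ip saddr 10.0.0.0/24 masquerade        # conntrack un-DNATs replies
}
```
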
## DNS Resolution

Volt runs an internal DNS resolver (`volt-dns.service`) that provides automatic name resolution for all workloads.

### How It Works

1. When a workload starts, Volt registers its name and IP in the internal DNS
2. All workloads are configured to use the bridge gateway IP as their DNS server
3. Lookups for workload names resolve to their bridge IPs
4. Unknown queries are forwarded to upstream DNS servers

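
The net effect inside a workload is a resolver pointing at the bridge gateway — roughly equivalent to an `/etc/resolv.conf` like this (illustrative values for the default `volt0` network; Volt may inject this configuration differently):

```
# Sketch of a workload's resolver config on volt0 (illustrative)
nameserver 10.0.0.1
search volt.local
```
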
### Upstream DNS

Configured in `/etc/volt/config.yaml`:

```yaml
network:
  dns:
    enabled: true
    upstream:
      - 1.1.1.1
      - 8.8.8.8
    search_domains:
      - volt.local
```


### DNS Management

```bash
# List DNS entries
volt net dns list

# Flush DNS cache
volt net dns flush
```


### Name Resolution Examples

Within any workload on the same network:

```bash
# Resolve by name
ping db-primary                      # resolves to 10.30.0.10
curl http://api-server:8080/health
psql -h db-primary -U app -d myapp
```


## Network Policies

Policies define allowed communication patterns between specific workloads. They provide finer-grained control than firewall rules.

### Creating Policies

```bash
# Only app-server can reach db-primary on port 5432
volt net policy create --name app-to-db \
  --from app-server --to db-primary --port 5432 --action allow
```


### Listing Policies

```bash
volt net policy list
```


### Testing Connectivity

Before deploying, test whether traffic would be allowed:

```bash
# This should succeed
volt net policy test --from app-server --to db-primary --port 5432
# ✓ app-server → db-primary:5432 — ALLOWED (policy: app-to-db)

# This should fail
volt net policy test --from web-frontend --to db-primary --port 5432
# ✗ web-frontend → db-primary:5432 — DENIED
```


### Deleting Policies

```bash
volt net policy delete --name app-to-db
```


## VLANs

### Listing VLANs

```bash
volt net vlan list
```

VLAN management is available for advanced network segmentation. VLANs are created on top of physical interfaces and can be used as bridge uplinks.
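
Conceptually, a VLAN uplink is a standard 802.1Q sub-interface on a physical NIC that is then attached to a bridge. In raw iproute2 terms, the underlying mechanism looks like this sketch (illustrative names and VLAN ID; not Volt's own commands):

```
ip link add link eth0 name eth0.100 type vlan id 100
ip link set eth0.100 master volt-backend
ip link set eth0.100 up
```
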

## Ingress Proxy

Volt includes a built-in reverse proxy for routing external HTTP/HTTPS traffic to workloads by hostname and path prefix. It supports automatic TLS via ACME (Let's Encrypt), manual certificates, WebSocket passthrough, health checks, and zero-downtime route reloading.


### Creating Routes

Route external traffic to workloads by hostname:

```bash
# Simple HTTP route
volt ingress create --name web \
  --hostname app.example.com \
  --backend web:8080

# Route with path prefix
volt ingress create --name api \
  --hostname api.example.com \
  --path /v1 \
  --backend api:3000

# Route with automatic TLS (Let's Encrypt)
volt ingress create --name secure-web \
  --hostname app.example.com \
  --backend web:8080 \
  --tls auto

# Route with manual TLS certificate
volt ingress create --name cdn \
  --hostname cdn.example.com \
  --backend static:80 \
  --tls manual \
  --cert /etc/certs/cdn.pem \
  --key /etc/certs/cdn.key
```


### TLS Termination

Three TLS modes are available:

| Mode | Description |
|------|-------------|
| `auto` | ACME (Let's Encrypt) — automatic certificate issuance, renewal, and storage |
| `manual` | User-provided certificate and key files |
| `passthrough` | Forward TLS directly to the backend without termination |

```bash
# Auto ACME — Volt handles everything
volt ingress create --name web --hostname app.example.com --backend web:8080 --tls auto

# Manual certs
volt ingress create --name web --hostname app.example.com --backend web:8080 \
  --tls manual --cert /etc/certs/app.pem --key /etc/certs/app.key

# TLS passthrough — backend handles TLS
volt ingress create --name web --hostname app.example.com --backend web:443 --tls passthrough
```


For ACME to work, the ingress proxy must be reachable on port 80 from the internet (for HTTP-01 challenges). Ensure your DNS records point to the server running the proxy.

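
For example, if the proxy host's public address is `203.0.113.10` (an illustrative documentation address), the zone for `app.example.com` would contain a record like:

```
; Illustrative zone fragment — 203.0.113.10 is a documentation address
app.example.com.    300    IN    A    203.0.113.10
```
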
### WebSocket Passthrough

WebSocket connections are passed through automatically. When a client sends an HTTP Upgrade request, the ingress proxy upgrades the connection and proxies frames bidirectionally to the backend. No additional configuration is needed.

### Health Checks

The ingress proxy monitors backend health. If a backend becomes unreachable, it is temporarily removed from the routing table until it recovers. Configure backend timeouts per route:

```bash
volt ingress create --name api --hostname api.example.com \
  --backend api:3000 --timeout 60
```

The `--timeout` flag sets the backend timeout in seconds (default: 30).

### Hot Reload

Update routes without restarting the proxy or dropping active connections:

```bash
volt ingress reload
```

Existing connections are drained gracefully while new connections immediately use the updated routes. This is safe to call from CI/CD pipelines or GitOps workflows.

### Managing Routes

```bash
# List all routes
volt ingress list

# Show proxy status
volt ingress status

# Delete a route
volt ingress delete --name web
```


### Running the Proxy

**Foreground (testing):**

```bash
volt ingress serve
volt ingress serve --http-port 8080 --https-port 8443
```


**Production (systemd):**

```bash
systemctl enable --now volt-ingress.service
```

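
The unit wraps `volt ingress serve`; a minimal sketch of what such a unit might contain (hypothetical — the binary path and shipped unit may differ):

```
# Hypothetical sketch of volt-ingress.service — the shipped unit may differ
[Unit]
Description=Volt ingress proxy
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/volt ingress serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
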

### Example: Full Ingress Setup

```bash
# Create routes for a web application
volt ingress create --name web \
  --hostname app.example.com \
  --backend web:8080 \
  --tls auto

volt ingress create --name api \
  --hostname api.example.com \
  --path /v1 \
  --backend api:3000 \
  --tls auto

volt ingress create --name ws \
  --hostname ws.example.com \
  --backend realtime:9000 \
  --tls auto

# Start the proxy
systemctl enable --now volt-ingress.service

# Verify
volt ingress list
volt ingress status
```


---

## Bridge Management

### Listing Bridges

```bash
volt net bridge list
```


Output:

```
NAME     SUBNET        MTU   CONNECTED  STATUS
volt0    10.0.0.0/24   1500  8          up
backend  10.30.0.0/24  1500  3          up
```


### Creating a Bridge

```bash
volt net bridge create mybridge --subnet 10.50.0.0/24
```


### Deleting a Bridge

```bash
volt net bridge delete mybridge
```


## Network Configuration

### Config File

Network settings in `/etc/volt/config.yaml`:

```yaml
network:
  default_bridge: volt0
  default_subnet: 10.0.0.0/24
  dns:
    enabled: true
    upstream:
      - 1.1.1.1
      - 8.8.8.8
    search_domains:
      - volt.local
  mtu: 1500
```


### Per-Network Settings in Compose

```yaml
networks:
  frontend:
    driver: bridge
    subnet: 10.20.0.0/24
    options:
      mtu: 9000

  backend:
    driver: bridge
    subnet: 10.30.0.0/24
    internal: true  # No external access
```

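
Services then attach to these networks by name. A minimal sketch — the `services:` fields here are illustrative assumptions, not a documented schema; only the `networks:` attachment is the point:

```yaml
# Illustrative compose sketch — service fields are assumptions
services:
  web:
    image: example/web:latest
    networks: [frontend]
  api:
    image: example/api:latest
    networks: [frontend, backend]
  db:
    image: example/postgres:16
    networks: [backend]   # backend is internal: no external access
```
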

## Network Tuning

For high-throughput workloads, tune network buffer sizes and offloading:

```bash
# Increase buffer sizes
volt tune net buffers --rmem-max 16M --wmem-max 16M

# Show current tuning
volt tune net show
```


Relevant sysctls:

```bash
volt tune sysctl set net.core.somaxconn 65535
volt tune sysctl set net.ipv4.ip_forward 1
volt tune sysctl set net.core.rmem_max 16777216
volt tune sysctl set net.core.wmem_max 16777216
```

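
These map directly onto standard kernel sysctls, so the equivalent persistent configuration outside Volt is an ordinary sysctl drop-in (file name illustrative):

```
# /etc/sysctl.d/99-volt-tuning.conf — plain-sysctl equivalent of the
# settings above (16M = 16777216 bytes)
net.core.somaxconn = 65535
net.ipv4.ip_forward = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
```
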

## Troubleshooting Network Issues

### Container Can't Reach the Internet

1. Check the bridge exists: `volt net bridge list`
2. Check NAT is configured: `volt net firewall list`
3. Check IP forwarding: `volt tune sysctl get net.ipv4.ip_forward`
4. Verify the container has an IP: `volt container inspect <name>`

### Workloads Can't Reach Each Other

1. Verify both are on the same network: `volt net inspect <network>`
2. Check firewall rules aren't blocking: `volt net firewall list`
3. Check network policies: `volt net policy list`
4. Test connectivity: `volt net policy test --from <src> --to <dst> --port <port>`

### DNS Not Resolving

1. Check the DNS service: `volt net dns list`
2. Flush the DNS cache: `volt net dns flush`
3. Verify upstream DNS: check `network.dns.upstream` in `/etc/volt/config.yaml`

### Port Forward Not Working

1. List active forwards: `volt net port list`
2. Check the target workload is running: `volt ps`
3. Verify the target port is listening inside the workload
4. Check firewall rules aren't blocking inbound traffic

See [troubleshooting.md](troubleshooting.md) for more.