
Zero-knowledge security

Your API keys never leave the VPS. SSH keys never reach the browser. Secrets are encrypted at rest and absent from every API response. Self-hosted OpenClaw with defense in depth.

0 secrets in API responses
6 isolation layers
AES-256 encryption at rest
HMAC-signed sessions
Keys never in the container: the agent sees "proxy-managed"
SSH keys never in the browser: generated, stored, and used server-side
Self-hosted, fully auditable: your VPS, your infrastructure, open source

How the LLM auth proxy works

Every LLM API call from the agent container passes through a host-side proxy that injects the real API key. The container never has access to the key at any point in the request lifecycle.

1. Agent container makes an LLM API call
   apiKey: "proxy-managed" — no real key in the container

2. Host proxy intercepts on port 3101
   Only accepts requests from Docker bridge IPs (172.16.0.0/12)

3. Proxy reads the key from a secure file
   /root/.model-keys.json (mode 600) — outside the container volume

4. Injects the provider-specific auth header
   x-api-key (Anthropic), x-goog-api-key (Google), Authorization: Bearer (others)

5. Forwards to the upstream provider
   The container never sees the real API key at any point
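The header-injection step can be condensed into a single helper. This is a minimal sketch, not the actual vps-api/server.mjs code; the `authHeaderFor` name is hypothetical, but the header names match the ones listed above.

```javascript
// Hypothetical sketch of the proxy's header-injection step (the real logic
// lives in vps-api/server.mjs). Given a provider id and the real key read
// from /root/.model-keys.json, build the auth header to attach upstream.
function authHeaderFor(provider, realKey) {
  switch (provider) {
    case "anthropic":
      return { "x-api-key": realKey };               // Anthropic
    case "google":
      return { "x-goog-api-key": realKey };          // Google
    default:
      return { Authorization: `Bearer ${realKey}` }; // OpenAI-compatible APIs
  }
}

// The container's request arrives with the placeholder apiKey "proxy-managed";
// the proxy discards that value and attaches the real header before forwarding.
```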

SSH key lifecycle

SSH private keys are generated on the server, stored encrypted in the database, and never sent to the browser. API responses return a boolean hasSshKey flag instead of the actual key.

1. Key generated server-side
   ssh-keygen runs in Hetzner rescue mode — the key never touches a browser

2. Encrypted and stored in the database
   AES-256-GCM encryption before storage in Neon Postgres; TLS protects it in transit

3. Read from the DB for SSH operations
   Server-side API routes read the key directly — the client only gets a hasSshKey boolean flag

4. Rotated after provisioning
   The VPS API generates a new keypair server-side; the old key is replaced in the DB

5. Hetzner token wiped
   The cloud API token is deleted from the DB after setup — minimizing stored credentials

Six layers of isolation

Defense in depth — even if one layer is compromised, the others keep your credentials safe. From container sandboxing to encrypted storage, every boundary is enforced independently.

Container isolation

The agent runs inside a Docker container with no access to the host filesystem. Volume mounts are strictly limited — the key file lives outside the mounted path.

Network restriction

The LLM proxy only accepts requests from Docker bridge IPs (172.16.0.0/12). External requests are rejected. No port exposure to the public internet.
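The allowlist check amounts to a CIDR test. A minimal sketch, assuming an IPv4 source address (the real check lives in the host proxy): 172.16.0.0/12 covers addresses whose first octet is 172 and whose second octet falls in 16..31.

```javascript
// Sketch of the Docker bridge allowlist: accept only IPv4 addresses
// inside 172.16.0.0/12, i.e. 172.16.x.x through 172.31.x.x.
function isDockerBridgeIp(ip) {
  const m = /^172\.(\d{1,3})\.\d{1,3}\.\d{1,3}$/.exec(ip);
  if (!m) return false;
  const secondOctet = Number(m[1]);
  return secondOctet >= 16 && secondOctet <= 31;
}
```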

File permissions

API keys stored in /root/.model-keys.json with mode 600. Only root on the host can read them. The container runs as uid 1000 (node) — no access.

Zero-knowledge container

The agent never learns real API keys. It only knows "proxy-managed" as its key value. Even if the container is compromised, keys remain safe.

Encrypted secrets at rest

SSH keys and sensitive data are encrypted with AES-256-GCM before storage in Neon Postgres. API responses never include raw secrets — only boolean flags.
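The "boolean flags only" rule can be sketched as a serialization step: strip the secret column and emit a flag in its place. Field names other than hasSshKey are illustrative assumptions.

```javascript
// Sketch of a secret-free API response: the stored row keeps the encrypted
// key, but the serialized view carries only a hasSshKey boolean.
function toClientView(row) {
  const { encryptedSshKey, ...publicFields } = row; // never serialized
  return { ...publicFields, hasSshKey: Boolean(encryptedSshKey) };
}
```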

Browser-direct delivery

LLM API keys travel directly from your browser to the VPS via Tailscale HTTPS. They never transit through Vercel or any third-party server.

Automatic key rotation

After provisioning completes, SSH keys are automatically rotated server-side. The VPS generates a fresh keypair, the old key is replaced in the database, and no manual intervention is needed. Zero downtime, zero browser involvement.

POST /api/rotate-ssh-key · server-side only · new key saved to encrypted DB

Credential cleanup

Temporary credentials are wiped as soon as they're no longer needed. Hetzner API tokens are deleted from the database after server setup completes. The principle: store the minimum secrets for the minimum time.

clearAgentHetznerToken() · runs after setup completion · irreversible delete

Architecture overview

The proxy runs as a lightweight Node.js process on the host, separate from the Docker container. SSH keys stay in the encrypted database. Here's a simplified view of the setup.

architecture.yml
# Docker Compose (simplified)
services:
  openclaw-gateway:
    image: ghcr.io/.../openclaw:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"
    # Volume does NOT include /root/.model-keys.json

# Gateway config (generated during provisioning)
# models.providers[].baseUrl: "http://host.docker.internal:3101/proxy/<id>"
# models.providers[].apiKey: "proxy-managed"
# ↑ Proxy URL baked into config — no env var needed

# Host-side proxy (vps-api/server.mjs, port 3101)
# Reads /root/.model-keys.json (mode 600) → injects auth headers
# Accepts only Docker bridge IPs (172.16.0.0/12)

# SSH keys:
# Generated via ssh-keygen on server → encrypted → stored in Neon Postgres
# Server-side API routes read from DB — never sent to browser
# Rotated automatically after provisioning completes

Typical setup vs. Cannes

Most AI agent platforms pass API keys directly to the container and return secrets in API responses. Cannes takes a fundamentally different approach.

                          Typical                                      Cannes
API key storage           Environment variable in container            Host-only file, mode 600
Key visibility to agent   Fully visible (env var)                      Never visible ("proxy-managed")
SSH key handling          Sent to browser, stored in sessionStorage    Never leaves server (DB only)
Container compromise      All keys exposed                             No keys accessible
Secrets in API responses  Tokens and keys returned in JSON             Zero secrets (boolean flags only)
Key rotation              Redeploy container                           Automatic, server-side, zero downtime
Key delivery path         Via platform cloud (vendor sees keys)        Browser-direct via Tailscale HTTPS
Infrastructure ownership  Vendor cloud (shared)                        Your VPS (dedicated)

Security you can verify

Fully open source. Self-hosted on your own VPS. Zero secrets in API responses. Audit the code, control the keys, own the data.
