Your API keys never leave the VPS. SSH keys never reach the browser. Secrets are encrypted at rest and absent from every API response. Self-hosted OpenClaw with defense in depth.
Every LLM API call from the agent container passes through a host-side proxy that injects the real API key. The container never has access to the key at any point in the request lifecycle.
apiKey: "proxy-managed" — no real key in container
Only accepts requests from Docker bridge IPs (172.16.0.0/12)
/root/.model-keys.json (mode 600) — outside container volume
x-api-key (Anthropic), x-goog-api-key (Google), Authorization: Bearer (others)
Container never sees the real API key at any point
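The injection step described above can be sketched as a small pure function. This is a minimal sketch, not the actual server.mjs code; `KEY_HEADERS` and `injectAuthHeaders` are illustrative names:

```javascript
// Map each provider to its auth-header shape; unknown providers fall
// back to a standard Bearer token.
const KEY_HEADERS = {
  anthropic: (key) => ({ 'x-api-key': key }),
  google:    (key) => ({ 'x-goog-api-key': key }),
  default:   (key) => ({ Authorization: `Bearer ${key}` }),
};

function injectAuthHeaders(provider, realKey, headers = {}) {
  // Drop the "proxy-managed" placeholder the container sent, then
  // attach the real credential in the provider's expected header.
  const { 'x-api-key': _a, authorization: _b, ...rest } = headers;
  const makeAuth = KEY_HEADERS[provider] ?? KEY_HEADERS.default;
  return { ...rest, ...makeAuth(realKey) };
}
```

The container only ever sends the placeholder; the real key exists solely in the proxy's memory on the host.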
SSH private keys are generated on the server, stored encrypted in the database, and never sent to the browser. API responses return a boolean hasSshKey flag instead of the actual key.
ssh-keygen runs in Hetzner rescue mode — key never touches a browser
AES-256-GCM encryption in Neon Postgres — at rest and in transit
Server-side API routes read the key directly — client gets a hasSshKey boolean flag
VPS API generates a new keypair server-side; the old key is replaced in the DB
Cloud API token deleted from DB after setup — minimizes stored credentials
Defense in depth — even if one layer is compromised, the others keep your credentials safe. From container sandboxing to encrypted storage, every boundary is enforced independently.
The agent runs inside a Docker container with no access to the host filesystem. Volume mounts are strictly limited — the key file lives outside the mounted path.
The LLM proxy only accepts requests from Docker bridge IPs (172.16.0.0/12). External requests are rejected. No port exposure to the public internet.
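The /12 allowlist check is simple to state in code. The standalone predicate below is illustrative, not the proxy's actual middleware:

```javascript
// 172.16.0.0/12 spans 172.16.0.0 through 172.31.255.255: the top 12 bits
// must match, i.e. first octet 172 and second octet 16-31.
function isDockerBridgeIp(ip) {
  const parts = ip.split('.').map(Number);
  if (parts.length !== 4 || parts.some((n) => !Number.isInteger(n) || n < 0 || n > 255)) {
    return false;
  }
  return parts[0] === 172 && parts[1] >= 16 && parts[1] <= 31;
}
```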
API keys stored in /root/.model-keys.json with mode 600. Only root on the host can read them. The container runs as uid 1000 (node) — no access.
The agent never learns real API keys. It only knows "proxy-managed" as its key value. Even if the container is compromised, keys remain safe.
SSH keys and sensitive data are encrypted with AES-256-GCM before storage in Neon Postgres. API responses never include raw secrets — only boolean flags.
LLM API keys travel directly from your browser to the VPS via Tailscale HTTPS. They never transit through Vercel or any third-party server.
After provisioning completes, SSH keys are automatically rotated server-side. The VPS generates a fresh keypair, the old key is replaced in the database, and no manual intervention is needed. Zero downtime, zero browser involvement.
Temporary credentials are wiped as soon as they're no longer needed. Hetzner API tokens are deleted from the database after server setup completes. The principle: store the minimum secrets for the minimum time.
The proxy runs as a lightweight Node.js process on the host, separate from the Docker container. SSH keys stay in the encrypted database. Here's a simplified view of the setup.
# Docker Compose (simplified)
services:
  openclaw-gateway:
    image: ghcr.io/.../openclaw:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"
    # Volume does NOT include /root/.model-keys.json
# Gateway config (generated during provisioning)
# models.providers[].baseUrl: "http://host.docker.internal:3101/proxy/<id>"
# models.providers[].apiKey: "proxy-managed"
# ↑ Proxy URL baked into config — no env var needed
# Host-side proxy (vps-api/server.mjs, port 3101)
# Reads /root/.model-keys.json (mode 600) → injects auth headers
# Accepts only Docker bridge IPs (172.16.0.0/12)
# SSH keys:
# Generated via ssh-keygen on server → encrypted → stored in Neon Postgres
# Server-side API routes read from DB — never sent to browser
# Rotated automatically after provisioning completes

Most AI agent platforms pass API keys directly to the container and return secrets in API responses. Cannes takes a fundamentally different approach.
Fully open source. Self-hosted on your own VPS. Zero secrets in API responses. Audit the code, control the keys, own the data.