

NanoClaw runs all agents inside containers (lightweight Linux VMs) to provide true OS-level isolation. This is the primary security boundary that makes Bash access and code execution safe.

Why containers?

Containers provide:
  • Process isolation — agent processes can’t affect the host system
  • Filesystem isolation — agents only see explicitly mounted directories
  • Resource limits — CPU/memory can be constrained (future)
  • Non-root execution — agents run as unprivileged user
  • Signal forwarding — tini as PID 1 ensures clean shutdown
NanoClaw uses Docker by default for cross-platform compatibility. On macOS, you can use Apple Container instead (via /convert-to-apple-container skill).

Container architecture

Base image

The container is built from container/Dockerfile. Key components:
  • Base: node:22-slim (Debian-based, minimal)
  • Runtime: Bun (runs agent-runner TypeScript directly — no compilation step)
  • Browser: Chromium with all dependencies
  • Tools: agent-browser for browser automation, vercel CLI, curl, git
  • SDK: @anthropic-ai/claude-code installed globally via pnpm
  • PID 1: tini for proper signal forwarding so outbound.db writes finalize on SIGTERM
  • User: node (uid 1000, non-root)
  • Working directory: /workspace/group
Source code is NOT baked into the image. /app/src is a read-only bind mount from the host. Source-only changes never require an image rebuild.

Entrypoint flow

The entrypoint uses tini for signal forwarding:
  1. tini starts as PID 1 (forwards signals cleanly)
  2. entrypoint.sh runs setup scripts
  3. Bun executes agent-runner: exec bun run /app/src/index.ts
  4. Agent-runner polls inbound.db for messages and writes responses to outbound.db
  5. Container exits when stopped by the host

Two-database IO model

In v2, all communication between host and container uses two SQLite databases per session. There is no stdin/stdout piping, no IPC files, and no output markers.
  • inbound.db — host writes, container reads (messages, routing, destinations)
  • outbound.db — container writes, host reads (responses, acknowledgments, state)
This eliminates cross-mount SQLite write contention by ensuring each database has exactly one writer.
The two-DB model requires journal_mode=DELETE (not WAL). WAL mode’s memory-mapped shared memory file doesn’t refresh across Docker bind mounts, which would cause the container to miss messages.
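The single-writer rule can be sketched in TypeScript. This is an in-memory stand-in for illustration only — the real system uses two SQLite files with journal_mode=DELETE, and the class and field names here are assumptions, not the actual module:

```typescript
// Illustrative stand-in for the two-database model: each "database" accepts
// writes from exactly one side, so there is never write contention across
// the Docker bind mount.
type Side = "host" | "container";

class SingleWriterDb<Row> {
  private rows: Row[] = [];
  constructor(private readonly writer: Side) {}

  write(actor: Side, row: Row): void {
    if (actor !== this.writer) {
      throw new Error(`${actor} is not the writer for this database`);
    }
    this.rows.push(row);
  }

  // Either side may read; pollers track the last id they have seen.
  readAll(): readonly Row[] {
    return this.rows;
  }
}

// inbound.db: host writes messages, container reads them.
const inbound = new SingleWriterDb<{ id: number; body: string }>("host");
// outbound.db: container writes responses, host reads them.
const outbound = new SingleWriterDb<{ id: number; body: string }>("container");
```

Because each file has one writer, neither process ever needs WAL's shared-memory coordination, which is exactly what fails across bind mounts.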

Volume mounts

Containers only see what’s explicitly mounted. The v2 mount structure is different from v1:

Per-session mounts

| Mount | Container path | Mode | Purpose |
| --- | --- | --- | --- |
| Session folder | /workspace | Read-write | inbound.db, outbound.db, outbox/, .claude/ |
| Agent group folder | /workspace/agent | Read-write | Working files, CLAUDE.local.md |
| Container config | /workspace/agent/container.json | Read-only | Nested RO mount (agent can't modify config) |
| Composed CLAUDE.md | /workspace/agent/CLAUDE.md | Read-only | Regenerated each spawn |
| CLAUDE.md fragments | /workspace/agent/.claude-fragments | Read-only | Fragment files for composition |
| Global memory | /workspace/global | Read-only | groups/global/ directory |
| Shared CLAUDE.md | /app/CLAUDE.md | Read-only | Base CLAUDE.md |
| Agent-runner source | /app/src | Read-only | Shared source (bind mount from host) |
| Container skills | /app/skills | Read-only | Shared skill definitions |
| Claude SDK state | /home/node/.claude | Read-write | SDK state + skill symlinks |
| Additional mounts | /workspace/extra/{name} | Per-config | From container.json (validated against allowlist) |
The container.json file is mounted read-only as a nested mount inside the read-write agent group folder. This prevents the agent from modifying its own container configuration.
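For orientation, a container.json requesting one additional mount might look roughly like this. The field names below are illustrative guesses, not the documented schema — consult the actual config reference before relying on them:

```json
{
  "mounts": [
    {
      "hostPath": "~/projects/myapp",
      "containerPath": "myapp",
      "readWrite": true
    }
  ]
}
```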

Mount security

All additional mounts are validated against the allowlist at ~/.config/nanoclaw/mount-allowlist.json:
```json
{
  "allowedRoots": [
    {
      "path": "~/projects",
      "allowReadWrite": true,
      "description": "Development projects"
    }
  ],
  "blockedPatterns": ["password", "secret", "token"]
}
```
Validation rules:
  1. If no allowlist file exists, all additional mounts are blocked
  2. Blocked patterns are merged with hardcoded defaults (.ssh, .gnupg, .aws, .kube, .docker, credentials, .env, .netrc, .npmrc, id_rsa, id_ed25519, private_key, .secret, etc.)
  3. Paths are resolved through symlinks before validation
  4. A path must be under an allowedRoot and not match any blocked pattern
  5. Read-write access requires both the mount and the allowed root to permit it; otherwise forced read-only
  6. Container paths must be relative, non-empty, no .., no : (prevents Docker option injection)
Missing-file state is NOT cached — you can create the allowlist without restarting. A parse error, however, is cached for the lifetime of the process, so fixing a malformed allowlist requires a restart.
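The validation rules above can be sketched in TypeScript. This is a simplified sketch with assumed function names, not the real host module; in particular, callers are assumed to have already resolved symlinks (rule 3, e.g. via fs.realpathSync) before passing the path in:

```typescript
import path from "node:path";

// Subset of the hardcoded blocked patterns listed in the docs (illustrative).
const HARDCODED_BLOCKED = [".ssh", ".gnupg", ".aws", ".kube", ".docker",
  "credentials", ".env", ".netrc", ".npmrc", "id_rsa", "id_ed25519"];

interface AllowedRoot { path: string; allowReadWrite: boolean }

function validateMount(
  hostPath: string,            // must already be symlink-resolved
  roots: AllowedRoot[],
  extraBlocked: string[],
  wantReadWrite: boolean,
): { allowed: boolean; readWrite: boolean } {
  // Rule 2: config patterns merge with hardcoded defaults.
  const blocked = [...HARDCODED_BLOCKED, ...extraBlocked];
  if (blocked.some((p) => hostPath.includes(p))) {
    return { allowed: false, readWrite: false };
  }
  // Rule 4: the path must sit strictly under some allowed root.
  const root = roots.find((r) => {
    const rel = path.relative(r.path, hostPath);
    return rel !== "" && !rel.startsWith("..") && !path.isAbsolute(rel);
  });
  if (!root) return { allowed: false, readWrite: false };
  // Rule 5: read-write only when both the mount and the root permit it.
  return { allowed: true, readWrite: wantReadWrite && root.allowReadWrite };
}

// Rule 6: container paths must be relative, non-empty, with no ".." or ":".
function containerPathIsSafe(p: string): boolean {
  return p.length > 0 && !path.isAbsolute(p) && !p.includes(":") &&
    !p.split("/").includes("..");
}
```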

Container lifecycle

Wake and spawn

Container spawning is deduplicated — concurrent wake calls for the same session share a single in-flight promise:
  1. wakeContainer(session) checks for existing running container or in-flight spawn
  2. spawnContainer(session) reads agent group config, resolves provider contributions, builds mounts
  3. Skill symlinks are synced before every spawn
  4. Docker container is spawned with tini as PID 1
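The deduplication in step 1 can be sketched as an in-flight promise map. The names below are assumptions about the shape of the host code, not the actual implementation:

```typescript
// Concurrent wakes for the same session share a single in-flight spawn.
const inflightSpawns = new Map<string, Promise<string>>();

async function wakeContainer(
  sessionId: string,
  spawn: (id: string) => Promise<string>, // resolves to a container id
): Promise<string> {
  const existing = inflightSpawns.get(sessionId);
  if (existing) return existing; // join the spawn already in flight
  const p = spawn(sessionId).finally(() => inflightSpawns.delete(sessionId));
  inflightSpawns.set(sessionId, p);
  return p;
}
```

Because the entry is removed only after the spawn settles, every caller that arrives while a spawn is running gets the same promise, and the next wake after completion starts fresh.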

Execution

  1. Agent-runner starts: bun run /app/src/index.ts
  2. Polls inbound.db: discovers new messages via the poll loop
  3. Processes messages: runs through the configured provider (Claude by default)
  4. Writes outbound.db: responses, acknowledgments, task operations
  5. Heartbeat: .heartbeat file mtime updated periodically

Shutdown

  • Host-initiated: docker stop sends SIGTERM; tini forwards to Bun process
  • Stale detection: host sweep detects containers with old heartbeats or stuck processing_ack
  • Fallback: SIGKILL if graceful stop fails
Even if the container crashes, all data in session databases and mounted directories persists. Only the container process itself is ephemeral.
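The graceful-then-forced sequence can be sketched as a race against a grace period. The stop/kill callbacks here are hypothetical stand-ins for `docker stop` and `docker kill`; the real host logic differs in detail:

```typescript
// Try a graceful stop (SIGTERM via tini); fall back to SIGKILL if it
// does not complete within the grace period.
async function stopContainer(
  stop: () => Promise<void>,
  kill: () => Promise<void>,
  graceMs = 10_000,
): Promise<"graceful" | "forced"> {
  let timer: ReturnType<typeof setTimeout>;
  const timedOut = new Promise<"timeout">((resolve) => {
    timer = setTimeout(() => resolve("timeout"), graceMs);
  });
  const result = await Promise.race([
    stop().then(() => "stopped" as const),
    timedOut,
  ]);
  clearTimeout(timer!);
  if (result === "stopped") return "graceful";
  await kill(); // graceful stop did not finish in time
  return "forced";
}
```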

Per-agent-group images

Agent groups can specify custom packages in container.json. The host builds a derived Docker image with additional apt and npm packages:
  • Image tag: derived from the checkout-scoped base image and agent group
  • Built on top of the base nanoclaw-agent-v2-<slug>:latest image
  • Cached — only rebuilt when package lists change
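One way to get "rebuilt only when package lists change" is a content-derived tag. This is a hypothetical sketch of such a cache key — the function name and tag format are assumptions, not NanoClaw's actual scheme:

```typescript
import { createHash } from "node:crypto";

// The tag embeds a digest of the sorted package lists, so unchanged lists
// produce the same tag and reuse the cached image.
function derivedImageTag(
  baseTag: string,          // e.g. the checkout-scoped base image tag
  group: string,
  aptPackages: string[],
  npmPackages: string[],
): string {
  const digest = createHash("sha256")
    .update(JSON.stringify({ apt: [...aptPackages].sort(), npm: [...npmPackages].sort() }))
    .digest("hex")
    .slice(0, 12);
  return `${baseTag.split(":")[0]}-${group}:${digest}`;
}
```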

Timeouts

Container timeout

  • Default: 30 minutes (CONTAINER_TIMEOUT)
  • Configurable: per-agent-group via container config
  • Enforcement: docker stop (SIGTERM), falls back to SIGKILL

Stale detection

The host sweep detects stuck containers by checking:
  • .heartbeat file modification time
  • Age of unacknowledged processing_ack entries
  • Container state tracking in the session table

Skills and MCP servers

Skills are synced to each container via symlinks:
  1. Symlink sync: before every spawn, skill symlinks at /home/node/.claude/skills/ are updated to point to /app/skills/{name}
  2. Container-valid paths: symlinks are dangling on the host but valid inside the container
  3. Available to agent: Claude Agent SDK loads from .claude/skills/

Built-in MCP server

The nanoclaw MCP server provides tools for container-to-host communication via the database:
  • send_message — write outbound messages
  • schedule_task, cancel_task, pause_task, resume_task, update_task — task management
  • list_tasks — view scheduled tasks
Additional MCP servers can be configured in container.json.

Global memory injection

For non-main groups, if /workspace/global/CLAUDE.md exists, its contents are appended to the system prompt. This provides shared instructions across all agent groups.
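The injection rule reduces to a small composition step. A minimal sketch with assumed names (the real host/runner code will differ):

```typescript
// Append global memory to the system prompt for non-main groups only,
// and only when /workspace/global/CLAUDE.md actually exists.
function composeSystemPrompt(
  basePrompt: string,
  globalMemory: string | null, // file contents, or null if the file is absent
  isMainGroup: boolean,
): string {
  if (isMainGroup || globalMemory === null) return basePrompt;
  return `${basePrompt}\n\n${globalMemory}`;
}
```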

Additional directory auto-discovery

Directories mounted at /workspace/extra/* are automatically passed to the SDK as additionalDirectories, so any CLAUDE.md files in those directories are loaded automatically.

Browser automation

Chromium runs inside the container:
  • Executable: /usr/bin/chromium
  • CLI: agent-browser (installed globally)
  • Headless: always (no display in container)
  • User data: stored in group folder (persists across runs)
  • Network: full access (same as host, no restrictions)
  • Optional CJK fonts: install via INSTALL_CJK_FONTS=true build arg (~200 MB)

Security implications

What containers protect against

  • Filesystem access — agents can’t read ~/.ssh or other sensitive paths
  • Process interference — agents can’t kill host processes or inject code
  • Persistence — containers are ephemeral, no state survives unless mounted
  • Privilege escalation — non-root execution limits kernel attack surface

What containers DON’T protect against

  • Network access — agents have full network access (can exfiltrate data)
  • Mounted directory tampering — agents can modify anything in mounted read-write directories
  • Vault-based API access — containers can make authenticated API requests through the OneCLI vault (though they cannot extract real credentials)
  • Resource exhaustion — no CPU/memory limits enforced (can DoS host)
Containers provide filesystem isolation, not network isolation. Agents can make arbitrary HTTP requests and exfiltrate data over the network.

Troubleshooting

Container won’t start

  1. Check Docker is running: docker ps
  2. Check image exists: docker images | grep nanoclaw-agent
  3. Rebuild image: ./container/build.sh

Container timeout

  1. Check timeout setting: CONTAINER_TIMEOUT in .env
  2. Check if task is legitimately slow (increase timeout)
  3. Review host sweep logs for stale detection

Permission errors

  1. Check mount paths are readable by host user
  2. Check uid/gid mapping
  3. Verify allowlist includes path (for additional mounts)
  4. Check symlink resolution didn’t change path
Last modified on April 28, 2026