NanoClaw is a lightweight AI assistant that runs agents in isolated containers. The v2 architecture is a ground-up rewrite built on a two-database session model, a pluggable module system, and a new entity model that separates users, agent groups, messaging groups, and wirings.

Documentation Index
Fetch the complete documentation index at: https://docs.nanoclaw.dev/llms.txt
Use this file to discover all available pages before exploring further.
High-level overview
NanoClaw consists of a single Node.js host process that orchestrates everything:

Core components
Channel adapters
NanoClaw uses a self-registering adapter pattern for messaging channels. Installed adapters such as Telegram, Discord, WhatsApp, Signal, Slack, Teams, iMessage, and the local CLI register at startup. Channels with missing credentials are skipped with a warning. All adapters implement a common interface with onInbound, onInboundEvent, onMetadata, and onAction callbacks, allowing the rest of the system to be channel-agnostic.
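The self-registering pattern could be sketched as below. This is an illustrative reconstruction, not NanoClaw's actual API: the `ChannelAdapter` interface, `registerAdapter`, and `startAdapters` names are assumptions; only the callback names and the skip-with-warning behavior come from the docs.

```typescript
// Hypothetical sketch of a self-registering channel adapter registry.
type InboundMessage = { channel: string; chatId: string; text: string };

interface ChannelAdapter {
  name: string;
  hasCredentials(): boolean; // adapters with missing credentials are skipped
  start(callbacks: { onInbound(msg: InboundMessage): void }): void;
}

const registry = new Map<string, ChannelAdapter>();

// Each adapter module calls this at import time to register itself.
function registerAdapter(adapter: ChannelAdapter): void {
  registry.set(adapter.name, adapter);
}

// Host startup: skip adapters without credentials, warn instead of failing.
function startAdapters(onInbound: (msg: InboundMessage) => void): string[] {
  const started: string[] = [];
  for (const adapter of registry.values()) {
    if (!adapter.hasCredentials()) {
      console.warn(`skipping ${adapter.name}: missing credentials`);
      continue;
    }
    adapter.start({ onInbound });
    started.push(adapter.name);
  }
  return started;
}
```

Because registration happens at import time, the host stays channel-agnostic: it only ever talks to the registry and the common callback interface.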
Inbound router
The router (src/router.ts) is the central message routing pipeline:
- Thread policy — non-threaded adapters (Telegram, WhatsApp, iMessage) collapse threadId to null
- Messaging group lookup — finds or auto-creates messaging groups on mentions/DMs
- Unwired channel handling — if no agents are wired, the channel-request gate escalates to the owner for approval
- Sender resolution — the permissions module extracts namespaced user IDs and upserts users
- Fan-out — each wired agent is evaluated independently against engage mode, sender scope, and access gates
- Engage evaluation — per-agent decision based on pattern, mention, or mention-sticky mode
- Delivery — engaging agents get a session wake and container spawn; non-engaging agents with accumulate policy store the message for future context
Message IDs are namespaced by agent group ID to prevent collisions during fan-out to multiple agents.
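The per-agent engage decision can be sketched as a small pure function. The three mode names come from the docs; the `Wiring` shape, the `stickyUntil` field, and the exact decision logic are assumptions for illustration.

```typescript
// Hypothetical sketch of per-agent engage evaluation.
type EngageMode = "pattern" | "mention" | "mention-sticky";

interface Wiring {
  mode: EngageMode;
  pattern?: RegExp;     // consulted in "pattern" mode (assumed representation)
  stickyUntil?: number; // ms timestamp; assumed mechanism for "mention-sticky"
}

function shouldEngage(
  wiring: Wiring,
  msg: { text: string; mentioned: boolean },
  now: number,
): boolean {
  switch (wiring.mode) {
    case "pattern":
      return wiring.pattern ? wiring.pattern.test(msg.text) : false;
    case "mention":
      return msg.mentioned;
    case "mention-sticky":
      // Engage on a direct mention, or while a previous mention is still sticky.
      if (msg.mentioned) return true;
      return wiring.stickyUntil !== undefined && now < wiring.stickyUntil;
  }
}
```

During fan-out, each wired agent would run this check independently; agents that return false but have the accumulate policy still get the message stored for context.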
Session manager
The session manager (src/session-manager.ts) manages the two-database session model:
- Session resolution — shared (one per messaging group), per-thread (one per thread), or agent-shared (one per agent group)
- Two-DB split — each session has an inbound.db (host writes, container reads) and outbound.db (container writes, host reads)
- Cross-mount invariants — journal_mode=DELETE (WAL doesn't refresh across Docker mounts), host opens-writes-closes per operation, one writer per file
Container runner
The container runner (src/container-runner.ts) spawns and manages isolated agent execution:
- Wake deduplication — concurrent wake calls for the same session are deduplicated via an in-flight promise map
- Per-agent-group images — custom Docker images with additional apt/npm packages can be built per agent group
- Shared source — /app/src is a read-only bind mount from the host; source changes never require an image rebuild
- No stdin/stdout — all IO is via the two-DB split; no stdin piping or output markers
Delivery system
The delivery system (src/delivery.ts) uses a two-poll architecture:
- Active poll (1s) — polls outbound.db for all running-container sessions
- Sweep poll (60s) — polls all active sessions, catching messages from containers that exited between active polls
- Delivery pipeline — routes based on message kind (system, agent, or channel), with permission checks for cross-channel delivery
- Retry — 3 attempts per message, then permanently failed
Deduplication via an in-flight set prevents double-delivery when both polls hit the same session simultaneously.
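The retry policy (3 attempts, then permanently failed) could be sketched like this; `deliverWithRetry` and the `Outcome` type are hypothetical names, not the delivery system's real interface:

```typescript
// Hypothetical per-message retry: attempt up to maxAttempts, then give up.
type Outcome = "delivered" | "failed";

async function deliverWithRetry(
  send: () => Promise<void>,
  maxAttempts = 3,
): Promise<Outcome> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await send();
      return "delivered";
    } catch {
      if (attempt === maxAttempts) return "failed"; // permanently failed
    }
  }
  return "failed"; // unreachable; satisfies the type checker
}
```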
Host sweep
The host sweep (src/host-sweep.ts) runs every 60 seconds:
- Syncs processing_ack from outbound.db to update inbound message status
- Detects stale containers via heartbeat mtime and processing_ack age
- Wakes containers for due messages (scheduled tasks with process_after <= now)
- Advances recurring task series (creates next-occurrence rows)
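The due-message check is simple enough to sketch as a pure filter. The field names follow the messages_in table described in the Database section; the row shape and status values are assumptions for illustration:

```typescript
// Hypothetical due-message selection for the 60s host sweep.
interface InboundRow {
  id: number;
  status: "pending" | "processing" | "done"; // assumed status values
  process_after: number | null; // ms epoch; null = process immediately
}

function dueMessages(rows: InboundRow[], now: number): InboundRow[] {
  return rows.filter(
    (r) =>
      r.status === "pending" &&
      (r.process_after === null || r.process_after <= now),
  );
}
```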
Database
Central database (data/v2.db) stores the entity model:
- agent_groups — workspaces with folder, name, and optional provider
- messaging_groups — platform chats with unknown_sender_policy (strict, request_approval, or public)
- messaging_group_agents — many-to-many wirings with engage mode, pattern, sender scope, ignored message policy, session mode, and priority
- users — namespaced platform identifiers (e.g., phone:+1555..., tg:123, discord:456)
- user_roles — owner (always global) or admin (global or scoped to an agent group)
- sessions — status tracking for both session and container
- messages_in — inbound messages with status, process_after, recurrence, series_id, and trigger flag
- delivered — tracks delivery outcomes
- destinations — live destination map (channels and other agents)
- session_routing — default reply routing
- messages_out — outbound messages with deliver_after and recurrence
- processing_ack — tracks which inbound messages the container has processed
- session_state — persistent key/value store (e.g., SDK session ID for resume)
- container_state — tool-in-flight state for stuck-detection
Module system
Modules self-register via barrel imports and provide optional hooks into the router and delivery pipeline:

- Permissions — sender resolution, access gating, channel approval, sender approval
- Scheduling — task creation, pause/resume/cancel/update via delivery actions
- Agent-to-agent — cross-agent message routing
- Approvals — interactive question cards
- Self-mod — agent self-modification requests
- Typing — typing indicator management
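The barrel-import registration pattern could look like the sketch below. The hook shape and function names are assumptions; only the mechanism (importing the barrel triggers each module's side-effect registration) comes from the docs.

```typescript
// Hypothetical module registry populated by side-effect imports.
type RouterHook = (msg: { text: string }) => void;

const routerHooks: RouterHook[] = [];

function registerModule(hook: RouterHook): void {
  routerHooks.push(hook);
}

// In a real layout, each module file would register itself at import time:
//   modules/typing.ts:   registerModule((msg) => { /* typing indicator */ });
// and the barrel (modules/index.ts) would just import every module:
//   import "./typing"; import "./scheduling"; import "./permissions";
// so the host only needs a single:  import "./modules";

function runRouterHooks(msg: { text: string }): void {
  for (const hook of routerHooks) hook(msg);
}
```

The appeal of the pattern is that adding a module is one new file plus one line in the barrel; the router never needs to know which modules exist.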
Data flow
Incoming message flow
Outbound delivery flow
File system layout
    nanoclaw
      src
      container
        Dockerfile
        agent-runner
        skills
      groups
        main
          CLAUDE.md
        global
          CLAUDE.md
        {agent-group}
      data
        v2.db
        v2-sessions
          {agent_group_id}
            {session_id}
              inbound.db
              outbound.db
              outbox
              inbox
              .heartbeat
Container image
The agent container (container/Dockerfile) includes:
- Base: node:22-slim
- Runtime: Bun (runs agent-runner TypeScript directly, no compilation)
- Browser: Chromium with all required dependencies
- Tools: agent-browser, vercel CLI, curl, git
- SDK: @anthropic-ai/claude-code (Claude Agent SDK)
- PID 1: tini for proper signal forwarding
- User: runs as node user (uid 1000, non-root)
- Working directory: /workspace/group
Source code is NOT baked into the image.
/app/src is a read-only bind mount from the host, so source-only changes never require an image rebuild.

Startup sequence
- Central DB initialization — opens data/v2.db, runs migrations, performs one-time filesystem cutover
- Container runtime check — ensures Docker is running, cleans up orphan containers
- Channel adapter initialization — registers all channel adapters with onInbound, onInboundEvent, onMetadata, and onAction callbacks
- Delivery adapter bridge — bridges the delivery system to channel adapters for deliver() and setTyping() calls
- Delivery polls — starts active poll (1s for running containers) and sweep poll (60s for all active sessions)
- Host sweep — 60s sweep for processing_ack sync, stale detection, due-message wake, and recurrence advancement
Graceful shutdown
On SIGTERM or SIGINT:
- Registered shutdown callbacks execute in order
- Delivery polls stop
- Channel adapters tear down gracefully
- Process exits
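The ordered-callback part of this sequence might be implemented as below; `onShutdown` and `shutdown` are illustrative names, and the real teardown of polls and adapters is elided:

```typescript
// Hypothetical ordered shutdown: callbacks run in registration order.
type ShutdownFn = () => Promise<void> | void;

const shutdownCallbacks: ShutdownFn[] = [];

function onShutdown(fn: ShutdownFn): void {
  shutdownCallbacks.push(fn);
}

async function shutdown(): Promise<void> {
  // Registered callbacks first, in order; polls and adapters would be
  // stopped after this loop, then the process exits.
  for (const fn of shutdownCallbacks) {
    await fn();
  }
}

process.on("SIGTERM", () => {
  void shutdown().then(() => process.exit(0));
});
```

Awaiting each callback sequentially (rather than `Promise.all`) preserves the "execute in order" guarantee the shutdown sequence describes.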