In v2, NanoClaw replaced the filesystem-based IPC system with a two-database model. All communication between the host process and containerized agents flows through two SQLite databases per session — inbound.db and outbound.db.
Architecture overview
The v2 IPC model is built on three principles:

- Single writer per file — inbound.db is only written by the host; outbound.db is only written by the container
- No filesystem polling — the container polls inbound.db directly; the host polls outbound.db directly
- No stdin/stdout — no piped input, no output markers, no IPC files
How it works
Inbound flow (host to container)
1. Session resolution: the session manager resolves or creates a session based on the wiring’s session_mode (shared, per-thread, or agent-shared).
2. Write to inbound.db: the message is written to the messages_in table in inbound.db. File attachments are extracted to inbox/{messageId}/.
3. Container wakes: the container runner wakes the container (or spawns a new one). The agent-runner’s poll loop discovers the new message.
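The write side of these steps can be sketched with Python’s sqlite3 module. The messages_in columns follow the table reference later on this page, but the exact schema and the write_inbound helper are illustrative assumptions, not NanoClaw’s actual code:

```python
import sqlite3
import time
import uuid

def write_inbound(db_path, content, sender):
    """Host-side write: open, write one message, close (single writer per file)."""
    message_id = str(uuid.uuid4())
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("PRAGMA journal_mode=DELETE")  # cross-mount invariant
        # Guard so the sketch runs against a fresh file; columns are assumed.
        conn.execute(
            """CREATE TABLE IF NOT EXISTS messages_in (
                   id TEXT PRIMARY KEY,
                   content TEXT,
                   sender TEXT,
                   status TEXT DEFAULT 'pending',
                   process_after REAL)""")
        conn.execute(
            "INSERT INTO messages_in (id, content, sender, process_after) "
            "VALUES (?, ?, ?, ?)",
            (message_id, content, sender, time.time()))
        conn.commit()
    finally:
        conn.close()  # close per operation so the container sees fresh pages
    return message_id

msg_id = write_inbound("inbound.db", "hello from host", "host")
```

The open-write-close lifecycle is deliberate; the cross-mount invariants section explains why the connection must not be held open.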
Outbound flow (container to host)
1. Agent writes response: the agent writes to outbound.db’s messages_out table. File attachments go to outbox/{messageId}/.
2. Routing: messages are routed by kind:
   - system — dispatched to registered delivery action handlers (scheduling, approvals, etc.)
   - channel_type='agent' — routed to the agent-to-agent module
   - Normal messages — permission check, then channel adapter delivery
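The three routing cases can be sketched as a small dispatcher. All names here (route_outbound and the callback parameters) are hypothetical stand-ins for the host’s routing code:

```python
def route_outbound(msg, system_handlers, agent_router, channel_adapter, permitted):
    """Route one outbound message by kind, mirroring the three cases above."""
    if msg.get("kind") == "system":
        for handler in system_handlers:   # first handler to claim the message wins
            if handler(msg):
                return "system"
        return "unclaimed"
    if msg.get("channel_type") == "agent":
        agent_router(msg)                 # hand off to the agent-to-agent module
        return "agent"
    if not permitted(msg):                # permission check before delivery
        return "rejected"
    channel_adapter(msg)                  # normal path: channel adapter delivery
    return "delivered"

result = route_outbound({"kind": "normal", "channel_type": "telegram"},
                        [], None, lambda m: None, lambda m: True)
```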
Database tables
inbound.db (host writes, container reads)
| Table | Purpose |
|---|---|
| messages_in | Inbound messages with status, process_after, recurrence, series_id, trigger flag |
| delivered | Tracks delivery outcomes for outbound message IDs |
| destinations | Live destination map for the agent (channels and other agents). Overwritten on every wake |
| session_routing | Single-row default reply routing (channel_type, platform_id, thread_id) |
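A plausible DDL for these four tables, inferred from the columns named above; any column types and columns not listed in the table are guesses:

```python
import sqlite3

# Hypothetical schema for inbound.db, reconstructed from the table reference.
INBOUND_SCHEMA = """
CREATE TABLE IF NOT EXISTS messages_in (
    id TEXT PRIMARY KEY, content TEXT, status TEXT,
    process_after REAL, recurrence TEXT, series_id TEXT, "trigger" INTEGER);
CREATE TABLE IF NOT EXISTS delivered (
    message_id TEXT PRIMARY KEY, outcome TEXT, delivered_at REAL);
CREATE TABLE IF NOT EXISTS destinations (
    destination_id TEXT PRIMARY KEY, channel_type TEXT, platform_id TEXT);
CREATE TABLE IF NOT EXISTS session_routing (
    channel_type TEXT, platform_id TEXT, thread_id TEXT);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(INBOUND_SCHEMA)
inbound_tables = {r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
conn.close()
```

Note that trigger is a reserved word in SQLite, so a real schema would have to quote it as shown.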
outbound.db (container writes, host reads)
| Table | Purpose |
|---|---|
| messages_out | Outbound messages with deliver_after and recurrence |
| processing_ack | Tracks which inbound messages the container has processed |
| session_state | Persistent key/value store (e.g., SDK session ID for resume across restarts) |
| container_state | Tool-in-flight state (tool name, declared timeout, start time) for stuck-detection |
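A matching DDL sketch for the container-side database, again with assumed column types:

```python
import sqlite3

# Hypothetical schema for outbound.db, reconstructed from the table reference.
OUTBOUND_SCHEMA = """
CREATE TABLE IF NOT EXISTS messages_out (
    id TEXT PRIMARY KEY, content TEXT, kind TEXT,
    deliver_after REAL, recurrence TEXT);
CREATE TABLE IF NOT EXISTS processing_ack (
    message_id TEXT PRIMARY KEY, acked_at REAL);
CREATE TABLE IF NOT EXISTS session_state (
    key TEXT PRIMARY KEY, value TEXT);
CREATE TABLE IF NOT EXISTS container_state (
    tool_name TEXT, declared_timeout REAL, started_at REAL);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(OUTBOUND_SCHEMA)
outbound_tables = {r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
conn.close()
```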
MCP tools (container-side)
Instead of writing IPC files, the container uses MCP tools that write to outbound.db:
| Tool | Purpose |
|---|---|
| send_message | Write an outbound message for delivery |
| schedule_task | Request task creation (host creates in inbound.db) |
| cancel_task | Cancel a recurring task series |
| pause_task | Pause a pending task |
| resume_task | Resume a paused task |
| update_task | Update task content or schedule |
The task tools write kind='system' messages to outbound.db with action fields. The host’s delivery pipeline routes them to the appropriate module handler.
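A sketch of what one such tool write might look like, assuming a JSON content payload carrying the action fields (the payload shape and helper name are guesses):

```python
import json
import sqlite3
import uuid

def schedule_task(db_path, prompt, schedule):
    """Container-side sketch: emit a kind='system' row with action fields."""
    row_id = str(uuid.uuid4())
    conn = sqlite3.connect(db_path)
    try:
        # Guard so the sketch runs against a fresh file; columns are assumed.
        conn.execute("""CREATE TABLE IF NOT EXISTS messages_out (
            id TEXT PRIMARY KEY, kind TEXT, content TEXT,
            deliver_after REAL, recurrence TEXT)""")
        conn.execute(
            "INSERT INTO messages_out (id, kind, content) VALUES (?, 'system', ?)",
            (row_id, json.dumps({"action": "schedule_task",
                                 "prompt": prompt,
                                 "schedule": schedule})))
        conn.commit()
    finally:
        conn.close()
    return row_id

rid = schedule_task("outbound.db", "daily summary", "0 9 * * *")
```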
Delivery action registry
Modules register handlers via registerDeliveryAction(action, handler). When dispatchResponse encounters a system message, it iterates registered handlers until one claims the response.
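A minimal Python analogue of this registry pattern; NanoClaw’s actual API is registerDeliveryAction/dispatchResponse, so the snake_case names and the claim-by-truthy-return convention are stand-ins:

```python
# Registry mapping action names to handler callables.
_delivery_actions = {}

def register_delivery_action(action, handler):
    """Modules call this at startup to claim an action name."""
    _delivery_actions[action] = handler

def dispatch_response(msg):
    """Iterate registered handlers; the first one that claims the message wins."""
    for action, handler in _delivery_actions.items():
        if handler(msg):
            return action
    return None  # no handler claimed it

register_delivery_action("schedule_task",
                         lambda m: m.get("action") == "schedule_task")
claimed = dispatch_response({"kind": "system", "action": "schedule_task"})
```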
Cross-mount SQLite invariants
Three invariants are critical for the two-DB model to work correctly across Docker bind mounts:

- journal_mode=DELETE — WAL mode’s memory-mapped -shm file doesn’t refresh host-to-guest; the container would miss messages
- Host opens-writes-closes per operation — closing the database connection invalidates the container’s page cache, forcing it to see new data
- One writer per file — DELETE-mode journal unlink isn’t atomic across the mount boundary
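The first two invariants can be captured in a single write helper, sketched here in Python (the helper name is illustrative):

```python
import sqlite3

def write_once(db_path, sql, params=()):
    """Perform one write honoring the cross-mount invariants:
    DELETE journal mode (no WAL -shm file to go stale across the
    bind mount) and an open-write-close lifecycle so the reader's
    page cache is invalidated."""
    conn = sqlite3.connect(db_path)
    try:
        journal = conn.execute("PRAGMA journal_mode=DELETE").fetchone()[0]
        conn.execute(sql, params)
        conn.commit()
    finally:
        conn.close()  # never hold the connection open between operations
    return journal

journal_mode = write_once("ipc_invariants.db",
                          "CREATE TABLE IF NOT EXISTS ping (ts REAL)")
```

The third invariant (one writer per file) is structural rather than per-call: it is satisfied by which process is allowed to open each database for writing at all.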
Delivery polls
The host uses two poll loops to read from outbound.db:
| Poll | Interval | Scope |
|---|---|---|
| Active | 1 second | Sessions with running containers |
| Sweep | 60 seconds | All active sessions |
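One iteration of the active poll might look like the sketch below; in the real system this runs every second for sessions with running containers, and the status and deliver_after columns are assumptions:

```python
import sqlite3
import time

def poll_outbound_once(db_path):
    """One iteration of the active poll: open, read due rows, close."""
    conn = sqlite3.connect(db_path)
    try:
        # Guard so the sketch runs against a fresh file; in NanoClaw the
        # container creates this table, and the columns are assumed here.
        conn.execute("""CREATE TABLE IF NOT EXISTS messages_out (
            id TEXT PRIMARY KEY, content TEXT,
            status TEXT DEFAULT 'pending', deliver_after REAL)""")
        rows = conn.execute(
            "SELECT id, content FROM messages_out WHERE status = 'pending' "
            "AND (deliver_after IS NULL OR deliver_after <= ?)",
            (time.time(),)).fetchall()
    finally:
        conn.close()
    return rows

pending = poll_outbound_once("outbound_poll.db")
```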
Host sweep
The host sweep (60s) performs four operations:

- processing_ack sync — reads processing_ack from outbound.db and updates messages_in status in inbound.db
- Stale detection — identifies containers with an old .heartbeat mtime or a stuck processing_ack
- Due-message wake — finds tasks where process_after <= now and wakes their containers
- Recurrence advancement — creates next-occurrence rows for completed recurring tasks
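The processing_ack sync step can be sketched end-to-end. The throwaway databases and file names below exist only to make the example self-contained; in NanoClaw these are the real session inbound.db and outbound.db:

```python
import sqlite3

def _setup(path, script):
    """Build a throwaway database so the sketch is runnable."""
    conn = sqlite3.connect(path)
    conn.executescript(script)
    conn.close()

_setup("sweep_inbound.db", """
    CREATE TABLE IF NOT EXISTS messages_in (id TEXT PRIMARY KEY, status TEXT);
    INSERT OR REPLACE INTO messages_in VALUES ('m1', 'pending');""")
_setup("sweep_outbound.db", """
    CREATE TABLE IF NOT EXISTS processing_ack (message_id TEXT PRIMARY KEY);
    INSERT OR REPLACE INTO processing_ack VALUES ('m1');""")

def sync_processing_acks(inbound_path, outbound_path):
    """Sweep step 1: read acks from outbound.db, then update messages_in
    in inbound.db. The single-writer rule holds throughout: the host only
    reads outbound.db and only the host writes inbound.db."""
    out = sqlite3.connect(outbound_path)
    acked = [r[0] for r in out.execute("SELECT message_id FROM processing_ack")]
    out.close()
    inn = sqlite3.connect(inbound_path)
    inn.executemany("UPDATE messages_in SET status = 'processed' WHERE id = ?",
                    [(m,) for m in acked])
    inn.commit()
    inn.close()
    return len(acked)

synced = sync_processing_acks("sweep_inbound.db", "sweep_outbound.db")
```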
Security model
Identity verification
Container identity is determined by mount paths — each container can only write to its own session’s outbound.db. This is enforced at mount time, not by file contents.
Delivery authorization
The delivery system enforces per-agent permissions:

- Messages to the origin chat are always permitted
- Cross-channel delivery requires an explicit agent_destinations row
- Unauthorized delivery attempts are logged and rejected
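These rules amount to a small predicate. Modeling agent_destinations as a plain set of (channel_type, platform_id) pairs is an assumption; the real check consults a database table:

```python
def delivery_permitted(msg, origin, agent_destinations):
    """Sketch of the authorization rules above.

    origin is the (channel_type, platform_id) pair the session started from;
    agent_destinations stands in for the explicit allow-list rows."""
    dest = (msg["channel_type"], msg["platform_id"])
    if dest == origin:
        return True                        # origin chat is always permitted
    return dest in agent_destinations      # cross-channel needs an explicit row

ok = delivery_permitted({"channel_type": "telegram", "platform_id": "chat1"},
                        ("telegram", "chat1"), set())
blocked = delivery_permitted({"channel_type": "email", "platform_id": "x"},
                             ("telegram", "chat1"), set())
```

A rejected attempt would additionally be logged by the caller, per the last rule above.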
Error handling
Failed deliveries are retried up to 3 times, then permanently marked as failed. The attempt counter resets on process restart.
Debugging
Inspect session databases
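A quick way to look at recent inbound messages with Python’s sqlite3 module. The database path is illustrative (session databases live in the session’s data directory), and the CREATE TABLE guard exists only so the snippet runs against a fresh file:

```python
import sqlite3

conn = sqlite3.connect("inbound.db")  # path to the session's inbound.db
conn.execute("""CREATE TABLE IF NOT EXISTS messages_in (
    id TEXT PRIMARY KEY, status TEXT, process_after REAL)""")
# Most recent messages first, with their processing status.
recent = conn.execute(
    "SELECT id, status, process_after FROM messages_in "
    "ORDER BY rowid DESC LIMIT 10").fetchall()
conn.close()
```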
Check processing acknowledgments
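To see which inbound messages the container has acknowledged, query processing_ack in outbound.db (path and guard illustrative, as above):

```python
import sqlite3

conn = sqlite3.connect("outbound.db")  # path to the session's outbound.db
conn.execute("""CREATE TABLE IF NOT EXISTS processing_ack (
    message_id TEXT PRIMARY KEY)""")
acks = conn.execute("SELECT message_id FROM processing_ack").fetchall()
conn.close()
```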
View container state
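container_state holds tool-in-flight rows used for stuck-detection; inspecting it shows any tool the container believes is still running (path and column names as assumed earlier):

```python
import sqlite3

conn = sqlite3.connect("outbound.db")  # path to the session's outbound.db
conn.execute("""CREATE TABLE IF NOT EXISTS container_state (
    tool_name TEXT, declared_timeout REAL, started_at REAL)""")
in_flight = conn.execute(
    "SELECT tool_name, declared_timeout, started_at "
    "FROM container_state").fetchall()
conn.close()
```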
Check delivery status
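Delivery outcomes for outbound message IDs live in the delivered table of inbound.db (path and guard illustrative):

```python
import sqlite3

conn = sqlite3.connect("inbound.db")  # path to the session's inbound.db
conn.execute("""CREATE TABLE IF NOT EXISTS delivered (
    message_id TEXT PRIMARY KEY, outcome TEXT, delivered_at REAL)""")
outcomes = conn.execute(
    "SELECT message_id, outcome FROM delivered "
    "ORDER BY delivered_at DESC LIMIT 10").fetchall()
conn.close()
```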
Related pages
- Security model — authorization and trust boundaries
- Container runtime — how agents execute in containers
- Troubleshooting — common issues