This guide covers common issues you may encounter when running NanoClaw and how to resolve them.

Quick status check

Run these commands to get a quick overview of system health:
# 1. Is the service running?
launchctl list | grep nanoclaw
# Expected: PID  0  com.nanoclaw (PID = running, "-" = not running, non-zero exit = crashed)

# 2. Any running containers?
docker ps --format '{{.Names}} {{.Status}}' 2>/dev/null | grep nanoclaw

# 3. Any stopped/orphaned containers?
docker ps -a --format '{{.Names}} {{.Status}}' 2>/dev/null | grep nanoclaw

# 4. Recent errors in service log?
grep -E 'ERROR|WARN' logs/nanoclaw.log | tail -20

# 5. Are channels connected? (look for last connection event)
grep -E 'Connected|Connection closed|connection.*close' logs/nanoclaw.log | tail -5

# 6. Are groups loaded?
grep 'groupCount' logs/nanoclaw.log | tail -3
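The `launchctl list` legend in step 1 can be expressed as a tiny parser. This is an illustrative sketch, not NanoClaw code; it assumes the standard tab-separated PID / last-exit-status / label output:

```python
# Interpret one `launchctl list` line (tab-separated: PID, last exit status, label).
# PID present = running, "-" with exit 0 = not running, "-" with non-zero = crashed.
def service_state(line: str) -> str:
    pid, status, _label = line.split("\t")
    if pid != "-":
        return "running"
    return "not running" if status == "0" else "crashed"
```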

Agent not responding

Symptoms

Messages sent to the agent receive no response.

Diagnosis

1. Check if messages are being received:
grep 'New messages' logs/nanoclaw.log | tail -10
If no messages appear, the issue is with channel connectivity.

2. Check if containers are being spawned:
grep -E 'Processing messages|Spawning container' logs/nanoclaw.log | tail -10
If messages are received but no containers spawn, check trigger patterns.

3. Check for active containers:
docker ps --filter name=nanoclaw-
If containers are stuck running, they may have hit an infinite loop.

4. Check container logs:
docker logs nanoclaw-{group}-{timestamp}
Look for errors in the agent execution.

Solutions

Verify the trigger word in your message matches the configured trigger:
sqlite3 store/messages.db "SELECT name, trigger FROM registered_groups;"
The trigger is case-insensitive and must appear at the start of the message.
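As a sketch of that rule, a hypothetical matcher might look like this (illustrative only, not NanoClaw's actual implementation; whether leading whitespace is ignored is an assumption):

```python
def matches_trigger(message: str, trigger: str) -> bool:
    # Case-insensitive; the trigger must appear at the start of the message.
    # Stripping leading whitespace first is an assumption for illustration.
    return message.lstrip().lower().startswith(trigger.lower())
```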
Ensure Docker is running:
docker info
If Docker is not running, start it:
  • macOS: Open Docker Desktop
  • Linux: sudo systemctl start docker
Check if the queue is blocked:
grep 'concurrency limit' logs/nanoclaw.log | tail -5
Stop stuck containers:
docker stop -t 1 $(docker ps -q --filter name=nanoclaw-)

Container timeout

Symptoms

  • Logs show Container timeout or timed out
  • Exit code 137 (SIGKILL) in container logs
  • Messages are lost after timeout

Diagnosis

# Check for recent timeouts
grep -E 'Container timeout|timed out' logs/nanoclaw.log | tail -10

# Check container log files
ls -lt groups/*/logs/container-*.log | head -10

# Read the most recent container log
cat groups/{group}/logs/container-{timestamp}.log

Solutions

Modify the group’s containerConfig in the database:
sqlite3 store/messages.db
UPDATE registered_groups 
SET container_config = json_set(
  COALESCE(container_config, '{}'),
  '$.timeout',
  3600000  -- 1 hour in milliseconds
)
WHERE name = 'Family Chat';
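The same COALESCE-then-set update can be rehearsed against an in-memory scratch table before touching the real database. This sketch reduces the schema to the two columns used here and does the JSON merge in Python:

```python
import json
import sqlite3

# Scratch copy of the relevant schema; rehearse the timeout change here first
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE registered_groups (name TEXT, container_config TEXT)")
conn.execute("INSERT INTO registered_groups VALUES ('Family Chat', NULL)")

row = conn.execute(
    "SELECT container_config FROM registered_groups WHERE name = 'Family Chat'"
).fetchone()
config = json.loads(row[0]) if row[0] else {}  # equivalent of COALESCE(..., '{}')
config["timeout"] = 3600000                    # 1 hour in milliseconds
conn.execute(
    "UPDATE registered_groups SET container_config = ? WHERE name = 'Family Chat'",
    (json.dumps(config),),
)
print(conn.execute("SELECT container_config FROM registered_groups").fetchone()[0])
```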
Review the container log to see if the agent is stuck:
cat groups/{group}/logs/container-{timestamp}.log
Look for repeated patterns or lack of progress. If the agent is stuck, consider:
  • Simplifying the prompt
  • Adding constraints to the task
  • Reviewing recent changes to CLAUDE.md
The idle timeout (for graceful shutdown between messages) is currently equal to the hard timeout. This is a known issue. To fix it, edit src/config.ts and set:
export const IDLE_TIMEOUT = 5 * 60 * 1000; // 5 minutes
export const CONTAINER_TIMEOUT = 30 * 60 * 1000; // 30 minutes
Then rebuild:
npm run build && launchctl kickstart -k gui/$(id -u)/com.nanoclaw

Known issues

Cursor rollback on error

The message cursor (lastAgentTimestamp) is now rolled back on container errors, provided no output was already sent to the user. If the agent fails after sending partial output, the cursor is not rolled back to prevent duplicate messages. If you need to manually reset the cursor, update it in router_state:
-- The cursor is stored as a JSON object mapping JIDs to timestamps
SELECT value FROM router_state WHERE key = 'last_agent_timestamp';
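For illustration, a manual reset of that JSON object might look like this (the JID and timestamps below are made-up placeholders; the real keys come from your registered_groups table):

```python
import json

# Value read from router_state where key = 'last_agent_timestamp' (placeholder data)
value = '{"12345@g.us": 1760000000000}'
cursor = json.loads(value)
cursor["12345@g.us"] = 1750000000000  # roll back so older messages are reprocessed
print(json.dumps(cursor))  # write this back as the new router_state value
```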

IDLE_TIMEOUT and CONTAINER_TIMEOUT

The hard timeout is calculated as Math.max(configTimeout, IDLE_TIMEOUT + 30_000), ensuring a 30-second gap for graceful _close sentinel shutdown before the hard kill fires. With default settings (both 30 minutes), the hard timeout is 30 minutes 30 seconds. If containers are consistently hitting the hard timeout, consider lowering IDLE_TIMEOUT in src/config.ts to a shorter value (e.g., 5 minutes) so idle containers exit more promptly.
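With the default settings, the formula works out as follows:

```python
# Hard timeout = max(configTimeout, IDLE_TIMEOUT + 30_000), per the note above
IDLE_TIMEOUT = 30 * 60 * 1000     # 30 minutes, in milliseconds
CONFIG_TIMEOUT = 30 * 60 * 1000   # group's configured timeout, in milliseconds
hard_timeout = max(CONFIG_TIMEOUT, IDLE_TIMEOUT + 30_000)
print(hard_timeout)  # 1830000 ms = 30 minutes 30 seconds
```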

Resume branches from stale tree position

Issue: When agent teams spawn subagent CLI processes, they write to the same session JSONL. On subsequent query() resumes, the CLI reads the JSONL but may pick a stale branch tip, causing responses to land on a branch the host never receives. Fix: Already fixed. The agent runner now passes resumeSessionAt with the last assistant message UUID to explicitly anchor each resume.

WhatsApp authentication issues

Symptoms

  • Logs show “QR” or “authentication required”
  • No messages are being received
  • Connection repeatedly drops

Diagnosis

# Check for QR code requests
grep 'QR\|authentication required\|qr' logs/nanoclaw.log | tail -5

# Check auth files exist
ls -la store/auth/

Solution

Re-authenticate with WhatsApp:
claude
/add-whatsapp
Scan the QR code with your phone, then restart the service:
launchctl kickstart -k gui/$(id -u)/com.nanoclaw

Container mount issues

Symptoms

  • Agent cannot access expected files or directories
  • Logs show “Mount validated” or “Mount REJECTED”
  • Permission errors in container logs

Diagnosis

# Check mount validation logs
grep -E 'Mount validated|Mount.*REJECTED|mount' logs/nanoclaw.log | tail -10

# Verify the mount allowlist is readable
cat ~/.config/nanoclaw/mount-allowlist.json

# Check group's container_config in DB
sqlite3 store/messages.db "SELECT name, container_config FROM registered_groups;"

Solutions

Edit ~/.config/nanoclaw/mount-allowlist.json:
{
  "allowedRoots": [
    {
      "path": "/Users/you/Documents/project",
      "allowReadWrite": true,
      "description": "Project files"
    }
  ],
  "blockedPatterns": [".ssh", ".gnupg", ".aws", ".env", "credentials"],
  "nonMainReadOnly": true
}
Then restart the service.
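The checks the allowlist implies can be sketched as follows. This is an illustrative reconstruction from the file format above, not NanoClaw's actual validator, and the exact matching semantics are assumptions:

```python
import os

# Same shape as ~/.config/nanoclaw/mount-allowlist.json (example values)
allowlist = {
    "allowedRoots": [{"path": "/Users/you/Documents/project", "allowReadWrite": True}],
    "blockedPatterns": [".ssh", ".gnupg", ".aws", ".env", "credentials"],
}

def mount_allowed(path: str) -> bool:
    # A candidate path must sit under an allowed root and match no blocked pattern
    norm = os.path.normpath(path)
    under_root = any(
        norm == root["path"] or norm.startswith(root["path"] + "/")
        for root in allowlist["allowedRoots"]
    )
    blocked = any(pattern in norm for pattern in allowlist["blockedPatterns"])
    return under_root and not blocked
```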
Containers run as the host user (or uid 1000 on some systems). Ensure the host user can read the mounted directories:
ls -la /path/to/mount
Run a test container to verify mounts:
docker run -i --rm \
  -v /path/to/host:/workspace/test:ro \
  nanoclaw-agent:latest \
  ls -la /workspace/test

IPC issues

Symptoms

  • Messages sent via IPC don’t appear in the chat
  • Tasks don’t get scheduled
  • IPC files accumulate in data/ipc/{group}/

Diagnosis

# Check for failed IPC operations
ls -la data/ipc/errors/
cat data/ipc/errors/main-message-*.json

# Monitor IPC activity
grep 'IPC' logs/nanoclaw.log | tail -20

# Verify IPC directory permissions
ls -la data/ipc/main/
ls -la data/ipc/family-chat/

Solutions

Non-main groups can only send messages to their own chat. Verify the chatJid in the IPC file matches the group’s registered JID:
sqlite3 store/messages.db "SELECT jid, name, folder FROM registered_groups;"
IPC files must be valid JSON. Check for syntax errors:
cat data/ipc/errors/{group}-message-*.json | jq .
Write a test message:
echo '{"type":"message","chatJid":"me","text":"test"}' > \
  data/ipc/main/messages/test.json

# Watch logs for processing
tail -f logs/nanoclaw.log | grep IPC
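The test message above shows the minimal shape an IPC message file needs. A hypothetical helper for building that payload (field names taken from the example; "me" targets the group's own chat):

```python
import json

def ipc_message(chat_jid: str, text: str) -> str:
    # Minimal IPC message payload, mirroring the echo example above
    return json.dumps({"type": "message", "chatJid": chat_jid, "text": text})

print(ipc_message("me", "test"))
```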

Session branching issues

Symptoms

  • Agent responses don’t appear in the conversation
  • Session transcript shows multiple branches
  • Concurrent CLI processes in session debug logs

Diagnosis

# Check for concurrent CLI processes
ls -la data/sessions/{group}/.claude/debug/

# Count unique SDK processes (each .txt file = one CLI subprocess)
# Multiple files = concurrent queries

# Check parentUuid branching in transcript
python3 -c "
import json
path = 'data/sessions/{group}/.claude/projects/-workspace-group/{session}.jsonl'
for i, line in enumerate(open(path)):
    try:
        d = json.loads(line)
    except json.JSONDecodeError:
        continue
    if d.get('type') == 'user' and d.get('message'):
        parent = (d.get('parentUuid') or 'ROOT')[:8]  # root messages have a null parent
        content = str(d['message'].get('content', ''))[:60]
        print(f'L{i+1} parent={parent} {content}')
"

Solution

This issue was fixed by passing resumeSessionAt with the last assistant message UUID. If you’re still experiencing it, ensure you’re running the latest version of the agent runner.

Service management

Restart the service

launchctl kickstart -k gui/$(id -u)/com.nanoclaw

View live logs

tail -f logs/nanoclaw.log

Stop the service

Stopping the service does NOT kill running containers. They will continue running as orphaned processes.
launchctl bootout gui/$(id -u)/com.nanoclaw

Start the service

launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.nanoclaw.plist

Rebuild after code changes

npm run build && launchctl kickstart -k gui/$(id -u)/com.nanoclaw

Getting help

If you’re still experiencing issues:
  1. Ask Claude Code directly: Run claude and describe the problem. Claude can read logs, check database state, and diagnose issues.
  2. Run the debug skill: claude/debug for guided troubleshooting.
  3. Check the Discord: Join the community for help from other users.
  4. Review recent changes: If the issue started after a customization, review what changed:
    git diff
    git log --oneline -10
    
Last modified on March 23, 2026