
NanoClaw Architecture


1. System Overview

Inbound messages land at the Chat SDK bridge, which hands off to the router. The router resolves the messaging group → agent group → session and writes to the session's inbound.db. The container runner spawns a per-session container (auth via OneCLI), and the agent-runner polls its DB, calls Claude, and writes responses to outbound.db. Delivery polls the outbound DB, re-validates destinations, and ships messages back through the same bridge.

flowchart TB
  subgraph Platforms["Messaging Platforms"]
    P1[Discord]
    P2[Telegram]
    P3[Slack]
    P4[GitHub / Linear]
    P5[WhatsApp / iMessage / Teams / GChat / Matrix / Webex / Email]
  end

  subgraph Host["Host Process (Node)"]
    direction TB
    Bridge["Chat SDK Bridge
src/channels/chat-sdk-bridge.ts"]
    Router["Router
src/router.ts
platformId + threadId → session"]
    SessMgr["Session Manager
src/session-manager.ts"]
    Runner["Container Runner
src/container-runner.ts
OneCLI ensureAgent + spawn"]
    Delivery["Delivery Poller
src/delivery.ts
1s active / 60s sweep"]
    Sweep["Host Sweep
src/host-sweep.ts"]
    Central[("Central DB · data/v2.db
agent_groups · messaging_groups
messaging_group_agents · sessions
pending_approvals")]
  end

  subgraph OneCLI["OneCLI Gateway (0.3.1)"]
    Vault["Agent Vault
secrets + OAuth"]
    Approvals["configureManualApproval"]
  end

  subgraph Session["Per-Session Container"]
    direction TB
    PollLoop["Poll Loop
container/agent-runner"]
    Provider["Claude Agent SDK
(codex / opencode planned)"]
    MCP["MCP Tools
send_message · send_file · edit_message
send_card · ask_user_question · schedule_task
create_agent · install_packages · add_mcp_server
request_rebuild"]
    InDB[("inbound.db
host writes · even seq")]
    OutDB[("outbound.db
container writes · odd seq")]
  end

  Folder["Agent Group FS
groups/*
CLAUDE.md · memory · skills"]

  P1 & P2 & P3 & P4 & P5 --> Bridge
  Bridge --> Router
  Router --> Central
  Router --> SessMgr
  SessMgr --> InDB
  SessMgr --> Runner
  Runner --> OneCLI
  Runner --> PollLoop
  PollLoop --> InDB
  PollLoop --> Provider
  Provider --> MCP
  MCP --> OutDB
  OutDB --> Delivery
  Delivery --> Central
  Delivery --> Bridge
  Bridge --> P1 & P2 & P3 & P4 & P5
  Sweep --> InDB
  Sweep --> OutDB
  Sweep --> Central
  Runner -.mounts.-> Folder
  MCP -.approval.-> Approvals
  Approvals --> Central
  Provider -.API calls.-> Vault
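
The poll loop inside the container reduces to a small cycle: read inbound rows it hasn't seen, call the provider, append replies to outbound. A minimal in-memory sketch of that cycle — plain arrays stand in for the two SQLite files, and all names here are illustrative, not the real agent-runner API:

```typescript
// Sketch of one agent-runner poll cycle. The container appends only
// odd seq numbers to outbound (the host writes only even ones to
// inbound), so the two writers can never collide on a seq value.
interface Row { seq: number; body: string }

function pollOnce(
  inbound: Row[],
  outbound: Row[],
  lastSeen: number,
  provider: (msg: string) => string,
): number {
  const fresh = inbound.filter(r => r.seq > lastSeen);
  for (const msg of fresh) {
    const last = outbound.length ? outbound[outbound.length - 1].seq : -1;
    outbound.push({ seq: last + 2, body: provider(msg.body) }); // -1+2 = 1, then 3, 5, …
  }
  return fresh.length ? fresh[fresh.length - 1].seq : lastSeen;
}
```

The return value is the high-water mark the container persists between polls, so a restart resumes from the last processed inbound seq.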

2. Message Flow

End-to-end path of a single message. The host and container never write to the same SQLite file — the split between inbound and outbound DBs is what makes this lock-free under concurrent activity.

sequenceDiagram
  participant P as Platform (Telegram)
  participant B as Chat SDK Bridge
  participant R as Router
  participant SM as Session Manager
  participant IDB as inbound.db
  participant C as Container (agent-runner)
  participant ODB as outbound.db
  participant D as Delivery Poller

  P->>B: new message
  B->>R: routeInbound(platformId, threadId, msg)
  R->>R: resolve messaging_group → agent_group → session<br/>(agent-shared · shared · per-thread)
  R->>SM: ensure session + DBs exist
  R->>IDB: INSERT messages_in (even seq)
  R->>C: wake container (spawn or signal)
  C->>IDB: poll messages_in
  C->>C: format xml → Claude SDK stream
  C->>ODB: INSERT messages_out (odd seq)<br/>parse <message to='name'> blocks
  D->>ODB: 1s active poll / 60s sweep
  D->>D: hasDestination() re-validate
  D->>B: deliver via adapter
  B->>P: send · edit · react · file · card
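
The delivery step's re-validation matters because a destination can be revoked between the agent writing a row and the poller picking it up. A hedged sketch of that drain pass — `hasDestination` and the adapter callback are stand-ins for whatever the real delivery code uses:

```typescript
// Sketch of one delivery-poller pass: drain undelivered outbound rows,
// re-check each destination at delivery time, and silently drop rows
// whose destination is no longer authorized.
interface OutRow { seq: number; to: string; body: string; delivered: boolean }

function drainOutbound(
  rows: OutRow[],
  hasDestination: (name: string) => boolean,
  deliver: (to: string, body: string) => void,
): number {
  let shipped = 0;
  for (const row of rows) {
    if (row.delivered) continue;
    row.delivered = true;                  // mark even when dropped, so we never retry it
    if (!hasDestination(row.to)) continue; // destination revoked since the agent wrote it
    deliver(row.to, row.body);
    shipped++;
  }
  return shipped;
}
```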

3. Named Destinations & Agent-to-Agent

Agents address outputs by local name. The host looks up each name against the agent's destinations table at delivery time — dropping anything unauthorized. The same table routes agent-to-agent messages to a sibling agent's inbound.db with bidirectional permission rows.

flowchart LR
  subgraph AgentA["Agent Group A (main)"]
    A_out["<message to='slack'>...</message>
<message to='browser-agent'>...</message>
<internal>scratchpad</internal>"]
  end

  subgraph Dests["inbound.db.destinations (per agent)"]
    D1["slack → messaging_group 42"]
    D2["browser-agent → agent_group 7
(bidirectional)"]
    D3["github → messaging_group 13"]
  end

  subgraph AgentB["Agent Group B (browser sub-agent)"]
    B_session["own inbound.db / outbound.db
inherited destination back to A"]
  end

  Slack[Slack]
  GitHub[GitHub PR]

  A_out -->|parse + lookup| Dests
  D1 -->|deliver| Slack
  D2 -->|write to B's inbound.db| B_session
  D3 -->|deliver| GitHub
  B_session -.reply via 'parent'.-> Dests
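
Extracting the named blocks from model output can be sketched with a single regex pass, assuming non-nested, well-formed tags as shown in the diagram (the function name and regex are illustrative, not the real parser):

```typescript
// Pull <message to='name'>…</message> blocks out of an agent's output.
// <internal>…</internal> scratchpad blocks simply never match, so they
// are dropped without a special case.
function parseMessages(output: string): { to: string; body: string }[] {
  const re = /<message to='([^']+)'>([\s\S]*?)<\/message>/g;
  const out: { to: string; body: string }[] = [];
  for (const m of output.matchAll(re)) {
    out.push({ to: m[1], body: m[2].trim() });
  }
  return out;
}
```

Each extracted `to` name is then looked up in the destinations table; a name with no row is dropped rather than delivered.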

4. Entity Model

Messaging groups and agent groups are many-to-many, joined via messaging_group_agents. The session_mode column selects one of three isolation levels.

erDiagram
  agent_groups ||--o{ messaging_group_agents : wired
  messaging_groups ||--o{ messaging_group_agents : wired
  agent_groups ||--o{ sessions : runs
  messaging_groups ||--o{ sessions : context
  agent_groups ||--o{ agent_destinations : owns
  agent_groups ||--o{ pending_approvals : requests

  agent_groups {
    int id
    string name
    string folder
    string agent_provider
    json container_config
  }
  messaging_groups {
    int id
    string channel_type
    string platform_id
    string name
    bool is_group
    string unknown_sender_policy "strict | request_approval | public"
  }
  users {
    string id PK "namespaced <channel>:<handle>"
    string kind
    string display_name
  }
  user_roles {
    string user_id FK
    string role "owner | admin"
    string agent_group_id FK "null = global"
  }
  agent_group_members {
    string user_id FK
    string agent_group_id FK
  }
  user_dms {
    string user_id FK
    string channel_type
    string messaging_group_id FK
  }
  messaging_group_agents {
    int messaging_group_id
    int agent_group_id
    string session_mode
    json trigger_rules
    int priority
  }
  sessions {
    int id
    int agent_group_id
    int messaging_group_id
    string sdk_session_id
    string status
  }
| Level | session_mode | Shared | Example |
| --- | --- | --- | --- |
| 1 · Shared session | agent-shared | Workspace + memory + conversation | Slack + GitHub webhooks in one thread |
| 2 · Same agent, separate sessions | shared / per-thread | Workspace + memory only | One agent across 3 Telegram chats |
| 3 · Separate agent groups | — (different agent_group_id) | Nothing | Personal vs work channels |
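
The three isolation levels reduce to how the session key is built from the wiring row. A sketch under that reading — the key format and names are hypothetical, chosen only to show the nesting:

```typescript
// session_mode decides how much of (agent, messaging group, thread)
// goes into the session key, i.e. how widely a session is shared.
type SessionMode = "agent-shared" | "shared" | "per-thread";

interface Wiring { agentGroupId: number; sessionMode: SessionMode }

function sessionKey(w: Wiring, messagingGroupId: number, threadId: string): string {
  switch (w.sessionMode) {
    case "agent-shared": // level 1: one session for everything wired to the agent
      return `agent:${w.agentGroupId}`;
    case "shared":       // level 2: one session per messaging group
      return `agent:${w.agentGroupId}:mg:${messagingGroupId}`;
    case "per-thread":   // level 2, finer: one session per thread
      return `agent:${w.agentGroupId}:mg:${messagingGroupId}:thread:${threadId}`;
  }
}
```

Level 3 needs no case of its own: a different agent_group_id yields a disjoint key space, so separate agent groups can never share a session.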

5. Two-DB Split

Each SQLite file has exactly one writer. The container touches a heartbeat file instead of UPDATE-ing a liveness row, so the host sweep can detect staleness via stat(mtime) without opening the DB. The host uses even seq numbers and the container uses odd — collision-free.

flowchart LR
  subgraph Mount["/workspace (volume mount)"]
    In[("inbound.db")]
    Out[("outbound.db")]
    HB["/.heartbeat (file touch)"]
  end

  Host[Host process] -->|writes · even seq| In
  Host -->|reads| Out
  Container[agent-runner] -->|reads| In
  Container -->|writes · odd seq| Out
  Container -->|touch every poll| HB
  HostSweep[Host sweep] -->|stat mtime| HB
  HostSweep -->|reads processing_ack| In
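
The heartbeat check itself needs nothing beyond a stat call. A minimal sketch of how the host sweep might test liveness — the path and threshold are illustrative, not the real host-sweep values:

```typescript
// Staleness check via file mtime: the container touches /.heartbeat on
// every poll, so the host can detect a dead container without opening
// either SQLite file (and thus without ever becoming a second reader
// at an awkward moment).
import { statSync } from "node:fs";

function isStale(heartbeatPath: string, maxAgeMs: number, now = Date.now()): boolean {
  try {
    return now - statSync(heartbeatPath).mtimeMs > maxAgeMs;
  } catch {
    return true; // heartbeat file missing → container never started or was torn down
  }
}
```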