docs/reference/architecture.md
NemoClaw has two main components: a TypeScript plugin that integrates with the OpenClaw CLI, and a Python blueprint that orchestrates OpenShell resources.
The plugin is a thin TypeScript package that registers commands under `openclaw nemoclaw`. It runs in-process with the OpenClaw gateway and handles user-facing CLI interactions.
```
nemoclaw/
├── src/
│   ├── index.ts              Plugin entry — registers all commands
│   ├── cli.ts                Commander.js subcommand wiring
│   ├── commands/
│   │   ├── launch.ts         Fresh install into OpenShell
│   │   ├── connect.ts        Interactive shell into sandbox
│   │   ├── status.ts         Blueprint run state + sandbox health
│   │   ├── logs.ts           Stream blueprint and sandbox logs
│   │   └── slash.ts          /nemoclaw chat command handler
│   └── blueprint/
│       ├── resolve.ts        Version resolution, cache management
│       ├── fetch.ts          Download blueprint from OCI registry
│       ├── verify.ts         Digest verification, compatibility checks
│       ├── exec.ts           Subprocess execution of blueprint runner
│       └── state.ts          Persistent state (run IDs)
├── openclaw.plugin.json      Plugin manifest
└── package.json              Commands declared under openclaw.extensions
```
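As a sketch of how the entry point wires commands together (the `CommandRegistry`/`activate` shapes below are hypothetical stand-ins for illustration, not the real OpenClaw plugin API):

```typescript
// Hypothetical sketch: how src/index.ts might register the
// `openclaw nemoclaw` subcommands. The registry types here are
// illustrative stand-ins, not the actual OpenClaw plugin interface.
type CommandHandler = (args: string[]) => Promise<void> | void;

interface CommandRegistry {
  register(name: string, handler: CommandHandler): void;
}

class Registry implements CommandRegistry {
  readonly commands = new Map<string, CommandHandler>();
  register(name: string, handler: CommandHandler): void {
    this.commands.set(name, handler);
  }
}

// Each file under src/commands/ would export one CLI handler.
const launch: CommandHandler = () => { /* fresh install into OpenShell */ };
const connect: CommandHandler = () => { /* interactive shell into sandbox */ };
const status: CommandHandler = () => { /* blueprint run state + health */ };
const logs: CommandHandler = () => { /* stream blueprint/sandbox logs */ };

export function activate(registry: CommandRegistry): void {
  registry.register("nemoclaw launch", launch);
  registry.register("nemoclaw connect", connect);
  registry.register("nemoclaw status", status);
  registry.register("nemoclaw logs", logs);
}
```

The key design point this illustrates is that `index.ts` is pure wiring: each subcommand lives in its own module under `src/commands/`.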
The blueprint is a versioned Python artifact with its own release stream. The plugin resolves, verifies, and executes the blueprint as a subprocess. The blueprint drives all interactions with the OpenShell CLI.
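Since the plugin verifies the blueprint before executing it, digest verification in `verify.ts` presumably compares the fetched artifact's hash against the digest pinned at resolve time. A minimal sketch using Node's built-in crypto module (the `verifyDigest` helper and OCI-style `sha256:<hex>` format are assumptions for illustration):

```typescript
import { createHash } from "node:crypto";

// Sketch of digest verification: hash the fetched artifact bytes and
// compare against the pinned digest in OCI-style "sha256:<hex>" form.
export function verifyDigest(artifact: Buffer, pinned: string): boolean {
  const hex = createHash("sha256").update(artifact).digest("hex");
  return pinned === `sha256:${hex}`;
}
```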
```
nemoclaw-blueprint/
├── blueprint.yaml            Manifest — version, profiles, compatibility
├── orchestrator/
│   └── runner.py             CLI runner — plan / apply / status
└── policies/
    └── openclaw-sandbox.yaml Strict baseline network + filesystem policy
```
```mermaid
flowchart LR
  A[resolve] --> B[verify digest]
  B --> C[plan]
  C --> D[apply]
  D --> E[status]
```
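The lifecycle above can be sketched as an ordered pipeline where any failed stage aborts the run, so e.g. a digest mismatch in verify prevents apply from ever running. This is an illustrative sketch only; `runLifecycle` and its stage callbacks are hypothetical names, not the plugin's actual API:

```typescript
// Illustrative sketch of the blueprint lifecycle from the flowchart.
// Stage names mirror the diagram; the real logic lives across
// src/blueprint/{resolve,verify,exec}.ts.
type Stage = "resolve" | "verify" | "plan" | "apply" | "status";

const ORDER: Stage[] = ["resolve", "verify", "plan", "apply", "status"];

// Run stages in order; return the stages that completed. Missing
// steps default to success so partial mocks are easy to test.
export function runLifecycle(
  steps: Partial<Record<Stage, () => boolean>>,
): Stage[] {
  const completed: Stage[] = [];
  for (const stage of ORDER) {
    const step = steps[stage] ?? (() => true);
    if (!step()) break; // abort on failure; later stages never run
    completed.push(stage);
  }
  return completed;
}
```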
Compatibility is enforced through the `min_openshell_version` and `min_openclaw_version` constraints in `blueprint.yaml`, which the plugin checks before executing the runner. The orchestrator drives all provisioning through `openshell` CLI commands. The sandbox runs the `ghcr.io/nvidia/openshell-community/sandboxes/openclaw` container image. Inside the sandbox:
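A minimum-version gate like this reduces to a plain semver comparison. A minimal sketch, assuming bare `MAJOR.MINOR.PATCH` version strings (the `atLeast`/`compatible` helpers are hypothetical; only the two field names come from `blueprint.yaml`):

```typescript
// Minimal sketch of a min-version gate. Assumes plain
// MAJOR.MINOR.PATCH strings with no pre-release tags.
function atLeast(installed: string, required: string): boolean {
  const a = installed.split(".").map(Number);
  const b = required.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((a[i] ?? 0) !== (b[i] ?? 0)) return (a[i] ?? 0) > (b[i] ?? 0);
  }
  return true; // equal versions satisfy the constraint
}

// Both constraints from blueprint.yaml must hold before exec.
export function compatible(
  openshell: string,
  openclaw: string,
  manifest: { min_openshell_version: string; min_openclaw_version: string },
): boolean {
  return (
    atLeast(openshell, manifest.min_openshell_version) &&
    atLeast(openclaw, manifest.min_openclaw_version)
  );
}
```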
- Network and filesystem access follows the strict baseline policy in `openclaw-sandbox.yaml`.
- `/sandbox` and `/tmp` are mounted for read-write access, with system paths read-only.

Inference requests from the agent never leave the sandbox directly. OpenShell intercepts them and routes to the configured provider:

```
Agent (sandbox) ──▶ OpenShell gateway ──▶ NVIDIA cloud (build.nvidia.com)
```
Refer to Inference Profiles for provider configuration details.