docs/integrations/nemoclaw.mdx
NemoClaw is NVIDIA's open-source security stack for OpenClaw. It wraps OpenClaw in the NVIDIA OpenShell runtime to provide kernel-level sandboxing, network policy controls, and audit trails for AI agents.
Pull a model:

```shell
ollama pull nemotron-3-nano:30b
```
Run the installer:

```shell
curl -fsSL https://www.nvidia.com/nemoclaw.sh | \
  NEMOCLAW_NON_INTERACTIVE=1 \
  NEMOCLAW_PROVIDER=ollama \
  NEMOCLAW_MODEL=nemotron-3-nano:30b \
  bash
```
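If you prefer to keep the configuration in a script, the same non-interactive install can be sketched as below. This assumes the `NEMOCLAW_*` variables shown above are the only configuration the installer reads; the download step is left commented out so you can review the script first.

```shell
#!/bin/sh
# Non-interactive NemoClaw install, expressed as a script (a sketch).
# The variable names come from the install command in this guide.
export NEMOCLAW_NON_INTERACTIVE=1
export NEMOCLAW_PROVIDER=ollama
export NEMOCLAW_MODEL="nemotron-3-nano:30b"

# Uncomment to actually run the installer:
# curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash

echo "provider=$NEMOCLAW_PROVIDER model=$NEMOCLAW_MODEL"
```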
Connect to your sandbox:

```shell
nemoclaw my-assistant connect
```
Open the TUI:

```shell
openclaw tui
```
<Note>Ollama support in NemoClaw is still experimental.</Note>
| Platform | Runtime | Status |
|---|---|---|
| Linux (Ubuntu 22.04+) | Docker | Primary |
| macOS (Apple Silicon) | Colima or Docker Desktop | Supported |
| Windows | WSL2 with Docker Desktop | Supported |
CMD and PowerShell are not supported on Windows — WSL2 is required.
<Note>Ollama must be installed and running before the installer runs. When running inside WSL2 or a container, ensure Ollama is reachable from the sandbox (e.g. OLLAMA_HOST=0.0.0.0).</Note>
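For WSL2 or container setups, a minimal sketch of the note above: bind Ollama to all interfaces, then verify reachability. The port `11434` is Ollama's default; the verification `curl` is commented out since it needs a running Ollama server.

```shell
# Make Ollama listen on all interfaces so the sandbox can reach it (WSL2/containers).
export OLLAMA_HOST=0.0.0.0:11434

# Then restart Ollama and verify it responds (uncomment to run):
# ollama serve &
# curl -fsS "http://localhost:11434/api/version"

echo "OLLAMA_HOST=$OLLAMA_HOST"
```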
Suggested models:

- `nemotron-3-super:cloud` — Strong reasoning and coding
- `qwen3.5:cloud` — 397B; reasoning and code generation
- `nemotron-3-nano:30b` — Recommended local model; fits in 24 GB VRAM
- `qwen3.5:27b` — Fast local reasoning (~18 GB VRAM)
- `glm-4.7-flash` — Reasoning and code generation (~25 GB VRAM)

More models at ollama.com/search.
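To switch to another local model from the list above, pull it and re-run the installer with `NEMOCLAW_MODEL` changed. This is a hedged sketch: it assumes the installer re-reads `NEMOCLAW_MODEL` on each run, which this page does not confirm; the network steps are commented out.

```shell
# Swap in a different local model (a sketch; assumes the installer
# honors NEMOCLAW_MODEL on re-run).
export NEMOCLAW_MODEL="qwen3.5:27b"

# Uncomment to fetch the model and re-run the installer:
# ollama pull "$NEMOCLAW_MODEL"
# curl -fsSL https://www.nvidia.com/nemoclaw.sh | \
#   NEMOCLAW_NON_INTERACTIVE=1 NEMOCLAW_PROVIDER=ollama bash

echo "model=$NEMOCLAW_MODEL"
```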