
# NemoClaw


NemoClaw is NVIDIA's open-source security stack for OpenClaw. It wraps OpenClaw in the NVIDIA OpenShell runtime to provide kernel-level sandboxing, network policy controls, and audit trails for AI agents.

## Quick start

Pull a model:

```bash
ollama pull nemotron-3-nano:30b
```

Run the installer:

```bash
curl -fsSL https://www.nvidia.com/nemoclaw.sh | \
  NEMOCLAW_NON_INTERACTIVE=1 \
  NEMOCLAW_PROVIDER=ollama \
  NEMOCLAW_MODEL=nemotron-3-nano:30b \
  bash
```
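If you prefer not to pipe a remote script directly into bash, the same install can be done in two steps (a sketch, assuming the script URL above; review the script contents before running it):

```shell
# Download the installer and inspect it before executing.
curl -fsSL https://www.nvidia.com/nemoclaw.sh -o nemoclaw.sh
less nemoclaw.sh   # review before executing

# Run it with the same non-interactive settings as above.
NEMOCLAW_NON_INTERACTIVE=1 \
NEMOCLAW_PROVIDER=ollama \
NEMOCLAW_MODEL=nemotron-3-nano:30b \
bash nemoclaw.sh
```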

Connect to your sandbox:

```bash
nemoclaw my-assistant connect
```

Open the TUI:

```bash
openclaw tui
```

<Note>Ollama support in NemoClaw is still experimental.</Note>

## Platform support

| Platform | Runtime | Status |
|---|---|---|
| Linux (Ubuntu 22.04+) | Docker | Primary |
| macOS (Apple Silicon) | Colima or Docker Desktop | Supported |
| Windows | WSL2 with Docker Desktop | Supported |

CMD and PowerShell are not supported on Windows — WSL2 is required.

<Note>Ollama must be installed and running before the installer runs. When running inside WSL2 or a container, ensure Ollama is reachable from the sandbox (e.g. OLLAMA_HOST=0.0.0.0).</Note>
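A minimal sketch of the reachability setup described above. By default Ollama listens only on 127.0.0.1:11434, which is invisible from inside a container; `host.docker.internal` is Docker-specific, so substitute the host's address on plain Linux:

```shell
# Make Ollama listen on all interfaces instead of the 127.0.0.1 default,
# so the sandbox can reach it:
OLLAMA_HOST=0.0.0.0 ollama serve

# From inside the sandbox, confirm the API responds (11434 is Ollama's
# default port; replace host.docker.internal with the host's address
# when not using Docker Desktop):
curl -s http://host.docker.internal:11434/api/tags
```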

## System requirements

- CPU: 4 vCPU minimum
- RAM: 8 GB minimum (16 GB recommended)
- Disk: 20 GB free (40 GB recommended for local models)
- Node.js 20+ and npm 10+
- Container runtime (Docker preferred)

## Recommended models

- nemotron-3-super:cloud — Strong reasoning and coding
- qwen3.5:cloud — 397B; reasoning and code generation
- nemotron-3-nano:30b — Recommended local model; fits in 24 GB VRAM
- qwen3.5:27b — Fast local reasoning (~18 GB VRAM)
- glm-4.7-flash — Reasoning and code generation (~25 GB VRAM)
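The Node.js 20+ and npm 10+ minimums above can be sanity-checked with a small POSIX-shell helper. This is a sketch, not part of NemoClaw; `meets_min` is a hypothetical name, and the version strings are examples of what `node --version` and `npm --version` print:

```shell
# Compare the major component of a version string (with or without a
# leading "v") against a required minimum major version.
meets_min() {
  major=${1#v}          # strip a leading "v", e.g. v20.11.1 -> 20.11.1
  major=${major%%.*}    # keep only the major component
  [ "$major" -ge "$2" ]
}

# Node.js 20+ and npm 10+ are required:
meets_min "v20.11.1" 20 && echo "node ok"   # example node --version output
meets_min "10.2.4" 10 && echo "npm ok"      # example npm --version output
```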

More models at ollama.com/search.