Vikingbot, built on the Nanobot project, delivers an OpenClaw-like bot deeply integrated with OpenViking, providing powerful knowledge management and memory retrieval capabilities.
Vikingbot works in both local mode (data stored under `~/.openviking/data/`) and remote server mode. Provider settings are inherited from the global `vlm` section, so there is no need to configure providers separately in the bot configuration.

Option 1: Install from PyPI (Simplest)

```shell
pip install "openviking[bot]"
```
Option 2: Install from source (for development)
Prerequisites
First, install uv (an extremely fast Python package installer):
```shell
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
```
Install from source (latest features, recommended for development)
```shell
git clone https://github.com/volcengine/OpenViking
cd OpenViking

# Create a virtual environment using Python 3.11 or higher
uv venv --python 3.11

# Activate environment
source .venv/bin/activate  # macOS/Linux
# .venv\Scripts\activate   # Windows

# Install dependencies (minimal)
uv pip install -e ".[bot]"

# Or install with optional features
uv pip install -e ".[bot,bot-langfuse,bot-telegram]"
```
Install only the features you need:
| Feature Group | Install Command | Description |
|---|---|---|
| Full | uv pip install -e ".[bot-full]" | All features included |
| Langfuse | uv pip install -e ".[bot-langfuse]" | LLM observability and tracing |
| FUSE | uv pip install -e ".[bot-fuse]" | OpenViking filesystem mount |
| Sandbox | uv pip install -e ".[bot-sandbox]" | Code execution sandbox |
| OpenCode | uv pip install -e ".[bot-opencode]" | OpenCode AI integration |
| Channel | Install Command |
|---|---|
| Telegram | uv pip install -e ".[bot-telegram]" |
| Feishu/Lark | uv pip install -e ".[bot-feishu]" |
| DingTalk | uv pip install -e ".[bot-dingtalk]" |
| Slack | uv pip install -e ".[bot-slack]" |
| QQ | uv pip install -e ".[bot-qq]" |
Multiple features can be combined:
```shell
uv pip install -e ".[bot,bot-langfuse,bot-telegram]"
```
> [!TIP]
> Configure vikingbot through the configuration file `~/.openviking/ov.conf`! Get API keys: OpenRouter (Global) · Brave Search (optional, for web search)
1. Initialize configuration
```shell
openviking-server --with-bot
```
This will automatically:

- Create the configuration file `~/.openviking/ov.conf`
- Create the bot data directory `~/.openviking/data/bot/`

2. Configure via ov.conf
Edit ~/.openviking/ov.conf to add your provider API keys (OpenRouter, OpenAI, etc.) and save the config.
3. Chat
```shell
# Send a single message directly
ov chat -m "What is 2+2?"

# Enter interactive chat mode (supports multi-turn conversations)
ov chat

# Show plain-text replies (no Markdown rendering)
ov chat --no-format
```
That's it! You have a working AI assistant in 2 minutes.
Talk to your vikingbot through Telegram, Discord, WhatsApp, Feishu, Mochat, DingTalk, Slack, Email, or QQ — anytime, anywhere.
For detailed configuration, please refer to CHANNEL.md.
🐈 vikingbot is capable of linking to the agent social network (agent community). Just send one message and your vikingbot joins automatically!
| Platform | How to Join (send this message to your bot) |
|---|---|
| Moltbook | Read https://moltbook.com/skill.md and follow the instructions to join Moltbook |
| ClawdChat | Read https://clawdchat.ai/skill.md and follow the instructions to join ClawdChat |
Simply send the command above to your vikingbot (via CLI or any chat channel), and it will handle the rest.
Config file: ~/.openviking/ov.conf (custom path can be set via environment variable OPENVIKING_CONFIG_FILE)
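For example, to point vikingbot at a custom config location (the path below is only an illustration):

```shell
# Use a custom config file instead of the default ~/.openviking/ov.conf
# (the path here is an arbitrary example)
export OPENVIKING_CONFIG_FILE="$HOME/my-configs/ov.conf"
echo "$OPENVIKING_CONFIG_FILE"
```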
> [!TIP]
> Vikingbot shares the same configuration file with OpenViking. Configuration items live under the `bot` field of the file, and global configurations such as `vlm`, `storage`, and `server` are merged automatically. There is no need to maintain a separate configuration file.
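A minimal sketch of this shared-file layout, assuming a global `vlm` section exists (its contents below are placeholders, not a reference):

```json
{
  "vlm": {
    "provider": "openrouter",
    "api_key": "sk-..."
  },
  "bot": {
    "agents": {
      "max_tool_iterations": 50
    }
  }
}
```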
> [!IMPORTANT]
> After modifying the configuration (by editing the file directly), restart the gateway service for the changes to take effect.
The bot connects to the remote OpenViking server, so start the OpenViking Server before use. By default, the OpenViking server information configured in `ov.conf` is used. If `root_api_key` is configured, multi-tenant mode is enabled. For details, see Multi-tenant.

```json
{
  "server": {
    "host": "127.0.0.1",
    "port": 1933,
    "root_api_key": "test"
  }
}
```
All configurations live under the `bot` field in `ov.conf`, and every item has a default value. The optional manual configuration items are:
- `agents`: Agent configuration
  - `max_tool_iterations`: Maximum number of tool-call cycles for a single round of conversation; results are returned directly if exceeded
  - `memory_window`: Upper limit of conversation rounds before the session is automatically committed to OpenViking
  - `gen_image_model`: Model used for generating images
- `gateway`: Gateway configuration
  - `host`: Gateway listening address, default `0.0.0.0`
  - `port`: Gateway listening port, default `18790`
- `sandbox`: Sandbox configuration
  - `mode`: Sandbox mode, either `shared` (all sessions share one workspace) or `private` (workspace isolated per channel and session). Default is `shared`.
- `ov_server`: OpenViking Server configuration. The server settings in `ov.conf` are used by default.
- `channels`: Message platform configuration, see Message Platform Configuration for details

```json
{
  "bot": {
    "agents": {
      "max_tool_iterations": 50,
      "memory_window": 50,
      "gen_image_model": "openai/doubao-seedream-4-5-251128"
    },
    "gateway": {
      "host": "0.0.0.0",
      "port": 18790
    },
    "sandbox": {
      "mode": "shared"
    },
    "ov_server": {
      "server_url": "http://127.0.0.1:1933",
      "root_api_key": "test"
    },
    "channels": [
      {
        "type": "feishu",
        "enabled": true,
        "ov_tools_enable": true,
        "appId": "",
        "appSecret": "",
        "allowFrom": []
      }
    ]
  }
}
```
Vikingbot provides 7 dedicated OpenViking tools:
| Tool Name | Description |
|---|---|
| `openviking_read` | Read OpenViking resources (supports three levels: abstract/overview/read) |
| `openviking_list` | List OpenViking resources |
| `openviking_search` | Semantic search over OpenViking resources |
| `openviking_add_resource` | Add local files as OpenViking resources |
| `openviking_grep` | Search OpenViking resources using regular expressions |
| `openviking_glob` | Match OpenViking resources using glob patterns |
| `openviking_memory_commit` | Commit the current session to OpenViking |
Vikingbot can also consume tools from third-party MCP (Model Context Protocol) servers (filesystem, GitHub, browsers, databases, etc.). Configure servers under tools.mcp_servers in ov.conf; each server's tools are registered when the agent starts and appear as mcp_<server>_<tool>.
```json
{
  "bot": {
    "tools": {
      "mcp_servers": {
        "filesystem": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
          "env": {},
          "tool_timeout": 30,
          "enabled_tools": ["*"]
        },
        "github": {
          "type": "streamableHttp",
          "url": "https://api.githubcopilot.com/mcp/",
          "headers": {"Authorization": "Bearer $GITHUB_TOKEN"},
          "enabled_tools": ["search_repositories", "create_issue"]
        }
      }
    }
  }
}
```
| Field | Description |
|---|---|
| `type` | Transport: `stdio` / `sse` / `streamableHttp`. Auto-detected when omitted (`stdio` if `command` is set, otherwise HTTP from `url`). |
| `command` | (stdio) Command to launch the server process (e.g. `npx`, `uvx`). |
| `args` | (stdio) Command arguments. |
| `env` | (stdio) Extra environment variables for the spawned server. |
| `url` | (sse / streamableHttp) Endpoint URL. |
| `headers` | (sse / streamableHttp) Custom request headers (e.g. `Authorization`). |
| `tool_timeout` | Per-call timeout in seconds (default 30). |
| `enabled_tools` | Tool allowlist. Accepts raw MCP names or wrapped `mcp_<server>_<tool>` names; `["*"]` exposes every tool. |
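The wrapped-name and allowlist rules above can be sketched as follows. This is an illustrative model with invented helper names, not vikingbot's actual implementation:

```python
# Illustrative sketch of the mcp_<server>_<tool> wrapping and allowlist
# rules described above -- not the actual vikingbot code.

def wrap_tool_name(server: str, tool: str) -> str:
    """An MCP tool is exposed to the agent as mcp_<server>_<tool>."""
    return f"mcp_{server}_{tool}"

def is_tool_enabled(server: str, tool: str, enabled_tools: list[str]) -> bool:
    """enabled_tools accepts raw MCP names, wrapped names, or ["*"] for all."""
    if "*" in enabled_tools:
        return True
    return tool in enabled_tools or wrap_tool_name(server, tool) in enabled_tools

print(wrap_tool_name("github", "create_issue"))  # mcp_github_create_issue
print(is_tool_enabled("github", "create_issue",
                      ["search_repositories", "create_issue"]))  # True
print(is_tool_enabled("filesystem", "read_file", ["*"]))         # True
```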
MCP servers are connected when the agent loop starts and closed automatically on shutdown. If a server has neither `command` nor `url`, it is skipped with a warning. Connection failures are logged, and the bot continues without that server's tools.
Vikingbot enables OpenViking hooks by default:
```json
{
  "hooks": ["vikingbot.hooks.builtins.openviking_hooks.hooks"]
}
```
| Hook | Function |
|---|---|
| `OpenVikingCompactHook` | Automatically submit session messages to OpenViking |
| `OpenVikingPostCallHook` | Post tool call hook (for testing purposes) |
Edit the config file directly:
```json
{
  "bot": {
    "agents": {
      "model": "openai/doubao-seed-2-0-pro-260215"
    }
  }
}
```
Provider configuration is read from OpenViking config (vlm section in ov.conf).
> [!TIP]
> - Groq provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.
> - Zhipu Coding Plan: If you're on Zhipu's coding plan, set `"apiBase": "https://open.bigmodel.cn/api/coding/paas/v4"` in your zhipu provider config.
> - MiniMax (Mainland China): If your API key is from MiniMax's mainland China platform (minimaxi.com), set `"apiBase": "https://api.minimaxi.com/v1"` in your minimax provider config.
> - MiniMax Recommended Models: `MiniMax-M2.7` (peak performance) and `MiniMax-M2.7-highspeed` (faster, more agile). Configure with `"model": "MiniMax-M2.7"` in your agent config.
| Provider | Purpose | Get API Key |
|---|---|---|
| `openrouter` | LLM (recommended, access to all models) | openrouter.ai |
| `anthropic` | LLM (Claude direct) | console.anthropic.com |
| `openai` | LLM (GPT direct) | platform.openai.com |
| `deepseek` | LLM (DeepSeek direct) | platform.deepseek.com |
| `groq` | LLM + voice transcription (Whisper) | console.groq.com |
| `gemini` | LLM (Gemini direct) | aistudio.google.com |
| `minimax` | LLM (MiniMax direct) | platform.minimax.io |
| `aihubmix` | LLM (API gateway, access to all models) | aihubmix.com |
| `dashscope` | LLM (Qwen) | dashscope.console.aliyun.com |
| `moonshot` | LLM (Moonshot/Kimi) | platform.moonshot.cn |
| `zhipu` | LLM (Zhipu GLM) | open.bigmodel.cn |
| `vllm` | LLM (local, any OpenAI-compatible server) | — |
vikingbot uses a Provider Registry (vikingbot/providers/registry.py) as the single source of truth.
Adding a new provider only takes 2 steps — no if-elif chains to touch.
Step 1. Add a ProviderSpec entry to PROVIDERS in vikingbot/providers/registry.py:
```python
ProviderSpec(
    name="myprovider",                   # config field name
    keywords=("myprovider", "mymodel"),  # model-name keywords for auto-matching
    env_key="MYPROVIDER_API_KEY",        # env var for LiteLLM
    display_name="My Provider",          # shown in `vikingbot status`
    litellm_prefix="myprovider",         # auto-prefix: model → myprovider/model
    skip_prefixes=("myprovider/",),      # don't double-prefix
)
```
Step 2. Add a field to ProvidersConfig in vikingbot/config/schema.py:
```python
class ProvidersConfig(BaseModel):
    ...
    myprovider: ProviderConfig = ProviderConfig()
```
That's it! Environment variables, model prefixing, config matching, and vikingbot status display will all work automatically.
Common ProviderSpec options:
| Field | Description | Example |
|---|---|---|
| `litellm_prefix` | Auto-prefix model names for LiteLLM | `"dashscope"` → `dashscope/qwen-max` |
| `skip_prefixes` | Don't prefix if model already starts with these | `("dashscope/", "openrouter/")` |
| `env_extras` | Additional env vars to set | `(("ZHIPUAI_API_KEY", "{api_key}"),)` |
| `model_overrides` | Per-model parameter overrides | `(("kimi-k2.5", {"temperature": 1.0}),)` |
| `is_gateway` | Can route any model (like OpenRouter) | `True` |
| `detect_by_key_prefix` | Detect gateway by API key prefix | `"sk-or-"` |
| `detect_by_base_keyword` | Detect gateway by API base URL | `"openrouter"` |
| `strip_model_prefix` | Strip existing prefix before re-prefixing | `True` (for AiHubMix) |
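The prefixing options above can be sketched like this. This is an illustrative model of the table's rules with an invented helper name; the real logic lives in vikingbot/providers/registry.py:

```python
# Illustrative sketch of litellm_prefix / skip_prefixes / strip_model_prefix
# semantics described above -- not the registry's actual code.

def apply_prefix(model: str, litellm_prefix: str,
                 skip_prefixes: tuple[str, ...] = (),
                 strip_model_prefix: bool = False) -> str:
    if strip_model_prefix and "/" in model:
        model = model.split("/", 1)[1]  # drop an existing provider prefix
    if any(model.startswith(p) for p in skip_prefixes):
        return model  # already prefixed; don't double-prefix
    return f"{litellm_prefix}/{model}"

print(apply_prefix("qwen-max", "dashscope"))  # dashscope/qwen-max
print(apply_prefix("dashscope/qwen-max", "dashscope",
                   skip_prefixes=("dashscope/",)))  # dashscope/qwen-max (unchanged)
```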
| Option | Default | Description |
|---|---|---|
| `tools.restrictToWorkspace` | `true` | When true, restricts all agent tools (shell, file read/write/edit, list) to the workspace directory. Prevents path traversal and out-of-scope access. |
| `channels.*.allowFrom` | `[]` (allow all) | Whitelist of user IDs. Empty = allow everyone; non-empty = only listed users can interact. |
| `channels.*.ov_tools_enable` | `true` | When false, disables OpenViking tools (`openviking_*`) and skips memory / user-profile context injection for this channel. Useful for lightweight channels that should not pull from OV memory. See #1352. |
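For example, a channel entry that only allows two users and disables OpenViking tools might look like this (the user IDs are placeholders, and the channel's credential fields are omitted):

```json
{
  "bot": {
    "channels": [
      {
        "type": "telegram",
        "enabled": true,
        "ov_tools_enable": false,
        "allowFrom": ["123456789", "987654321"]
      }
    ]
  }
}
```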
Langfuse integration for LLM observability and tracing.
<details>
<summary><b>Langfuse Configuration</b></summary>

Option 1: Local Deployment (Recommended for testing)
Deploy Langfuse locally using Docker:
```shell
# Navigate to the deployment script
cd deploy/docker

# Run the deployment script
./deploy_langfuse.sh
```
This will start Langfuse locally at http://localhost:3000 with pre-configured credentials.
Option 2: Langfuse Cloud
Configuration
Add to ~/.openviking/ov.conf:
```json
{
  "bot": {
    "langfuse": {
      "enabled": true,
      "secret_key": "sk-lf-vikingbot-secret-key-2026",
      "public_key": "pk-lf-vikingbot-public-key-2026",
      "base_url": "http://localhost:3000"
    }
  }
}
```
For Langfuse Cloud, use https://cloud.langfuse.com as the base_url.
Install Langfuse support:

```shell
uv pip install -e ".[bot-langfuse]"
```

Restart vikingbot:

```shell
vikingbot gateway
```
Features enabled:
vikingbot supports sandboxed execution for enhanced security.
By default, no sandbox configuration is needed in `ov.conf`:

- Backend: `direct` (runs code directly on the host)
- Mode: `shared` (single sandbox shared across all sessions)

You only need to add sandbox configuration when you want to change these defaults.
<details>
<summary><b>Sandbox Configuration Options</b></summary>

To use a different backend or mode:
```json
{
  "bot": {
    "sandbox": {
      "backend": "srt",
      "mode": "per-session"
    }
  }
}
```
Available Backends:
| Backend | Description |
|---|---|
| `direct` | (Default) Runs code directly on the host |
| `srt` | Uses Anthropic's SRT sandbox runtime |
Available Modes:
| Mode | Description |
|---|---|
| `shared` | (Default) Single sandbox shared across all sessions |
| `per-session` | Separate sandbox instance for each session |
Backend-specific Configuration (only needed when using that backend):
Direct Backend:
```json
{
  "bot": {
    "sandbox": {
      "backends": {
        "direct": {
          "restrictToWorkspace": false
        }
      }
    }
  }
}
```
SRT Backend:
```json
{
  "bot": {
    "sandbox": {
      "backend": "srt",
      "backends": {
        "srt": {
          "nodePath": "node",
          "network": {
            "allowedDomains": [],
            "deniedDomains": [],
            "allowLocalBinding": false
          },
          "filesystem": {
            "denyRead": [],
            "allowWrite": [],
            "denyWrite": []
          },
          "runtime": {
            "cleanupOnExit": true,
            "timeout": 300
          }
        }
      }
    }
  }
}
```
SRT Backend Setup:
The SRT backend uses @anthropic-ai/sandbox-runtime.
System Dependencies:
The SRT backend also requires these system packages to be installed:
- `ripgrep` (`rg`) - for text search
- `bubblewrap` (`bwrap`) - for sandbox isolation
- `socat` - for network proxy

Install on macOS:

```shell
brew install ripgrep bubblewrap socat
```

Install on Ubuntu/Debian:

```shell
sudo apt-get install -y ripgrep bubblewrap socat
```

Install on Fedora/CentOS:

```shell
sudo dnf install -y ripgrep bubblewrap socat
```
To verify installation:

```shell
npm list -g @anthropic-ai/sandbox-runtime
```

If not installed, install it manually:

```shell
npm install -g @anthropic-ai/sandbox-runtime
```
Node.js Path Configuration:
If the `node` command is not found in `PATH`, specify the full path in your config:
```json
{
  "bot": {
    "sandbox": {
      "backends": {
        "srt": {
          "nodePath": "/usr/local/bin/node"
        }
      }
    }
  }
}
```
To find your Node.js path:

```shell
which node
# or
which nodejs
```
| Command | Description |
|---|---|
| `ov chat -m "..."` | Send a single message to the agent |
| `ov chat` | Interactive chat mode |
| `ov chat --no-format` | Show plain-text replies (no Markdown) |