Vikingbot

Vikingbot, built on the Nanobot project, is designed to deliver an OpenClaw-like bot integrated with OpenViking.

✨ Core Features of OpenViking

Vikingbot is deeply integrated with OpenViking, providing powerful knowledge management and memory retrieval capabilities:

  • Dual local/remote modes: Supports local storage (~/.openviking/data/) and remote server mode
  • 7 dedicated Agent tools: Resource management, semantic search, regex search, glob search, memory search
  • Three-level content access: L0 (summary), L1 (overview), L2 (full content)
  • Automatic session memory submission: Conversation history is automatically saved to OpenViking
  • Model configuration: read from the OpenViking configuration (vlm section); no need to set the provider separately in the bot configuration

📦 Install

Option 1: Install from PyPI (Simplest)

bash
pip install "openviking[bot]"

Option 2: Install from source (for development)

Prerequisites

First, install uv (an extremely fast Python package installer):

bash
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

Install from source (latest features, recommended for development)

bash
git clone https://github.com/volcengine/OpenViking
cd OpenViking

# Create a virtual environment using Python 3.11 or higher
uv venv --python 3.11

# Activate environment
source .venv/bin/activate  # macOS/Linux
# .venv\Scripts\activate   # Windows

# Install dependencies (minimal)
uv pip install -e ".[bot]"

# Or install with optional features
uv pip install -e ".[bot,bot-langfuse,bot-telegram]"

Optional Dependencies

Install only the features you need:

| Feature Group | Install Command | Description |
|---|---|---|
| Full | `uv pip install -e ".[bot-full]"` | All features included |
| Langfuse | `uv pip install -e ".[bot-langfuse]"` | LLM observability and tracing |
| FUSE | `uv pip install -e ".[bot-fuse]"` | OpenViking filesystem mount |
| Sandbox | `uv pip install -e ".[bot-sandbox]"` | Code execution sandbox |
| OpenCode | `uv pip install -e ".[bot-opencode]"` | OpenCode AI integration |

Channels (chat apps)

| Channel | Install Command |
|---|---|
| Telegram | `uv pip install -e ".[bot-telegram]"` |
| Feishu/Lark | `uv pip install -e ".[bot-feishu]"` |
| DingTalk | `uv pip install -e ".[bot-dingtalk]"` |
| Slack | `uv pip install -e ".[bot-slack]"` |
| QQ | `uv pip install -e ".[bot-qq]"` |

Multiple features can be combined:

bash
uv pip install -e ".[bot,bot-langfuse,bot-telegram]"

🚀 Quick Start

[!TIP] Configure vikingbot through the configuration file ~/.openviking/ov.conf! Get API keys: OpenRouter (Global) · Brave Search (optional, for web search)

1. Initialize configuration

bash
openviking-server --with-bot

This will automatically:

  • Create a default config at ~/.openviking/ov.conf
  • Create bot startup files in the OpenViking workspace, default path is ~/.openviking/data/bot/
  • Start the OpenViking server with bot integration
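
To sanity-check the result, you can list the default files created (the paths are the defaults mentioned above; adjust them if you customized the workspace):

bash
# Default config file and bot workspace created by the command above
ls ~/.openviking/ov.conf ~/.openviking/data/bot/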

2. Configure via ov.conf

Edit ~/.openviking/ov.conf to add your provider API keys (OpenRouter, OpenAI, etc.) and save the config.
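
The exact provider schema is defined by OpenViking itself, so treat the following as a rough sketch only: assuming provider entries live directly under the vlm section and take an apiKey field (an assumption, not the authoritative layout), an OpenRouter key could look like:

json
{
  "vlm": {
    "openrouter": {
      "apiKey": "sk-or-..."
    }
  }
}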

3. Chat

bash
# Send a single message directly
ov chat -m "What is 2+2?"

# Enter interactive chat mode (supports multi-turn conversations)
ov chat

# Show plain-text replies (no Markdown rendering)
ov chat --no-format

That's it! You have a working AI assistant in 2 minutes.

Talk to your vikingbot through Telegram, Discord, WhatsApp, Feishu, Mochat, DingTalk, Slack, Email, or QQ — anytime, anywhere.

For detailed configuration, please refer to CHANNEL.md.

🌐 Agent Social Network

🐈 vikingbot can link itself to the agent social network (agent community). Just send one message and your vikingbot joins automatically!

| Platform | How to Join (send this message to your bot) |
|---|---|
| Moltbook | Read https://moltbook.com/skill.md and follow the instructions to join Moltbook |
| ClawdChat | Read https://clawdchat.ai/skill.md and follow the instructions to join ClawdChat |

Simply send the message above to your vikingbot (via CLI or any chat channel), and it will handle the rest.
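
For example, using the CLI (the message text is taken verbatim from the table above):

bash
ov chat -m "Read https://moltbook.com/skill.md and follow the instructions to join Moltbook"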

⚙️ Configuration

Config file: ~/.openviking/ov.conf (custom path can be set via environment variable OPENVIKING_CONFIG_FILE)

[!TIP] Vikingbot shares its configuration file with OpenViking. Bot settings live under the bot field of the file and automatically merge in global sections such as vlm, storage, and server; there is no separate configuration file to maintain.

[!IMPORTANT] After modifying the configuration (by editing the file directly), you need to restart the gateway service for changes to take effect.
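
Depending on how you started the bot, restarting means re-running either the combined server or the standalone gateway; both commands are the ones used elsewhere in this document:

bash
# Combined OpenViking server with bot integration
openviking-server --with-bot

# Or the standalone bot gateway
vikingbot gateway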

OpenViking Server Configuration

The bot connects to a remote OpenViking server, so start the OpenViking Server before use. By default, the OpenViking server information configured in ov.conf is used.

  • OpenViking default startup address is 127.0.0.1:1933
  • If root_api_key is configured, multi-tenant mode is enabled. For details, see Multi-tenant
  • OpenViking Server configuration example
json
{
  "server": {
    "host": "127.0.0.1",
    "port": 1933,
    "root_api_key": "test"
  }
}

Bot Configuration

All bot configuration lives under the bot field in ov.conf, and every item has a default value. The items you may want to set manually are described below:

  • agents: Agent configuration
    • max_tool_iterations: Maximum number of tool-call iterations for a single conversation turn; once exceeded, the result is returned directly
    • memory_window: Upper limit of conversation rounds for automatic session submission to OpenViking
    • gen_image_model: Model used for image generation
  • gateway: Gateway configuration
    • host: Gateway listening address, default value is 0.0.0.0
    • port: Gateway listening port, default value is 18790
  • sandbox: Sandbox configuration
    • mode: Sandbox mode; either shared (all sessions share one workspace) or private (workspace isolated per channel and session). Default value is shared.
  • ov_server: OpenViking Server configuration.
    • If not configured, the OpenViking server information configured in ov.conf is used by default
    • If you don't use the locally started OpenViking Server, you can configure the url and the corresponding root user's API Key here
      • root_api_key: In a multi-tenant scenario, the API key must have root privileges; otherwise the bot cannot automatically register multiple OpenViking users, which is how memory isolation is implemented.
      • account_id: The OpenViking account ID; defaults to default. All users under one OpenViking account share resources.
  • channels: Message platform configuration, see Message Platform Configuration for details
json
{
  "bot": {
    "agents": {
      "max_tool_iterations": 50,
      "memory_window": 50,
      "gen_image_model": "openai/doubao-seedream-4-5-251128"
    },
    "gateway": {
      "host": "0.0.0.0",
      "port": 18790
    },
    "sandbox": {
      "mode": "shared"
    },
    "ov_server": {
      "server_url": "http://127.0.0.1:1933",
      "root_api_key": "test"
    },
    "channels": [
      {
        "type": "feishu",
        "enabled": true,
        "ov_tools_enable": true,
        "appId": "",
        "appSecret": "",
        "allowFrom": []
      }
    ]
  }
}

OpenViking Agent Tools

Vikingbot provides 7 dedicated OpenViking tools:

| Tool Name | Description |
|---|---|
| `openviking_read` | Read OpenViking resources (supports three levels: abstract/overview/read) |
| `openviking_list` | List OpenViking resources |
| `openviking_search` | Semantic search over OpenViking resources |
| `openviking_add_resource` | Add local files as OpenViking resources |
| `openviking_grep` | Search OpenViking resources using regular expressions |
| `openviking_glob` | Match OpenViking resources using glob patterns |
| `openviking_memory_commit` | Commit the current session to OpenViking |

External MCP Servers

Vikingbot can also consume tools from third-party MCP (Model Context Protocol) servers (filesystem, GitHub, browsers, databases, etc.). Configure servers under tools.mcp_servers in ov.conf; each server's tools are registered when the agent starts and appear as mcp_<server>_<tool>.

json
{
  "bot": {
    "tools": {
      "mcp_servers": {
        "filesystem": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
          "env": {},
          "tool_timeout": 30,
          "enabled_tools": ["*"]
        },
        "github": {
          "type": "streamableHttp",
          "url": "https://api.githubcopilot.com/mcp/",
          "headers": {"Authorization": "Bearer $GITHUB_TOKEN"},
          "enabled_tools": ["search_repositories", "create_issue"]
        }
      }
    }
  }
}
| Field | Description |
|---|---|
| `type` | Transport: `stdio` / `sse` / `streamableHttp`. Auto-detected when omitted (`stdio` if `command` is set, otherwise HTTP from `url`). |
| `command` | (stdio) Command to launch the server process (e.g. `npx`, `uvx`). |
| `args` | (stdio) Command arguments. |
| `env` | (stdio) Extra environment variables for the spawned server. |
| `url` | (sse / streamableHttp) Endpoint URL. |
| `headers` | (sse / streamableHttp) Custom request headers (e.g. `Authorization`). |
| `tool_timeout` | Per-call timeout in seconds (default 30). |
| `enabled_tools` | Tool allowlist. Accepts raw MCP names or wrapped `mcp_<server>_<tool>` names; `["*"]` exposes every tool. |

MCP servers are connected when the agent loop starts and closed automatically on shutdown. If a server has neither command nor url, it is skipped with a warning. Connection failures are logged and the bot continues without that server's tools.
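
As a concrete illustration of the allowlist, the sketch below limits the filesystem server to two tools. The tool names read_file and list_directory are assumed to be exposed by @modelcontextprotocol/server-filesystem; raw and wrapped names can be mixed freely:

json
{
  "bot": {
    "tools": {
      "mcp_servers": {
        "filesystem": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
          "enabled_tools": ["read_file", "mcp_filesystem_list_directory"]
        }
      }
    }
  }
}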

OpenViking Hooks

Vikingbot enables OpenViking hooks by default:

json
{
  "hooks": ["vikingbot.hooks.builtins.openviking_hooks.hooks"]
}
| Hook | Function |
|---|---|
| OpenVikingCompactHook | Automatically submits session messages to OpenViking |
| OpenVikingPostCallHook | Post-tool-call hook (for testing purposes) |

Manual Configuration (Advanced)

Edit the config file directly:

json
{
  "bot": {
    "agents": {
      "model": "openai/doubao-seed-2-0-pro-260215"
    }
  }
}

Provider configuration is read from OpenViking config (vlm section in ov.conf).

Providers

[!TIP]

  • Groq provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.
  • Zhipu Coding Plan: If you're on Zhipu's coding plan, set "apiBase": "https://open.bigmodel.cn/api/coding/paas/v4" in your zhipu provider config.
  • MiniMax (Mainland China): If your API key is from MiniMax's mainland China platform (minimaxi.com), set "apiBase": "https://api.minimaxi.com/v1" in your minimax provider config.
  • MiniMax Recommended Models: MiniMax-M2.7 (peak performance) and MiniMax-M2.7-highspeed (faster, more agile). Configure with "model": "MiniMax-M2.7" in your agent config.
| Provider | Purpose | Get API Key |
|---|---|---|
| openrouter | LLM (recommended, access to all models) | openrouter.ai |
| anthropic | LLM (Claude direct) | console.anthropic.com |
| openai | LLM (GPT direct) | platform.openai.com |
| deepseek | LLM (DeepSeek direct) | platform.deepseek.com |
| groq | LLM + voice transcription (Whisper) | console.groq.com |
| gemini | LLM (Gemini direct) | aistudio.google.com |
| minimax | LLM (MiniMax direct) | platform.minimax.io |
| aihubmix | LLM (API gateway, access to all models) | aihubmix.com |
| dashscope | LLM (Qwen) | dashscope.console.aliyun.com |
| moonshot | LLM (Moonshot/Kimi) | platform.moonshot.cn |
| zhipu | LLM (Zhipu GLM) | open.bigmodel.cn |
| vllm | LLM (local, any OpenAI-compatible server) | — |
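
Putting the MiniMax tips above into config form, a sketch might look like the following; the nesting of provider entries under vlm is an assumption (see Manual Configuration above), while apiBase and the agent-level model field come straight from the tips:

json
{
  "vlm": {
    "minimax": {
      "apiKey": "...",
      "apiBase": "https://api.minimaxi.com/v1"
    }
  },
  "bot": {
    "agents": {
      "model": "MiniMax-M2.7"
    }
  }
}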
<details> <summary><b>Adding a New Provider (Developer Guide)</b></summary>

vikingbot uses a Provider Registry (vikingbot/providers/registry.py) as the single source of truth. Adding a new provider only takes 2 steps — no if-elif chains to touch.

Step 1. Add a ProviderSpec entry to PROVIDERS in vikingbot/providers/registry.py:

python
ProviderSpec(
    name="myprovider",                   # config field name
    keywords=("myprovider", "mymodel"),  # model-name keywords for auto-matching
    env_key="MYPROVIDER_API_KEY",        # env var for LiteLLM
    display_name="My Provider",          # shown in `vikingbot status`
    litellm_prefix="myprovider",         # auto-prefix: model → myprovider/model
    skip_prefixes=("myprovider/",),      # don't double-prefix
)

Step 2. Add a field to ProvidersConfig in vikingbot/config/schema.py:

python
class ProvidersConfig(BaseModel):
    ...
    myprovider: ProviderConfig = ProviderConfig()

That's it! Environment variables, model prefixing, config matching, and vikingbot status display will all work automatically.
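
A quick way to confirm the new provider is picked up is the status command mentioned above:

bash
vikingbot status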

Common ProviderSpec options:

| Field | Description | Example |
|---|---|---|
| `litellm_prefix` | Auto-prefix model names for LiteLLM | `"dashscope"` → `dashscope/qwen-max` |
| `skip_prefixes` | Don't prefix if the model already starts with these | `("dashscope/", "openrouter/")` |
| `env_extras` | Additional env vars to set | `(("ZHIPUAI_API_KEY", "{api_key}"),)` |
| `model_overrides` | Per-model parameter overrides | `(("kimi-k2.5", {"temperature": 1.0}),)` |
| `is_gateway` | Can route any model (like OpenRouter) | `True` |
| `detect_by_key_prefix` | Detect gateway by API key prefix | `"sk-or-"` |
| `detect_by_base_keyword` | Detect gateway by API base URL | `"openrouter"` |
| `strip_model_prefix` | Strip existing prefix before re-prefixing | `True` (for AiHubMix) |
</details>

Security

| Option | Default | Description |
|---|---|---|
| `tools.restrictToWorkspace` | `true` | When true, restricts all agent tools (shell, file read/write/edit, list) to the workspace directory. Prevents path traversal and out-of-scope access. |
| `channels.*.allowFrom` | `[]` (allow all) | Whitelist of user IDs. Empty = allow everyone; non-empty = only listed users can interact. |
| `channels.*.ov_tools_enable` | `true` | When false, disables OpenViking tools (`openviking_*`) and skips memory / user-profile context injection for this channel. Useful for lightweight channels that should not pull from OV memory. See #1352. |
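
Putting these options together, a restrictive setup might look like the sketch below; the channel type and user ID are placeholders, and channel credential fields are omitted:

json
{
  "bot": {
    "tools": {
      "restrictToWorkspace": true
    },
    "channels": [
      {
        "type": "telegram",
        "enabled": true,
        "ov_tools_enable": false,
        "allowFrom": ["123456789"]
      }
    ]
  }
}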

Observability (Optional)

Langfuse integration for LLM observability and tracing.

<details> <summary><b>Langfuse Configuration</b></summary>

Option 1: Local Deployment (Recommended for testing)

Deploy Langfuse locally using Docker:

bash
# Navigate to the deployment script
cd deploy/docker

# Run the deployment script
./deploy_langfuse.sh

This will start Langfuse locally at http://localhost:3000 with pre-configured credentials.

Option 2: Langfuse Cloud

  1. Sign up at langfuse.com
  2. Create a new project
  3. Copy the Secret Key and Public Key from project settings

Configuration

Add to ~/.openviking/ov.conf:

json
{
  "bot": {
    "langfuse": {
      "enabled": true,
      "secret_key": "sk-lf-vikingbot-secret-key-2026",
      "public_key": "pk-lf-vikingbot-public-key-2026",
      "base_url": "http://localhost:3000"
    }
  }
}

For Langfuse Cloud, use https://cloud.langfuse.com as the base_url.

Install Langfuse support:

bash
uv pip install -e ".[bot-langfuse]"

Restart vikingbot:

bash
vikingbot gateway

Features enabled:

  • Automatic trace creation for each conversation
  • Session and user tracking
  • LLM call monitoring
  • Token usage tracking
</details>

Sandbox

vikingbot supports sandboxed execution for enhanced security.

By default, no sandbox configuration is needed in ov.conf:

  • Default backend: direct (runs code directly on host)
  • Default mode: shared (single sandbox shared across all sessions)

You only need to add sandbox configuration when you want to change these defaults.

<details> <summary><b>Sandbox Configuration Options</b></summary>

To use a different backend or mode:

json
{
  "bot": {
    "sandbox": {
      "backend": "srt",
      "mode": "per-session"
    }
  }
}

Available Backends:

| Backend | Description |
|---|---|
| `direct` | (Default) Runs code directly on the host |
| `srt` | Uses Anthropic's SRT sandbox runtime |

Available Modes:

| Mode | Description |
|---|---|
| `shared` | (Default) Single sandbox shared across all sessions |
| `per-session` | Separate sandbox instance for each session |

Backend-specific Configuration (only needed when using that backend):

Direct Backend:

json
{
  "bot": {
    "sandbox": {
      "backends": {
        "direct": {
          "restrictToWorkspace": false
        }
      }
    }
  }
}

SRT Backend:

json
{
  "bot": {
    "sandbox": {
      "backend": "srt",
      "backends": {
        "srt": {
          "nodePath": "node",
          "network": {
            "allowedDomains": [],
            "deniedDomains": [],
            "allowLocalBinding": false
          },
          "filesystem": {
            "denyRead": [],
            "allowWrite": [],
            "denyWrite": []
          },
          "runtime": {
            "cleanupOnExit": true,
            "timeout": 300
          }
        }
      }
    }
  }
}

SRT Backend Setup:

The SRT backend uses @anthropic-ai/sandbox-runtime.

System Dependencies:

The SRT backend also requires these system packages to be installed:

  • ripgrep (rg) - for text search
  • bubblewrap (bwrap) - for sandbox isolation
  • socat - for network proxy

Install on macOS:

bash
brew install ripgrep bubblewrap socat

Install on Ubuntu/Debian:

bash
sudo apt-get install -y ripgrep bubblewrap socat

Install on Fedora/CentOS:

bash
sudo dnf install -y ripgrep bubblewrap socat

To verify installation:

bash
npm list -g @anthropic-ai/sandbox-runtime

If not installed, install it manually:

bash
npm install -g @anthropic-ai/sandbox-runtime

Node.js Path Configuration:

If the node command is not found in PATH, specify the full path in your config:

json
{
  "bot": {
    "sandbox": {
      "backends": {
        "srt": {
          "nodePath": "/usr/local/bin/node"
        }
      }
    }
  }
}

To find your Node.js path:

bash
which node
# or
which nodejs
</details>

CLI Reference

| Command | Description |
|---|---|
| `ov chat -m "..."` | Send a single message to the agent |
| `ov chat` | Interactive chat mode |
| `ov chat --no-format` | Show plain-text replies (no Markdown) |