OpenViking plugin for OpenCode. It injects your indexed code repos into the AI's context and auto-starts the OpenViking server when needed.
Install the latest OpenViking:

```bash
pip install openviking --upgrade
```

Then configure `~/.openviking/ov.conf`:

```json
{
  "storage": {
    "workspace": "/path/to/your/workspace"
  },
  "embedding": {
    "dense": {
      "provider": "openai",
      "model": "your-embedding-model",
      "api_key": "your-api-key",
      "api_base": "https://your-provider/v1",
      "dimension": 1024
    },
    "max_concurrent": 100
  },
  "vlm": {
    "provider": "openai",
    "model": "your-vlm-model",
    "api_key": "your-api-key",
    "api_base": "https://your-provider/v1"
  }
}
```
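After editing the config, a quick way to catch JSON typos is to run the file through Python's built-in `json.tool` module (any JSON validator works; `python3` on your PATH is assumed here):

```shell
# Validate ~/.openviking/ov.conf before starting the server.
# On a syntax error, json.tool reports it on stderr.
python3 -m json.tool ~/.openviking/ov.conf > /dev/null \
  && echo "ov.conf: valid JSON" \
  || echo "ov.conf: missing or invalid JSON"
```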
For other providers (Volcengine, Anthropic, DeepSeek, Ollama, etc.), see the OpenViking docs.
Before starting OpenCode, make sure the OpenViking server is running. If it's not already started:

```bash
openviking-server > /tmp/openviking.log 2>&1 &
```
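If you want this step to be safe to run repeatedly (for example from a shell profile or wrapper script), a small sketch using `pgrep` — assuming a POSIX-ish shell with `pgrep` available:

```shell
# Start openviking-server only if no instance is already running.
# pgrep -f matches against the full command line.
if ! pgrep -f openviking-server > /dev/null; then
  openviking-server > /tmp/openviking.log 2>&1 &
  echo "started openviking-server (logs in /tmp/openviking.log)"
else
  echo "openviking-server already running"
fi
```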
Add the plugin to `~/.config/opencode/opencode.json`:

```json
{
  "plugin": ["openviking-opencode"]
}
```
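If you already have an `opencode.json` with other settings, you can merge the entry in without clobbering them. A sketch using `jq` (assumed to be installed; the `plugin` array name comes from the snippet above):

```shell
# Append "openviking-opencode" to the plugin array, creating the file
# and the array if they do not exist yet. Requires jq.
CONFIG="$HOME/.config/opencode/opencode.json"
mkdir -p "$(dirname "$CONFIG")"
[ -f "$CONFIG" ] || echo '{}' > "$CONFIG"
jq '.plugin = ((.plugin // []) + ["openviking-opencode"] | unique)' \
  "$CONFIG" > "$CONFIG.tmp" && mv "$CONFIG.tmp" "$CONFIG"
```

Using `unique` keeps the operation idempotent, so running it twice does not duplicate the entry.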
Restart OpenCode — the skill is installed automatically.
Index a repo (just ask in chat):
"Add https://github.com/tiangolo/fastapi to OpenViking"
Search: once repos are indexed, the AI searches them automatically when relevant. You can also trigger it explicitly:

- "How does fastapi handle dependency injection?"
- "Use openviking to find how JWT tokens are verified"