docs/en/getting-started/04-setup-for-agent.md
Help the user install, configure, validate, and start OpenViking with the smallest viable path.
First determine which category the user belongs to.
**Path A: the user already has a config file.** Use this path if any of the following is true: `~/.openviking/ov.conf` already exists and the user only wants to validate and start the server.

Execution path:

1. Confirm `~/.openviking/ov.conf` exists.
2. Run `openviking-server doctor`.
3. Run `openviking-server`.

**Path B: fresh install with a new config.** Use this path if any of the following is true: there is no existing config and the user wants a guided first-time setup.

Execution path:

1. Run `openviking-server init`.
2. Run `openviking-server doctor`.
3. Run `openviking-server`.

**Path C: Docker.** Use this path if any of the following is true: the user prefers containers or cannot install the server locally.

Execution path:

1. Prepare `ov.conf` locally, or run `openviking-server init` inside the container.
2. Start the container with `docker run` or the repo's `docker-compose.yml`.
3. Verify the server via the `/health` endpoint.

**Path D: Windows.** Use this path if any of the following is true: the user runs natively on Windows (PowerShell or cmd).

Execution path:

1. Set `OPENVIKING_CONFIG_FILE` with Windows shell syntax.
2. Run `openviking-server doctor`.
3. Run `openviking-server`.

**Path E: build from source.** Only use this path if one of the following is true: no prebuilt wheel works on the user's platform, or the user explicitly asks to build from source.

Only then explain the required toolchain: Go, Rust, a C++ compiler, and CMake.
If the user has not provided a complete model configuration, ask first. Do not write the config file yet.
Ask:

- Which model provider do you want to use? (`openai`, `azure`, `volcengine`, `openai-codex`, or `ollama`)
- Have you already decided on the `api_base`, `api_key`, and `model` for both the dense embedding and the VLM?
- Which directory should `storage.workspace` use?

Provider-specific notes:

- `openai`: the default `api_base` is `https://api.openai.com/v1`.
- `azure`: requires `api_version = 2025-01-01-preview`.
- `volcengine`: the `api_base` is `https://ark.cn-beijing.volces.com/api/v3`.
- `openai-codex`: credentials are obtained via `openviking-server init`.
- If the user prefers an interactive setup, they can run `openviking-server init` directly instead of answering these questions.

If the user chooses Docker, also confirm:

- whether they will use `docker run` or `docker compose`;
- that host `~/.openviking` will be mounted to container `/app/.openviking`;
- whether the config should instead be injected via `OPENVIKING_CONF_CONTENT`.

If the user is on Windows, also confirm which shell they use (PowerShell or cmd), since the environment-variable syntax differs.

Only write `~/.openviking/ov.conf` after the user has confirmed all required values.
```json
{
  "storage": {
    "workspace": "..."
  },
  "embedding": {
    "dense": {
      "provider": "...",
      "api_base": "...",
      "api_key": "...",
      "model": "..."
    }
  },
  "vlm": {
    "provider": "...",
    "api_base": "...",
    "api_key": "...",
    "model": "..."
  }
}
```
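As a concrete illustration only, here is what a filled-in `ov.conf` might look like for the `openai` provider. The model names and workspace path are placeholders invented for this example, not recommendations from this guide; substitute whatever the user actually confirmed.

```json
{
  "storage": {
    "workspace": "/home/user/.openviking/workspace"
  },
  "embedding": {
    "dense": {
      "provider": "openai",
      "api_base": "https://api.openai.com/v1",
      "api_key": "sk-...",
      "model": "text-embedding-3-small"
    }
  },
  "vlm": {
    "provider": "openai",
    "api_base": "https://api.openai.com/v1",
    "api_key": "sk-...",
    "model": "gpt-4o"
  }
}
```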
Only add these optional fields when the provider requires them, when the README examples explicitly include them, or when the user explicitly asks for them:

- `dimension`
- `api_version`
- `max_concurrent`
- `temperature`
- `max_retries`

Make sure the package itself is installed and up to date:

```shell
pip install openviking --upgrade --force-reinstall
```
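When one of the optional keys does apply, it sits alongside the required keys in the same block. A sketch with invented values (`api_version` uses the Azure value quoted earlier; the `dimension` and `max_concurrent` numbers are made up for illustration):

```json
{
  "embedding": {
    "dense": {
      "provider": "azure",
      "api_base": "...",
      "api_key": "...",
      "model": "...",
      "api_version": "2025-01-01-preview",
      "dimension": 1536,
      "max_concurrent": 4
    }
  }
}
```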
After the user confirms the configuration, write `~/.openviking/ov.conf`, then run:

```shell
openviking-server doctor
openviking-server
```

For the fresh-install path, let `init` create the config interactively instead:

```shell
openviking-server init
openviking-server doctor
openviking-server
```
If the user already has a local config directory, prefer:

```shell
docker run --rm \
  -p 1933:1933 \
  -p 8020:8020 \
  -v ~/.openviking:/app/.openviking \
  ghcr.io/volcengine/openviking:latest
```
Notes:

- Inside the container the config path is `/app/.openviking/ov.conf` (the image sets `HOME=/app`).
- Mount host `~/.openviking` to container `/app/.openviking` so config, CLI config, and workspace data persist.

If the user prefers compose, the repo already includes a `docker-compose.yml` example with:

- image `ghcr.io/volcengine/openviking:latest`
- ports `1933:1933` and `8020:8020`
- volume `~/.openviking:/app/.openviking`

In that case, have the user run this from the repo root:
```shell
docker compose up -d
```
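The compose file described above can be sketched roughly as follows. This is an illustration assembled from the image, ports, and volume listed earlier, not a copy of the repo's actual `docker-compose.yml`; the service name is an assumption, and the container name mirrors the `docker exec -it openviking` command used later in this guide.

```yaml
services:
  openviking:
    image: ghcr.io/volcengine/openviking:latest
    container_name: openviking
    ports:
      - "1933:1933"
      - "8020:8020"
    volumes:
      - ~/.openviking:/app/.openviking
```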
If the user does not yet have `ov.conf`, there are two options:

1. Write `~/.openviking/ov.conf` on the host (it is mounted into the container), then start or restart the container.
2. Run init inside the running container:

```shell
docker exec -it openviking openviking-server init
```

The Dockerfile also supports injecting the full JSON config via `OPENVIKING_CONF_CONTENT` on first start. Only use that if the user explicitly wants it and all config values have already been confirmed.
After startup, verify:
```shell
curl http://localhost:1933/health
```
Prefer the prebuilt-wheel path first:
```shell
pip install openviking --upgrade --force-reinstall
```
After the config file is ready, set environment variables using the user’s shell.
PowerShell:

```powershell
$env:OPENVIKING_CONFIG_FILE = "$HOME/.openviking/ov.conf"
```

cmd:

```bat
set "OPENVIKING_CONFIG_FILE=%USERPROFILE%\.openviking\ov.conf"
```
Then run:
```shell
openviking-server doctor
openviking-server
```
If the user also wants CLI config:
PowerShell:

```powershell
$env:OPENVIKING_CLI_CONFIG_FILE = "$HOME/.openviking/ovcli.conf"
```

cmd:

```bat
set "OPENVIKING_CLI_CONFIG_FILE=%USERPROFILE%\.openviking\ovcli.conf"
```
Only after you have confirmed that the source-build path is necessary should you ask the user to prepare Go / Rust / C++ / CMake.
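Before asking the user to install anything, it can help to check what is already on the machine. A sketch; the exact binary names (`go`, `rustc`, `cmake`, `g++`) are generic guesses at the Go / Rust / C++ / CMake toolchain named above, and the user's distribution may use different ones:

```shell
# check_tool prints "<tool>: found" or "<tool>: missing" based on PATH lookup.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    printf '%s: found\n' "$1"
  else
    printf '%s: missing\n' "$1"
  fi
}

# Assumed binary names for the source-build toolchain.
for tool in go rustc cmake g++; do
  check_tool "$tool"
done
```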
Check first:
- whether `~/.openviking/ov.conf` exists;
- whether `--config` points to the wrong path.

Handling rule:

- Run `openviking-server doctor` and fix what it reports.

Typical signs:

- `provider` / `model` / `api_key` is missing;
- `openai-codex` is configured only for VLM, while embedding is still missing.

Handling rule:

- Fill in the missing fields; for `openai-codex`, remind the user that it mainly covers the VLM side and embedding still needs separate confirmation.

Check first:
- for `openai-codex`, whether OAuth has been completed via `openviking-server init`.

Handling rule:

```shell
openviking-server init
openviking-server doctor
```
Typical signs:

- the installed package is stale or partially installed.

Handling rule:

```shell
pip install openviking --upgrade --force-reinstall
```
Confirm first whether this is because:
Handling rule:
Check in this order:
Handling rule:
Check first:
- whether host `~/.openviking` is correctly mounted to `/app/.openviking`;
- whether `/app/.openviking/ov.conf` exists inside the container;
- whether `curl http://localhost:1933/health` succeeds;
- whether `openviking-server init` has been run inside the container.

Handling rule:
Do not write the config yet.
Guidance rules:
- Prefer `openviking-server init` for interactive setup.
- For `openai-codex`, remind them that it mainly solves the VLM side and embedding still needs to be configured separately.