examples/basic-usage/README.md
This example is the shortest path to understanding OpenViking's core Python SDK workflow:
initialize a client, ingest a resource, browse the viking:// filesystem, retrieve context,
and create a session that can later be committed into long-term memory.
It is intentionally SDK-first. If you want production deployment, shared access, or MCP client integration, use this example as the foundation and then move to the server and MCP guides linked below.
You will use `ls`, `tree`, and `read` to browse the tree, and `find`, `abstract`, `overview`, and `grep` to retrieve context.

OpenViking currently has three common integration paths:
| Mode | Best for | Recommended? |
|---|---|---|
| Embedded SDK | Single-process local experimentation | Yes, for first contact |
| HTTP server + SDK/CLI | Shared service, multi-session, multi-agent workloads | Yes, preferred for real deployments |
| MCP | Claude Code, Cursor, Claude Desktop, OpenClaw, and other MCP hosts | Yes, for tool-based client integration |
If you are building anything beyond a one-process local demo, prefer HTTP server mode over spawning isolated local processes repeatedly. For MCP specifically, follow the dedicated MCP Integration Guide.
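The decision in the table above can be sketched as a small helper. This is purely illustrative; the function name and its flags are not part of the SDK:

```python
def recommended_mode(shared_service: bool = False, mcp_host: bool = False) -> str:
    """Map deployment needs onto the three integration paths above.

    Both flags are illustrative names, not SDK options.
    """
    if mcp_host:
        # Claude Code, Cursor, Claude Desktop, and other MCP hosts
        return "mcp"
    if shared_service:
        # Shared service, multi-session, multi-agent workloads
        return "http-server"
    # Single-process local experimentation
    return "embedded-sdk"
```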
Install the SDK:

```bash
pip install openviking --upgrade --force-reinstall
```

Create `~/.openviking/ov.conf` (see the configuration section below), then clone the repository and run the example:

```bash
git clone https://github.com/volcengine/OpenViking.git
cd OpenViking/examples/basic-usage
python basic_usage.py
```
The script uses embedded mode by default:
```python
import openviking as ov

client = ov.OpenViking(path="./data")
client.initialize()
```
To point the same flow at a running server instead, switch to:
```python
import openviking as ov

client = ov.SyncHTTPClient(url="http://localhost:1933")
client.initialize()
```
See the dedicated Server Mode Quick Start for the recommended shared-service setup.
basic_usage.py walks through the same sequence most applications need:
- Ingest a resource under `viking://resources/...`
- Browse the hierarchy with `tree`
- Load content in tiers with `abstract`, `overview`, and `read`
- Search semantically with `find`
- Match literal text with `grep`

Use embedded mode for a local first run:
```python
import openviking as ov

client = ov.OpenViking(path="./data")
client.initialize()
```
Use HTTP client mode when OpenViking runs as a separate service:
```python
import openviking as ov

client = ov.SyncHTTPClient(url="http://localhost:1933")
client.initialize()
```
If server authentication is enabled, use a user_key for normal data access:
```python
client = ov.SyncHTTPClient(
    url="http://localhost:1933",
    api_key="<user-key>",
    agent_id="my-agent",
)
```
`root_key` is for administrative access. It does not work directly with tenant-scoped APIs such as
`add_resource`, `find`, or `ls` unless you also pass `account` and `user`.
See Authentication and Server Mode Quick Start.
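One way to keep that rule straight in application code is a small helper. This is an illustrative sketch, not an SDK API; the `account` and `user` keyword names for root access are assumptions based on the note above:

```python
def auth_kwargs(url, user_key=None, root_key=None, account=None, user=None):
    """Build client keyword arguments for the two authentication styles.

    user_key is the normal path for data APIs; root_key is administrative
    and must be paired with explicit tenant information.
    """
    if user_key is not None:
        return {"url": url, "api_key": user_key}
    if root_key is not None:
        if account is None or user is None:
            raise ValueError("root_key requires explicit account and user")
        return {"url": url, "api_key": root_key, "account": account, "user": user}
    # No authentication configured on the server
    return {"url": url}
```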
Add a URL, local file, or directory:
```python
# Ingest a URL asynchronously
result = client.add_resource(
    path="https://example.com/docs",
    wait=False,
)

# Ingest a local file
result = client.add_resource(path="/path/to/manual.pdf")

# Ingest a directory, with an instruction describing it
result = client.add_resource(
    path="/path/to/repo",
    instruction="This is a Python web application",
)
```
For scripts and demos, wait=True is fine. In long-running applications, it is often better to ingest
asynchronously and call wait_processed() when you actually need the indexed result.
OpenViking organizes context as a virtual filesystem:
```python
# List children of a directory
files = client.ls("viking://resources/")

# Show the hierarchy down to a depth limit
tree = client.tree("viking://resources/my-project", level_limit=3)

# Read full file content
content = client.read("viking://resources/my-project/README.md")
```
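Because `viking://` URIs follow ordinary URL syntax, the standard library can dissect them, which is handy for logging or routing. This helper is illustrative, not an SDK API:

```python
from urllib.parse import urlparse

def split_viking_uri(uri: str) -> tuple[str, str]:
    """Split a viking:// URI into its top-level scope and relative path."""
    parsed = urlparse(uri)
    if parsed.scheme != "viking":
        raise ValueError(f"not a viking:// URI: {uri}")
    # netloc carries the scope (e.g. "resources"); path is everything below it
    return parsed.netloc, parsed.path.lstrip("/")
```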
This same URI model applies to memories and skills as well:
- `viking://resources/`
- `viking://user/memories/`
- `viking://agent/memories/`
- `viking://agent/skills/`

Use `find` for fast semantic search and `search` for more advanced retrieval:
```python
results = client.find(
    query="how does authentication work",
    target_uri="viking://resources/my-project",
    limit=5,
)

results = client.search(
    query="database configuration and failure handling",
    target_uri="viking://resources/",
    limit=10,
)
```
Use tiered loading after retrieval:
```python
uri = "viking://resources/my-project/docs/api.md"

abstract = client.abstract(uri)   # smallest tier
overview = client.overview(uri)   # mid tier
content = client.read(uri)        # full content
```
Use grep when you need literal text matching instead of semantic retrieval:
```python
result = client.grep("viking://resources/my-project", "Agent", case_insensitive=True)
matches = result.get("matches", [])
```
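If you want to print those matches, a small formatter helps. The match shape assumed here (`uri` plus `line` keys) is for illustration only; inspect a real result from `client.grep()` to confirm the actual field names:

```python
def format_matches(matches, limit=20):
    """Render grep-style match dicts as 'uri: line' strings.

    The 'uri' and 'line' keys are assumptions for illustration, not a
    documented result schema.
    """
    return [f"{m.get('uri', '?')}: {m.get('line', '')}" for m in matches[:limit]]
```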
The example script creates a session and appends messages:
```python
session_info = client.create_session()
session_id = session_info["session_id"]

client.add_message(session_id, "user", "I prefer TypeScript over JavaScript")
client.add_message(session_id, "assistant", "Understood. I will use TypeScript where appropriate.")
```
To extract durable memories from that conversation, commit the session:
```python
client.commit_session(session_id)
```
After commit, you can retrieve those memories through normal search APIs:
```python
memories = client.find(
    query="user programming preferences",
    target_uri="viking://user/memories/",
)
```
Create `~/.openviking/ov.conf` with storage, embedding, and VLM settings. A minimal local setup looks like this:

```json
{
  "server": { "host": "127.0.0.1", "port": 1933 },
  "storage": {
    "workspace": "~/.openviking/data"
  },
  "embedding": {
    "dense": {
      "provider": "openai",
      "api_key": "your-api-key",
      "model": "text-embedding-3-large",
      "dimension": 3072
    }
  },
  "vlm": {
    "provider": "openai",
    "api_key": "your-api-key",
    "model": "gpt-4o"
  }
}
```
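Before starting the server, a quick sanity check can catch a missing section. The required-key set below mirrors this minimal example and is an assumption, not the authoritative configuration schema:

```python
import json
from pathlib import Path

def check_conf(path="~/.openviking/ov.conf"):
    """Parse ov.conf and return which of the example's sections are missing."""
    conf = json.loads(Path(path).expanduser().read_text())
    required = {"storage", "embedding", "vlm"}  # sections used in this example
    return sorted(required - conf.keys())
```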
You can also use Volcengine or Azure OpenAI. For current provider-specific examples, check the main README and the Configuration Guide.
| Issue | What to check |
|---|---|
| ImportError or local extension issues | Reinstall `openviking`; if developing from source, ensure local build dependencies are available. |
| Connection refused in HTTP mode | Start `openviking-server` and verify `http://localhost:1933/health`. |
| Tenant/auth errors | Prefer `user_key` for normal data APIs; use `root_key` only with explicit tenant headers. |
| Slow or empty search results right after ingestion | Wait for `wait_processed()` or ingest with `wait=True`. |
| Multiple clients or sessions competing for local storage | Use HTTP server mode instead of spinning up separate local processes. |
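For the connection-refused case, a standard-library probe of the health endpoint can tell a script whether the server is reachable (the `/health` path is the one used in this example):

```python
from urllib.error import URLError
from urllib.request import urlopen

def server_is_up(base="http://localhost:1933", timeout=2.0):
    """Return True if the OpenViking server answers GET /health with 200."""
    url = base.rstrip("/") + "/health"
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        # Refused connections and timeouts both mean "not reachable"
        return False
```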
Apache License 2.0