nanobot is an ultra-lightweight personal AI agent inspired by OpenClaw.
⚡️ Delivers core agent functionality with 99% fewer lines of code.
Real-time line count: run `bash core_agent_lines.sh` to verify anytime.
nanobot is for educational, research, and technical exchange purposes only. It is unrelated to crypto and does not involve any official token or coin.
🪶 Ultra-Lightweight: A lightweight implementation built for stable, long-running AI agents.
🔬 Research-Ready: Clean, readable code that's easy to understand, modify, and extend for research.
⚡️ Lightning Fast: Minimal footprint means faster startup, lower resource usage, and quicker iterations.
Easy-to-Use: One click to deploy and you're ready to go.
[!IMPORTANT] This README may describe features that are available first in the latest source code. If you want the newest features and experiments, install from source. If you want the most stable day-to-day experience, install from PyPI or with uv.
Install from source (latest features, experimental changes may land here first; recommended for development)
git clone https://github.com/HKUDS/nanobot.git
cd nanobot
pip install -e .
Install with uv (stable release, fast)
uv tool install nanobot-ai
Install from PyPI (stable release)
pip install nanobot-ai
PyPI / pip
pip install -U nanobot-ai
nanobot --version
uv
uv tool upgrade nanobot-ai
nanobot --version
Using WhatsApp? Rebuild the local bridge after upgrading:
rm -rf ~/.nanobot/bridge
nanobot channels login whatsapp
[!TIP] Set your API key in ~/.nanobot/config.json. Get API keys: OpenRouter (Global). For other LLM providers, please see the Providers section.
For web search capability setup, please see Web Search.
1. Initialize
nanobot onboard
Use nanobot onboard --wizard if you want the interactive setup wizard.
2. Configure (~/.nanobot/config.json)
Configure these two parts in your config (other options have defaults).
Set your API key (e.g. OpenRouter, recommended for global users):
{
"providers": {
"openrouter": {
"apiKey": "sk-or-v1-xxx"
}
}
}
Set your model (optionally pin a provider; defaults to auto-detection):
{
"agents": {
"defaults": {
"model": "anthropic/claude-opus-4-5",
"provider": "openrouter"
}
}
}
3. Chat
nanobot agent
That's it! You have a working AI agent in 2 minutes.
Connect nanobot to your favorite chat platform. Want to build your own? See the Channel Plugin Guide.
| Channel | What you need |
|---|---|
| Telegram | Bot token from @BotFather |
| Discord | Bot token + Message Content intent |
| WhatsApp | QR code scan (nanobot channels login whatsapp) |
| WeChat (Weixin) | QR code scan (nanobot channels login weixin) |
| Feishu | App ID + App Secret |
| DingTalk | App Key + App Secret |
| Slack | Bot token + App-Level token |
| Matrix | Homeserver URL + Access token |
| Email | IMAP/SMTP credentials |
| QQ | App ID + App Secret |
| Wecom | Bot ID + Bot Secret |
| Mochat | Claw token (auto-setup available) |
1. Create a bot
Message @BotFather and send /newbot, then follow the prompts.
2. Configure
{
"channels": {
"telegram": {
"enabled": true,
"token": "YOUR_BOT_TOKEN",
"allowFrom": ["YOUR_USER_ID"]
}
}
}
You can find your User ID in Telegram settings. It is shown as @yourUserId; copy this value without the @ symbol and paste it into the config file.
3. Run
nanobot gateway
Uses Socket.IO WebSocket by default, with HTTP polling fallback.
1. Ask nanobot to set up Mochat for you
Simply send this message to nanobot (replace xxx@xxx with your real email):
Read https://raw.githubusercontent.com/HKUDS/MoChat/refs/heads/main/skills/nanobot/skill.md and register on MoChat. My Email account is xxx@xxx Bind me as your owner and DM me on MoChat.
nanobot will automatically register, configure ~/.nanobot/config.json, and connect to Mochat.
2. Restart gateway
nanobot gateway
That's it! nanobot handles the rest.
<details> <summary>Manual configuration (advanced)</summary>If you prefer to configure manually, add the following to ~/.nanobot/config.json:
Keep claw_token private. It should only be sent in the X-Claw-Token header to your Mochat API endpoint.
{
"channels": {
"mochat": {
"enabled": true,
"base_url": "https://mochat.io",
"socket_url": "https://mochat.io",
"socket_path": "/socket.io",
"claw_token": "claw_xxx",
"agent_user_id": "6982abcdef",
"sessions": ["*"],
"panels": ["*"],
"reply_delay_mode": "non-mention",
"reply_delay_ms": 120000
}
}
}
1. Create a bot
2. Enable intents
3. Get your User ID
4. Configure
{
"channels": {
"discord": {
"enabled": true,
"token": "YOUR_BOT_TOKEN",
"allowFrom": ["YOUR_USER_ID"],
"groupPolicy": "mention"
}
}
}
groupPolicy controls how the bot responds in group channels:
- "mention" (default): only respond when @mentioned
- "open": respond to all messages
DMs always get a response when the sender is in allowFrom.
If you set the group policy to "open", create new threads as private threads and then @ the bot into them. Otherwise both the thread itself and the channel you spawned it in will each spawn a bot session.
5. Invite the bot
Invite the bot with the Send Messages and Read Message History permissions.
6. Run
nanobot gateway
Install Matrix dependencies first:
pip install nanobot-ai[matrix]
1. Create/choose a Matrix account
Any homeserver works (e.g. matrix.org).
2. Get credentials
You need your userId (example: @nanobot:matrix.org) and password. (Note: accessToken and deviceId are still supported for legacy reasons, but for reliable encryption, password login is recommended instead. If a password is provided, accessToken and deviceId are ignored.)
3. Configure
{
"channels": {
"matrix": {
"enabled": true,
"homeserver": "https://matrix.org",
"userId": "@nanobot:matrix.org",
"password": "mypasswordhere",
"e2eeEnabled": true,
"allowFrom": ["@your_user:matrix.org"],
"groupPolicy": "open",
"groupAllowFrom": [],
"allowRoomMentions": false,
"maxMediaBytes": 20971520
}
}
}
Keep a persistent matrix-store; encrypted session state is lost if it changes across restarts.
| Option | Description |
|---|---|
allowFrom | User IDs allowed to interact. Empty denies all; use ["*"] to allow everyone. |
groupPolicy | open (default), mention, or allowlist. |
groupAllowFrom | Room allowlist (used when policy is allowlist). |
allowRoomMentions | Accept @room mentions in mention mode. |
e2eeEnabled | E2EE support (default true). Set false for plaintext-only. |
maxMediaBytes | Max attachment size (default 20MB). Set 0 to block all media. |
4. Run
nanobot gateway
Requires Node.js ≥ 18.
1. Link device
nanobot channels login whatsapp
# Scan QR with WhatsApp → Settings → Linked Devices
2. Configure
{
"channels": {
"whatsapp": {
"enabled": true,
"allowFrom": ["+1234567890"]
}
}
}
3. Run (two terminals)
# Terminal 1
nanobot channels login whatsapp
# Terminal 2
nanobot gateway
WhatsApp bridge updates are not applied automatically for existing installations. After upgrading nanobot, rebuild the local bridge with:
rm -rf ~/.nanobot/bridge && nanobot channels login whatsapp
</details> <details> <summary><b>Feishu</b></summary>
Uses a WebSocket long connection, so no public IP is required.
1. Create a Feishu bot
- Permissions: im:message (send messages) and im:message.p2p_msg:readonly (receive messages).
- Permission: cardkit:card:write (often labeled Create and update cards in the Feishu developer console). Required for CardKit entities and streamed assistant text. Older apps may not have it yet: open Permission management, enable the scope, then publish a new app version if the console requires it.
- If you can't enable cardkit:card:write, set "streaming": false under channels.feishu (see below). The bot still works; replies use normal interactive cards without token-by-token streaming.
- Event subscription: im.message.receive_v1 (receive messages).
2. Configure
{
"channels": {
"feishu": {
"enabled": true,
"appId": "cli_xxx",
"appSecret": "xxx",
"encryptKey": "",
"verificationToken": "",
"allowFrom": ["ou_YOUR_OPEN_ID"],
"groupPolicy": "mention",
"streaming": true
}
}
}
- streaming defaults to true. Use false if your app does not have cardkit:card:write (see permissions above).
- encryptKey and verificationToken are optional for Long Connection mode.
- allowFrom: Add your open_id (find it in nanobot logs when you message the bot). Use ["*"] to allow all users.
- groupPolicy: "mention" (default: respond only when @mentioned) or "open" (respond to all group messages). Private chats always get a response.
3. Run
nanobot gateway
[!TIP] Feishu uses WebSocket to receive messages: no webhook or public IP needed!
</details> <details> <summary><b>QQ</b></summary>
Uses the botpy SDK with WebSocket, so no public IP is required. Currently supports private messages only.
1. Register & create bot
2. Set up sandbox for testing
3. Configure
- allowFrom: Add your openid (find it in nanobot logs when you message the bot). Use ["*"] for public access.
- msgFormat: Optional. Use "plain" (default) for maximum compatibility with legacy QQ clients, or "markdown" for richer formatting on newer clients.
- For production: submit a review in the bot console and publish. See the QQ Bot Docs for the full publishing flow.
{
"channels": {
"qq": {
"enabled": true,
"appId": "YOUR_APP_ID",
"secret": "YOUR_APP_SECRET",
"allowFrom": ["YOUR_OPENID"],
"msgFormat": "plain"
}
}
}
4. Run
nanobot gateway
Now send a message to the bot from QQ: it should respond!
</details> <details> <summary><b>DingTalk (钉钉)</b></summary>Uses Stream Mode, so no public IP is required.
1. Create a DingTalk bot
2. Configure
{
"channels": {
"dingtalk": {
"enabled": true,
"clientId": "YOUR_APP_KEY",
"clientSecret": "YOUR_APP_SECRET",
"allowFrom": ["YOUR_STAFF_ID"]
}
}
}
allowFrom: Add your staff ID. Use ["*"] to allow all users.
3. Run
nanobot gateway
Uses Socket Mode, so no public URL is required.
1. Create a Slack app
2. Configure the app
- Enable Socket Mode, generate an app-level token with the connections:write scope, and copy it (xapp-...)
- Add bot token scopes: chat:write, reactions:write, app_mentions:read
- Subscribe to bot events: message.im, message.channels, app_mention, then Save Changes
- Install the app and copy the bot token (xoxb-...)
3. Configure nanobot
{
"channels": {
"slack": {
"enabled": true,
"botToken": "xoxb-...",
"appToken": "xapp-...",
"allowFrom": ["YOUR_SLACK_USER_ID"],
"groupPolicy": "mention"
}
}
}
4. Run
nanobot gateway
DM the bot directly or @mention it in a channel: it should respond!
[!TIP]
- groupPolicy: "mention" (default: respond only when @mentioned), "open" (respond to all channel messages), or "allowlist" (restrict to specific channels).
- DM policy defaults to open. Set "dm": {"enabled": false} to disable DMs.
</details> <details> <summary><b>Email</b></summary>
Give nanobot its own email account. It polls IMAP for incoming mail and replies via SMTP, like a personal email assistant.
1. Get credentials (Gmail example)
Enable 2-Step Verification, then create an app password for your account ([email protected]).
2. Configure
- consentGranted must be true to allow mailbox access. This is a safety gate; set false to fully disable.
- allowFrom: Add your email address. Use ["*"] to accept emails from anyone.
- smtpUseTls and smtpUseSsl default to true/false respectively, which is correct for Gmail (port 587 + STARTTLS). No need to set them explicitly.
- Set "autoReplyEnabled": false if you only want to read/analyze emails without sending automatic replies.
- allowedAttachmentTypes: Save inbound attachments matching these MIME types. Use ["*"] for all, e.g. ["application/pdf", "image/*"] (default [] = disabled).
- maxAttachmentSize: Max size per attachment in bytes (default 2000000 / 2MB).
- maxAttachmentsPerEmail: Max attachments to save per email (default 5).
{
"channels": {
"email": {
"enabled": true,
"consentGranted": true,
"imapHost": "imap.gmail.com",
"imapPort": 993,
"imapUsername": "[email protected]",
"imapPassword": "your-app-password",
"smtpHost": "smtp.gmail.com",
"smtpPort": 587,
"smtpUsername": "[email protected]",
"smtpPassword": "your-app-password",
"fromAddress": "[email protected]",
"allowFrom": ["[email protected]"],
"allowedAttachmentTypes": ["application/pdf", "image/*"]
}
}
}
3. Run
nanobot gateway
Uses HTTP long-poll with QR-code login via the ilinkai personal WeChat API. No local WeChat desktop client is required.
1. Install with WeChat support
pip install "nanobot-ai[weixin]"
2. Configure
{
"channels": {
"weixin": {
"enabled": true,
"allowFrom": ["YOUR_WECHAT_USER_ID"]
}
}
}
- allowFrom: Add the sender ID you see in nanobot logs for your WeChat account. Use ["*"] to allow all users.
- token: Optional. If omitted, log in interactively and nanobot will save the token for you.
- routeTag: Optional. When your upstream Weixin deployment requires request routing, nanobot sends it as the SKRouteTag header.
- stateDir: Optional. Defaults to nanobot's runtime directory for Weixin state.
- pollTimeout: Optional long-poll timeout in seconds.
3. Login
nanobot channels login weixin
Use --force to re-authenticate and ignore any saved token:
nanobot channels login weixin --force
4. Run
nanobot gateway
Here we use wecom-aibot-sdk-python (community Python version of the official @wecom/aibot-node-sdk).
Uses a WebSocket long connection, so no public IP is required.
1. Install the optional dependency
pip install nanobot-ai[wecom]
2. Create a WeCom AI Bot
Go to the WeCom admin console → Intelligent Robot → Create Robot → select API mode with long connection. Copy the Bot ID and Secret.
3. Configure
{
"channels": {
"wecom": {
"enabled": true,
"botId": "your_bot_id",
"secret": "your_bot_secret",
"allowFrom": ["your_id"]
}
}
}
4. Run
nanobot gateway
nanobot can link to the agent social network (agent community). Just send one message and your nanobot joins automatically!
| Platform | How to Join (send this message to your bot) |
|---|---|
| Moltbook | Read https://moltbook.com/skill.md and follow the instructions to join Moltbook |
| ClawdChat | Read https://clawdchat.ai/skill.md and follow the instructions to join ClawdChat |
Simply send the command above to your nanobot (via CLI or any chat channel), and it will handle the rest.
Config file: ~/.nanobot/config.json
[!NOTE] If your config file is older than the current schema, you can refresh it without overwriting your existing values: run nanobot onboard, then answer N when asked whether to overwrite the config. nanobot will merge in missing default fields and keep your current settings.
Instead of storing secrets directly in config.json, you can use ${VAR_NAME} references that are resolved from environment variables at startup:
{
"channels": {
"telegram": { "token": "${TELEGRAM_TOKEN}" },
"email": {
"imapPassword": "${IMAP_PASSWORD}",
"smtpPassword": "${SMTP_PASSWORD}"
}
},
"providers": {
"groq": { "apiKey": "${GROQ_API_KEY}" }
}
}
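The substitution rule is simple to reason about. Below is a minimal, illustrative sketch of resolving `${VAR_NAME}` references from the environment (not nanobot's actual implementation; `resolve_env_refs` is a hypothetical helper, and leaving unset variables untouched is an assumption):

```python
import os
import re

def resolve_env_refs(value):
    """Recursively replace ${VAR_NAME} references with environment values.

    Unset variables are left as-is so a missing secret is easy to spot.
    """
    if isinstance(value, dict):
        return {k: resolve_env_refs(v) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve_env_refs(v) for v in value]
    if isinstance(value, str):
        return re.sub(
            r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}",
            lambda m: os.environ.get(m.group(1), m.group(0)),
            value,
        )
    return value

os.environ["TELEGRAM_TOKEN"] = "123:abc"
config = {"channels": {"telegram": {"token": "${TELEGRAM_TOKEN}"}}}
print(resolve_env_refs(config)["channels"]["telegram"]["token"])  # 123:abc
```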
For systemd deployments, use EnvironmentFile= in the service unit to load variables from a file that only the deploying user can read:
# /etc/systemd/system/nanobot.service (excerpt)
[Service]
EnvironmentFile=/home/youruser/nanobot_secrets.env
User=nanobot
ExecStart=...
# /home/youruser/nanobot_secrets.env (mode 600, owned by youruser)
TELEGRAM_TOKEN=your-token-here
IMAP_PASSWORD=your-password-here
[!TIP]
- Voice transcription: Voice messages (Telegram, WhatsApp) are automatically transcribed using Whisper. By default Groq is used (free tier). Set "transcriptionProvider": "openai" under channels to use OpenAI Whisper instead; the API key is picked from the matching provider config.
- MiniMax Coding Plan: Exclusive discount links for the nanobot community: Overseas · Mainland China
- MiniMax (Mainland China): If your API key is from MiniMax's mainland China platform (minimaxi.com), set "apiBase": "https://api.minimaxi.com/v1" in your minimax provider config.
- VolcEngine / BytePlus Coding Plan: Use the dedicated providers volcengineCodingPlan or byteplusCodingPlan instead of the pay-per-use volcengine/byteplus providers.
- Zhipu Coding Plan: If you're on Zhipu's coding plan, set "apiBase": "https://open.bigmodel.cn/api/coding/paas/v4" in your zhipu provider config.
- Alibaba Cloud BaiLian: If you're using Alibaba Cloud BaiLian's OpenAI-compatible endpoint, set "apiBase": "https://dashscope.aliyuncs.com/compatible-mode/v1" in your dashscope provider config.
- Step Fun (Mainland China): If your API key is from Step Fun's mainland China platform (stepfun.com), set "apiBase": "https://api.stepfun.com/v1" in your stepfun provider config.
| Provider | Purpose | Get API Key |
|---|---|---|
custom | Any OpenAI-compatible endpoint | – |
openrouter | LLM (recommended, access to all models) | openrouter.ai |
volcengine | LLM (VolcEngine, pay-per-use) | Coding Plan ยท volcengine.com |
byteplus | LLM (VolcEngine international, pay-per-use) | Coding Plan ยท byteplus.com |
anthropic | LLM (Claude direct) | console.anthropic.com |
azure_openai | LLM (Azure OpenAI) | portal.azure.com |
openai | LLM + Voice transcription (Whisper) | platform.openai.com |
deepseek | LLM (DeepSeek direct) | platform.deepseek.com |
groq | LLM + Voice transcription (Whisper, default) | console.groq.com |
minimax | LLM (MiniMax direct) | platform.minimaxi.com |
gemini | LLM (Gemini direct) | aistudio.google.com |
aihubmix | LLM (API gateway, access to all models) | aihubmix.com |
siliconflow | LLM (SiliconFlow/硅基流动) | siliconflow.cn |
dashscope | LLM (Qwen) | dashscope.console.aliyun.com |
moonshot | LLM (Moonshot/Kimi) | platform.moonshot.cn |
zhipu | LLM (Zhipu GLM) | open.bigmodel.cn |
mimo | LLM (MiMo) | platform.xiaomimimo.com |
ollama | LLM (local, Ollama) | – |
mistral | LLM | docs.mistral.ai |
stepfun | LLM (Step Fun/阶跃星辰) | platform.stepfun.com |
ovms | LLM (local, OpenVINO Model Server) | docs.openvino.ai |
vllm | LLM (local, any OpenAI-compatible server) | – |
openai_codex | LLM (Codex, OAuth) | nanobot provider login openai-codex |
github_copilot | LLM (GitHub Copilot, OAuth) | nanobot provider login github-copilot |
qianfan | LLM (Baidu Qianfan) | cloud.baidu.com |
Codex uses OAuth instead of API keys. Requires a ChatGPT Plus or Pro account.
No providers.openaiCodex block is needed in config.json; nanobot provider login stores the OAuth session outside config.
1. Login:
nanobot provider login openai-codex
2. Set model (merge into ~/.nanobot/config.json):
{
"agents": {
"defaults": {
"model": "openai-codex/gpt-5.1-codex"
}
}
}
3. Chat:
nanobot agent -m "Hello!"
# Target a specific workspace/config locally
nanobot agent -c ~/.nanobot-telegram/config.json -m "Hello!"
# One-off workspace override on top of that config
nanobot agent -c ~/.nanobot-telegram/config.json -w /tmp/nanobot-telegram-test -m "Hello!"
Docker users: use docker run -it for interactive OAuth login.
</details> <details> <summary><b>GitHub Copilot (OAuth)</b></summary>
GitHub Copilot uses OAuth instead of API keys. Requires a GitHub account with a Copilot plan configured.
No providers.githubCopilot block is needed in config.json; nanobot provider login stores the OAuth session outside config.
1. Login:
nanobot provider login github-copilot
2. Set model (merge into ~/.nanobot/config.json):
{
"agents": {
"defaults": {
"model": "github-copilot/gpt-4.1"
}
}
}
3. Chat:
nanobot agent -m "Hello!"
# Target a specific workspace/config locally
nanobot agent -c ~/.nanobot-telegram/config.json -m "Hello!"
# One-off workspace override on top of that config
nanobot agent -c ~/.nanobot-telegram/config.json -w /tmp/nanobot-telegram-test -m "Hello!"
Docker users: use docker run -it for interactive OAuth login.
</details> <details> <summary><b>Custom Provider (Any OpenAI-compatible API)</b></summary>
Connects directly to any OpenAI-compatible endpoint: LM Studio, llama.cpp, Together AI, Fireworks, Azure OpenAI, or any self-hosted server. The model name is passed as-is.
{
"providers": {
"custom": {
"apiKey": "your-api-key",
"apiBase": "https://api.your-provider.com/v1"
}
},
"agents": {
"defaults": {
"model": "your-model-name"
}
}
}
For local servers that don't require a key, set apiKey to any non-empty string (e.g. "no-key").
</details> <details> <summary><b>Ollama (local)</b></summary>
Run a local model with Ollama, then add to config:
1. Start Ollama (example):
ollama run llama3.2
2. Add to config (partial; merge into ~/.nanobot/config.json):
{
"providers": {
"ollama": {
"apiBase": "http://localhost:11434"
}
},
"agents": {
"defaults": {
"provider": "ollama",
"model": "llama3.2"
}
}
}
provider: "auto" also works when providers.ollama.apiBase is configured, but setting "provider": "ollama" is the clearest option.
</details> <details> <summary><b>OpenVINO Model Server (local / OpenAI-compatible)</b></summary>
Run LLMs locally on Intel GPUs using OpenVINO Model Server. OVMS exposes an OpenAI-compatible API at /v3.
Requires Docker and an Intel GPU with driver access (/dev/dri).
1. Pull the model (example):
mkdir -p ov/models && cd ov
docker run -d \
--rm \
--user $(id -u):$(id -g) \
-v $(pwd)/models:/models \
openvino/model_server:latest-gpu \
--pull \
--model_name openai/gpt-oss-20b \
--model_repository_path /models \
--source_model OpenVINO/gpt-oss-20b-int4-ov \
--task text_generation \
--tool_parser gptoss \
--reasoning_parser gptoss \
--enable_prefix_caching true \
--target_device GPU
This downloads the model weights. Wait for the container to finish before proceeding.
2. Start the server (example):
docker run -d \
--rm \
--name ovms \
--user $(id -u):$(id -g) \
-p 8000:8000 \
-v $(pwd)/models:/models \
--device /dev/dri \
--group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) \
openvino/model_server:latest-gpu \
--rest_port 8000 \
--model_name openai/gpt-oss-20b \
--model_repository_path /models \
--source_model OpenVINO/gpt-oss-20b-int4-ov \
--task text_generation \
--tool_parser gptoss \
--reasoning_parser gptoss \
--enable_prefix_caching true \
--target_device GPU
3. Add to config (partial; merge into ~/.nanobot/config.json):
{
"providers": {
"ovms": {
"apiBase": "http://localhost:8000/v3"
}
},
"agents": {
"defaults": {
"provider": "ovms",
"model": "openai/gpt-oss-20b"
}
}
}
OVMS is a local server, so no API key is required. It supports tool calling (--tool_parser gptoss), reasoning (--reasoning_parser gptoss), and streaming. See the official OVMS docs for more details.
</details> <details> <summary><b>vLLM (local / OpenAI-compatible)</b></summary>
Run your own model with vLLM or any OpenAI-compatible server, then add to config:
1. Start the server (example):
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
2. Add to config (partial; merge into ~/.nanobot/config.json):
Provider (key can be any non-empty string for local):
{
"providers": {
"vllm": {
"apiKey": "dummy",
"apiBase": "http://localhost:8000/v1"
}
}
}
Model:
{
"agents": {
"defaults": {
"model": "meta-llama/Llama-3.1-8B-Instruct"
}
}
}
nanobot uses a Provider Registry (nanobot/providers/registry.py) as the single source of truth.
Adding a new provider takes only two steps; there are no if-elif chains to touch.
Step 1. Add a ProviderSpec entry to PROVIDERS in nanobot/providers/registry.py:
ProviderSpec(
name="myprovider", # config field name
keywords=("myprovider", "mymodel"), # model-name keywords for auto-matching
env_key="MYPROVIDER_API_KEY", # env var name
display_name="My Provider", # shown in `nanobot status`
default_api_base="https://api.myprovider.com/v1", # OpenAI-compatible endpoint
)
Step 2. Add a field to ProvidersConfig in nanobot/config/schema.py:
class ProvidersConfig(BaseModel):
...
myprovider: ProviderConfig = ProviderConfig()
That's it! Environment variables, model routing, config matching, and nanobot status display will all work automatically.
Common ProviderSpec options:
| Field | Description | Example |
|---|---|---|
default_api_base | OpenAI-compatible base URL | "https://api.deepseek.com" |
env_extras | Additional env vars to set | (("ZHIPUAI_API_KEY", "{api_key}"),) |
model_overrides | Per-model parameter overrides | (("kimi-k2.5", {"temperature": 1.0}),) |
is_gateway | Can route any model (like OpenRouter) | True |
detect_by_key_prefix | Detect gateway by API key prefix | "sk-or-" |
detect_by_base_keyword | Detect gateway by API base URL | "openrouter" |
strip_model_prefix | Strip provider prefix before sending to gateway | True (for AiHubMix) |
supports_max_completion_tokens | Use max_completion_tokens instead of max_tokens; required for providers that reject both being set simultaneously (e.g. VolcEngine) | True |
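To make the keyword and gateway options concrete, here is a small illustrative sketch of how keyword-based provider matching and strip_model_prefix could behave. The specs and helper names below (match_provider, model_for_gateway) are hypothetical; only the field names come from the table above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ProviderSpec:
    name: str
    keywords: tuple = ()
    is_gateway: bool = False
    strip_model_prefix: bool = False

# Illustrative registry entries, not nanobot's real PROVIDERS table.
PROVIDERS = (
    ProviderSpec("openrouter", keywords=("openrouter",), is_gateway=True),
    ProviderSpec("deepseek", keywords=("deepseek",)),
    ProviderSpec("aihubmix", keywords=("aihubmix",), is_gateway=True,
                 strip_model_prefix=True),
)

def match_provider(model: str) -> Optional[ProviderSpec]:
    """Pick the first spec whose keyword appears in the model name."""
    lowered = model.lower()
    for spec in PROVIDERS:
        if any(kw in lowered for kw in spec.keywords):
            return spec
    return None

def model_for_gateway(spec: ProviderSpec, model: str) -> str:
    """Strip the 'provider/' prefix when the gateway expects bare names."""
    if spec.strip_model_prefix and "/" in model:
        return model.split("/", 1)[1]
    return model

print(match_provider("deepseek-chat").name)  # deepseek
```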
Global settings that apply to all channels. Configure under the channels section in ~/.nanobot/config.json:
{
"channels": {
"sendProgress": true,
"sendToolHints": false,
"sendMaxRetries": 3,
"transcriptionProvider": "groq",
"telegram": { ... }
}
}
| Setting | Default | Description |
|---|---|---|
sendProgress | true | Stream agent's text progress to the channel |
sendToolHints | false | Stream tool-call hints (e.g. read_file("…")) |
sendMaxRetries | 3 | Max delivery attempts per outbound message, including the initial send (configurable 0–10; at least 1 attempt is always made) |
transcriptionProvider | "groq" | Voice transcription backend: "groq" (free tier, default) or "openai". API key is auto-resolved from the matching provider config. |
Retry is intentionally simple.
When a channel send() raises, nanobot retries at the channel-manager layer. By default, channels.sendMaxRetries is 3, and that count includes the initial send.
- The first retry waits 1s, then the delay doubles: 1s, 2s, 4s, and stays capped at 4s.
[!NOTE] This design is deliberate: channel implementations should raise on delivery failure, and the channel manager owns the shared retry policy.
Some channels may still apply small API-specific retries internally. For example, Telegram separately retries timeout and flood-control errors before surfacing a final failure to the manager.
If a channel is completely unreachable, nanobot cannot notify the user through that same channel. Watch logs for Failed to send to {channel} after N attempts to spot persistent delivery failures.
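The doubling-with-cap schedule above is easy to verify. A minimal sketch (illustrative, not nanobot's code) of the delay sequence, assuming sendMaxRetries counts the initial send:

```python
def backoff_delays(max_retries: int, base: float = 1.0, cap: float = 4.0):
    """Delays before each retry: 1s, 2s, 4s, then capped at 4s.

    max_retries includes the initial send, so there are max_retries - 1 waits.
    """
    delays = []
    delay = base
    for _ in range(max(max_retries, 1) - 1):
        delays.append(delay)
        delay = min(delay * 2, cap)  # double, but never exceed the cap
    return delays

print(backoff_delays(3))  # [1.0, 2.0]
print(backoff_delays(5))  # [1.0, 2.0, 4.0, 4.0]
```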
[!TIP] Use proxy in tools.web to route all web requests (search + fetch) through a proxy:
{ "tools": { "web": { "proxy": "http://127.0.0.1:7890" } } }
nanobot supports multiple web search providers. Configure in ~/.nanobot/config.json under tools.web.search.
By default, web tools are enabled and web search uses duckduckgo, so search works out of the box without an API key.
If you want to disable all built-in web tools entirely, set tools.web.enable to false. This removes both web_search and web_fetch from the tool list sent to the LLM.
If you need to allow trusted private ranges such as Tailscale / CGNAT addresses, you can explicitly exempt them from SSRF blocking with tools.ssrfWhitelist:
{
"tools": {
"ssrfWhitelist": ["100.64.0.0/10"]
}
}
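The whitelist check can be pictured as "explicit exemption wins, otherwise private/link-local ranges are blocked". A sketch of that logic using the stdlib ipaddress module (the blocked ranges listed here are common SSRF defaults and an assumption, not nanobot's exact list):

```python
import ipaddress

# Illustrative block list; nanobot's actual SSRF ranges may differ.
BLOCKED = [ipaddress.ip_network(n) for n in (
    "127.0.0.0/8", "10.0.0.0/8", "172.16.0.0/12",
    "192.168.0.0/16", "169.254.0.0/16", "100.64.0.0/10",
)]

def is_allowed(ip_str: str, whitelist=("100.64.0.0/10",)) -> bool:
    """Allow public IPs; block private ranges unless explicitly whitelisted."""
    ip = ipaddress.ip_address(ip_str)
    for net in (ipaddress.ip_network(w) for w in whitelist):
        if ip in net:
            return True  # ssrfWhitelist exemption wins
    return not any(ip in net for net in BLOCKED)

print(is_allowed("100.64.0.7"))   # True (Tailscale/CGNAT, whitelisted)
print(is_allowed("192.168.1.5"))  # False
print(is_allowed("1.1.1.1"))      # True
```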
| Provider | Config fields | Env var fallback | Free |
|---|---|---|---|
brave | apiKey | BRAVE_API_KEY | No |
tavily | apiKey | TAVILY_API_KEY | No |
jina | apiKey | JINA_API_KEY | Free tier (10M tokens) |
searxng | baseUrl | SEARXNG_BASE_URL | Yes (self-hosted) |
duckduckgo (default) | – | – | Yes
Disable all built-in web tools:
{
"tools": {
"web": {
"enable": false
}
}
}
Brave:
{
"tools": {
"web": {
"search": {
"provider": "brave",
"apiKey": "BSA..."
}
}
}
}
Tavily:
{
"tools": {
"web": {
"search": {
"provider": "tavily",
"apiKey": "tvly-..."
}
}
}
}
Jina (free tier with 10M tokens):
{
"tools": {
"web": {
"search": {
"provider": "jina",
"apiKey": "jina_..."
}
}
}
}
SearXNG (self-hosted, no API key needed):
{
"tools": {
"web": {
"search": {
"provider": "searxng",
"baseUrl": "https://searx.example"
}
}
}
}
DuckDuckGo (zero config):
{
"tools": {
"web": {
"search": {
"provider": "duckduckgo"
}
}
}
}
| Option | Type | Default | Description |
|---|---|---|---|
enable | boolean | true | Enable or disable all built-in web tools (web_search + web_fetch) |
proxy | string or null | null | Proxy for all web requests, for example http://127.0.0.1:7890 |
tools.web.search
| Option | Type | Default | Description |
|---|---|---|---|
provider | string | "duckduckgo" | Search backend: brave, tavily, jina, searxng, duckduckgo |
apiKey | string | "" | API key for Brave or Tavily |
baseUrl | string | "" | Base URL for SearXNG |
maxResults | integer | 5 | Results per search (1–10) |
[!TIP] The config format is compatible with Claude Desktop / Cursor. You can copy MCP server configs directly from any MCP server's README.
nanobot supports MCP โ connect external tool servers and use them as native agent tools.
Add MCP servers to your config.json:
{
"tools": {
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
},
"my-remote-mcp": {
"url": "https://example.com/mcp/",
"headers": {
"Authorization": "Bearer xxxxx"
}
}
}
}
}
Two transport modes are supported:
| Mode | Config | Example |
|---|---|---|
| Stdio | command + args | Local process via npx / uvx |
| HTTP | url + headers (optional) | Remote endpoint (https://mcp.example.com/sse) |
Use toolTimeout to override the default 30s per-call timeout for slow servers:
{
"tools": {
"mcpServers": {
"my-slow-server": {
"url": "https://example.com/mcp/",
"toolTimeout": 120
}
}
}
}
Use enabledTools to register only a subset of tools from an MCP server:
{
"tools": {
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"],
"enabledTools": ["read_file", "mcp_filesystem_write_file"]
}
}
}
}
enabledTools accepts either the raw MCP tool name (for example read_file) or the wrapped nanobot tool name (for example mcp_filesystem_write_file).
- Omit enabledTools, or set it to ["*"], to register all tools.
- Set enabledTools to [] to register no tools from that server.
- Set enabledTools to a non-empty list of names to register only that subset.
MCP tools are automatically discovered and registered on startup. The LLM can use them alongside built-in tools; no extra configuration needed.
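The raw-versus-wrapped name matching can be sketched in a few lines. This is an illustrative filter assuming the wrapped-name scheme follows the mcp_filesystem_write_file example above (tool_enabled is a hypothetical helper, not nanobot's API):

```python
def tool_enabled(server: str, raw_name: str, enabled_tools) -> bool:
    """Register a tool if enabled_tools is omitted/wildcard, or if it lists
    either the raw MCP name or the wrapped mcp_<server>_<tool> name."""
    if enabled_tools is None or enabled_tools == ["*"]:
        return True  # omitted or wildcard: register everything
    wrapped = f"mcp_{server}_{raw_name}"
    return raw_name in enabled_tools or wrapped in enabled_tools

print(tool_enabled("filesystem", "read_file", ["read_file"]))                  # True
print(tool_enabled("filesystem", "write_file", ["mcp_filesystem_write_file"])) # True
print(tool_enabled("filesystem", "delete_file", []))                           # False
```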
[!TIP] For production deployments, set "restrictToWorkspace": true and "tools.exec.sandbox": "bwrap" in your config to sandbox the agent. In v0.1.4.post3 and earlier, an empty allowFrom allowed all senders. Since v0.1.4.post4, an empty allowFrom denies all access by default. To allow all senders, set "allowFrom": ["*"].
| Option | Default | Description |
|---|---|---|
tools.restrictToWorkspace | false | When true, restricts all agent tools (shell, file read/write/edit, list) to the workspace directory. Prevents path traversal and out-of-scope access. |
tools.exec.sandbox | "" | Sandbox backend for shell commands. Set to "bwrap" to wrap exec calls in a bubblewrap sandbox โ the process can only see the workspace (read-write) and media directory (read-only); config files and API keys are hidden. Automatically enables restrictToWorkspace for file tools. Linux only โ requires bwrap installed (apt install bubblewrap; pre-installed in the Docker image). Not available on macOS or Windows (bwrap depends on Linux kernel namespaces). |
tools.exec.enable | true | When false, the shell exec tool is not registered at all. Use this to completely disable shell command execution. |
tools.exec.pathAppend | "" | Extra directories to append to PATH when running shell commands (e.g. /usr/sbin for ufw). |
channels.*.allowFrom | [] (deny all) | Whitelist of user IDs. Empty denies all; use ["*"] to allow everyone. |
Docker security: The official Docker image runs as a non-root user (nanobot, UID 1000) with bubblewrap pre-installed. When using docker-compose.yml, the container drops all Linux capabilities except SYS_ADMIN (required for bwrap's namespace isolation).
Time is context. Context should be precise.
By default, nanobot uses UTC for runtime time context. If you want the agent to think in your local time, set agents.defaults.timezone to a valid IANA timezone name:
{
"agents": {
"defaults": {
"timezone": "Asia/Shanghai"
}
}
}
This affects runtime time strings shown to the model, such as runtime context and heartbeat prompts. It also becomes the default timezone for cron schedules when a cron expression omits tz, and for one-shot at times when the ISO datetime has no explicit offset.
Common examples: UTC, America/New_York, America/Los_Angeles, Europe/London, Europe/Berlin, Asia/Tokyo, Asia/Shanghai, Asia/Singapore, Australia/Sydney.
Need another timezone? Browse the full IANA Time Zone Database.
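The effect of the timezone setting is just a rendering choice over the same instant. A small sketch using the stdlib zoneinfo module (assuming the IANA database is available on your system; runtime_time_string is an illustrative helper, not nanobot's function):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def runtime_time_string(tz_name: str = "UTC") -> str:
    """Render 'now' in the configured IANA timezone, as a prompt might."""
    now = datetime.now(ZoneInfo(tz_name))
    return now.strftime("%Y-%m-%d %H:%M:%S %Z")

# The same instant rendered in another zone shifts by the UTC offset:
instant = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
print(instant.astimezone(ZoneInfo("Asia/Shanghai")))  # 2025-01-01 20:00:00+08:00
```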
Run multiple nanobot instances simultaneously with separate configs and runtime data. Use --config as the main entrypoint. Optionally pass --workspace during onboard when you want to initialize or update the saved workspace for a specific instance.
If you want each instance to have its own dedicated workspace from the start, pass both --config and --workspace during onboarding.
Initialize instances:
# Create separate instance configs and workspaces
nanobot onboard --config ~/.nanobot-telegram/config.json --workspace ~/.nanobot-telegram/workspace
nanobot onboard --config ~/.nanobot-discord/config.json --workspace ~/.nanobot-discord/workspace
nanobot onboard --config ~/.nanobot-feishu/config.json --workspace ~/.nanobot-feishu/workspace
Configure each instance:
Edit ~/.nanobot-telegram/config.json, ~/.nanobot-discord/config.json, etc. with different channel settings. The workspace you passed during onboard is saved into each config as that instance's default workspace.
Run instances:
# Instance A - Telegram bot
nanobot gateway --config ~/.nanobot-telegram/config.json
# Instance B - Discord bot
nanobot gateway --config ~/.nanobot-discord/config.json
# Instance C - Feishu bot with custom port
nanobot gateway --config ~/.nanobot-feishu/config.json --port 18792
When using --config, nanobot derives its runtime data directory from the config file location. The workspace still comes from agents.defaults.workspace unless you override it with --workspace.
To open a CLI session against one of these instances locally:
nanobot agent -c ~/.nanobot-telegram/config.json -m "Hello from Telegram instance"
nanobot agent -c ~/.nanobot-discord/config.json -m "Hello from Discord instance"
# Optional one-off workspace override
nanobot agent -c ~/.nanobot-telegram/config.json -w /tmp/nanobot-telegram-test
`nanobot agent` starts a local CLI agent using the selected workspace/config. It does not attach to or proxy through an already running `nanobot gateway` process.
| Component | Resolved From | Example |
|---|---|---|
| Config | --config path | ~/.nanobot-A/config.json |
| Workspace | --workspace or config | ~/.nanobot-A/workspace/ |
| Cron Jobs | config directory | ~/.nanobot-A/cron/ |
| Media / runtime state | config directory | ~/.nanobot-A/media/ |
- `--config` selects which config file to load
- The workspace defaults to `agents.defaults.workspace` in that config
- If you pass `--workspace`, it overrides the workspace from the config file
- During onboarding, `--workspace` is saved into `agents.defaults.workspace` for that instance
- Runtime data (cron, media) is derived from the directory of `--config`

Example config:
{
"agents": {
"defaults": {
"workspace": "~/.nanobot-telegram/workspace",
"model": "anthropic/claude-sonnet-4-6"
}
},
"channels": {
"telegram": {
"enabled": true,
"token": "YOUR_TELEGRAM_BOT_TOKEN"
}
},
"gateway": {
"port": 18790
}
}
Start separate instances:
nanobot gateway --config ~/.nanobot-telegram/config.json
nanobot gateway --config ~/.nanobot-discord/config.json
Override workspace for one-off runs when needed:
nanobot gateway --config ~/.nanobot-telegram/config.json --workspace /tmp/nanobot-telegram-test
`--workspace` overrides the workspace defined in the config file.
nanobot uses a layered memory system designed to stay light in the moment and durable over time.
- `memory/history.jsonl` stores append-only summarized history
- `SOUL.md`, `USER.md`, and `memory/MEMORY.md` store long-term knowledge managed by Dream
- Dream runs on a schedule and can also be triggered manually

If you want the full design, see docs/MEMORY.md.
| Command | Description |
|---|---|
| `nanobot onboard` | Initialize config & workspace at `~/.nanobot/` |
| `nanobot onboard --wizard` | Launch the interactive onboarding wizard |
| `nanobot onboard -c <config> -w <workspace>` | Initialize or refresh a specific instance config and workspace |
| `nanobot agent -m "..."` | Chat with the agent |
| `nanobot agent -w <workspace>` | Chat against a specific workspace |
| `nanobot agent -w <workspace> -c <config>` | Chat against a specific workspace/config |
| `nanobot agent` | Interactive chat mode |
| `nanobot agent --no-markdown` | Show plain-text replies |
| `nanobot agent --logs` | Show runtime logs during chat |
| `nanobot serve` | Start the OpenAI-compatible API |
| `nanobot gateway` | Start the gateway |
| `nanobot status` | Show status |
| `nanobot provider login openai-codex` | OAuth login for providers |
| `nanobot channels login <channel>` | Authenticate a channel interactively |
| `nanobot channels status` | Show channel status |
Interactive mode exits: exit, quit, /exit, /quit, :q, or Ctrl+D.
These commands work inside chat channels and interactive agent sessions:
| Command | Description |
|---|---|
| `/new` | Start a new conversation |
| `/stop` | Stop the current task |
| `/restart` | Restart the bot |
| `/status` | Show bot status |
| `/dream` | Run Dream memory consolidation now |
| `/dream-log` | Show the latest Dream memory change |
| `/dream-log <sha>` | Show a specific Dream memory change |
| `/dream-restore` | List recent Dream memory versions |
| `/dream-restore <sha>` | Restore memory to the state before a specific change |
| `/help` | Show available in-chat commands |
The gateway wakes up every 30 minutes and checks HEARTBEAT.md in your workspace (~/.nanobot/workspace/HEARTBEAT.md). If the file has tasks, the agent executes them and delivers results to your most recently active chat channel.
Setup: edit ~/.nanobot/workspace/HEARTBEAT.md (created automatically by nanobot onboard):
## Periodic Tasks
- [ ] Check weather forecast and send a summary
- [ ] Scan inbox for urgent emails
The agent can also manage this file itself; ask it to "add a periodic task" and it will update HEARTBEAT.md for you.
Note: The gateway must be running (`nanobot gateway`) and you must have chatted with the bot at least once so it knows which channel to deliver to.
Use nanobot as a library: no CLI, no gateway, just Python:
import asyncio
from nanobot import Nanobot

async def main() -> None:
    bot = Nanobot.from_config()
    result = await bot.run("Summarize the README")
    print(result.content)

asyncio.run(main())
Each call carries a session_key for conversation isolation; different keys get independent history:
await bot.run("hi", session_key="user-alice")
await bot.run("hi", session_key="task-42")
Add lifecycle hooks to observe or customize the agent:
from nanobot.agent import AgentHook, AgentHookContext
class AuditHook(AgentHook):
    async def before_execute_tools(self, ctx: AgentHookContext) -> None:
        for tc in ctx.tool_calls:
            print(f"[tool] {tc.name}")
result = await bot.run("Hello", hooks=[AuditHook()])
See docs/PYTHON_SDK.md for the full SDK reference.
nanobot can expose a minimal OpenAI-compatible endpoint for local integrations:
pip install "nanobot-ai[api]"
nanobot serve
By default, the API binds to 127.0.0.1:8900. You can change this in config.json.
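For example, a config fragment changing the bind address might look like this (a sketch; the exact key names may differ, so check the keys your generated config.json actually uses):

```json
{
  "api": {
    "host": "127.0.0.1",
    "port": 8900
  }
}
```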
"session_id" in the request body to isolate conversations; omit for a shared default session (api:default)user messagemodel, or pass the same model shown by /v1/modelsstream=true is not supportedGET /healthGET /v1/modelsPOST /v1/chat/completionscurl http://127.0.0.1:8900/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"messages": [{"role": "user", "content": "hi"}],
"session_id": "my-session"
}'
Example (Python `requests`):
import requests
resp = requests.post(
"http://127.0.0.1:8900/v1/chat/completions",
json={
"messages": [{"role": "user", "content": "hi"}],
"session_id": "my-session", # optional: isolate conversation
},
timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
Example (`openai` SDK):
from openai import OpenAI
client = OpenAI(
base_url="http://127.0.0.1:8900/v1",
api_key="dummy",
)
resp = client.chat.completions.create(
model="MiniMax-M2.7",
messages=[{"role": "user", "content": "hi"}],
extra_body={"session_id": "my-session"}, # optional: isolate conversation
)
print(resp.choices[0].message.content)
[!TIP] The
`-v ~/.nanobot:/root/.nanobot` flag mounts your local config directory into the container, so your config and workspace persist across container restarts.
docker compose run --rm nanobot-cli onboard # first-time setup
vim ~/.nanobot/config.json # add API keys
docker compose up -d nanobot-gateway # start gateway
docker compose run --rm nanobot-cli agent -m "Hello!" # run CLI
docker compose logs -f nanobot-gateway # view logs
docker compose down # stop
# Build the image
docker build -t nanobot .
# Initialize config (first time only)
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot onboard
# Edit config on host to add API keys
vim ~/.nanobot/config.json
# Run gateway (connects to enabled channels, e.g. Telegram/Discord/Mochat)
docker run -v ~/.nanobot:/root/.nanobot -p 18790:18790 nanobot gateway
# Or run a single command
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot agent -m "Hello!"
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot status
Run the gateway as a systemd user service so it starts automatically and restarts on failure.
1. Find the nanobot binary path:
which nanobot # e.g. /home/user/.local/bin/nanobot
2. Create the service file at ~/.config/systemd/user/nanobot-gateway.service (replace ExecStart path if needed):
[Unit]
Description=Nanobot Gateway
After=network.target
[Service]
Type=simple
ExecStart=%h/.local/bin/nanobot gateway
Restart=always
RestartSec=10
NoNewPrivileges=yes
ProtectSystem=strict
ReadWritePaths=%h
[Install]
WantedBy=default.target
3. Enable and start:
systemctl --user daemon-reload
systemctl --user enable --now nanobot-gateway
Common operations:
systemctl --user status nanobot-gateway # check status
systemctl --user restart nanobot-gateway # restart after config changes
journalctl --user -u nanobot-gateway -f # follow logs
If you edit the .service file itself, run systemctl --user daemon-reload before restarting.
Note: User services only run while you are logged in. To keep the gateway running after logout, enable lingering:
loginctl enable-linger $USER
nanobot/
├── agent/              # Core agent logic
│   ├── loop.py         # Agent loop (LLM ↔ tool execution)
│   ├── context.py      # Prompt builder
│   ├── memory.py       # Persistent memory
│   ├── skills.py       # Skills loader
│   ├── subagent.py     # Background task execution
│   └── tools/          # Built-in tools (incl. spawn)
├── skills/             # Bundled skills (github, weather, tmux...)
├── channels/           # Chat channel integrations (supports plugins)
├── bus/                # Message routing
├── cron/               # Scheduled tasks
├── heartbeat/          # Proactive wake-up
├── providers/          # LLM providers (OpenRouter, etc.)
├── session/            # Conversation sessions
├── config/             # Configuration
└── cli/                # Commands
PRs welcome! The codebase is intentionally small and readable.
| Branch | Purpose |
|---|---|
| `main` | Stable releases: bug fixes and minor improvements |
| `nightly` | Experimental features: new features and breaking changes |
Unsure which branch to target? See CONTRIBUTING.md for details.
Roadmap: pick an item and open a PR!