docs/guides/providers.zh.md
Back to README
> [!NOTE]
> Voice transcription can now be handled by the multimodal model specified in `voice.model_name`; if no voice model is configured, Groq Whisper still works as a fallback.
| Provider | Purpose | Get API Key |
|---|---|---|
| gemini | LLM (Gemini direct) | aistudio.google.com |
| zhipu | LLM (Zhipu direct) | bigmodel.cn |
| volcengine | LLM (Volcengine direct) | volcengine.com |
| openrouter | LLM (recommended; access to all models) | openrouter.ai |
| anthropic | LLM (Claude direct) | console.anthropic.com |
| openai | LLM (GPT direct) | platform.openai.com |
| venice | LLM (Venice AI direct) | venice.ai |
| deepseek | LLM (DeepSeek direct) | platform.deepseek.com |
| qwen | LLM (Tongyi Qianwen) | dashscope.console.aliyun.com |
| groq | LLM + voice transcription (Whisper) | console.groq.com |
| cerebras | LLM (Cerebras direct) | cerebras.ai |
| vivgrid | LLM (Vivgrid direct) | vivgrid.com |
| moonshot | LLM (Kimi/Moonshot direct) | platform.moonshot.cn |
| minimax | LLM (MiniMax direct) | platform.minimaxi.com |
| avian | LLM (Avian direct) | avian.io |
| mistral | LLM (Mistral direct) | console.mistral.ai |
| longcat | LLM (LongCat direct) | longcat.ai |
| modelscope | LLM (ModelScope direct) | modelscope.cn |
| mimo | LLM (Xiaomi MiMo direct) | platform.xiaomimimo.com |
<a id="模型配置-model_list"></a>

## Model configuration (model_list)

New! PicoClaw now recommends the explicit `provider` + native `model` configuration style, e.g. `"provider": "zhipu", "model": "glm-4.7"`. If `provider` is not set, the legacy single-field `provider/model` syntax remains compatible.

For a complete example of agent dispatch and lightweight-model routing, see the routing usage guide.

This design also supports multi-agent scenarios with flexible provider selection:
| Vendor | provider value | Default API Base | Protocol | Get API Key |
|---|---|---|---|---|
| OpenAI | openai | https://api.openai.com/v1 | OpenAI | Get key |
| Venice AI | venice | https://api.venice.ai/api/v1 | OpenAI | Get key |
| Anthropic | anthropic | https://api.anthropic.com/v1 | Anthropic | Get key |
| Zhipu AI (GLM) | zhipu | https://open.bigmodel.cn/api/paas/v4 | OpenAI | Get key |
| DeepSeek | deepseek | https://api.deepseek.com/v1 | OpenAI | Get key |
| Google Gemini | gemini | https://generativelanguage.googleapis.com/v1beta | Gemini | Get key |
| Groq | groq | https://api.groq.com/openai/v1 | OpenAI | Get key |
| Moonshot | moonshot | https://api.moonshot.cn/v1 | OpenAI | Get key |
| Tongyi Qianwen (Qwen) | qwen | https://dashscope.aliyuncs.com/compatible-mode/v1 | OpenAI | Get key |
| NVIDIA | nvidia | https://integrate.api.nvidia.com/v1 | OpenAI | Get key |
| Ollama | ollama | http://localhost:11434/v1 | OpenAI | Local (no key needed) |
| LM Studio | lmstudio | http://localhost:1234/v1 | OpenAI | Optional (none needed locally by default) |
| OpenRouter | openrouter | https://openrouter.ai/api/v1 | OpenAI | Get key |
| LiteLLM Proxy | litellm | http://localhost:4000/v1 | OpenAI | Your LiteLLM proxy key |
| VLLM | vllm | http://localhost:8000/v1 | OpenAI | Local |
| Cerebras | cerebras | https://api.cerebras.ai/v1 | OpenAI | Get key |
| Volcengine (Doubao) | volcengine | https://ark.cn-beijing.volces.com/api/v3 | OpenAI | Get key |
| Shengsuanyun | shengsuanyun | https://router.shengsuanyun.com/api/v1 | OpenAI | - |
| BytePlus | byteplus | https://ark.ap-southeast.bytepluses.com/api/v3 | OpenAI | Get key |
| Vivgrid | vivgrid | https://api.vivgrid.com/v1 | OpenAI | Get key |
| LongCat | longcat | https://api.longcat.chat/openai | OpenAI | Get key |
| ModelScope | modelscope | https://api-inference.modelscope.cn/v1 | OpenAI | Get token |
| Xiaomi MiMo | mimo | https://api.xiaomimimo.com/v1 | OpenAI | Get key |
| Antigravity | antigravity | Google Cloud | Custom | OAuth only |
| GitHub Copilot | github-copilot | localhost:4321 | gRPC | - |
```json
{
  "model_list": [
    {
      "model_name": "ark-code-latest",
      "provider": "volcengine",
      "model": "ark-code-latest",
      "api_keys": ["sk-your-api-key"]
    },
    {
      "model_name": "gpt-5.4",
      "provider": "openai",
      "model": "gpt-5.4",
      "api_keys": ["sk-your-openai-key"]
    },
    {
      "model_name": "claude-sonnet-4.6",
      "provider": "anthropic",
      "model": "claude-sonnet-4.6",
      "api_keys": ["sk-ant-your-key"]
    },
    {
      "model_name": "glm-4.7",
      "provider": "zhipu",
      "model": "glm-4.7",
      "api_keys": ["your-zhipu-key"]
    }
  ],
  "agents": {
    "defaults": {
      "model_name": "gpt-5.4"
    }
  }
}
```
### model_list entry fields

| Field | Type | Required | Description |
|---|---|---|---|
| model_name | string | Yes | Unique name used to reference this model in agent configs |
| provider | string | No | Recommended provider identifier. When set, PicoClaw sends `model` to that provider as-is |
| model | string | Yes | When `provider` is set, the provider's native model ID. Without `provider`, the legacy `provider/model` syntax still works |
| api_keys | string[] | Yes* | Authentication keys. Multiple keys are rotated across requests. Not needed for local providers (Ollama, LM Studio, VLLM) |
| api_base | string | No | Overrides the default API endpoint URL |
| proxy | string | No | HTTP proxy URL for this model entry |
| user_agent | string | No | Custom User-Agent header (supported for OpenAI-compatible, Gemini, Anthropic, and Azure providers) |
| request_timeout | int | No | Request timeout in seconds; the default varies by provider |
| max_tokens_field | string | No | Overrides the max-tokens field name in the request body (e.g. o1 models use `max_completion_tokens`) |
| thinking_level | string | No | Extended-thinking level: off, low, medium, high, xhigh, or adaptive |
| extra_body | object | No | Extra fields injected into every request body |
| custom_headers | object | No | Extra HTTP headers injected into every request (e.g. `{"X-Source":"coding-plan"}`). Keys matching built-in headers (Authorization, User-Agent, Content-Type, Accept) override the built-in values |
| rpm | int | No | Requests-per-minute rate limit |
| fallbacks | string[] | No | Backup model names for automatic failover |
| enabled | bool | No | Whether this model entry is enabled (default: true) |
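Putting several of the optional fields together, a hypothetical entry might look like the sketch below (the key, header value, model names, and field values are placeholders, not recommendations):

```json
{
  "model_name": "glm-tuned",
  "provider": "zhipu",
  "model": "glm-4.7",
  "api_keys": ["your-zhipu-key"],
  "request_timeout": 120,
  "extra_body": {"top_p": 0.9},
  "custom_headers": {"X-Source": "coding-plan"},
  "rpm": 60,
  "fallbacks": ["deepseek-backup"],
  "enabled": true
}
```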
### provider / model resolution rules

PicoClaw resolves the provider and the final upstream model ID as follows:

1. If `provider` is set, `model` is used as-is.
2. If `provider` is not set, the segment before the first `/` in `model` is treated as the provider, and everything after the first `/` becomes the final model ID.

Examples:

| Configuration | Resolved provider | Model ID actually sent |
|---|---|---|
| `"provider": "openai", "model": "gpt-5.4"` | openai | gpt-5.4 |
| `"model": "openai/gpt-5.4"` | openai | gpt-5.4 |
| `"provider": "openrouter", "model": "openai/gpt-5.4"` | openrouter | openai/gpt-5.4 |
| `"model": "openrouter/openai/gpt-5.4"` | openrouter | openai/gpt-5.4 |
You can point voice transcription at a dedicated model via `voice.model_name`. This lets you reuse an already-configured multimodal provider that supports audio input, instead of relying solely on Groq.

If `voice.model_name` is not configured and a Groq API key exists, PicoClaw still falls back to Groq transcription.
```json
{
  "model_list": [
    {
      "model_name": "voice-gemini",
      "provider": "gemini",
      "model": "gemini-2.5-flash",
      "api_keys": ["your-gemini-key"]
    }
  ],
  "voice": {
    "model_name": "voice-gemini",
    "echo_transcription": false
  },
  "providers": {
    "groq": {
      "api_key": "gsk_xxx"
    }
  }
}
```
### OpenAI

```json
{
  "model_name": "gpt-5.4",
  "provider": "openai",
  "model": "gpt-5.4",
  "api_keys": ["sk-..."]
}
```
### Volcengine (Doubao)

```json
{
  "model_name": "ark-code-latest",
  "provider": "volcengine",
  "model": "ark-code-latest",
  "api_keys": ["sk-..."]
}
```
### Zhipu AI (GLM)

```json
{
  "model_name": "glm-4.7",
  "provider": "zhipu",
  "model": "glm-4.7",
  "api_keys": ["your-key"]
}
```
### DeepSeek

```json
{
  "model_name": "deepseek-chat",
  "provider": "deepseek",
  "model": "deepseek-chat",
  "api_keys": ["sk-..."]
}
```
### Anthropic (with OAuth)

```json
{
  "model_name": "claude-sonnet-4.6",
  "provider": "anthropic",
  "model": "claude-sonnet-4.6",
  "auth_method": "oauth"
}
```
Run `picoclaw auth login --provider anthropic` to set up the OAuth credentials.
### Anthropic Messages API (native format)

For direct access to the Anthropic API, or for custom endpoints that only support Anthropic's native message format:

```json
{
  "model_name": "claude-opus-4-6",
  "provider": "anthropic-messages",
  "model": "claude-opus-4-6",
  "api_keys": ["sk-ant-your-key"],
  "api_base": "https://api.anthropic.com"
}
```
Use the `anthropic-messages` protocol when:

- A third-party proxy only supports Anthropic's native `/v1/messages` endpoint (and not the OpenAI-compatible `/v1/chat/completions`)
- Connecting to services such as MiniMax or Synthetic that require Anthropic's native message format
- The existing `anthropic` protocol returns 404 errors (meaning the endpoint does not support the OpenAI-compatible format)

> Note: the `anthropic` protocol uses the OpenAI-compatible format (`/v1/chat/completions`), while `anthropic-messages` uses Anthropic's native format (`/v1/messages`). Choose whichever format your endpoint supports.
### Ollama (local)

```json
{
  "model_name": "llama3",
  "provider": "ollama",
  "model": "llama3"
}
```
### LM Studio (local)

```json
{
  "model_name": "lmstudio-local",
  "provider": "lmstudio",
  "model": "openai/gpt-oss-20b"
}
```
`api_base` defaults to `http://localhost:1234/v1`. No API key is needed unless you have enabled authentication on the LM Studio side.

With `provider` set explicitly, PicoClaw sends `openai/gpt-oss-20b` to LM Studio as-is. The legacy syntax `"model": "lmstudio/openai/gpt-oss-20b"` resolves to the same upstream model ID when `provider` is unset.
### Custom proxy / API

```json
{
  "model_name": "my-custom-model",
  "provider": "openai",
  "model": "custom-model",
  "api_base": "https://my-proxy.com/v1",
  "api_keys": ["sk-..."],
  "user_agent": "MyApp/1.0",
  "request_timeout": 300
}
```
### LiteLLM Proxy

```json
{
  "model_name": "lite-gpt4",
  "provider": "litellm",
  "model": "lite-gpt4",
  "api_base": "http://localhost:4000/v1",
  "api_keys": ["sk-..."]
}
```
With `provider` set explicitly, PicoClaw sends `model` as-is: `"provider": "litellm", "model": "lite-gpt4"` sends lite-gpt4, while `"provider": "litellm", "model": "openai/gpt-4o"` sends openai/gpt-4o. The legacy syntaxes `litellm/lite-gpt4` and `litellm/openai/gpt-4o` yield the same results when `provider` is unset.

Configure multiple endpoints under the same model name and PicoClaw automatically round-robins between them:
```json
{
  "model_list": [
    {
      "model_name": "gpt-5.4",
      "provider": "openai",
      "model": "gpt-5.4",
      "api_base": "https://api1.example.com/v1",
      "api_keys": ["sk-key1"]
    },
    {
      "model_name": "gpt-5.4",
      "provider": "openai",
      "model": "gpt-5.4",
      "api_base": "https://api2.example.com/v1",
      "api_keys": ["sk-key2"]
    }
  ]
}
```
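Round-robin selection over duplicate entries can be pictured as a repeating cycle over the configured endpoints. The sketch below is an illustration of the idea, not PicoClaw's implementation; the endpoint URLs are the placeholders from the example above.

```python
import itertools

# Hypothetical endpoint list collected for one model_name.
endpoints = ["https://api1.example.com/v1", "https://api2.example.com/v1"]
rotation = itertools.cycle(endpoints)

def next_endpoint() -> str:
    """Each request takes the next endpoint in a repeating cycle."""
    return next(rotation)

print([next_endpoint() for _ in range(4)])
# ['https://api1.example.com/v1', 'https://api2.example.com/v1',
#  'https://api1.example.com/v1', 'https://api2.example.com/v1']
```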
PicoClaw already supports automatic failover when you configure `primary` + `fallbacks` in an agent's model settings.
At runtime, the fallback chain switches to the next candidate on retryable errors (e.g. HTTP 429, quota/rate-limit errors, timeouts).
A cooldown is also applied per candidate, so a target that just failed is not retried immediately.
```json
{
  "model_list": [
    {
      "model_name": "qwen-main",
      "provider": "openai",
      "model": "qwen3.5:cloud",
      "api_base": "https://api.example.com/v1",
      "api_keys": ["sk-main"]
    },
    {
      "model_name": "deepseek-backup",
      "provider": "deepseek",
      "model": "deepseek-chat",
      "api_keys": ["sk-backup-1"]
    },
    {
      "model_name": "gemini-backup",
      "provider": "gemini",
      "model": "gemini-2.5-flash",
      "api_keys": ["sk-backup-2"]
    }
  ],
  "agents": {
    "defaults": {
      "model": {
        "primary": "qwen-main",
        "fallbacks": ["deepseek-backup", "gemini-backup"]
      }
    }
  }
}
```
If key-level failover is enabled for the same model, PicoClaw first rotates among that model's multiple key candidates before falling back across models.
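The resulting order (exhaust one model's keys, then move to the next fallback model) can be illustrated with a small generator. The model names and keys below are placeholders, and the function is a sketch of the ordering, not PicoClaw's code.

```python
def failover_order(models: dict[str, list[str]], chain: list[str]):
    """Yield (model_name, api_key) pairs: all keys of one model
    are tried before moving to the next fallback model."""
    for model in chain:
        for key in models.get(model, [None]):  # local providers may have no keys
            yield model, key

models = {"qwen-main": ["sk-main-1", "sk-main-2"], "deepseek-backup": ["sk-backup-1"]}
print(list(failover_order(models, ["qwen-main", "deepseek-backup"])))
# [('qwen-main', 'sk-main-1'), ('qwen-main', 'sk-main-2'),
#  ('deepseek-backup', 'sk-backup-1')]
```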
## Migrating the providers configuration

The legacy `providers` configuration format is deprecated and was removed in V2. Existing V0/V1 configurations are migrated automatically.
Old configuration (deprecated):
```json
{
  "providers": {
    "zhipu": {
      "api_key": "your-key",
      "api_base": "https://open.bigmodel.cn/api/paas/v4"
    }
  },
  "agents": {
    "defaults": {
      "provider": "zhipu",
      "model": "glm-4.7"
    }
  }
}
```
New configuration (recommended):
```json
{
  "version": 3,
  "model_list": [
    {
      "model_name": "glm-4.7",
      "provider": "zhipu",
      "model": "glm-4.7",
      "api_keys": ["your-key"]
    }
  ],
  "agents": {
    "defaults": {
      "model_name": "glm-4.7"
    }
  }
}
```
For a detailed migration guide, see docs/migration/model-list-migration.md.
PicoClaw routes providers by protocol family. Gemini providers, for instance, connect through the `models/*:generateContent` and `models/*:streamGenerateContent` endpoints. This keeps the runtime lightweight while making new OpenAI-compatible backends essentially configuration-only (`api_base` + `api_keys`).
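Conceptually, routing amounts to a lookup from provider to protocol family, as listed in the Protocol column of the provider table above. The mapping below is an illustrative excerpt of that table, not PicoClaw's internal data structure, and the default for unknown providers is an assumption.

```python
# Protocol family per provider, excerpted from the provider table above.
PROTOCOL = {
    "openai": "openai", "zhipu": "openai", "deepseek": "openai",
    "groq": "openai", "ollama": "openai", "openrouter": "openai",
    "anthropic": "anthropic",
    "gemini": "gemini",  # models/*:generateContent endpoints
}

def protocol_for(provider: str) -> str:
    # Assumed: unrecognized providers fall back to the OpenAI-compatible family.
    return PROTOCOL.get(provider, "openai")

print(protocol_for("gemini"))  # gemini
```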
1. Get an API key and base URL
2. Configure:
```json
{
  "agents": {
    "defaults": {
      "workspace": "~/.picoclaw/workspace",
      "model_name": "glm-4.7",
      "max_tokens": 8192,
      "temperature": 0.7,
      "max_tool_iterations": 20
    }
  },
  "providers": {
    "zhipu": {
      "api_key": "Your API Key",
      "api_base": "https://open.bigmodel.cn/api/paas/v4"
    }
  }
}
```
3. Run `picoclaw agent -m "Hello"`
```json
{
  "agents": {
    "defaults": {
      "model_name": "claude-opus-4-5"
    }
  },
  "session": {
    "dm_scope": "per-channel-peer"
  },
  "providers": {
    "openrouter": {
      "api_key": "sk-or-v1-xxx"
    },
    "groq": {
      "api_key": "gsk_xxx"
    }
  },
  "voice": {
    "model_name": "voice-gemini",
    "echo_transcription": false
  },
  "channel_list": {
    "telegram": {
      "enabled": true,
      "type": "telegram",
      "token": "123456:ABC...",
      "allow_from": ["123456789"]
    },
    "discord": {
      "enabled": true,
      "type": "discord",
      "token": "",
      "allow_from": [""]
    },
    "whatsapp": {
      "enabled": false,
      "type": "whatsapp",
      "bridge_url": "ws://localhost:3001",
      "use_native": false,
      "session_store_path": "",
      "allow_from": []
    },
    "feishu": {
      "enabled": false,
      "type": "feishu",
      "app_id": "cli_xxx",
      "app_secret": "xxx",
      "encrypt_key": "",
      "verification_token": "",
      "allow_from": []
    },
    "qq": {
      "enabled": false,
      "type": "qq",
      "app_id": "",
      "app_secret": "",
      "allow_from": []
    }
  },
  "tools": {
    "web": {
      "brave": {
        "enabled": false,
        "api_key": "BSA...",
        "max_results": 5
      },
      "duckduckgo": {
        "enabled": true,
        "max_results": 5
      },
      "perplexity": {
        "enabled": false,
        "api_key": "",
        "max_results": 5
      },
      "searxng": {
        "enabled": false,
        "base_url": "http://localhost:8888",
        "max_results": 5
      }
    },
    "cron": {
      "exec_timeout_minutes": 5
    }
  },
  "heartbeat": {
    "enabled": true,
    "interval": 30
  }
}
```
| Service | Pricing | Best for |
|---|---|---|
| OpenRouter | Free: 200K tokens/month | Multi-model aggregation (Claude, GPT-4, etc.) |
| Volcengine CodingPlan | ¥9.9 for the first month | Best for users in China; many SOTA models (Doubao, DeepSeek, etc.) |
| Zhipu | Free: 200K tokens/month | Good for users in China |
| Brave Search | $5 per 1,000 queries | Web search |
| SearXNG | Free (self-hosted) | Privacy-first metasearch engine (70+ engines) |
| Groq | Free tier available | Ultra-fast inference (Llama, Mixtral) |
| Cerebras | Free tier available | Ultra-fast inference (Llama, Qwen, etc.) |
| LongCat | Free: up to 5M tokens/day | Ultra-fast inference |
| ModelScope | Free: 2,000 requests/day | Inference (Qwen, GLM, DeepSeek, etc.) |