document/content/docs/self-host/faq.en.mdx
Check Issues first, or create a new one. For private deployment errors, provide detailed steps, logs, and screenshots — otherwise it's very difficult to diagnose.
Run `docker ps -a` to check all container statuses and verify everything is running. If not, use `docker logs <container_name>` to view the error logs.

When the frontend crashes, the page displays an error prompting you to check console logs. Open the browser console, switch to the Console tab, and click the log hyperlinks to see the specific error file. Provide these details for troubleshooting.
Errors with requestId are from OneAPI, usually caused by model API issues. See Common OneAPI Errors
Object parameters (arrays and objects) may behave abnormally; if they are empty, try passing an empty array or an empty object explicitly.

This is the index model's length limit, and it is the same regardless of deployment method. Different index models have different configurations, which you can modify in the admin panel.
Mount the verification file to: /app/projects/app/public/xxxx.txt
Then restart. For example:
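The sketch below assumes the standard docker-compose deployment; the service name and file name are placeholders.

```bash
# docker-compose.yml: add a volume mapping under the fastgpt service
# ("xxxx.txt" stands for your actual verification file):
#
#   volumes:
#     - ./xxxx.txt:/app/projects/app/public/xxxx.txt
#
# Then recreate the container so the mount takes effect:
docker-compose down
docker-compose up -d
```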
Change the port mapping to something like 3307, e.g., 3307:3306.
See details at https://fael3z0zfze.feishu.cn/wiki/OFpAw8XzAi36Guk8dfucrCKUnjg.
Yes. You'll need vector models and LLM models ready.
Set toolChoice=false and functionCall=false to fall back to prompt mode. Built-in prompts are only tested with commercial model APIs: question classification mostly works, while content extraction is less reliable. You can customize the classification prompt via customCQPrompt.

The page uses stream=true mode, so the API also needs stream=true for testing. Some model APIs (especially domestic ones) have poor non-stream compatibility.
Same as above — test with curl.
Check error logs first. Possible scenarios:
Network issue. Servers in China can't reach OpenAI directly — verify your AI model connection.
Or FastGPT can't reach OneAPI (not on the same network).
Errors with requestId are from OneAPI.
OneAPI account balance insufficient. The default root user only has $200 — increase it manually.
Path: Open OneAPI -> Users -> Edit root user -> Increase remaining balance
The model in FastGPT's config must match a model in OneAPI channels. Check:
If OneAPI doesn't have the model configured, don't add it to config.json either.
OneAPI only tests the first model in a channel, and only chat models. Vector models can't be auto-tested — send manual requests. View test command examples
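As a quick cross-check, you can list the models your OneAPI key actually exposes and compare them with the names in config.json. A minimal sketch, assuming OneAPI's OpenAI-compatible /v1/models route; the domain and key are placeholders:

```bash
# List the models available to this OneAPI key and compare them with
# the model names used in FastGPT's config.json
curl https://oneapi.xxx/v1/models \
  -H "Authorization: Bearer sk-xxxx"
```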
"https://xxx" dial tcp: xxxxOneAPI can't reach the model — check network configuration.
OneAPI API Key configured incorrectly. Modify the OPENAI_API_KEY environment variable and restart (docker-compose down then docker-compose up -d).
Use exec to enter the container, then env to verify environment variables.
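For example (the container name is a placeholder; use the one shown by docker ps):

```bash
# Open a shell inside the FastGPT container ("fastgpt" is an example name)
docker exec -it fastgpt sh

# Inside the container, list the OpenAI-related variables and confirm
# OPENAI_API_KEY has the value you expect
env | grep -i openai
```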
OneAPI downloads a tiktoken dependency at startup. Network failure causes startup failure. See OneAPI Offline Deployment.
Here are some curl test examples:
<Tabs items={['LLM Model','Embedding Model','Rerank Model','TTS Model','Whisper Model']}>
<Tab value="LLM Model">

```bash
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4o",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
```
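</Tab>
<Tab value="Embedding Model">

A minimal sketch for testing a vector (index) model, assuming an OpenAI-compatible `/v1/embeddings` endpoint; the model name here is an example, replace it with the model configured in your channel.

```bash
curl https://api.openai.com/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "text-embedding-3-small",
    "input": "Testing the embedding model"
  }'
```

</Tab>
<Tab value="Rerank Model">

Rerank is not part of the OpenAI API. Most self-hosted rerank services (bge-reranker style, Jina, or Cohere-compatible gateways) expose a `/v1/rerank` route similar to the sketch below; the host, key, model name, and field names are assumptions, so adjust them to your provider.

```bash
curl --location --request POST 'https://your-rerank-host/v1/rerank' \
  --header 'Authorization: Bearer sk-xxxx' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "model": "bge-reranker-base",
    "query": "What is FastGPT?",
    "documents": [
      "FastGPT is a knowledge-base Q&A platform.",
      "The weather is nice today."
    ]
  }'
```

</Tab>
<Tab value="TTS Model">

A sketch using the OpenAI speech endpoint; the model and voice values are examples.

```bash
curl https://api.openai.com/v1/audio/speech \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tts-1",
    "input": "Hello, this is a TTS test.",
    "voice": "alloy"
  }' \
  --output speech.mp3
```

</Tab>
<Tab value="Whisper Model">

A sketch using the OpenAI transcription endpoint; `audio.mp3` is a placeholder for a local audio file.

```bash
curl https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F file="@audio.mp3" \
  -F model="whisper-1"
```

</Tab>
</Tabs>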
This occurs when OneAPI ends the stream request without returning any content.
Version 4.8.10 added error logging — the actual request body is printed in logs on error. Copy it and use curl to test against OneAPI.
Since OneAPI can't properly catch errors in stream mode, you can set stream=false to get precise error messages.
Possible causes:
Test example — copy the request body from error logs:

```bash
curl --location --request POST 'https://api.openai.com/v1/chat/completions' \
--header 'Authorization: Bearer sk-xxxx' \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "xxx",
"temperature": 0.01,
"max_tokens": 1000,
"stream": true,
"messages": [
{
"role": "user",
"content": " 你是饿"
}
]
}'
```
Both the model provider and OneAPI must support tool calling. Test as follows:
The first round sends a tool-call request:

```bash
curl --location --request POST 'https://oneapi.xxx/v1/chat/completions' \
--header 'Authorization: Bearer sk-xxxx' \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "gpt-5",
"temperature": 0.01,
"max_tokens": 8000,
"stream": true,
"messages": [
{
"role": "user",
"content": "几点了"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "hCVbIY",
"description": "获取用户当前时区的时间。",
"parameters": {
"type": "object",
"properties": {},
"required": []
}
}
}
],
"tool_choice": "auto"
}'
```
If tool calling works, the response includes a tool_calls field, for example:

```json
{
  "id": "chatcmpl-A7kwo1rZ3OHYSeIFgfWYxu8X2koN3",
  "object": "chat.completion.chunk",
  "created": 1726412126,
  "model": "gpt-5",
  "system_fingerprint": "fp_483d39d857",
  "choices": [
    {
      "index": 0,
      "delta": {
        "tool_calls": [
          {
            "id": "call_0n24eiFk8OUyIyrdEbLdirU7",
            "type": "function",
            "function": {
              "name": "mEYIcFl84rYC",
              "arguments": ""
            }
          }
        ],
        "refusal": null
      },
      "logprobs": null,
      "finish_reason": null
    }
  ],
  "usage": null
}
```

The second round sends the tool results back to the model and returns the model's response:

```bash
curl --location --request POST 'https://oneapi.xxxx/v1/chat/completions' \
--header 'Authorization: Bearer sk-xxx' \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "gpt-5",
"temperature": 0.01,
"max_tokens": 8000,
"stream": true,
"messages": [
{
"role": "user",
"content": "几点了"
},
{
"role": "assistant",
"tool_calls": [
{
"id": "kDia9S19c4RO",
"type": "function",
"function": {
"name": "hCVbIY",
"arguments": "{}"
}
}
]
},
{
"tool_call_id": "kDia9S19c4RO",
"role": "tool",
"name": "hCVbIY",
"content": "{\n \"time\": \"2024-09-14 22:59:21 Sunday\"\n}"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "hCVbIY",
"description": "获取用户当前时区的时间。",
"parameters": {
"type": "object",
"properties": {},
"required": []
}
}
}
],
"tool_choice": "auto"
}'
```
Caused by the model not being normalized. Only normalized models are currently supported.