Here are a few test `curl` examples:
<Tabs items={['LLM Model','Embedding Model','Rerank Model','TTS Model','Whisper Model']}>
<Tab value="LLM Model">

```bash
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
```

</Tab>
</Tabs>
This error occurs because, in stream mode, OneAPI ended the stream request directly without returning any content.
Version 4.8.10 added error logging: when an error occurs, the actual request body that was sent is printed in the logs. You can copy those parameters and replay the request against OneAPI with `curl`.
Since OneAPI cannot correctly capture errors in stream mode, setting `stream=false` can sometimes surface the exact error.
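As a sketch, replaying a logged request with streaming disabled can look like the following. The OneAPI address and key here are placeholders; substitute the values from your own deployment, and paste the body copied from the error log:

```shell
# Hypothetical OneAPI address and key -- replace with your deployment's values.
ONEAPI_HOST="http://localhost:3001"
ONEAPI_KEY="sk-xxx"

# Same request as the logged one, but with "stream": false so the
# upstream error is returned in the response body instead of being
# swallowed when the stream is cut off.
curl "$ONEAPI_HOST/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ONEAPI_KEY" \
  -d '{
    "model": "gpt-4o",
    "stream": false,
    "messages": [
      { "role": "user", "content": "Hello!" }
    ]
  }'
```

If the non-streaming request returns a proper error JSON, that message is usually the real cause of the empty stream response.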
Possible causes:
If you encounter this error (e.g. `request id: xxx`) in the logs or the response, it is typically a OneAPI channel issue. Try switching to a different model or a different relay provider.
Most likely the API key points to OpenAI's endpoint, but the server is deployed in mainland China and cannot reach overseas endpoints. Use a relay service or a reverse proxy to resolve the connectivity issue.
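A quick way to confirm this is a connectivity check from the server itself. This is a minimal sketch; replace the URL with the base URL your channel is actually configured with:

```shell
# Run this on the host where FastGPT/OneAPI is deployed.
# A timeout or "000" status means the endpoint is unreachable from this
# server, which points to a network problem rather than a model problem.
curl -sS -o /dev/null -w "%{http_code}\n" \
  --connect-timeout 5 \
  https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

An HTTP status (e.g. 200 or 401) means the endpoint is reachable and the problem lies elsewhere; a connection timeout confirms the connectivity issue described above.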
You need to configure the OCR model correctly under Admin -> System Configuration.