# Public Chat API

This API creates chat completions: it processes the provided messages and generates responses. Streaming responses are supported, and the request and response formats are compatible with OpenAI's Chat Completions API.
Before using this API, you need the `{instance_id}` returned in the registration response.

**Base URL:** `https://app.secondme.io/api/chat/{instance_id}`

**Endpoints:**

- `POST /api/chat/{instance_id}`
- `POST /api/chat/{instance_id}/chat/completions`
### Path Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| instance_id | string | Yes | Unique identifier for the model instance, obtained during registration |
### Request Body

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| messages | array | Yes | - | List of messages in the conversation |
| metadata | object | No | null | Additional metadata for the request |
| temperature | float | No | 0.7 | Controls randomness of the response; value between 0 and 1 |
| max_tokens | integer | No | 2000 | Maximum number of tokens to generate |
| stream | boolean | No | true | Whether to stream the response |
### Message Object

Each message in `messages` should contain the following fields:

| Field | Type | Required | Description |
|---|---|---|---|
| role | string | Yes | Role of the message sender: `system`, `user`, or `assistant` |
| content | string | Yes | Content of the message |
### Metadata Object

| Field | Type | Required | Description |
|---|---|---|---|
| enable_l0_retrieval | boolean | No | Whether to enable L0-level retrieval |
| role_id | string | No | Role ID to use for this chat |
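Putting the tables together, a complete request body looks like this (values are illustrative):

```json
{
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, please introduce yourself."}
  ],
  "metadata": {
    "enable_l0_retrieval": false,
    "role_id": "default_role"
  },
  "temperature": 0.7,
  "max_tokens": 2000,
  "stream": true
}
```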
### Streaming Response

When `stream` is `true`, the response is sent as Server-Sent Events. Each event is a `data:` line carrying a `chat.completion.chunk` object, and the stream ends with `data: [DONE]`:

```
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"lpm-registry-model","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"lpm-registry-model","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":" world!"},"finish_reason":null}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"lpm-registry-model","system_fingerprint":null,"choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
```
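Each streamed line can be handled by stripping the `data: ` prefix and reading `choices[0].delta.content`. A minimal helper for a single line (the function name `parse_sse_line` is illustrative, not part of the API):

```python
import json


def parse_sse_line(line: str):
    """Extract the delta content from one SSE line, or return None.

    Returns None for blank lines, the terminal 'data: [DONE]' marker,
    and chunks that carry no content (e.g. the final chunk whose
    finish_reason is 'stop').
    """
    line = line.strip()
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload == "[DONE]":
        return None
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content")
```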
### Example: cURL

```bash
curl -X POST "https://app.secondme.io/api/chat/{instance_id}" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello, please introduce yourself."}
    ],
    "metadata": {
      "enable_l0_retrieval": false,
      "role_id": "default_role"
    },
    "temperature": 0.7,
    "max_tokens": 2000,
    "stream": true
  }'
```
### Example: Python

```python
import http.client
import json

host = "app.secondme.io"
path = "/api/chat/{instance_id}"  # substitute your instance_id
headers = {"Content-Type": "application/json"}
data = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, please introduce yourself."},
    ],
    "metadata": {
        "enable_l0_retrieval": False,
        "role_id": "default_role",
    },
    "temperature": 0.7,
    "max_tokens": 2000,
    "stream": True,
}

# Open the connection and send the POST request
conn = http.client.HTTPSConnection(host)
conn.request("POST", path, body=json.dumps(data), headers=headers)

# Read the streamed response line by line
response = conn.getresponse()
for line in response:
    if not line:
        continue
    decoded_line = line.decode("utf-8").strip()
    if decoded_line == "data: [DONE]":
        break
    if decoded_line.startswith("data: "):
        try:
            chunk = json.loads(decoded_line[len("data: "):])
            content = chunk["choices"][0]["delta"].get("content", "")
            if content:
                print(content, end="", flush=True)
        except json.JSONDecodeError:
            pass

# Close the connection when done
conn.close()
```
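If you need the full reply as a string rather than printed output, the printing loop can be factored into a helper that accumulates the streamed chunks. This is a sketch that works on any iterable of decoded SSE lines:

```python
import json


def collect_stream(lines):
    """Accumulate delta contents from decoded SSE lines into one string."""
    parts = []
    for line in lines:
        line = line.strip()
        if line == "data: [DONE]":
            break
        if line.startswith("data: "):
            try:
                chunk = json.loads(line[len("data: "):])
            except json.JSONDecodeError:
                continue  # skip malformed lines
            content = chunk["choices"][0]["delta"].get("content")
            if content:
                parts.append(content)
    return "".join(parts)
```

In the example above, you would call `collect_stream(line.decode("utf-8") for line in response)` instead of the manual loop.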
### Error Codes

| Status Code | Description |
|---|---|
| 404 | Instance not found |
| 422 | Invalid request parameters |
| 503 | Instance not connected or unavailable |
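Before reading the stream, it is worth checking the HTTP status. A minimal sketch that maps the documented codes to readable messages (the helper name `describe_chat_error` is illustrative):

```python
def describe_chat_error(status: int) -> str:
    """Map the documented error status codes to readable messages."""
    messages = {
        404: "Instance not found",
        422: "Invalid request parameters",
        503: "Instance not connected or unavailable",
    }
    return messages.get(status, f"Unexpected status {status}")
```

In the Python example above, this could be used as `if response.status != 200: raise RuntimeError(describe_chat_error(response.status))`.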