import { Alert } from '@/components/docs/Alert';
FastGPT lets you use your own OpenAI API KEY to quickly call OpenAI APIs. It currently integrates GPT-3.5, GPT-4, and embedding models for building Knowledge Bases. However, for data security reasons, you may not want to send all data to cloud-based LLMs.
So how do you connect a private model to FastGPT? This guide walks through integrating Tsinghua's ChatGLM2 as an example.
ChatGLM2-6B is the second-generation version of the open-source bilingual (Chinese-English) chat model ChatGLM-6B. For details, see the ChatGLM2-6B project page.
<Alert context="warning"> Note: ChatGLM2-6B weights are fully open for academic research. Commercial use requires official written permission. This tutorial only demonstrates one integration method and does not grant any license. </Alert>

According to official data, generating 8192 tokens requires 12.8GB of VRAM at FP16, 8.1GB at int8, and 5.1GB at int4. Quantization slightly affects performance, but not significantly.
Recommended configurations:
| Type | RAM | VRAM | Disk Space | Start Command |
|---|---|---|---|---|
| fp16 | >=16GB | >=16GB | >=25GB | python openai_api.py 16 |
| int8 | >=16GB | >=9GB | >=25GB | python openai_api.py 8 |
| int4 | >=16GB | >=6GB | >=25GB | python openai_api.py 4 |
1. Install the dependencies: `pip install -r requirements.txt`
2. Review the `verify_token` method; it adds a layer of authentication to prevent unauthorized access.
3. Start the server: `python openai_api.py 16`. Choose the number based on the configuration table above.

Wait for the model to download and load. If you encounter errors, try asking GPT for help.
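The `verify_token` step above is, in essence, a bearer-token comparison. Here is a minimal sketch of such a check; the function name mirrors the one mentioned above, but the body is illustrative only, not the project's actual code:

```python
# Illustrative sketch of a bearer-token check like the verify_token step
# described above; not the project's actual implementation.
EXPECTED_TOKEN = "sk-aaabbbcccdddeeefffggghhhiiijjjkkk"  # default token from this guide

def verify_token(authorization_header: str, expected: str = EXPECTED_TOKEN) -> bool:
    """Return True only if the header is exactly 'Bearer <expected>'."""
    if not authorization_header:
        return False
    parts = authorization_header.split(" ", 1)
    return len(parts) == 2 and parts[0] == "Bearer" and parts[1] == expected
```

Any request whose `Authorization` header does not match is rejected, which is what keeps a publicly exposed port from being usable without the key.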
On successful startup, you should see the connection address printed, for example `http://0.0.0.0:6006`.
Image and Port

Available images:

- Docker Hub: `stawky/chatglm2:latest`
- Alibaba Cloud mirror: `registry.cn-hangzhou.aliyuncs.com/fastgpt_docker/chatglm2:latest`

Set the security token (used as the channel key in One API). Default: `sk-aaabbbcccdddeeefffggghhhiiijjjkkk`. You can also set it via the `sk-key` environment variable; refer to the Docker documentation for how to pass environment variables to a container.
Add a channel for chatglm2 with the following parameters:
Here, chatglm2 is used as the language model.
curl example:
```bash
curl --location --request POST 'https://domain/v1/chat/completions' \
--header 'Authorization: Bearer sk-aaabbbcccdddeeefffggghhhiiijjjkkk' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "chatglm2",
    "messages": [{"role": "user", "content": "Hello!"}]
}'
```
Set Authorization to sk-aaabbbcccdddeeefffggghhhiiijjjkkk. The model field should match the custom model name you entered in One API.
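The same request can be assembled in Python using only the standard library. This sketch just builds the request object so you can verify the headers and body; `https://domain` is a placeholder for your One API address, as in the curl example:

```python
import json
import urllib.request

# Placeholder values from this guide; adjust to your deployment.
BASE_URL = "https://domain"  # your One API address
API_KEY = "sk-aaabbbcccdddeeefffggghhhiiijjjkkk"

payload = {
    "model": "chatglm2",  # must match the custom model name entered in One API
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    BASE_URL + "/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer " + API_KEY,
        "Content-Type": "application/json",
    },
    method="POST",
)

# To actually send it (requires a reachable One API instance):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```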
Edit the config.json file and add chatglm2 to `llmModels`:

```json
"llmModels": [
  // Existing models
  {
    "model": "chatglm2",
    "name": "chatglm2",
    "maxContext": 4000,
    "maxResponse": 4000,
    "quoteMaxToken": 2000,
    "maxTemperature": 1,
    "vision": false,
    "defaultSystemChatPrompt": ""
  }
]
```
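Before restarting FastGPT, it is worth sanity-checking that the token budgets in the entry are self-consistent (for instance, `quoteMaxToken` should not exceed `maxContext`). A small sketch of such a check; note the `//` comment from the snippet above is omitted here because strict JSON parsers reject comments:

```python
import json

# The llmModels entry from above, as strict JSON (no comments).
entry = json.loads("""
{
    "model": "chatglm2",
    "name": "chatglm2",
    "maxContext": 4000,
    "maxResponse": 4000,
    "quoteMaxToken": 2000,
    "maxTemperature": 1,
    "vision": false,
    "defaultSystemChatPrompt": ""
}
""")

# Basic consistency checks on the token budgets.
assert entry["quoteMaxToken"] <= entry["maxContext"], "quote budget exceeds context"
assert entry["maxResponse"] <= entry["maxContext"], "response budget exceeds context"
```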
Simply select chatglm2 as the model.