FastGPT uses OpenAI's embedding model by default. For private deployment, you can replace it with the M3E embedding model. M3E is a lightweight model with low resource requirements -- it can even run on CPU. The following tutorial is based on an image provided by community contributor "睡大觉".
- Image: `stawky/m3e-large-api:latest`
- China mirror: `registry.cn-hangzhou.aliyuncs.com/fastgpt_docker/m3e-large-api:latest`
- Port: `6008`
Environment variables:

- `sk-key`: the security token for the service (it doubles as the channel key in OneAPI). Defaults to `sk-aaabbbcccdddeeefffggghhhiiijjjkkk`; override it by passing the `sk-key` environment variable to the container. Refer to the Docker documentation for how to pass environment variables.
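For example, a minimal Docker run, assuming the container listens on port 6008 internally (the container name is arbitrary; swap in the China mirror image if needed):

```bash
# Run the M3E embedding service and expose it on host port 6008.
# sk-key overrides the default security token shown above.
docker run -d \
  --name m3e-large-api \
  -p 6008:6008 \
  -e sk-key=sk-aaabbbcccdddeeefffggghhhiiijjjkkk \
  stawky/m3e-large-api:latest
```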
In OneAPI, add a channel for the M3E service: point it at the service address, use the security token above as the channel key, and enter `m3e` as a custom model name.
curl example:

```bash
curl --location --request POST 'https://domain/v1/embeddings' \
--header 'Authorization: Bearer xxxx' \
--header 'Content-Type: application/json' \
--data-raw '{
  "model": "m3e",
  "input": ["What is laf"]
}'
```
Set the Authorization header to your sk-key value. The model field must match the custom model name you entered in OneAPI.
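A successful call should return an OpenAI-compatible embeddings response along these lines (the vector values and token counts below are illustrative, and the real embedding has many more dimensions):

```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.0123, -0.0456, 0.0789]
    }
  ],
  "model": "m3e",
  "usage": { "prompt_tokens": 5, "total_tokens": 5 }
}
```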
Edit the `config.json` file and add the M3E model to `vectorModels`:

```json
"vectorModels": [
  {
    "model": "text-embedding-ada-002",
    "name": "Embedding-2",
    "price": 0.2,
    "defaultToken": 500,
    "maxToken": 3000
  },
  {
    "model": "m3e",
    "name": "M3E (for testing)",
    "price": 0.1,
    "defaultToken": 500,
    "maxToken": 1800
  }
]
```
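`config.json` is read at startup, so restart FastGPT after editing it. A sketch assuming a docker-compose deployment where the FastGPT service is named `fastgpt` (adjust the service name to your setup):

```bash
# Restart FastGPT so it picks up the new vectorModels entry.
docker-compose restart fastgpt
```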
Select the M3E model when creating a Knowledge Base.
Note: once selected, the embedding model for the Knowledge Base cannot be changed.
Next steps:

1. Import data into the Knowledge Base.
2. Run a search test to verify retrieval.
3. Bind the Knowledge Base to an app.
Note: an app can only bind Knowledge Bases that use the same embedding model -- cross-model binding is not supported. You may also need to adjust the similarity threshold, as different embedding models produce different similarity (distance) scores. Test and tune accordingly.