Take advantage of super fast search to find relevant notes and documents from your Second Brain.
From Emacs, you can search by typing `M-x khoj <user-query>`.

A bi-encoder model is used to create meaning vectors (aka vector embeddings) of your documents and search queries.
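For intuition, here is a minimal sketch of how a bi-encoder embeds documents and queries into the same vector space and ranks results by semantic similarity. It uses the `sentence-transformers` library; the model name, sample documents, and ranking logic are illustrative assumptions, not Khoj's exact pipeline:

```python
# Minimal sketch: bi-encoder search with sentence-transformers.
# Illustrative only; the model and ranking are assumptions, not Khoj's exact pipeline.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # a small, general-purpose bi-encoder

documents = [
    "Grocery list: eggs, milk, bread",
    "Notes from the quarterly planning meeting",
    "Recipe for sourdough starter",
]

# The bi-encoder embeds documents and queries independently into the same vector space.
doc_embeddings = model.encode(documents, convert_to_tensor=True)
query_embedding = model.encode("what should I buy at the store?", convert_to_tensor=True)

# Cosine similarity between the query and document vectors ranks results by meaning.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
for doc, score in sorted(zip(documents, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {doc}")
```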
You are not required to configure the search model config when self-hosting. Khoj sets up a decent default local search model config for general use.
You may want to configure this if you need better multi-lingual search, want to experiment with different, newer models, or find the default models do not work for your use case.
You can use bi-encoder models downloaded locally from HuggingFace or served via the HuggingFace Inference API, OpenAI API, Azure OpenAI API, or any OpenAI compatible API like Ollama, LiteLLM, etc. Follow the steps below to configure your search model:
- Set the `bi_encoder` field to the name of the bi-encoder model supported locally or via the API you configure.
- Set the `Embeddings inference endpoint api key` field to your OpenAI API key and the `Embeddings inference endpoint type` field to `OpenAI` to use an OpenAI embedding model.
- Set the `Embeddings inference endpoint` field to your Azure OpenAI or OpenAI compatible API URL to use the model via those APIs.
- Make sure the search model config you want to use has its `name` field set to `default`[^1].

:::info
You will need to re-index all your documents if you want to use a different bi-encoder model.
:::
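To sanity-check the endpoint Khoj will call when you point it at an OpenAI compatible API, you can query its embeddings endpoint directly. A minimal sketch with the official `openai` Python client; the base URL, API key, and model name below are placeholders for whatever your server exposes:

```python
# Minimal sketch: fetch embeddings from an OpenAI compatible API.
# The base_url, api_key, and model name are placeholders; substitute your own values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # e.g. an Ollama server's OpenAI-compatible endpoint
    api_key="sk-placeholder",              # the value you'd put in the api key field
)

response = client.embeddings.create(
    model="nomic-embed-text",              # the value you'd put in the bi_encoder field
    input=["a note about sourdough starters", "what should I bake this weekend?"],
)

for item in response.data:
    print(len(item.embedding))  # dimensionality of each embedding vector
```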
:::info
You may need to tune the `Bi encoder confidence threshold` field for each bi-encoder to get an appropriate number of documents for chat with your knowledge base.
Confidence here is a normalized measure of the semantic distance between your query and your documents. The confidence threshold limits the documents returned to chat to those that fall within the distance specified in this field. It can take values between 0.0 (exact overlap) and 1.0 (no overlap in meaning).
:::
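To see how such a threshold behaves, here is a small sketch that filters documents by a normalized cosine distance. The mapping from similarity to the 0.0–1.0 distance scale is an assumption for illustration; Khoj's internal scoring may differ in detail:

```python
# Minimal sketch: filter documents by a confidence (distance) threshold.
# Illustrative only; the distance mapping is an assumption, not Khoj's exact scoring.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
threshold = 0.7  # keep documents within this normalized distance of the query

documents = ["sourdough starter recipe", "quarterly sales figures", "bread baking tips"]
doc_embeddings = model.encode(documents, convert_to_tensor=True)
query_embedding = model.encode("how do I bake bread?", convert_to_tensor=True)

# Convert cosine similarity (1.0 = identical meaning) into a distance,
# clipped here to the [0.0, 1.0] range the threshold field expects.
similarities = util.cos_sim(query_embedding, doc_embeddings)[0]
for doc, sim in zip(documents, similarities.tolist()):
    distance = min(max(1.0 - sim, 0.0), 1.0)
    kept = "kept" if distance <= threshold else "dropped"
    print(f"{distance:.3f}  {kept:7s}  {doc}")
```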
[^1]: Khoj uses the first search model config named `default` it finds on startup as the search model config for that session.