
## Rerank Role



A "reranking model" is trained to take two pieces of text (often a user question and a document) and return a relevance score between 0 and 1 that estimates how useful the document will be in answering the question. Rerankers are typically much smaller than LLMs, and are extremely fast and cheap in comparison.

In Continue, rerankers are designated with the `rerank` role and are used by codebase awareness to select the most relevant code snippets after vector search.
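The two-stage flow (a cheap vector search to gather candidates, then a reranker to score each one against the query) can be sketched in Python. Both scoring functions below are hypothetical stand-ins for the real embedding and reranking models, used only to show where each stage fits:

```python
# Illustrative two-stage retrieval: candidate search, then reranking.
# The similarity/score functions are stand-ins, NOT real models.

def vector_search(query: str, snippets: list[str], top_k: int) -> list[str]:
    """Stage 1: cheap similarity search returns top_k candidates."""
    # Stand-in similarity: count of words shared between query and snippet.
    words = set(query.lower().split())
    return sorted(
        snippets,
        key=lambda s: len(words & set(s.lower().split())),
        reverse=True,
    )[:top_k]

def rerank(query: str, candidates: list[str]) -> list[tuple[str, float]]:
    """Stage 2: a reranker scores each (query, document) pair in [0, 1]."""
    # Stand-in for a model like rerank-2: fraction of query words covered.
    words = set(query.lower().split())
    scored = [
        (c, len(words & set(c.lower().split())) / len(words)) for c in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

snippets = [
    "Parses the YAML configuration file",
    "Reranks retrieved snippets by relevance to the query",
    "Opens an HTTP connection",
]
candidates = vector_search("rerank snippets by query relevance", snippets, top_k=2)
best, score = rerank("rerank snippets by query relevance", candidates)[0]
```

The reranker only ever sees the small candidate set from stage 1, which is why it can afford to score every (query, document) pair individually.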

<Info> For a comparison of all reranking models including open and closed options, see our [comprehensive model recommendations](/customize/models#recommended-models). </Info>

If you are able to use any model, we recommend `rerank-2` by Voyage AI, which is listed below along with the other reranker options.

### Voyage AI

Voyage AI offers the best reranking model for code with their `rerank-2` model. After obtaining a Voyage AI API key, you can configure a reranker as follows:

<Tabs>
<Tab title="Hub">
```yaml title="config.yaml"
name: My Config
version: 0.0.1
schema: v1

models:
  - uses: voyageai/rerank-2
```
</Tab>
<Tab title="YAML">
```yaml title="config.yaml"
name: My Config
version: 0.0.1
schema: v1

models:
  - name: My Voyage Reranker
    provider: voyage
    apiKey: <YOUR_VOYAGE_API_KEY>
    model: rerank-2
    roles:
      - rerank
```
</Tab>
</Tabs>

### Cohere

See Cohere's rerank documentation for more details.

<Tabs>
{/* <Tab title="Hub">
[Cohere Reranker English v3](https://continue.dev/)
</Tab> */}
<Tab title="YAML">
```yaml title="config.yaml"
name: My Config
version: 0.0.1
schema: v1

models:
  - name: Cohere Reranker
    provider: cohere
    model: rerank-english-v3.0
    apiKey: <YOUR_COHERE_API_KEY>
    roles:
      - rerank
```
</Tab>
</Tabs>

### LLM

If you only have access to a single LLM, you can use it as a reranker. This is discouraged unless truly necessary, because it is much more expensive and still less accurate than any of the above models trained specifically for the task. Note that this will not work with a local model, for example with Ollama, because too many parallel requests need to be made.

<Tabs>
{/* <Tab title="Hub">
[GPT-4o LLM Reranker Block](https://continue.dev/)
</Tab> */}
<Tab title="YAML">
```yaml title="config.yaml"
name: My Config
version: 0.0.1
schema: v1

models:
  - name: LLM Reranker
    provider: openai
    model: gpt-4o
    roles:
      - rerank
```
</Tab>
</Tabs>
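To see why this approach is expensive and parallel-heavy, here is a minimal sketch of using a chat LLM as a reranker. The prompt wording and the `stub_llm` function are hypothetical stand-ins; a real setup would issue one chat-completion request per document:

```python
# Sketch of LLM-as-reranker: one scoring request per candidate document.
# stub_llm is a stand-in for a real chat API call and its prompt is illustrative.
from concurrent.futures import ThreadPoolExecutor

PROMPT = (
    "On a scale from 0 to 1, how useful is this document for answering the question?\n"
    "Question: {query}\nDocument: {doc}\nAnswer with a single number."
)

def stub_llm(prompt: str) -> str:
    # Stand-in model: pretend documents mentioning "rerank" are highly relevant.
    return "0.9" if "rerank" in prompt.lower() else "0.1"

def llm_rerank(query: str, docs: list[str]) -> list[tuple[str, float]]:
    # One request per document, issued in parallel -- this is what overwhelms
    # local single-stream servers such as Ollama.
    with ThreadPoolExecutor() as pool:
        scores = pool.map(
            lambda d: float(stub_llm(PROMPT.format(query=query, doc=d))), docs
        )
    return sorted(zip(docs, scores), key=lambda pair: pair[1], reverse=True)

ranked = llm_rerank(
    "how to sort results by relevance",
    ["notes on caching", "reranking overview"],
)
```

Each candidate costs a full LLM call, whereas a dedicated reranker scores all pairs with one small, purpose-trained model.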

### Text Embeddings Inference

Hugging Face Text Embeddings Inference enables you to host your own reranker endpoint. You can configure your reranker as follows:

<Tabs>
{/* <Tab title="Hub">
[HuggingFace TEI Reranker block](https://continue.dev/)
</Tab> */}
<Tab title="YAML">
```yaml title="config.yaml"
name: My Config
version: 0.0.1
schema: v1

models:
  - name: Huggingface-tei Reranker
    provider: huggingface-tei
    model: tei
    apiBase: http://localhost:8080
    apiKey: <YOUR_TEI_API_KEY>
    roles:
      - rerank
```
</Tab>
</Tabs>
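For reference, a self-hosted TEI reranker is typically called over HTTP. The request/response shapes below follow TEI's `/rerank` API as I understand it (a JSON body with `query` and `texts`, answered by a list of `{"index", "score"}` entries); verify them against your TEI version. The sample response here is hypothetical, used only to show how scores map back to snippets:

```python
# Sketch of the TEI /rerank request body and of mapping its response
# back onto the original snippets. No network call is made here.
import json

def build_rerank_request(query: str, texts: list[str]) -> bytes:
    # POST this body to e.g. http://localhost:8080/rerank
    # with the header "Content-Type: application/json".
    return json.dumps({"query": query, "texts": texts}).encode()

def apply_rerank_response(texts: list[str], response: list[dict]) -> list[str]:
    # Each response entry points at an input snippet by index; sort by score.
    ordered = sorted(response, key=lambda item: item["score"], reverse=True)
    return [texts[item["index"]] for item in ordered]

texts = ["snippet a", "snippet b", "snippet c"]
# Hypothetical response, for illustration only:
sample_response = [
    {"index": 2, "score": 0.91},
    {"index": 0, "score": 0.40},
    {"index": 1, "score": 0.05},
]
ranked = apply_rerank_response(texts, sample_response)
```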