Token Embedding Usages

Summary

  • Model Usage: Token classification models
  • Pooling Tasks: token_embed
  • Offline APIs:
    • LLM.encode(..., pooling_task="token_embed")
  • Online APIs:
    • Pooling API (/pooling)

The difference between the (sequence) embedding task and the token embedding task is that (sequence) embedding outputs one embedding for each sequence, while token embedding outputs an embedding for each token.

Many embedding models support both (sequence) embedding and token embedding. For further details on (sequence) embedding, please refer to this page.
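
A rough sketch of the difference, assuming a checkpoint that exposes both tasks (the model name below is only an example):

```python
from vllm import LLM

# Illustrative model; any pooling model that supports both tasks will do.
llm = LLM(model="intfloat/e5-small", runner="pooling")

(seq_out,) = llm.encode("Hello, my name is", pooling_task="embed")
(tok_out,) = llm.encode("Hello, my name is", pooling_task="token_embed")

print(seq_out.outputs.data.shape)  # (hidden_size,): one vector for the whole sequence
print(tok_out.outputs.data.shape)  # (num_tokens, hidden_size): one vector per token
```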

!!! note

    Pooling multitask support is deprecated and will be removed in v0.20. When the default pooling task (`embed`) is not
    what you want, you need to manually specify it via `PoolerConfig(task="token_embed")` offline or
    `--pooler-config.task token_embed` online.
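
For example, a minimal offline sketch of pinning the task (the `pooler_config` engine argument and the model name here are illustrative assumptions):

```python
from vllm import LLM
from vllm.config import PoolerConfig

# Offline: select the token_embed task explicitly via the pooler config.
llm = LLM(
    model="answerdotai/answerai-colbert-small-v1",
    runner="pooling",
    pooler_config=PoolerConfig(task="token_embed"),
)

# Online equivalent (per the note above):
#   vllm serve answerdotai/answerai-colbert-small-v1 --runner pooling --pooler-config.task token_embed
```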

Typical Use Cases

Multi-Vector Retrieval

For implementation examples, see:

Offline: examples/pooling/token_embed/multi_vector_retrieval_offline.py

Online: examples/pooling/token_embed/multi_vector_retrieval_online.py

Late interaction

Similarity scores can be computed using late interaction between two input prompts via the score API. For more information, see Score API.
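
To make the idea concrete, here is a rough sketch of late-interaction (MaxSim) scoring computed directly from token embeddings; the score API does this for you, and the exact scoring used by a given model (e.g. normalization) may differ:

```python
import torch

from vllm import LLM

llm = LLM(model="answerdotai/answerai-colbert-small-v1", runner="pooling")

(q_out,) = llm.encode("What is the capital of France?", pooling_task="token_embed")
(d_out,) = llm.encode("The capital of France is Paris.", pooling_task="token_embed")

q = torch.as_tensor(q_out.outputs.data)  # (num_query_tokens, dim)
d = torch.as_tensor(d_out.outputs.data)  # (num_doc_tokens, dim)

# MaxSim: for each query token, take its best-matching document token, then sum.
score = (q @ d.T).max(dim=-1).values.sum()
print(f"Late-interaction score: {score.item():.4f}")
```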

Extract last hidden states

Models of any architecture can be converted into embedding models using `--convert embed`. Token embedding can then be used to extract the last hidden states from these models.
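
A minimal offline sketch (the model name and the `convert="embed"` engine argument mirroring the `--convert embed` CLI flag are assumptions):

```python
from vllm import LLM

# Convert a generative model into an embedding model, then read out
# the last hidden state of every input token.
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct", runner="pooling", convert="embed")
(output,) = llm.encode("Hello, my name is", pooling_task="token_embed")

hidden_states = output.outputs.data  # (num_tokens, hidden_size)
print(hidden_states.shape)
```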

Supported Models

--8<-- [start:supported-token-embed-models]

Text-only Models

| Architecture | Models | Example HF Models | LoRA | PP |
|--------------|--------|-------------------|------|----|
| `ColBERTLfm2Model` | LFM2 | `LiquidAI/LFM2-ColBERT-350M` | | |
| `ColBERTModernBertModel` | ModernBERT | `lightonai/GTE-ModernColBERT-v1` | | |
| `ColBERTJinaRobertaModel` | Jina XLM-RoBERTa | `jinaai/jina-colbert-v2` | | |
| `HF_ColBERT` | BERT | `answerdotai/answerai-colbert-small-v1`, `colbert-ir/colbertv2.0` | | |
| `*Model`<sup>C</sup>, `*ForCausalLM`<sup>C</sup>, etc. | Generative models | N/A | \* | \* |

Multimodal Models

!!! note

    For more information about multimodal model inputs, see this page.

| Architecture | Models | Inputs | Example HF Models | LoRA | PP |
|--------------|--------|--------|-------------------|------|----|
| `ColModernVBertForRetrieval` | ColModernVBERT | T / I | `ModernVBERT/colmodernvbert-merged` | | |
| `ColPaliForRetrieval` | ColPali | T / I | `vidore/colpali-v1.3-hf` | | |
| `ColQwen3` | Qwen3-VL | T / I | `TomoroAI/tomoro-colqwen3-embed-4b`, `TomoroAI/tomoro-colqwen3-embed-8b` | | |
| `ColQwen3_5` | ColQwen3.5 | T + I + V | `athrael-soju/colqwen3.5-4.5B-v3` | | |
| `OpsColQwen3Model` | Qwen3-VL | T / I | `OpenSearch-AI/Ops-Colqwen3-4B`, `OpenSearch-AI/Ops-Colqwen3-8B` | | |
| `Qwen3VLNemotronEmbedModel` | Qwen3-VL | T / I | `nvidia/nemotron-colembed-vl-4b-v2`, `nvidia/nemotron-colembed-vl-8b-v2` | ✅︎ | ✅︎ |
| `*ForConditionalGeneration`<sup>C</sup>, `*ForCausalLM`<sup>C</sup>, etc. | Generative models | \* | N/A | \* | \* |

<sup>C</sup> Automatically converted into an embedding model via `--convert embed`. (details)
\* Feature support is the same as that of the original model.

If your model is not in the above list, we will try to automatically convert the model using [as_embedding_model][vllm.model_executor.models.adapters.as_embedding_model].

Special models

| Architecture | Models | Example HF Models | LoRA | PP |
|--------------|--------|-------------------|------|----|
| `JinaForRanking` | Qwen3-based | `jinaai/jina-reranker-v3` | | |

`jina-reranker-v3` is a listwise document reranker with a novel "last but not late" interaction architecture. For more information, see examples/pooling/token_embed/jina_reranker_v3_offline.py.

--8<-- [end:supported-token-embed-models]

Offline Inference

Pooling Parameters

The following [pooling parameters][vllm.PoolingParams] are supported.

```python
--8<-- "vllm/pooling_params.py:common-pooling-params"
--8<-- "vllm/pooling_params.py:embed-pooling-params"
```

LLM.encode

The [encode][vllm.LLM.encode] method is available to all pooling models in vLLM.

Set `pooling_task="token_embed"` when using `LLM.encode` for token embedding models:

```python
from vllm import LLM

llm = LLM(model="answerdotai/answerai-colbert-small-v1", runner="pooling")
(output,) = llm.encode("Hello, my name is", pooling_task="token_embed")

data = output.outputs.data
print(f"Data: {data!r}")
```

LLM.score

The [score][vllm.LLM.score] method outputs similarity scores between sentence pairs.

All models that support the token embedding task also support the score API, which computes similarity scores via late interaction between two input prompts.

```python
from vllm import LLM

llm = LLM(model="answerdotai/answerai-colbert-small-v1", runner="pooling")
(output,) = llm.score(
    "What is the capital of France?",
    "The capital of Brazil is Brasilia.",
)

score = output.outputs.score
print(f"Score: {score}")
```

Online Serving

Please refer to the Pooling API (/pooling) and use `"task": "token_embed"`.
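
A minimal client sketch, assuming a server started with something like `vllm serve answerdotai/answerai-colbert-small-v1 --runner pooling` and listening on the default port:

```python
import requests

response = requests.post(
    "http://localhost:8000/pooling",
    json={
        "model": "answerdotai/answerai-colbert-small-v1",
        "input": "Hello, my name is",
        "task": "token_embed",
    },
)
print(response.json())
```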

More examples

More examples can be found here: examples/pooling/token_embed

Supported Features

Token embedding features should be consistent with (sequence) embedding. For more information, see this page.