# Attention Backend Feature Support

This document is auto-generated by `tools/pre_commit/generate_attention_backend_docs.py`. It shows the feature support for each registered attention backend based on the checks in `AttentionBackend.validate_configuration()`.

Do not edit this file manually. Run the following command to regenerate it:

```bash
python tools/pre_commit/generate_attention_backend_docs.py
```

## Setting the Attention Backend

### Command Line

There are two ways to specify the backend from the command line:

**Option 1: Using `--attention-backend` (simple)**

```bash
vllm serve <model> --attention-backend FLASH_ATTN
```

**Option 2: Using `--attention-config.backend` / `-ac.backend` (structured config)**

```bash
# Dot notation
vllm serve <model> --attention-config.backend FLASH_ATTN
vllm serve <model> -ac.backend FLASH_ATTN

# JSON format
vllm serve <model> --attention-config '{"backend": "FLASH_ATTN"}'
vllm serve <model> -ac '{"backend": "FLASH_ATTN"}'
```

Note: `--attention-backend` and `--attention-config.backend` are mutually exclusive. Use one or the other, not both.

### Python API

Use `AttentionConfig` with the `LLM` class:

```python
from vllm import LLM
from vllm.config import AttentionConfig
from vllm.v1.attention.backends.registry import AttentionBackendEnum

# Method 1: Using AttentionConfig with enum
llm = LLM(
    model="Qwen/Qwen3-0.6B",
    attention_config=AttentionConfig(backend=AttentionBackendEnum.FLASH_ATTN),
)

# Method 2: Using attention_backend parameter with string
llm = LLM(
    model="Qwen/Qwen3-0.6B",
    attention_backend="FLASH_ATTN",
)
```

## Backend Selection Behavior

### Manual Selection

When you explicitly set a backend via `--attention-backend` or `AttentionConfig`:

  1. The backend is validated against your configuration (model dtype, head size, compute capability, etc.)
  2. If the backend doesn't support your configuration, an error is raised with the specific reason
  3. If valid, the backend is used

Example error when selecting an incompatible backend:

```text
ValueError: Selected backend FLASHMLA is not valid for this configuration.
Reason: ['compute capability not supported']
```
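
In Python, the same check surfaces as an exception at engine construction time. A minimal sketch, assuming a configuration for which FLASHMLA fails validation (the model name and backend below are purely illustrative):

```python
from vllm import LLM

try:
    # Explicitly requesting a backend that fails validation for this
    # configuration is expected to raise a ValueError with the reason(s).
    llm = LLM(model="Qwen/Qwen3-0.6B", attention_backend="FLASHMLA")
except ValueError as err:
    print(f"Backend rejected: {err}")
```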

### Automatic Selection

When no backend is specified (the default):

  1. vLLM iterates through backends in priority order (see tables below)
  2. Each backend is validated against your configuration
  3. The first compatible backend is selected
  4. If no backend is compatible, an error is raised listing all backends and their incompatibility reasons (see the sketch below)
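
The loop can be pictured roughly as follows. This is a minimal sketch of the behaviour described above, not vLLM's actual selection code; the `candidate_backends` argument and the list-of-reasons return value of `validate_configuration()` are assumptions based on the error format shown earlier:

```python
def select_backend(candidate_backends, config):
    """Pick the first backend whose validation passes, in priority order."""
    failures = {}
    for name, backend in candidate_backends:  # already sorted by priority (1 = highest)
        reasons = backend.validate_configuration(config)  # assumed to return [] when compatible
        if not reasons:
            return backend  # first compatible backend wins
        failures[name] = reasons
    # Nothing fits: report every backend together with its incompatibility reasons.
    raise ValueError(f"No compatible attention backend found: {failures}")
```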

## Backend Priority (CUDA)

When no backend is explicitly selected, vLLM chooses the first compatible backend from these priority-ordered lists.

Priority 1 is the highest (tried first).

### Standard Attention (MHA, MQA, GQA)

**Blackwell (SM 10.x):**

| Priority | Backend |
|----------|---------|
| 1 | FLASHINFER |
| 2 | FLASH_ATTN |
| 3 | TRITON_ATTN |
| 4 | FLEX_ATTENTION |
| 5 | TURBOQUANT |

**Ampere/Hopper (SM 8.x-9.x):**

| Priority | Backend |
|----------|---------|
| 1 | FLASH_ATTN |
| 2 | FLASHINFER |
| 3 | TRITON_ATTN |
| 4 | FLEX_ATTENTION |
| 5 | TURBOQUANT |

### MLA Attention (DeepSeek-style)

**Blackwell (SM 10.x):**

| Priority | Backend |
|----------|---------|
| 1 | FLASHINFER_MLA |
| 2 | CUTLASS_MLA |
| 3 | FLASH_ATTN_MLA |
| 4 | FLASHMLA |
| 5 | TRITON_MLA |
| 6 | FLASHINFER_MLA_SPARSE* |
| 7 | FLASHMLA_SPARSE |

**Ampere/Hopper (SM 8.x-9.x):**

| Priority | Backend |
|----------|---------|
| 1 | FLASH_ATTN_MLA |
| 2 | FLASHMLA |
| 3 | FLASHINFER_MLA |
| 4 | TRITON_MLA |
| 5 | FLASHMLA_SPARSE |

\* For sparse MLA, an FP8 KV cache always prefers FLASHINFER_MLA_SPARSE. With a BF16 KV cache, FLASHINFER_MLA_SPARSE is preferred for low query-head counts (<= 16), while FLASHMLA_SPARSE is preferred otherwise.
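
Stated as code, the preference rule reads roughly like this (an illustrative restatement only; the function name and its string arguments are not part of vLLM's API):

```python
def preferred_sparse_mla_backend(kv_cache_dtype: str, num_query_heads: int) -> str:
    """Restate the sparse-MLA preference rule above; not vLLM's actual selection code."""
    if kv_cache_dtype.startswith("fp8"):
        # An FP8 KV cache always prefers the FlashInfer sparse backend.
        return "FLASHINFER_MLA_SPARSE"
    # BF16 KV cache: FlashInfer for low query-head counts, FlashMLA otherwise.
    return "FLASHINFER_MLA_SPARSE" if num_query_heads <= 16 else "FLASHMLA_SPARSE"
```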

Note: ROCm and CPU platforms have their own selection logic. See the platform-specific documentation for details.

## Legend

| Column | Description |
|--------|-------------|
| Dtypes | Supported model data types (fp16, bf16, fp32) |
| KV Dtypes | Supported KV cache data types (auto, fp8, fp8_e4m3, etc.) |
| Block Sizes | Supported KV cache block sizes (%N means multiples of N) |
| Head Sizes | Supported attention head sizes |
| Sink | Attention sink support (for StreamingLLM) |
| Sparse | Sparse attention support (MLA only) |
| MM Prefix | Multimodal prefix full attention support |
| DCP | Decode Context Parallelism support (`--decode-context-parallel-size`) |
| Attention Types | Supported attention patterns (Decoder, Encoder, Enc-Dec) |
| Compute Cap. | Required CUDA compute capability (N/A for non-CUDA backends) |

Symbols: ✅ = Supported, ❌ = Not supported

## Standard Attention (MHA, MQA, GQA) Backends

| Backend | Version | Dtypes | KV Dtypes | Block Sizes | Head Sizes | Sink | MM Prefix | DCP | Attention Types | Compute Cap. |
|---------|---------|--------|-----------|-------------|------------|------|-----------|-----|-----------------|--------------|
| CPU_ATTN | | fp16, bf16, fp32 | auto | Any | 32, 64, 80, 96, 112, 128, 160, 192, 224, 256, 512 | | | | All | N/A |
| FLASHINFER | Native† | fp16, bf16 | auto, float16, bfloat16, fp8, fp8_e4m3, fp8_e5m2 | 16, 32, 64 | 64, 128, 256 | | | | Decoder | 7.x-9.x |
| FLASHINFER | TRTLLM† | fp16, bf16 | auto, float16, bfloat16, fp8, fp8_e4m3, fp8_e5m2 | 16, 32, 64 | 64, 128, 256 | | | | Decoder | 10.x |
| FLASH_ATTN | FA2* | fp16, bf16 | auto, float16, bfloat16 | %16 | Any | | | | All | ≥8.0 |
| FLASH_ATTN | FA3* | fp16, bf16 | auto, float16, bfloat16, fp8, fp8_e4m3, fp8_e5m2 | %16 | Any | | | | All | 9.x |
| FLASH_ATTN | FA4* | fp16, bf16 | auto, float16, bfloat16 | %16 | Any | | | | All | ≥10.0 |
| FLASH_ATTN_DIFFKV | | fp16, bf16 | auto | Any | Any | | | | Decoder | Any |
| FLEX_ATTENTION | | fp16, bf16, fp32 | auto, float16, bfloat16 | Any | Any | | | | Decoder, Encoder Only | Any |
| ROCM_AITER_FA | | fp16, bf16 | auto, float16, bfloat16, fp8, fp8_e4m3, fp8_e5m2 | 16, 32 | 64, 128, 256 | | | | Decoder | N/A |
| ROCM_AITER_UNIFIED_ATTN | | fp16, bf16 | auto | %16 | Any | | | | All | N/A |
| ROCM_ATTN | | fp16, bf16, fp32 | auto, float16, bfloat16, fp8, fp8_e4m3, fp8_e5m2 | %16 | 32, 64, 80, 96, 128, 160, 192, 224, 256 | | | | Decoder, Encoder, Encoder Only | N/A |
| TREE_ATTN | | fp16, bf16 | auto, float16, bfloat16 | %16 | 32, 64, 96, 128, 160, 192, 224, 256 | | | | Decoder | Any |
| TRITON_ATTN | | fp16, bf16, fp32 | auto, float16, bfloat16, fp8, fp8_e4m3, fp8_e5m2, int8_per_token_head, fp8_per_token_head | %16 | Any | | | | All | Any |
| TURBOQUANT | | fp16, bf16 | turboquant_k8v4, turboquant_4bit_nc, turboquant_k3v4_nc, turboquant_3bit_nc | 16, 32, 64, 128 | Any | | | | Decoder | Any |

† FlashInfer uses TRTLLM attention on Blackwell (SM100), which supports sinks. Disable it via `--attention-config.use_trtllm_attention=0`.

\* Specify the FlashAttention version via `--attention-config.flash_attn_version=2`, `3`, or `4`. The default is FA4 on SM100+ (Blackwell), FA3 on SM90 (Hopper), and FA2 otherwise.
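
For example, using the dot-notation syntax shown earlier (the flag spellings follow the footnotes above and the `-ac` alias; values are illustrative):

```bash
# Force FlashAttention 3 for the FLASH_ATTN backend
vllm serve <model> -ac.backend FLASH_ATTN -ac.flash_attn_version=3

# Keep FLASHINFER but opt out of the TRT-LLM attention path on Blackwell
vllm serve <model> -ac.backend FLASHINFER -ac.use_trtllm_attention=0
```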

## MLA (Multi-head Latent Attention) Backends

MLA uses separate backends for prefill and decode phases.

### Prefill Backends

The prefill backend is selected at runtime based on hardware and configuration.

| Backend | Description | Compute Cap. | Enable | Disable | Notes |
|---------|-------------|--------------|--------|---------|-------|
| TRT-LLM Ragged‡ | TensorRT-LLM ragged attention | 10.x | Default on SM100 | `-ac.use_trtllm_ragged_deepseek_prefill=0` | DeepSeek R1 dims only |
| FlashInfer | FlashInfer CUTLASS backend | 10.x | `-ac.disable_flashinfer_prefill=0` | `-ac.disable_flashinfer_prefill=1` | DeepSeek R1 dims only |
| cuDNN | cuDNN-based attention | 10.x | `-ac.use_cudnn_prefill=1` | `-ac.use_cudnn_prefill=0` | |
| FlashAttention | FlashAttention varlen (FA2/FA3) | Any | Default fallback | Use other backends | FA3 on SM90, FA2 otherwise |

‡ TRT-LLM Ragged is the default on Blackwell (SM100); on other GPUs, FlashAttention is used as the default.
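
For example, using the `-ac` dot-notation flags from the table above (the model name is only illustrative; any MLA model applies):

```bash
# Opt in to the cuDNN prefill path
vllm serve deepseek-ai/DeepSeek-R1 -ac.use_cudnn_prefill=1

# Fall back from TRT-LLM Ragged to the FlashAttention prefill on Blackwell
vllm serve deepseek-ai/DeepSeek-R1 -ac.use_trtllm_ragged_deepseek_prefill=0
```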

### Decode Backends

| Backend | Dtypes | KV Dtypes | Block Sizes | Head Sizes | Sink | Sparse | MM Prefix | DCP | Attention Types | Compute Cap. |
|---------|--------|-----------|-------------|------------|------|--------|-----------|-----|-----------------|--------------|
| CUTLASS_MLA | fp16, bf16 | auto, float16, bfloat16, fp8, fp8_e4m3 | 128 | Any | | | | | Decoder | 10.x |
| FLASHINFER_MLA | fp16, bf16 | auto, float16, bfloat16, fp8, fp8_e4m3 | 32, 64 | Any | | | | | Decoder | 10.x |
| FLASHINFER_MLA_SPARSE | fp16, bf16 | auto, float16, bfloat16, fp8, fp8_e4m3 | 32, 64 | 576 | | | | | Decoder | 10.x |
| FLASHMLA | fp16, bf16 | auto, float16, bfloat16, fp8, fp8_e4m3 | 64 | Any | | | | | Decoder | 9.x-10.x |
| FLASHMLA_SPARSE | bf16 | auto, bfloat16, fp8_ds_mla | 64 | 512, 576 | | | | | Decoder | 9.x-10.x |
| FLASH_ATTN_MLA | fp16, bf16 | auto, float16, bfloat16 | %16 | Any | | | | | Decoder | 9.x |
| ROCM_AITER_MLA | fp16, bf16 | auto, float16, bfloat16, fp8, fp8_e4m3, fp8_e5m2 | %1 | Any | | | | | Decoder | N/A |
| ROCM_AITER_MLA_SPARSE | fp16, bf16 | auto, float16, bfloat16 | 1 | Any | | | | | Decoder | N/A |
| ROCM_AITER_TRITON_MLA | fp16, bf16 | auto | Any | Any | | | | | Decoder | N/A |
| TRITON_MLA | fp16, bf16 | auto, float16, bfloat16, fp8, fp8_e4m3 | %16 | Any | | | | | Decoder | Any |
| XPU_MLA_SPARSE | fp16, bf16 | auto, float16, bfloat16 | Any | 576 | | | | | Decoder | Any |
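
As with the standard backends, an MLA decode backend can be pinned explicitly. A minimal sketch, assuming `FLASH_ATTN_MLA` is exposed on `AttentionBackendEnum` like the backends shown earlier, and using DeepSeek-V3 purely as an example MLA model:

```python
from vllm import LLM
from vllm.config import AttentionConfig
from vllm.v1.attention.backends.registry import AttentionBackendEnum

# Pin the MLA decode backend; the prefill backend is still chosen at runtime
# as described in the "Prefill Backends" section above.
llm = LLM(
    model="deepseek-ai/DeepSeek-V3",  # illustrative MLA model
    attention_config=AttentionConfig(backend=AttentionBackendEnum.FLASH_ATTN_MLA),
)
```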