docs/benchmarking/cli.md
This section guides you through running benchmark tests with the extensive datasets supported on vLLM.
It's a living document, updated as new features and datasets become available.
!!! tip
    The benchmarks described on this page are mainly for evaluating specific vLLM features as well as regression testing.
    For benchmarking production vLLM servers, we recommend [GuideLLM](https://github.com/vllm-project/guidellm), an established performance benchmarking framework with live progress updates and automatic report generation. It is also more flexible than `vllm bench serve` in terms of dataset loading, request formatting, and workload patterns.

| Dataset | Online | Offline | Data Path |
|---------|--------|---------|-----------|
| ShareGPT | ✅ | ✅ | wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json |
| ShareGPT4V (Image) | ✅ | ✅ | wget https://huggingface.co/datasets/Lin-Chen/ShareGPT4V/resolve/main/sharegpt4v_instruct_gpt4-vision_cap100k.json <br> Note that the images need to be downloaded separately. For example, to download COCO's 2017 Train images: <br> wget http://images.cocodataset.org/zips/train2017.zip |
| ShareGPT4Video (Video) | ✅ | ✅ | git clone https://huggingface.co/datasets/ShareGPT4Video/ShareGPT4Video |
| BurstGPT | ✅ | ✅ | wget https://github.com/HPMLL/BurstGPT/releases/download/v1.1/BurstGPT_without_fails_2.csv |
| Sonnet (deprecated) | ✅ | ✅ | Local file: benchmarks/sonnet.txt |
| Random | ✅ | ✅ | synthetic |
| RandomMultiModal (Image/Video) | ✅ | ✅ | synthetic |
| RandomForReranking | ✅ | ✅ | synthetic |
| Prefix Repetition | ✅ | ✅ | synthetic |
| HuggingFace-VisionArena | ✅ | ✅ | lmarena-ai/VisionArena-Chat |
| HuggingFace-MMVU | ✅ | ✅ | yale-nlp/MMVU |
| HuggingFace-InstructCoder | ✅ | ✅ | likaixin/InstructCoder |
| HuggingFace-AIMO | ✅ | ✅ | AI-MO/aimo-validation-aime, AI-MO/NuminaMath-1.5, AI-MO/NuminaMath-CoT |
| HuggingFace-Other | ✅ | ✅ | lmms-lab/LLaVA-OneVision-Data, Aeala/ShareGPT_Vicuna_unfiltered |
| HuggingFace-MTBench | ✅ | ✅ | philschmid/mt-bench |
| HuggingFace-Blazedit | ✅ | ✅ | vdaita/edit_5k_char, vdaita/edit_10k_char |
| HuggingFace-ASR | ✅ | ✅ | openslr/librispeech_asr, facebook/voxpopuli, LIUM/tedlium, edinburghcstr/ami, speechcolab/gigaspeech, kensho/spgispeech |
| Spec Bench | ✅ | ✅ | wget https://raw.githubusercontent.com/hemingkx/Spec-Bench/refs/heads/main/data/spec_bench/question.jsonl |
| SPEED-Bench | ✅ | ✅ | curl -LsSf https://raw.githubusercontent.com/NVIDIA-NeMo/Skills/refs/heads/main/nemo_skills/dataset/speed-bench/prepare.py \| python3 - |
| Custom | ✅ | ✅ | Local file: data.jsonl |
| Custom MM | ✅ | ✅ | Local file: mm_data.jsonl |

Legend: ✅ = supported.

!!! note
    HuggingFace dataset's `dataset-name` should be set to `hf`.
    For a local `dataset-path`, please set `hf-name` to its Hugging Face ID like:

    ```bash
    --dataset-path /datasets/VisionArena-Chat/ --hf-name lmarena-ai/VisionArena-Chat
    ```

First start serving your model:

```bash
vllm serve NousResearch/Hermes-3-Llama-3.1-8B
```

Then run the benchmarking script:

```bash
# download dataset
# wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
vllm bench serve \
    --backend vllm \
    --model NousResearch/Hermes-3-Llama-3.1-8B \
    --endpoint /v1/completions \
    --dataset-name sharegpt \
    --dataset-path <your data path>/ShareGPT_V3_unfiltered_cleaned_split.json \
    --num-prompts 10
```

If successful, you will see the following output:

```text
============ Serving Benchmark Result ============
Successful requests:                     10
Benchmark duration (s):                  5.78
Total input tokens:                      1369
Total generated tokens:                  2212
Request throughput (req/s):              1.73
Output token throughput (tok/s):         382.89
Total token throughput (tok/s):          619.85
---------------Time to First Token----------------
Mean TTFT (ms):                          71.54
Median TTFT (ms):                        73.88
P99 TTFT (ms):                           79.49
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          7.91
Median TPOT (ms):                        7.96
P99 TPOT (ms):                           8.03
---------------Inter-token Latency----------------
Mean ITL (ms):                           7.74
Median ITL (ms):                         7.70
P99 ITL (ms):                            8.39
==================================================
```

The `--plot-timeline` and `--plot-dataset-stats` flags generate, respectively, a timeline of request completions and statistics on the dataset's prompt and output tokens, which can be useful for debugging or deeper analysis.

```bash
vllm bench serve \
    --backend vllm \
    --model meta-llama/Llama-3.1-8B-Instruct \
    --endpoint /v1/completions \
    --dataset-name sharegpt \
    --dataset-path <your data path>/ShareGPT_V3_unfiltered_cleaned_split.json \
    --num-prompts 100 \
    --plot-timeline \
    --timeline-itl-thresholds 2,5 \
    --plot-dataset-stats \
    --save-result
```

The generated timeline is an interactive visualization in the form of an HTML file that can be rendered in most browsers. To customize the ITL color thresholds, use the `--timeline-itl-thresholds` flag (default: 25 ms, 50 ms).
Example output:
<iframe src="../../assets/contributing/vllm_bench_serve_timeline.html" width="100%" height="600" frameborder="0"></iframe>

The generated figure shows the input prompt and output tokens distribution.
Even if the dataset you want to benchmark is not yet supported in vLLM, you can still benchmark it using `CustomDataset`. Your data needs to be in `.jsonl` format with a `"prompt"` field per entry, e.g. `data.jsonl`:
{"prompt": "What is the capital of India?"}
{"prompt": "What is the capital of Iran?"}
{"prompt": "What is the capital of China?"}

```bash
# start server (on port 9001 to match the benchmark command below)
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 9001

# run benchmarking script
vllm bench serve --port 9001 --save-result --save-detailed \
    --backend vllm \
    --model meta-llama/Llama-3.1-8B-Instruct \
    --endpoint /v1/completions \
    --dataset-name custom \
    --dataset-path <path-to-your-data-jsonl> \
    --custom-skip-chat-template \
    --num-prompts 80 \
    --max-concurrency 1 \
    --temperature=0.3 \
    --top-p=0.75 \
    --result-dir "./log/"
```

You can skip applying the chat template if your data already includes it by passing `--custom-skip-chat-template`.
Similarly, if the multimodal dataset you want to benchmark is not yet supported in vLLM, you can benchmark it using `CustomMMDataset`. Your data needs to be in `.jsonl` format with `"prompt"` and `"image_files"` fields per entry, e.g. `mm_data.jsonl`:
{"prompt": "How many animals are present in the given image?", "image_files": ["/path/to/image/folder/horsepony.jpg"]}
{"prompt": "What colour is the bird shown in the image?", "image_files": ["/path/to/image/folder/flycatcher.jpeg"]}

```bash
# need a model with vision capability here
vllm serve Qwen/Qwen2-VL-7B-Instruct

# run benchmarking script
vllm bench serve --save-result --save-detailed \
    --backend openai-chat \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --endpoint /v1/chat/completions \
    --dataset-name custom_mm \
    --dataset-path <path-to-your-mm-data-jsonl> \
    --allowed-local-media-path /path/to/image/folder
```

Note that we need to use the `openai-chat` backend and the `/v1/chat/completions` endpoint for multimodal inputs.

To benchmark a HuggingFace multimodal dataset such as VisionArena:

```bash
# need a model with vision capability here
vllm serve Qwen/Qwen2-VL-7B-Instruct
```

```bash
vllm bench serve \
    --backend openai-chat \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --endpoint /v1/chat/completions \
    --dataset-name hf \
    --dataset-path lmarena-ai/VisionArena-Chat \
    --hf-split train \
    --num-prompts 1000
```

To benchmark speculative decoding with the InstructCoder dataset, start a server with speculative decoding:

```bash
vllm serve meta-llama/Meta-Llama-3-8B-Instruct \
    --speculative-config $'{"method": "ngram",
    "num_speculative_tokens": 5, "prompt_lookup_max": 5,
    "prompt_lookup_min": 2}'
```

```bash
vllm bench serve \
    --model meta-llama/Meta-Llama-3-8B-Instruct \
    --dataset-name hf \
    --dataset-path likaixin/InstructCoder \
    --num-prompts 2048
```

To benchmark with Spec Bench, start a server with speculative decoding:

```bash
vllm serve meta-llama/Meta-Llama-3-8B-Instruct \
    --speculative-config $'{"method": "ngram",
    "num_speculative_tokens": 5, "prompt_lookup_max": 5,
    "prompt_lookup_min": 2}'
```
Run all categories:

```bash
# Download the dataset using:
# wget https://raw.githubusercontent.com/hemingkx/Spec-Bench/refs/heads/main/data/spec_bench/question.jsonl
vllm bench serve \
    --model meta-llama/Meta-Llama-3-8B-Instruct \
    --dataset-name spec_bench \
    --dataset-path "<YOUR_DOWNLOADED_PATH>/data/spec_bench/question.jsonl" \
    --num-prompts -1
```
Available categories include [writing, roleplay, reasoning, math, coding, extraction, stem, humanities, translation, summarization, qa, math_reasoning, rag].
Run only a specific category like "summarization":
vllm bench serve \
--model meta-llama/Meta-Llama-3-8B-Instruct \
--dataset-name spec_bench \
--dataset-path "<YOUR_DOWNLOADED_PATH>/data/spec_bench/question.jsonl" \
--num-prompts -1
--spec-bench-category "summarization"

SPEED-Bench is a unified and diverse dataset for speculative decoding. It supports acceptance rate and length measurements using the Qualitative split, and throughput measurements using the Throughput splits, in five configurations of input sequence length (1k, 2k, 8k, 16k, 32k).

!!! note
    This dataset is governed by the NVIDIA Evaluation Dataset License Agreement. For each dataset a user elects to use, the user is responsible for checking if the dataset license is fit for the intended purpose. The `prepare.py` script automatically fetches data from all the source datasets.

First, download the dataset to a folder using this one-liner:

```bash
curl -LsSf https://raw.githubusercontent.com/NVIDIA-NeMo/Skills/refs/heads/main/nemo_skills/dataset/speed-bench/prepare.py | python3 -
```

The command also supports the following arguments:

- `--config`: download only a subset of the dataset: `qualitative`, `throughput_1k`, `throughput_2k`, `throughput_8k`, `throughput_16k` or `throughput_32k`. By default, it will download all subsets.
- `--output_dir`: download to a specified folder. By default, it will download to the current directory.
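
For example, to fetch only the Qualitative split into a dedicated folder (a sketch: the flags come from the list above, while the output directory name is illustrative):

```bash
curl -LsSf https://raw.githubusercontent.com/NVIDIA-NeMo/Skills/refs/heads/main/nemo_skills/dataset/speed-bench/prepare.py \
    | python3 - --config qualitative --output_dir ./speed_bench_data
```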

Start a server with speculative decoding:

```bash
vllm serve meta-llama/Llama-3.3-70B-Instruct \
    --speculative-config $'{"method": "eagle3",
    "num_speculative_tokens": 3,
    "model": "nvidia/Llama-3.3-70B-Instruct-Eagle3"}'
```
Run all categories in the Qualitative split:

```bash
vllm bench serve \
    --model meta-llama/Llama-3.3-70B-Instruct \
    --dataset-name speed_bench \
    --dataset-path "<YOUR_DOWNLOADED_PATH>/data/speed_bench" \
    --num-prompts -1
```
Available categories include [writing, roleplay, reasoning, math, coding, stem, humanities, multilingual, summarization, qa, rag].
Run only a specific category like "multilingual":
vllm bench serve \
--model meta-llama/Llama-3.3-70B-Instruct \
--dataset-name speed_bench \
--dataset-path "<YOUR_DOWNLOADED_PATH>/data/speed_bench" \
--num-prompts -1
--speed-bench-category "multilingual"
Run all categories in the Throughput split (2k ISL):

```bash
vllm bench serve \
    --model meta-llama/Llama-3.3-70B-Instruct \
    --dataset-name speed_bench \
    --speed-bench-dataset-subset throughput_2k \
    --dataset-path "<YOUR_DOWNLOADED_PATH>/data/speed_bench/" \
    --num-prompts -1
```

Available categories include [high_entropy, mixed, low_entropy], where high-entropy data contains unstructured content such as creative writing, while low-entropy data contains more structured content such as code. More details are in the dataset card.
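
Run only a specific category like "high_entropy" (a sketch, assuming `--speed-bench-category` also applies to the Throughput splits):

```bash
vllm bench serve \
    --model meta-llama/Llama-3.3-70B-Instruct \
    --dataset-name speed_bench \
    --speed-bench-dataset-subset throughput_2k \
    --dataset-path "<YOUR_DOWNLOADED_PATH>/data/speed_bench/" \
    --num-prompts -1 \
    --speed-bench-category "high_entropy"
```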

To benchmark other HuggingFace datasets, first start serving your model:

```bash
vllm serve Qwen/Qwen2-VL-7B-Instruct
```

`lmms-lab/LLaVA-OneVision-Data`:

```bash
vllm bench serve \
    --backend openai-chat \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --endpoint /v1/chat/completions \
    --dataset-name hf \
    --dataset-path lmms-lab/LLaVA-OneVision-Data \
    --hf-split train \
    --hf-subset "chart2text(cauldron)" \
    --num-prompts 10
```

`Aeala/ShareGPT_Vicuna_unfiltered`:

```bash
vllm bench serve \
    --backend openai-chat \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --endpoint /v1/chat/completions \
    --dataset-name hf \
    --dataset-path Aeala/ShareGPT_Vicuna_unfiltered \
    --hf-split train \
    --num-prompts 10
```

`AI-MO/aimo-validation-aime`:

```bash
vllm bench serve \
    --model Qwen/QwQ-32B \
    --dataset-name hf \
    --dataset-path AI-MO/aimo-validation-aime \
    --num-prompts 10 \
    --seed 42
```

`philschmid/mt-bench`:

```bash
vllm bench serve \
    --model Qwen/QwQ-32B \
    --dataset-name hf \
    --dataset-path philschmid/mt-bench \
    --num-prompts 80
```

`vdaita/edit_5k_char` or `vdaita/edit_10k_char`:

```bash
vllm bench serve \
    --model Qwen/QwQ-32B \
    --dataset-name hf \
    --dataset-path vdaita/edit_5k_char \
    --num-prompts 90 \
    --blazedit-min-distance 0.01 \
    --blazedit-max-distance 0.99
```

`openslr/librispeech_asr`, `facebook/voxpopuli`, `LIUM/tedlium`, `edinburghcstr/ami`, `speechcolab/gigaspeech`, `kensho/spgispeech`:

```bash
vllm bench serve \
    --model openai/whisper-large-v3-turbo \
    --backend openai-audio \
    --dataset-name hf \
    --dataset-path facebook/voxpopuli --hf-subset en --hf-split test --no-stream --trust-remote-code \
    --num-prompts 99999999 \
    --no-oversample \
    --endpoint /v1/audio/transcriptions \
    --ready-check-timeout-sec 600 \
    --save-result \
    --max-concurrency 512
```

When using OpenAI-compatible backends such as `vllm`, optional sampling parameters can be specified. Example client command:

```bash
vllm bench serve \
    --backend vllm \
    --model NousResearch/Hermes-3-Llama-3.1-8B \
    --endpoint /v1/completions \
    --dataset-name sharegpt \
    --dataset-path <your data path>/ShareGPT_V3_unfiltered_cleaned_split.json \
    --top-k 10 \
    --top-p 0.9 \
    --temperature 0.5 \
    --num-prompts 10
```
The benchmark tool also supports ramping up the request rate over the duration of the benchmark run. This can be useful for stress testing the server or finding the maximum throughput that it can handle, given some latency budget.
Two ramp-up strategies are supported:

- `linear`: Increases the request rate linearly from a start value to an end value.
- `exponential`: Increases the request rate exponentially.

The following arguments can be used to control the ramp-up:

- `--ramp-up-strategy`: The ramp-up strategy to use (`linear` or `exponential`).
- `--ramp-up-start-rps`: The request rate at the beginning of the benchmark.
- `--ramp-up-end-rps`: The request rate at the end of the benchmark.
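
For example, a sketch of a linear ramp-up from 1 to 20 requests per second (the model and synthetic-dataset settings here are illustrative, reusing flags shown elsewhere on this page):

```bash
vllm bench serve \
    --backend vllm \
    --model NousResearch/Hermes-3-Llama-3.1-8B \
    --endpoint /v1/completions \
    --dataset-name random \
    --random-input-len 512 \
    --random-output-len 128 \
    --num-prompts 500 \
    --ramp-up-strategy linear \
    --ramp-up-start-rps 1 \
    --ramp-up-end-rps 20
```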

vLLM's benchmark serving script provides sophisticated load pattern simulation capabilities through three key parameters that control request generation and concurrency behavior:

- `--request-rate`: Controls the target request generation rate (requests per second). Set to `inf` for maximum throughput testing or finite values for controlled load simulation.
- `--burstiness`: Controls traffic variability using a Gamma distribution (range: > 0). Lower values create bursty traffic, higher values create uniform traffic.
- `--max-concurrency`: Limits concurrent outstanding requests. If this argument is not provided, concurrency is unlimited. Set a value to simulate backpressure.

These parameters work together to create realistic load patterns with carefully chosen defaults. The `--request-rate` parameter defaults to `inf` (infinite), which sends all requests immediately for maximum throughput testing. When set to finite values, it uses either a Poisson process (default `--burstiness=1.0`) or a Gamma distribution for realistic request timing. The `--burstiness` parameter only takes effect when `--request-rate` is not infinite: a value of 1.0 creates natural Poisson traffic, while lower values (0.1-0.5) create bursty patterns and higher values (2.0-5.0) create uniform spacing. The `--max-concurrency` parameter defaults to `None` (unlimited) but can be set to simulate real-world constraints where a load balancer or API gateway limits concurrent connections. When combined, these parameters allow you to simulate everything from unrestricted stress testing (`--request-rate=inf`) to production-like scenarios with realistic arrival patterns and resource constraints.
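
For instance, a sketch of a production-like run with Poisson arrivals at a moderate rate and a concurrency cap (all values illustrative):

```bash
vllm bench serve \
    --backend vllm \
    --model NousResearch/Hermes-3-Llama-3.1-8B \
    --endpoint /v1/completions \
    --dataset-name random \
    --random-input-len 512 \
    --random-output-len 128 \
    --num-prompts 500 \
    --request-rate 10 \
    --burstiness 1.0 \
    --max-concurrency 32
```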

The `--burstiness` parameter mathematically controls request arrival patterns using a Gamma distribution parameterized by the `burstiness` value (the coefficient of variation of inter-arrival times is 1/√burstiness):

- `burstiness = 0.1`: Highly bursty traffic (CV ≈ 3.16) - stress testing
- `burstiness = 1.0`: Natural Poisson traffic (CV = 1.0) - realistic simulation
- `burstiness = 5.0`: Uniform traffic (CV ≈ 0.45) - controlled load testing

Figure: Load pattern examples for each use case. Top row: Request arrival timelines showing cumulative requests over time. Bottom row: Inter-arrival time distributions showing traffic variability patterns. Each column represents a different use case with its specific parameter settings and resulting traffic characteristics.
Load Pattern Recommendations by Use Case:
| Use Case | Burstiness | Request Rate | Max Concurrency | Description |
|---|---|---|---|---|
| Maximum Throughput | N/A | Infinite | Limited | Most common: Simulates load balancer/gateway limits with unlimited user demand |
| Realistic Testing | 1.0 | Moderate (5-20) | Infinite | Natural Poisson traffic patterns for baseline performance |
| Stress Testing | 0.1-0.5 | High (20-100) | Infinite | Challenging burst patterns to test resilience |
| Latency Profiling | 2.0-5.0 | Low (1-10) | Infinite | Uniform load for consistent timing analysis |
| Capacity Planning | 1.0 | Variable | Limited | Test resource limits with realistic constraints |
| SLA Validation | 1.0 | Target rate | SLA limit | Production-like constraints for compliance testing |
These load patterns help evaluate different aspects of your vLLM deployment, from basic performance characteristics to resilience under challenging traffic conditions.

The Maximum Throughput pattern (`--request-rate=inf --max-concurrency=<limit>`) is the most commonly used configuration for production benchmarking. It simulates real-world deployment architectures where a load balancer or API gateway caps concurrent connections while user demand is effectively unlimited. Note that `--burstiness` has no effect in this mode, since request timing is not controlled when the rate is infinite. This pattern helps determine optimal concurrency settings for your production load balancer configuration.
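
For example, a sketch of this pattern with a gateway-style concurrency cap (the cap of 64 is illustrative):

```bash
vllm bench serve \
    --backend vllm \
    --model NousResearch/Hermes-3-Llama-3.1-8B \
    --endpoint /v1/completions \
    --dataset-name sharegpt \
    --dataset-path <your data path>/ShareGPT_V3_unfiltered_cleaned_split.json \
    --num-prompts 1000 \
    --request-rate inf \
    --max-concurrency 64
```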
To effectively configure load patterns, especially for Capacity Planning and SLA Validation use cases, you need to understand your system's resource limits. During startup, vLLM reports KV cache configuration that directly impacts your load testing parameters:

```text
GPU KV cache size: 15,728,640 tokens
Maximum concurrency for 8,192 tokens per request: 1920
```

Where:

- `max_model_len` is the maximum sequence length per request (8,192 tokens in this example).
- The reported maximum concurrency assumes every request uses the full context length, i.e. `max_concurrency = kv_cache_size / max_model_len` (15,728,640 / 8,192 = 1,920 in the example above).

Using KV cache metrics for load pattern configuration:

- Set `--max-concurrency` to 80-90% of the reported maximum to test realistic resource constraints.

To run offline throughput benchmarks (no server required), use `vllm bench throughput`:

```bash
vllm bench throughput \
    --model NousResearch/Hermes-3-Llama-3.1-8B \
    --dataset-name sonnet \
    --dataset-path vllm/benchmarks/sonnet.txt \
    --num-prompts 10
```

If successful, you will see the following output:

```text
Throughput: 7.15 requests/s, 4656.00 total tokens/s, 1072.15 output tokens/s
Total num prompt tokens:  5014
Total num output tokens:  1500
```

```bash
vllm bench throughput \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --backend vllm-chat \
    --dataset-name hf \
    --dataset-path lmarena-ai/VisionArena-Chat \
    --num-prompts 1000 \
    --hf-split train
```

The num prompt tokens now includes image token counts:

```text
Throughput: 2.55 requests/s, 4036.92 total tokens/s, 326.90 output tokens/s
Total num prompt tokens:  14527
Total num output tokens:  1280
```

```bash
VLLM_WORKER_MULTIPROC_METHOD=spawn \
vllm bench throughput \
    --dataset-name=hf \
    --dataset-path=likaixin/InstructCoder \
    --model=meta-llama/Meta-Llama-3-8B-Instruct \
    --input-len=1000 \
    --output-len=100 \
    --num-prompts=2048 \
    --async-engine \
    --speculative-config $'{"method": "ngram",
    "num_speculative_tokens": 5, "prompt_lookup_max": 5,
    "prompt_lookup_min": 2}'
```

```text
Throughput: 104.77 requests/s, 23836.22 total tokens/s, 10477.10 output tokens/s
Total num prompt tokens:  261136
Total num output tokens:  204800
```

`lmms-lab/LLaVA-OneVision-Data`:

```bash
vllm bench throughput \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --backend vllm-chat \
    --dataset-name hf \
    --dataset-path lmms-lab/LLaVA-OneVision-Data \
    --hf-split train \
    --hf-subset "chart2text(cauldron)" \
    --num-prompts 10
```

`Aeala/ShareGPT_Vicuna_unfiltered`:

```bash
vllm bench throughput \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --backend vllm-chat \
    --dataset-name hf \
    --dataset-path Aeala/ShareGPT_Vicuna_unfiltered \
    --hf-split train \
    --num-prompts 10
```

`AI-MO/aimo-validation-aime`:

```bash
vllm bench throughput \
    --model Qwen/QwQ-32B \
    --backend vllm \
    --dataset-name hf \
    --dataset-path AI-MO/aimo-validation-aime \
    --hf-split train \
    --num-prompts 10
```
Benchmark with LoRA adapters:

```bash
# download dataset
# wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
vllm bench throughput \
    --model meta-llama/Llama-2-7b-hf \
    --backend vllm \
    --dataset_path <your data path>/ShareGPT_V3_unfiltered_cleaned_split.json \
    --dataset_name sharegpt \
    --num-prompts 10 \
    --max-loras 2 \
    --max-lora-rank 8 \
    --enable-lora \
    --lora-path yard1/llama-2-7b-sql-lora-test
```
Generate synthetic multimodal inputs for offline throughput testing without external datasets.
Use `--backend vllm-chat` so that image tokens are counted correctly.

```bash
vllm bench throughput \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --backend vllm-chat \
    --dataset-name random-mm \
    --num-prompts 100 \
    --random-input-len 300 \
    --random-output-len 40 \
    --random-mm-base-items-per-request 2 \
    --random-mm-limit-mm-per-prompt '{"image": 3, "video": 0}' \
    --random-mm-bucket-config '{(256, 256, 1): 0.7, (720, 1280, 1): 0.3}'
```
Benchmark the performance of structured output generation (JSON, grammar, regex).

```bash
vllm serve NousResearch/Hermes-3-Llama-3.1-8B
```

```bash
python3 benchmarks/benchmark_serving_structured_output.py \
    --backend vllm \
    --model NousResearch/Hermes-3-Llama-3.1-8B \
    --dataset json \
    --structured-output-ratio 1.0 \
    --request-rate 10 \
    --num-prompts 1000
```

```bash
python3 benchmarks/benchmark_serving_structured_output.py \
    --backend vllm \
    --model NousResearch/Hermes-3-Llama-3.1-8B \
    --dataset grammar \
    --structure-type grammar \
    --request-rate 10 \
    --num-prompts 1000
```

```bash
python3 benchmarks/benchmark_serving_structured_output.py \
    --backend vllm \
    --model NousResearch/Hermes-3-Llama-3.1-8B \
    --dataset regex \
    --request-rate 10 \
    --num-prompts 1000
```

```bash
python3 benchmarks/benchmark_serving_structured_output.py \
    --backend vllm \
    --model NousResearch/Hermes-3-Llama-3.1-8B \
    --dataset choice \
    --request-rate 10 \
    --num-prompts 1000
```

```bash
python3 benchmarks/benchmark_serving_structured_output.py \
    --backend vllm \
    --model NousResearch/Hermes-3-Llama-3.1-8B \
    --dataset xgrammar_bench \
    --request-rate 10 \
    --num-prompts 1000
```
Benchmark the performance of long document question-answering with prefix caching.

```bash
python3 benchmarks/benchmark_long_document_qa_throughput.py \
    --model meta-llama/Llama-2-7b-chat-hf \
    --enable-prefix-caching \
    --num-documents 16 \
    --document-length 2000 \
    --output-len 50 \
    --repeat-count 5
```

```bash
# Random mode (default) - shuffle prompts randomly
python3 benchmarks/benchmark_long_document_qa_throughput.py \
    --model meta-llama/Llama-2-7b-chat-hf \
    --enable-prefix-caching \
    --num-documents 8 \
    --document-length 3000 \
    --repeat-count 3 \
    --repeat-mode random

# Tile mode - repeat entire prompt list in sequence
python3 benchmarks/benchmark_long_document_qa_throughput.py \
    --model meta-llama/Llama-2-7b-chat-hf \
    --enable-prefix-caching \
    --num-documents 8 \
    --document-length 3000 \
    --repeat-count 3 \
    --repeat-mode tile

# Interleave mode - repeat each prompt consecutively
python3 benchmarks/benchmark_long_document_qa_throughput.py \
    --model meta-llama/Llama-2-7b-chat-hf \
    --enable-prefix-caching \
    --num-documents 8 \
    --document-length 3000 \
    --repeat-count 3 \
    --repeat-mode interleave
```
Benchmark the efficiency of automatic prefix caching.

```bash
python3 benchmarks/benchmark_prefix_caching.py \
    --model meta-llama/Llama-2-7b-chat-hf \
    --enable-prefix-caching \
    --num-prompts 1 \
    --repeat-count 100 \
    --input-length-range 128:256
```

```bash
# download dataset
# wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
python3 benchmarks/benchmark_prefix_caching.py \
    --model meta-llama/Llama-2-7b-chat-hf \
    --dataset-path /path/ShareGPT_V3_unfiltered_cleaned_split.json \
    --enable-prefix-caching \
    --num-prompts 20 \
    --repeat-count 5 \
    --input-length-range 128:256
```

To benchmark online serving with the synthetic prefix repetition dataset:

```bash
vllm bench serve \
    --backend openai \
    --model meta-llama/Llama-2-7b-chat-hf \
    --dataset-name prefix_repetition \
    --num-prompts 100 \
    --prefix-repetition-prefix-len 512 \
    --prefix-repetition-suffix-len 128 \
    --prefix-repetition-num-prefixes 5 \
    --prefix-repetition-output-len 128
```

Two helper scripts live in `benchmarks/` to compare hashing options used by prefix caching and related utilities. They are standalone (no server required) and help choose a hash algorithm before enabling prefix caching in production.

- `benchmarks/benchmark_hash.py`: Micro-benchmark that measures per-call latency of three implementations on a representative `(bytes, tuple[int])` payload.

    ```bash
    python benchmarks/benchmark_hash.py --iterations 20000 --seed 42
    ```

- `benchmarks/benchmark_prefix_block_hash.py`: End-to-end block hashing benchmark that runs the full prefix-cache hash pipeline (`hash_block_tokens`) across many fake blocks and reports throughput.

    ```bash
    python benchmarks/benchmark_prefix_block_hash.py --num-blocks 20000 --block-size 32 --trials 5
    ```

Supported algorithms: `sha256`, `sha256_cbor`, `xxhash`, `xxhash_cbor`. Install optional deps to exercise all variants:

```bash
uv pip install xxhash cbor2
```

If an algorithm's dependency is missing, the script will skip it and continue.

Benchmark the performance of request prioritization in vLLM.

```bash
python3 benchmarks/benchmark_prioritization.py \
    --model meta-llama/Llama-2-7b-chat-hf \
    --input-len 128 \
    --output-len 64 \
    --num-prompts 100 \
    --scheduling-policy priority
```

```bash
python3 benchmarks/benchmark_prioritization.py \
    --model meta-llama/Llama-2-7b-chat-hf \
    --input-len 128 \
    --output-len 64 \
    --num-prompts 100 \
    --scheduling-policy priority \
    --n 2
```
Benchmark the performance of multi-modal requests in vLLM.
Start vLLM:

```bash
vllm serve Qwen/Qwen2.5-VL-7B-Instruct \
    --dtype bfloat16 \
    --limit-mm-per-prompt '{"image": 1}' \
    --allowed-local-media-path /path/to/sharegpt4v/images
```

Send requests with images:

```bash
vllm bench serve \
    --backend openai-chat \
    --model Qwen/Qwen2.5-VL-7B-Instruct \
    --dataset-name sharegpt \
    --dataset-path /path/to/ShareGPT4V/sharegpt4v_instruct_gpt4-vision_cap100k.json \
    --num-prompts 100 \
    --save-result \
    --result-dir ~/vllm_benchmark_results \
    --save-detailed \
    --endpoint /v1/chat/completions
```
Start vLLM:

```bash
vllm serve Qwen/Qwen2.5-VL-7B-Instruct \
    --dtype bfloat16 \
    --limit-mm-per-prompt '{"video": 1}' \
    --allowed-local-media-path /path/to/sharegpt4video/videos
```

Send requests with videos:

```bash
vllm bench serve \
    --backend openai-chat \
    --model Qwen/Qwen2.5-VL-7B-Instruct \
    --dataset-name sharegpt \
    --dataset-path /path/to/ShareGPT4Video/llava_v1_5_mix665k_with_video_chatgpt72k_share4video28k.json \
    --num-prompts 100 \
    --save-result \
    --result-dir ~/vllm_benchmark_results \
    --save-detailed \
    --endpoint /v1/chat/completions
```
Generate synthetic image inputs alongside random text prompts to stress-test vision models without external datasets.
Notes:

- For online serving, use `--backend openai-chat` with endpoint `/v1/chat/completions`.
- For offline throughput, use `--backend vllm-chat` (see Offline Throughput Benchmark for an example).

Start the server (example):

```bash
vllm serve Qwen/Qwen2.5-VL-3B-Instruct \
    --dtype bfloat16 \
    --max-model-len 16384 \
    --limit-mm-per-prompt '{"image": 3, "video": 0}' \
    --mm-processor-kwargs max_pixels=1003520
```

Run the benchmark. It is recommended to use the flag `--ignore-eos` to simulate real responses. You can set the size of the output via `--random-output-len`.

Ex. 1: Fixed number of items and a single image resolution, enforcing generation of approximately 40 tokens:

```bash
vllm bench serve \
    --backend openai-chat \
    --model Qwen/Qwen2.5-VL-3B-Instruct \
    --endpoint /v1/chat/completions \
    --dataset-name random-mm \
    --num-prompts 100 \
    --max-concurrency 10 \
    --random-prefix-len 25 \
    --random-input-len 300 \
    --random-output-len 40 \
    --random-range-ratio 0.2 \
    --random-mm-base-items-per-request 2 \
    --random-mm-limit-mm-per-prompt '{"image": 3, "video": 0}' \
    --random-mm-bucket-config '{(224, 224, 1): 1.0}' \
    --request-rate inf \
    --ignore-eos \
    --seed 42
```
The number of items per request can be controlled by passing multiple image buckets:

```bash
    --random-mm-base-items-per-request 2 \
    --random-mm-num-mm-items-range-ratio 0.5 \
    --random-mm-limit-mm-per-prompt '{"image": 4, "video": 0}' \
    --random-mm-bucket-config '{(256, 256, 1): 0.7, (720, 1280, 1): 0.3}' \
```

Flags specific to random-mm:

- `--random-mm-base-items-per-request`: base number of multimodal items per request.
- `--random-mm-num-mm-items-range-ratio`: vary the item count uniformly in the closed integer range [floor(n·(1−r)), ceil(n·(1+r))]. Set r=0 to keep it fixed; r=1 allows 0 items.
- `--random-mm-limit-mm-per-prompt`: per-modality hard caps, e.g. `'{"image": 3, "video": 0}'`.
- `--random-mm-bucket-config`: dict mapping (H, W, T) → probability. Entries with probability 0 are removed; remaining probabilities are renormalized to sum to 1. Use T=1 for images. Set any T>1 for videos (video sampling not yet supported).
How sampling works:

- The number of items per request is drawn using `--random-mm-base-items-per-request` and `--random-mm-num-mm-items-range-ratio`, then clamped to at most the sum of the per-modality limits.
- Items are then sampled from the buckets defined in `--random-mm-bucket-config`, while tracking how many items of each modality have been added.
- Once a modality reaches its cap from `--random-mm-limit-mm-per-prompt`, all buckets of that modality are excluded and the remaining bucket probabilities are renormalized before continuing. This should be seen as an edge case, and this behavior can be avoided by setting `--random-mm-limit-mm-per-prompt` to a large number. Note that this might result in errors due to the engine config `--limit-mm-per-prompt`.
- Sampled items are attached to each request as `multi_modal_data` (OpenAI Chat format). When random-mm is used with the OpenAI Chat backend, prompts remain text and MM content is attached via `multi_modal_data`.

Benchmark per-stage latency of the multimodal (MM) input processor pipeline, including the encoder forward pass. This is useful for profiling preprocessing bottlenecks in vision-language models.

The benchmark measures the following stages for each request:
| Stage | Description |
|---|---|
| `get_mm_hashes_secs` | Time spent hashing multimodal inputs |
| `get_cache_missing_items_secs` | Time spent looking up the processor cache |
| `apply_hf_processor_secs` | Time spent in the HuggingFace processor |
| `merge_mm_kwargs_secs` | Time spent merging multimodal kwargs |
| `apply_prompt_updates_secs` | Time spent updating prompt tokens |
| `preprocessor_total_secs` | Total preprocessing time |
| `encoder_forward_secs` | Time spent in the encoder model forward pass |
| `num_encoder_calls` | Number of encoder invocations per request |

The benchmark also reports end-to-end latency (TTFT + decode time) per request. Use `--metric-percentiles` to select which percentiles to report (default: p99) and `--output-json` to save results.

```bash
vllm bench mm-processor \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --dataset-name random-mm \
    --num-prompts 50 \
    --random-input-len 300 \
    --random-output-len 40 \
    --random-mm-base-items-per-request 2 \
    --random-mm-limit-mm-per-prompt '{"image": 3, "video": 0}' \
    --random-mm-bucket-config '{(256, 256, 1): 0.7, (720, 1280, 1): 0.3}'
```

```bash
vllm bench mm-processor \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --dataset-name hf \
    --dataset-path lmarena-ai/VisionArena-Chat \
    --hf-split train \
    --num-prompts 100
```

```bash
vllm bench mm-processor \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --dataset-name random-mm \
    --num-prompts 200 \
    --num-warmups 5 \
    --random-input-len 300 \
    --random-output-len 40 \
    --random-mm-base-items-per-request 1 \
    --metric-percentiles 50,90,95,99 \
    --output-json results.json
```

See `vllm bench mm-processor` for the full argument reference.
Benchmark the performance of embedding requests in vLLM.

Unlike generative models, which use the Completions API or Chat Completions API, you should set `--backend openai-embeddings` and `--endpoint /v1/embeddings` to use the Embeddings API.
You can use any text dataset to benchmark the model, such as ShareGPT.
Start the server:

```bash
vllm serve jinaai/jina-embeddings-v3 --trust-remote-code
```
Run the benchmark:

```bash
# download dataset
# wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
vllm bench serve \
    --model jinaai/jina-embeddings-v3 \
    --backend openai-embeddings \
    --endpoint /v1/embeddings \
    --dataset-name sharegpt \
    --dataset-path <your data path>/ShareGPT_V3_unfiltered_cleaned_split.json
```

Unlike generative models, which use the Completions API or Chat Completions API, you should set `--endpoint /v1/embeddings` to use the Embeddings API. The backend to use depends on the model:

- CLIP: `--backend openai-embeddings-clip`
- VLM2Vec: `--backend openai-embeddings-vlm2vec`

For other models, please add your own implementation inside `vllm/benchmarks/lib/endpoint_request_func.py` to match the expected instruction format.
You can use any text or multi-modal dataset to benchmark the model, as long as the model supports it. For example, you can use ShareGPT and VisionArena to benchmark vision-language embeddings.
Serve and benchmark CLIP:

```bash
# Run this in another process
vllm serve openai/clip-vit-base-patch32
```

```bash
# Run these one by one after the server is up
# download dataset
# wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
vllm bench serve \
    --model openai/clip-vit-base-patch32 \
    --backend openai-embeddings-clip \
    --endpoint /v1/embeddings \
    --dataset-name sharegpt \
    --dataset-path <your data path>/ShareGPT_V3_unfiltered_cleaned_split.json

vllm bench serve \
    --model openai/clip-vit-base-patch32 \
    --backend openai-embeddings-clip \
    --endpoint /v1/embeddings \
    --dataset-name hf \
    --dataset-path lmarena-ai/VisionArena-Chat
```
Serve and benchmark VLM2Vec:

```bash
# Run this in another process
vllm serve TIGER-Lab/VLM2Vec-Full --runner pooling \
    --trust-remote-code \
    --chat-template examples/template_vlm2vec_phi3v.jinja
```

```bash
# Run these one by one after the server is up
# download dataset
# wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
vllm bench serve \
    --model TIGER-Lab/VLM2Vec-Full \
    --backend openai-embeddings-vlm2vec \
    --endpoint /v1/embeddings \
    --dataset-name sharegpt \
    --dataset-path <your data path>/ShareGPT_V3_unfiltered_cleaned_split.json

vllm bench serve \
    --model TIGER-Lab/VLM2Vec-Full \
    --backend openai-embeddings-vlm2vec \
    --endpoint /v1/embeddings \
    --dataset-name hf \
    --dataset-path lmarena-ai/VisionArena-Chat
```
Benchmark the performance of rerank requests in vLLM.

Unlike generative models, which use the Completions API or Chat Completions API, you should set `--backend vllm-rerank` and `--endpoint /v1/rerank` to use the Reranker API.

For reranking, the only supported dataset is `--dataset-name random-rerank`.
Start the server:

```bash
vllm serve BAAI/bge-reranker-v2-m3
```
Run the benchmark:

```bash
vllm bench serve \
    --model BAAI/bge-reranker-v2-m3 \
    --backend vllm-rerank \
    --endpoint /v1/rerank \
    --dataset-name random-rerank \
    --tokenizer BAAI/bge-reranker-v2-m3 \
    --random-input-len 512 \
    --num-prompts 10 \
    --random-batch-size 5
```

For reranker models, this will create `num_prompts / random_batch_size` requests with `random_batch_size` "documents", where each one has close to `random_input_len` tokens. In the example above, this results in 2 rerank requests with 5 "documents" each, where each document has close to 512 tokens.

Please note that `/v1/rerank` is also supported by embedding models, so if you're running with an embedding model, also set `--no_reranker`. Because in this case the query is treated as an individual prompt by the server, we send `random_batch_size - 1` documents to account for the extra prompt, which is the query. The token accounting used to report throughput numbers is also adjusted accordingly.
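
For example, a sketch of this setup against an embedding model (the model choice is illustrative; the flags are the ones described above):

```bash
# serve an embedding model (illustrative choice)
vllm serve BAAI/bge-m3

# run the rerank benchmark with --no_reranker
vllm bench serve \
    --model BAAI/bge-m3 \
    --backend vllm-rerank \
    --endpoint /v1/rerank \
    --dataset-name random-rerank \
    --random-input-len 512 \
    --num-prompts 10 \
    --random-batch-size 5 \
    --no_reranker
```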