docs/features/speculative_decoding/parallel_draft_model.md
The following code configures vLLM for offline inference with speculative decoding, where proposals are generated by a PARD (Parallel Draft) model.
```python
from vllm import LLM, SamplingParams

prompts = ["The future of AI is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(
    model="Qwen/Qwen3-8B",
    tensor_parallel_size=1,
    speculative_config={
        # Draft model that proposes tokens for the target model to verify.
        "model": "amd/PARD-Qwen3-0.6B",
        # Number of draft tokens proposed per speculation step.
        "num_speculative_tokens": 12,
        "method": "draft_model",
        "parallel_drafting": True,
    },
)

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
The same configuration can be passed to the OpenAI-compatible server as a JSON string via the `--speculative-config` flag of `vllm serve`:

```bash
vllm serve Qwen/Qwen3-4B \
    --host 0.0.0.0 \
    --port 8000 \
    --seed 42 \
    -tp 1 \
    --max-model-len 2048 \
    --gpu-memory-utilization 0.8 \
    --speculative-config '{"model": "amd/PARD-Qwen3-0.6B", "num_speculative_tokens": 12, "method": "draft_model", "parallel_drafting": true}'
```
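Once the server is running, it can be queried with any OpenAI-compatible client; speculative decoding is transparent to the client side. Below is a minimal sketch using the `openai` Python package, assuming the host/port above and the conventional placeholder API key for a local vLLM server; the prompt and `max_tokens` value are illustrative.

```python
# Minimal client sketch for the server started above. The request is an
# ordinary completion call; PARD drafting happens server-side.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Qwen/Qwen3-4B",  # must match the model passed to `vllm serve`
    prompt="The future of AI is",
    temperature=0.8,
    top_p=0.95,
    max_tokens=64,
)
print(completion.choices[0].text)
```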