
Speculative Decoding

This document shows how to use Speculative Decoding with vLLM to reduce inter-token latency in memory-bound workloads at medium-to-low QPS (queries per second).
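
A minimal sketch of enabling speculative decoding with a separate draft model, assuming a recent vLLM release in which LLM accepts a speculative_config dict (older releases exposed separate arguments such as speculative_model); the model pair below is illustrative only:

```python
from vllm import LLM, SamplingParams

# Target model plus a small draft model that proposes tokens for the target
# model to verify. Field names follow recent vLLM releases; check the docs
# of your installed version for the exact schema.
llm = LLM(
    model="facebook/opt-6.7b",
    speculative_config={
        "model": "facebook/opt-125m",   # draft model (illustrative choice)
        "num_speculative_tokens": 5,    # tokens proposed per step
    },
)

outputs = llm.generate(
    ["The future of AI is"],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```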

To train your own draft models for optimized speculative decoding, see vllm-project/speculators for seamless training and integration with vLLM.

vLLM Speculation Methods

vLLM supports a variety of speculative decoding methods. Model-based methods such as EAGLE, draft models, and MLP speculators provide the best latency reduction, while simpler methods such as n-gram and suffix decoding provide modest speedups without increasing the workload during peak traffic.
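
As a sketch of one of the simpler methods, n-gram speculation proposes draft tokens by matching recent context against earlier tokens in the sequence, so no extra model is loaded; the configuration keys below follow recent vLLM releases and may differ across versions:

```python
from vllm import LLM, SamplingParams

# N-gram speculation: draft tokens come from prompt lookup rather than a
# second model, so no draft-model GPU work is added during peak traffic.
llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # illustrative target model
    speculative_config={
        "method": "ngram",
        "num_speculative_tokens": 5,   # tokens proposed per step
        "prompt_lookup_max": 4,        # longest n-gram to match in the context
    },
)

outputs = llm.generate(
    ["Repeat the following list twice: apple, banana, cherry."],
    SamplingParams(temperature=0.0, max_tokens=48),
)
print(outputs[0].outputs[0].text)
```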

Lossless guarantees of Speculative Decoding

In vLLM, speculative decoding aims to enhance inference efficiency while maintaining accuracy. This section addresses the lossless guarantees of speculative decoding, breaking down the guarantees into three key areas:

  1. Theoretical Losslessness - Speculative sampling is theoretically lossless up to the precision limits of hardware numerics. Floating-point errors might cause slight variations in output distributions, as discussed in Accelerating Large Language Model Decoding with Speculative Sampling; the acceptance rule is sketched after this list.

  2. Algorithmic Losslessness - vLLM’s implementation of speculative decoding is algorithmically validated to be lossless. Key validation tests include:

    • Rejection Sampler Convergence: Ensures that samples from vLLM’s rejection sampler align with the target distribution. View Test Code
    • Greedy Sampling Equality: Confirms that greedy sampling with speculative decoding matches greedy sampling without it. This verifies that vLLM's speculative decoding framework, when integrated with the vLLM forward pass and the vLLM rejection sampler, provides a lossless guarantee. Almost all of the tests in tests/spec_decode/e2e verify this property using this assertion implementation.
  3. vLLM Logprob Stability - vLLM does not currently guarantee stable token log probabilities (logprobs). This can result in different outputs for the same request across runs. For more details, see the FAQ entry Can the output of a prompt vary across runs in vLLM?
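
For reference, the acceptance rule that the rejection sampler is validated against is the standard speculative sampling rule from the paper cited above; the notation below is a sketch and is not taken from the vLLM code:

```latex
% p(x): target-model distribution, q(x): draft-model distribution.
% A drafted token x ~ q(x) is accepted with probability
\[
  P(\text{accept } x) = \min\!\left(1, \frac{p(x)}{q(x)}\right).
\]
% On rejection, a replacement token is drawn from the residual distribution
\[
  p'(x) = \frac{\max\bigl(0,\; p(x) - q(x)\bigr)}{\sum_{x'} \max\bigl(0,\; p(x') - q(x')\bigr)},
\]
% so that, in exact arithmetic, the combined procedure samples exactly from p(x).
```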

While vLLM strives to ensure losslessness in speculative decoding, variations in generated outputs with and without speculative decoding can occur due to the following factors:

  • Floating-Point Precision: Differences in hardware numerical precision may lead to slight discrepancies in the output distribution.
  • Batch Size and Numerical Stability: Changes in batch size may cause variations in logprobs and output probabilities, potentially due to non-deterministic behavior in batched operations or numerical instability.
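
As an illustrative way to observe (or rule out) such variation, one can compare greedy outputs generated with and without speculative decoding; the sketch below reuses the hypothetical draft-model configuration from the earlier example and is not part of vLLM's test suite:

```python
from vllm import LLM, SamplingParams

PROMPTS = ["Explain speculative decoding in one sentence."]
GREEDY = SamplingParams(temperature=0.0, max_tokens=64)

def greedy_texts(**llm_kwargs):
    """Generate greedy completions for PROMPTS with the given LLM settings."""
    llm = LLM(model="facebook/opt-6.7b", **llm_kwargs)
    return [out.outputs[0].text for out in llm.generate(PROMPTS, GREEDY)]

# Run the baseline and the speculative configuration one after the other
# (in practice separate processes may be needed to fully release GPU memory).
baseline = greedy_texts()
speculative = greedy_texts(
    speculative_config={"model": "facebook/opt-125m", "num_speculative_tokens": 5},
)

# With greedy sampling the texts are expected to match, up to the
# floating-point and batching caveats described above.
for base, spec in zip(baseline, speculative):
    print("match" if base == spec else "mismatch")
```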

For mitigation strategies, please refer to the FAQ entry Can the output of a prompt vary across runs in vLLM?

Known Feature Incompatibility

  1. Pipeline parallelism is not compatible with speculative decoding in vllm<=0.15.0
  2. Speculative decoding with a draft model is not supported in vllm<=0.10.0

Resources for vLLM contributors