# MLP Draft Models
The following code configures vLLM to use speculative decoding with an MLP speculator: a small draft model that conditions its draft predictions on both the target model's context vectors and the previously sampled tokens. For more information, see The Hitchhiker's Guide to Speculative Decoding and IBM Research's Technical Report.
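To make that conditioning concrete, here is a minimal sketch of the idea. It is not vLLM's internal implementation, and every class, layer, and shape name below is illustrative: the drafter fuses the target model's last hidden state (the context vector) with the embedding of the token just sampled, and a small MLP maps the fused vector to draft-token logits.

```python
# Illustrative sketch only, not vLLM's internal implementation.
import torch
import torch.nn as nn

class MLPSpeculatorHead(nn.Module):
    """Predicts one draft token from a context vector plus the last sampled token."""

    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.token_embedding = nn.Embedding(vocab_size, hidden_size)
        # Fuse the target model's hidden state with the sampled-token embedding.
        self.proj = nn.Linear(2 * hidden_size, hidden_size)
        self.act = nn.GELU()
        self.lm_head = nn.Linear(hidden_size, vocab_size)

    def forward(self, context_vector: torch.Tensor, sampled_token: torch.Tensor) -> torch.Tensor:
        # context_vector: [batch, hidden_size], the target model's last hidden state.
        # sampled_token:  [batch], token ids just sampled or accepted.
        fused = torch.cat([context_vector, self.token_embedding(sampled_token)], dim=-1)
        return self.lm_head(self.act(self.proj(fused)))  # draft logits for the next position
```

In practice an MLP speculator chains several such steps, feeding each newly drafted token back in, so that multiple draft tokens can be proposed per target-model forward pass.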
## MLP Drafter Example
```python
from vllm import LLM, SamplingParams

prompts = ["The future of AI is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(
    model="meta-llama/Meta-Llama-3.1-70B-Instruct",
    tensor_parallel_size=4,
    speculative_config={
        "model": "ibm-ai-platform/llama3-70b-accelerator",
        "draft_tensor_parallel_size": 1,
        "method": "mlp_speculator",
    },
)

outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
## Pre-Trained MLP Drafter Models
A variety of speculative models of this type are available on the Hugging Face Hub: