
Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding (2309.08168v2)

Published 15 Sep 2023 in cs.CL

Abstract: We present a novel inference scheme, self-speculative decoding, for accelerating LLMs without the need for an auxiliary model. This approach is characterized by a two-stage process: drafting and verification. The drafting stage generates draft tokens at a slightly lower quality but more quickly, which is achieved by selectively skipping certain intermediate layers during drafting. Subsequently, the verification stage employs the original LLM to validate those draft output tokens in one forward pass. This process ensures the final output remains identical to that produced by the unaltered LLM. Moreover, the proposed method requires no additional neural network training and no extra memory footprint, making it a plug-and-play and cost-effective solution for inference acceleration. Benchmarks with LLaMA-2 and its variants demonstrated a speedup of up to 1.99$\times$.
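The draft-and-verify loop described in the abstract can be sketched in miniature. This is an illustrative toy, not the paper's implementation: `full_next` stands in for the full LLM's greedy next-token choice, and `draft_next` stands in for the cheaper layer-skipping draft pass, which is occasionally wrong. The verification step accepts the longest correct prefix of the draft and corrects the first mismatch, so the final sequence is guaranteed to match plain greedy decoding with the full model.

```python
# Toy sketch of self-speculative decoding (greedy case).
# full_next and draft_next are hypothetical stand-ins for the full
# and layer-skipping forward passes, working over integer "tokens".

def full_next(seq):
    """Full model's deterministic greedy next token (toy)."""
    return (sum(seq) * 31 + 7) % 100

def draft_next(seq):
    """Cheaper draft pass: usually agrees with full_next, sometimes not."""
    t = full_next(seq)
    # Simulate occasional draft errors caused by skipped layers.
    return (t + 1) % 100 if len(seq) % 3 == 0 else t

def self_speculative_decode(prompt, n_tokens, k=4):
    seq = list(prompt)
    target = len(prompt) + n_tokens
    while len(seq) < target:
        # Drafting stage: propose up to k tokens with the cheap pass.
        draft = []
        for _ in range(min(k, target - len(seq))):
            draft.append(draft_next(seq + draft))
        # Verification stage: the full model checks each draft position
        # (in a real LLM this is one batched forward pass, not a loop).
        accepted = []
        for t in draft:
            correct = full_next(seq + accepted)
            if t == correct:
                accepted.append(t)
            else:
                accepted.append(correct)  # correct first mismatch, stop
                break
        else:
            # Every draft token verified; the pass yields one bonus token.
            if len(seq) + len(accepted) < target:
                accepted.append(full_next(seq + accepted))
        seq += accepted
    return seq[:target]

def greedy_decode(prompt, n_tokens):
    """Baseline: plain greedy decoding with the full model."""
    seq = list(prompt)
    for _ in range(n_tokens):
        seq.append(full_next(seq))
    return seq
```

Because verification falls back to the full model's token at the first disagreement, the output is lossless: it equals the baseline while typically needing fewer full-model passes than tokens generated.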

Authors (7)
  1. Jun Zhang (1008 papers)
  2. Jue Wang (203 papers)
  3. Huan Li (102 papers)
  4. Lidan Shou (16 papers)
  5. Ke Chen (241 papers)
  6. Gang Chen (592 papers)
  7. Sharad Mehrotra (37 papers)
Citations (51)