FastDraft: How to Train Your Draft (2411.11055v1)
Abstract: Speculative Decoding has gained popularity as an effective technique for accelerating the auto-regressive inference process of LLMs. However, Speculative Decoding relies entirely on the availability of efficient draft models, which are lacking for many existing LLMs due to the stringent constraint of vocabulary compatibility. In this work we introduce FastDraft, a novel and efficient approach for pre-training and aligning a draft model to any LLM through efficient pre-training followed by fine-tuning on synthetic datasets generated by the target model. We demonstrate FastDraft by training two highly parameter-efficient drafts for the popular Phi-3-mini and Llama-3.1-8B models. Using FastDraft, we were able to produce a draft model, trained on approximately 10 billion tokens, on a single server with 8 Intel® Gaudi® 2 accelerators in under 24 hours. Our results show that the draft model achieves impressive results on the key metrics of acceptance rate and block efficiency, with up to a 3x memory-bound speedup on code completion and up to 2x on summarization, text completion, and instruction tasks. We validate our theoretical findings through benchmarking on the latest Intel® Core™ Ultra, achieving a wall-clock speedup of up to 2x and indicating a significant reduction in runtime. Due to its high quality, FastDraft unlocks LLM inference on AI PCs and other edge devices.
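The acceptance rate and block efficiency cited in the abstract are connected by the standard speculative-decoding analysis (Leviathan et al., 2023): with a per-token acceptance probability α and a draft block of γ tokens, each target-model forward pass yields (1 − α^(γ+1)) / (1 − α) tokens in expectation, which bounds the memory-bound speedup. Below is a minimal Python sketch of this relation; it is not code from the paper, and the function name and example values are illustrative only:

```python
def expected_block_efficiency(alpha: float, gamma: int) -> float:
    """Expected number of target tokens emitted per target-model forward pass,
    assuming an i.i.d. per-token acceptance rate `alpha` and a draft block of
    `gamma` tokens (standard speculative-decoding analysis, not FastDraft-specific)."""
    if alpha >= 1.0:
        # Every drafted token is accepted, plus one token from the target pass.
        return gamma + 1
    return (1.0 - alpha ** (gamma + 1)) / (1.0 - alpha)

# Illustrative numbers: alpha = 0.8 and gamma = 4 drafted tokens per step
# give ~3.36 tokens per target pass instead of 1, the same order as the
# up-to-3x memory-bound speedup reported in the abstract.
print(expected_block_efficiency(0.8, 4))  # ~3.36
```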
- Ofir Zafrir
- Igor Margulis
- Dorin Shteyman
- Guy Boudoukh