Ouroboros: Generating Longer Drafts Phrase by Phrase for Faster Speculative Decoding (2402.13720v3)

Published 21 Feb 2024 in cs.CL

Abstract: Speculative decoding is a widely used method that accelerates the generation process of LLMs with no compromise in model performance. It achieves this goal by using an existing smaller model for drafting and then employing the target LLM to verify the draft in a low-cost parallel manner. Under such a drafting-verification framework, drafting efficiency has become a bottleneck in the final speedup of speculative decoding. Therefore, generating longer drafts at less cost can lead to better decoding speedup. To achieve this, we introduce Ouroboros, which can generate draft phrases to parallelize the drafting process and meanwhile lengthen drafts in a training-free manner. The experimental results on various typical text generation tasks show that Ouroboros can achieve speedups of up to $2.8\times$ over speculative decoding and $3.9\times$ over vanilla decoding, without fine-tuning draft and target models. The source code of Ouroboros is available at https://github.com/thunlp/Ouroboros.

Authors (10)
  1. Weilin Zhao
  2. Yuxiang Huang
  3. Xu Han
  4. Chaojun Xiao
  5. Zhiyuan Liu
  6. Maosong Sun
  7. Wang Xu
  8. Xinrong Zhang
  9. Yewei Fang
  10. Kaihuo Zhang

Summary

Enhancing Inference Acceleration in LLMs with Ouroboros: A Speculative Decoding Framework

Introduction

Recent LLMs have set strong benchmarks across natural language processing tasks, but serving them efficiently in real-time applications remains a challenge. The core inefficiency stems from autoregressive decoding: tokens are generated one at a time, which limits parallelism during generation and drives up latency. To address this, the paper introduces Ouroboros, a decoding framework that substantially speeds up the drafting phase and puts verification failures to constructive use, enabling faster LLM inference without compromising task performance.
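
To make this bottleneck concrete, the toy sketch below shows plain greedy autoregressive decoding: each new token requires another call to the target model on the full prefix, so the number of expensive forward passes equals the number of generated tokens. The `target_next_token` stub is a hypothetical stand-in for an LLM forward pass, not code from the paper.

```python
# Minimal sketch (not from the paper's codebase) of why autoregressive
# decoding is sequential: every token depends on all previous tokens, so
# the target model must be invoked once per generated token.

def target_next_token(prefix: list[int]) -> int:
    # Hypothetical stand-in for a full LLM forward pass (greedy argmax).
    return (sum(prefix) * 31 + len(prefix)) % 1000

def vanilla_decode(prompt: list[int], max_new_tokens: int) -> list[int]:
    seq = list(prompt)
    for _ in range(max_new_tokens):   # one expensive model call per token
        seq.append(target_next_token(seq))
    return seq

print(vanilla_decode([1, 2, 3], max_new_tokens=5))
```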

Speculative Decoding Framework

Ouroboros follows the drafting-then-verifying paradigm: a smaller model generates an initial draft, and the target LLM verifies it in a low-cost parallel pass. On top of this, Ouroboros introduces a phrase candidate pool that feeds verification outcomes back into drafting, producing longer and more accurate drafts. This iterative refinement improves inference speed while preserving output quality, and it addresses two limitations of existing drafting-then-verifying methods: drafts that are too short and verification results that go unused.
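
The sketch below illustrates the generic drafting-then-verifying loop that Ouroboros builds on; it is standard greedy speculative decoding under simplifying assumptions, not the authors' implementation. `draft_next_token` and `target_next_token` are hypothetical stand-ins for the small draft model and the target LLM, and the per-position verification loop emulates what is in practice a single batched forward pass of the target.

```python
# Sketch of greedy drafting-then-verifying (standard speculative decoding,
# simplified): the small model proposes several tokens, the target checks
# every draft position, and the longest prefix agreeing with the target's
# own greedy choices is accepted, plus one corrected token at the first
# mismatch. Both "models" here are hypothetical stubs.

def draft_next_token(prefix: list[int]) -> int:
    return (sum(prefix) * 31 + len(prefix)) % 1000   # cheap, imperfect drafter

def target_next_token(prefix: list[int]) -> int:
    return (sum(prefix) * 37 + len(prefix)) % 1000   # the target LLM

def speculative_step(seq: list[int], draft_len: int) -> list[int]:
    # Drafting: the small model extends the sequence autoregressively (cheap).
    draft = list(seq)
    for _ in range(draft_len):
        draft.append(draft_next_token(draft))

    # Verification: conceptually one batched target pass over all draft
    # positions; accept tokens until the first disagreement, where the
    # target's own token is appended for free.
    out = list(seq)
    for i in range(len(seq), len(draft)):
        target_tok = target_next_token(draft[:i])
        out.append(target_tok)
        if target_tok != draft[i]:
            break
    return out

seq = [1, 2, 3]
for _ in range(4):                 # each step accepts at least one token
    seq = speculative_step(seq, draft_len=4)
print(seq)
```

The speedup comes from calling the target model once per step rather than once per token; the number of draft tokens accepted per step determines the realized acceleration, which is why longer and better drafts matter.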

Framework Components and Mechanisms

The Ouroboros methodology extends conventional speculative decoding with several pivotal features (a combined sketch follows this list):

  • Shared Candidate Pool: A pool of phrase candidates ties the drafting and verifying phases together. Drawing multi-token phrases from this pool lengthens and improves the initial drafts at little extra cost, accelerating inference.
  • Utilization of Verification Results: Instead of discarding the target model's outputs after a verification failure, Ouroboros turns them into new phrase candidates, so every verification pass helps refine subsequent drafts.
  • Warm Start Capability: To avoid cold starts, Ouroboros can pre-fill the candidate pool with phrases from similar tasks, exploiting context locality to speed up generation from the first step.
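
Below is a hedged sketch of how these mechanisms could fit together, reflecting a reading of the paper rather than the reference implementation at https://github.com/thunlp/Ouroboros. Phrases are modeled as fixed-length n-grams keyed by their first token; `CandidatePool`, `PHRASE_LEN`, and the token-level fallback are illustrative choices, not the paper's actual data structures.

```python
# Illustrative candidate-pool sketch (assumptions noted above): drafting
# appends whole phrases from the pool, verification outputs -- including the
# span after a rejection -- are harvested back into the pool, and a warm
# start simply pre-fills the pool from text of a similar task.
from __future__ import annotations
from collections import defaultdict

PHRASE_LEN = 4

class CandidatePool:
    def __init__(self, warm_start_text: list[int] | None = None):
        self.pool: dict[int, list[tuple[int, ...]]] = defaultdict(list)
        if warm_start_text:                      # warm start: pre-fill the pool
            self.add_from(warm_start_text)

    def add_from(self, tokens: list[int]) -> None:
        # Harvest every PHRASE_LEN-gram, keyed by its first token.
        for i in range(len(tokens) - PHRASE_LEN + 1):
            gram = tuple(tokens[i:i + PHRASE_LEN])
            if gram not in self.pool[gram[0]]:
                self.pool[gram[0]].append(gram)

    def lookup(self, last_token: int) -> tuple[int, ...] | None:
        cands = self.pool.get(last_token)
        return cands[-1] if cands else None      # prefer the most recently added phrase

def draft_with_pool(seq: list[int], pool: CandidatePool, steps: int) -> list[int]:
    draft = list(seq)
    for _ in range(steps):
        phrase = pool.lookup(draft[-1])
        if phrase:
            draft.extend(phrase[1:])             # append a whole phrase at once
        else:
            draft.append((draft[-1] * 31 + 7) % 1000)   # token-level fallback
    return draft

pool = CandidatePool(warm_start_text=[5, 6, 7, 8, 9, 10])
print(draft_with_pool([1, 2, 5], pool, steps=3))
```

Because whole phrases are appended at once, the draft grows by several tokens per lookup at negligible cost, and harvesting n-grams from the target's verification output keeps the pool aligned with what the target model actually generates.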

Empirical Validation

Across a range of text generation tasks, including code generation and machine translation, Ouroboros delivers substantial inference acceleration: up to 1.9× over lookahead decoding, 2.8× over speculative decoding, and 3.9× over vanilla autoregressive decoding. The method is lossless with respect to task performance, preserving the output quality of the target LLMs.

Implications and Future Directions

Ouroboros points toward a practical way to reconcile real-time responsiveness with the computational demands of LLMs. It opens further research into how larger and smaller models can cooperate in generative tasks, and how far drafting and verification can be pushed in both efficiency and quality. The current implementation targets greedy decoding; extending Ouroboros to sampling-based decoding strategies is a natural direction for future work.

Conclusion

Ouroboros is an effective framework for LLM inference acceleration that tackles inefficiency without sacrificing output quality. Its shared candidate pool and constructive use of verification results show how much headroom remains in speculative decoding methods. Such training-free accelerations make LLMs markedly more practical for latency-sensitive, real-world applications.