
SAM Decoding: Speculative Decoding via Suffix Automaton (2411.10666v3)

Published 16 Nov 2024 in cs.CL and cs.AI

Abstract: Speculative decoding (SD) has been demonstrated as an effective technique for lossless LLM inference acceleration. Retrieval-based SD methods, one kind of model-free method, have yielded promising speedup, but they often rely on incomplete retrieval resources, inefficient retrieval methods, and are constrained to certain domains. This paper presents a novel retrieval-based speculative decoding method that adapts suffix automaton (SAM) for efficient and accurate draft generation by utilizing common text corpus and dynamic text sequence. Unlike existing n-gram matching methods, SAM-Decoding finds the exact longest suffix match, achieving an average time complexity of O(1) per generation step for SAM update and suffix retrieval. It can also integrate with existing methods, adaptively selecting a draft generation strategy based on match length to generalize to broader domains. Extensive experiments on Spec-Bench show that our method is 18%+ faster than other retrieval-based SD methods. Additionally, when combined with advanced EAGLE-2, it provides an additional speedup of 3.28% to 11.13% across various-sized LLM backbones. Our code is available at https://github.com/hyx1999/SAM-Decoding.

Summary

  • The paper introduces a speculative decoding method that leverages suffix automata to reduce average time complexity to O(1) per generation step.
  • It demonstrates significant speedups—up to 2.86 times over autoregressive decoding—when combined with existing retrieval and model-based techniques.
  • The method enhances performance through both static and dynamic automata, optimizing draft selection for tasks like summarization and multi-turn conversation.

Review of "SAM Decoding: Speculative Decoding via Suffix Automaton"

The paper "SAM Decoding: Speculative Decoding via Suffix Automaton" addresses a significant challenge in the domain of LLMs—enhancing inference speed while maintaining output quality. The authors present SAM-Decoding, a novel retrieval-based speculative decoding method that leverages suffix automata to optimize the draft generation process.

Core Contributions

SAM-Decoding introduces several key innovations:

  1. Suffix Automaton for Decoding: The central advancement in this work is the use of a suffix automaton as the retrieval mechanism for speculative decoding. Rather than the approximate n-gram matching employed by prior retrieval-based strategies, it finds the exact longest suffix match between the generated text and a text corpus. Automaton update and suffix retrieval run in O(1) amortized time per generation step, a substantial improvement over existing retrieval techniques (a minimal sketch of the automaton follows this list).
  2. Combination with Existing Methods: The paper describes not only the standalone implementation of SAM-Decoding but also its adaptability to current model-free and model-based speculative decoding methods, selecting between the retrieval draft and an auxiliary drafter based on the current match length (a small selection sketch also follows this list). When integrated with Token Recycling and EAGLE-2, SAM-Decoding achieves notable speedups of 2.27 and 2.49 times over traditional autoregressive decoding, respectively.
  3. Dynamic and Static Automata: The approach constructs both static and dynamic suffix automata; the former is built on a predefined text corpus while the latter dynamically evolves with the ongoing generation process. This dual-automaton system enhances the retrieval capability, allowing more precise draft selection.
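
The following is a minimal Python sketch of the core idea, not the authors' implementation: it builds a suffix automaton over a reference token sequence (playing the role of either the static corpus or the dynamically growing generation), tracks the longest suffix match of the current context, and reads off a candidate draft as the continuation of a matched position. The class and function names, the toy token ids, and the draft length are illustrative assumptions.

    # Minimal suffix-automaton sketch for retrieval-based drafting (illustrative only).
    class State:
        def __init__(self):
            self.next = {}      # outgoing transitions: token id -> state index
            self.link = -1      # suffix link
            self.len = 0        # length of the longest string in this state
            self.end_pos = -1   # one position where strings of this state end

    class SuffixAutomaton:
        def __init__(self):
            self.states = [State()]   # state 0 is the initial state
            self.last = 0
            self.seq = []             # the indexed token sequence

        def extend(self, token):
            """Append one token; amortized O(1) per step (standard online SAM construction)."""
            self.seq.append(token)
            st = self.states
            cur = len(st)
            st.append(State())
            st[cur].len = st[self.last].len + 1
            st[cur].end_pos = len(self.seq) - 1
            p = self.last
            while p != -1 and token not in st[p].next:
                st[p].next[token] = cur
                p = st[p].link
            if p == -1:
                st[cur].link = 0
            else:
                q = st[p].next[token]
                if st[p].len + 1 == st[q].len:
                    st[cur].link = q
                else:
                    clone = len(st)
                    st.append(State())
                    st[clone].len = st[p].len + 1
                    st[clone].next = dict(st[q].next)
                    st[clone].link = st[q].link
                    st[clone].end_pos = st[q].end_pos
                    while p != -1 and st[p].next.get(token) == q:
                        st[p].next[token] = clone
                        p = st[p].link
                    st[q].link = clone
                    st[cur].link = clone
            self.last = cur

    def match_step(sam, state, length, token):
        """Advance the longest-suffix match by one token, following suffix links
        when the current state has no transition for the token."""
        st = sam.states
        while state != 0 and token not in st[state].next:
            state = st[state].link
            length = st[state].len
        if token in st[state].next:
            state = st[state].next[token]
            length += 1
        else:
            state, length = 0, 0
        return state, length

    def retrieve_draft(sam, state, k=4):
        """Return up to k tokens that follow one occurrence of the matched suffix."""
        pos = sam.states[state].end_pos
        return sam.seq[pos + 1 : pos + 1 + k]

    # Toy usage: index a reference sequence, match the tail of the generated text,
    # and propose its continuation as the speculative draft.
    reference = [5, 9, 2, 7, 9, 2, 7, 4]
    sam = SuffixAutomaton()
    for t in reference:
        sam.extend(t)

    state, length = 0, 0
    for t in [9, 2, 7]:              # suffix of the text generated so far
        state, length = match_step(sam, state, length, t)
    print(length, retrieve_draft(sam, state))   # -> 3 [9, 2, 7, 4]

In the paper's setup, the static automaton is built once over the text corpus, while the dynamic one grows with the generated sequence via exactly this kind of online extension.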

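For the adaptive combination with other drafters, a rough sketch of the match-length-based selection is shown below; the paper selects a draft-generation strategy based on match length, but the threshold value, function name, and interfaces here are assumptions rather than the authors' settings.

    # Illustrative selection rule: trust the retrieval draft only when the suffix
    # match is long enough; otherwise fall back to an auxiliary drafter such as
    # Token Recycling or EAGLE-2. The threshold value is an assumption.
    def select_draft(match_len, sam_draft, aux_draft, threshold=5):
        return sam_draft if match_len >= threshold else aux_draft

    # Example: a short match falls back to the auxiliary drafter's proposal.
    print(select_draft(2, [17, 42, 9], [31, 8, 8]))   # -> [31, 8, 8]
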
Numerical Evaluation

The paper provides a robust quantitative analysis across various tasks using Spec-Bench. The results show that SAM-Decoding outperforms several baseline speculative decoding methods. Specifically, in tasks suitable for retrieval-based methods—multi-turn conversation, summarization, and retrieval-augmented generation—SAM-Decoding exhibits superior speedup ratios compared to other state-of-the-art techniques. For instance, the model-free SAM-Decoding[T] achieves a speedup of 2.86 times in summarization tasks, illustrating the effectiveness of the retrieval strategy when aligned with task characteristics.

Implications and Future Directions

This work has both theoretical and practical implications. Theoretically, it demonstrates the feasibility and efficiency of using suffix automata in speculative decoding, opening avenues for further exploration of automata-based methods in natural language processing. Practically, by enhancing inference speeds, SAM-Decoding facilitates more efficient deployment of LLMs in real-world applications where computational resources and response times are critical considerations.

Going forward, more sophisticated automaton construction techniques could yield additional efficiencies. Exploring contextually adaptive automata could also better integrate retrieval with dynamic LLM contexts, potentially broadening the applicability of SAM-Decoding to a wider range of NLP tasks.

Conclusion

Overall, "SAM Decoding: Speculative Decoding via Suffix Automaton" presents a significant contribution to the field of speculative decoding. Its innovative use of suffix automata provides a promising direction for optimizing inference in LLMs. This paper sets a solid foundation for future research aimed at marrying retrieval-based methods with model-based speculative strategies, potentially redefining approaches to efficient text generation in large-scale models.
