Faster Cascades via Speculative Decoding (2405.19261v2)

Published 29 May 2024 in cs.CL, cs.AI, and cs.LG

Abstract: Cascades and speculative decoding are two common approaches to improving LLMs' inference efficiency. Both approaches involve interleaving models of different sizes, but via fundamentally distinct mechanisms: cascades employ a deferral rule that invokes the larger model only for "hard" inputs, while speculative decoding uses speculative execution to primarily invoke the larger model in parallel verification mode. These mechanisms offer different benefits: empirically, cascades offer better cost-quality trade-offs, often even outperforming the large model, while theoretically, speculative decoding offers a guarantee of quality-neutrality. In this paper, we leverage the best of both these approaches by designing new speculative cascading techniques that implement their deferral rule through speculative execution. We characterize the optimal deferral rule for our speculative cascades, and employ a plug-in approximation to the optimal rule. Experiments with Gemma and T5 models on a range of language benchmarks show that our approach yields better cost-quality trade-offs than cascading and speculative decoding baselines.
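
The abstract describes the core mechanism at a high level: a small model drafts a block of tokens, the large model scores them in parallel, and a deferral rule decides per position whether to keep the draft token or fall back to the large model. Below is a minimal, illustrative Python sketch of one such speculative-cascade step. It is not the authors' implementation: the model functions are toy stand-ins, the drafter decodes greedily, and the deferral rule is a simple confidence threshold rather than the paper's optimal/plug-in rule; all names are hypothetical.

```python
# Minimal, illustrative sketch of one speculative-cascade decoding step.
# Hypothetical names and toy stand-in "models"; not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
VOCAB_SIZE = 50


def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()


def small_model_probs(prefix):
    """Toy stand-in for the small drafter's next-token distribution (ignores prefix)."""
    return softmax(rng.normal(size=VOCAB_SIZE))


def large_model_probs(prefix):
    """Toy stand-in for the large verifier's next-token distribution (ignores prefix)."""
    return softmax(rng.normal(size=VOCAB_SIZE))


def speculative_cascade_step(prefix, num_draft=4, defer_threshold=0.1):
    """Draft a block with the small model, score it with the large model,
    and apply a per-token deferral rule to decide which tokens to keep."""
    # 1) Small model drafts num_draft tokens autoregressively (greedy here).
    draft, draft_dists, cur = [], [], list(prefix)
    for _ in range(num_draft):
        q = small_model_probs(cur)
        token = int(np.argmax(q))
        draft.append(token)
        draft_dists.append(q)
        cur.append(token)

    # 2) Large model scores every drafted position (one batched pass in practice).
    target_dists = [large_model_probs(list(prefix) + draft[:i]) for i in range(num_draft)]

    # 3) Deferral rule: keep the cheap draft token while the small model is
    #    confident; otherwise emit the large model's token and stop, since the
    #    remaining draft tokens were conditioned on a now-discarded token.
    accepted = []
    for i, token in enumerate(draft):
        if draft_dists[i][token] >= defer_threshold:
            accepted.append(token)
        else:
            accepted.append(int(np.argmax(target_dists[i])))
            break
    return accepted


print(speculative_cascade_step(prefix=[1, 2, 3]))
```

In the paper, this per-token decision is made by an (approximately) optimal deferral rule estimated via a plug-in approach, rather than the fixed confidence threshold used in this sketch.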

Authors (7)
  1. Harikrishna Narasimhan (30 papers)
  2. Wittawat Jitkrittum (42 papers)
  3. Ankit Singh Rawat (64 papers)
  4. Seungyeon Kim (22 papers)
  5. Neha Gupta (45 papers)
  6. Aditya Krishna Menon (56 papers)
  7. Sanjiv Kumar (123 papers)
