Amphista: Bi-directional Multi-head Decoding for Accelerating LLM Inference (2406.13170v2)

Published 19 Jun 2024 in cs.AI and cs.CL

Abstract: LLMs inherently use autoregressive decoding, which lacks parallelism in inference and results in slow inference speed. While methods such as Medusa construct parallel decoding heads, they lack adequate information interaction across different prediction positions. To overcome this limitation, we introduce Amphista, an enhanced speculative decoding framework that builds upon Medusa. Specifically, Amphista introduces an Auto-embedding Block capable of parallel inference, incorporating bi-directional attention to enable interaction between different drafting heads. Additionally, Amphista integrates Staged Adaptation Layers, which ensure a seamless transition of semantic information from the target model's autoregressive inference to the drafting heads' non-autoregressive inference, effectively achieving a paradigm shift and feature fusion. Experimental results on Vicuna models using MT-Bench and Spec-Bench demonstrate that Amphista achieves substantial acceleration while maintaining generation quality. On MT-Bench, Amphista delivers up to 2.75$\times$ speedup over vanilla autoregressive decoding and 1.40$\times$ over Medusa on Vicuna 33B in wall-clock time.
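To make the abstract's two components concrete, here is a minimal PyTorch sketch of how they might fit together: an Auto-embedding Block in which learned per-position drafting queries attend to one another with bi-directional (non-causal) attention, and a Staged Adaptation Layer that maps the target model's autoregressive hidden state into the drafting heads' non-autoregressive regime. All class names, dimensions, and layer choices (`MultiheadAttention`, the MLP adapter, the learned queries) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class AutoEmbeddingBlock(nn.Module):
    """Hypothetical sketch of an Auto-embedding Block: one learned query per
    drafting position, mixed with full (non-causal) self-attention so every
    drafting position can condition on every other in a single parallel pass."""

    def __init__(self, hidden_size: int, num_draft_heads: int, num_attn_heads: int = 8):
        super().__init__()
        # One learned embedding per drafting (future-token) position.
        self.draft_queries = nn.Parameter(torch.randn(num_draft_heads, hidden_size) * 0.02)
        # Bi-directional attention: no causal mask is applied.
        self.attn = nn.MultiheadAttention(hidden_size, num_attn_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, target_feature: torch.Tensor) -> torch.Tensor:
        # target_feature: (batch, hidden), last hidden state of the target model.
        batch = target_feature.size(0)
        queries = self.draft_queries.unsqueeze(0).expand(batch, -1, -1)
        # Condition each drafting position on the target model's feature.
        x = queries + target_feature.unsqueeze(1)
        # Full self-attention across drafting positions (the bi-directional step).
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        return self.norm(x + attn_out)  # (batch, num_draft_heads, hidden)

class StagedAdaptationLayer(nn.Module):
    """Hypothetical adaptation stage: a residual MLP that bridges the target
    model's autoregressive features to the non-autoregressive drafting heads."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.SiLU(),
            nn.Linear(hidden_size, hidden_size),
        )

    def forward(self, drafted: torch.Tensor) -> torch.Tensor:
        return drafted + self.proj(drafted)

# Usage: draft several future tokens in one parallel pass.
hidden, n_draft, vocab = 4096, 4, 32000
block, adapt = AutoEmbeddingBlock(hidden, n_draft), StagedAdaptationLayer(hidden)
lm_head = nn.Linear(hidden, vocab, bias=False)

target_feature = torch.randn(1, hidden)                 # from the target model
draft_logits = lm_head(adapt(block(target_feature)))    # (1, n_draft, vocab)
```

Because the attention across drafting positions is unmasked, all candidate tokens are produced in one forward pass while still exchanging information, which is the interaction Medusa's independent heads lack; the drafted tokens would then be verified by the target model as in standard speculative decoding.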

Authors (10)
  1. Zeping Li (6 papers)
  2. Xinlong Yang (8 papers)
  3. Ziheng Gao (3 papers)
  4. Ji Liu (285 papers)
  5. Zhuang Liu (63 papers)
  6. Dong Li (429 papers)
  7. Jinzhang Peng (11 papers)
  8. Lu Tian (58 papers)
  9. Emad Barsoum (41 papers)
  10. Guanchen Li (9 papers)
Citations (2)