Magic Pyramid: Accelerating Inference with Early Exiting and Token Pruning (2111.00230v1)

Published 30 Oct 2021 in cs.CL

Abstract: Pre-training and then fine-tuning LLMs is commonly used to achieve state-of-the-art performance in NLP tasks. However, most pre-trained models suffer from low inference speed, and deploying such large models to applications with latency constraints is challenging. In this work, we focus on accelerating inference via conditional computation. To achieve this, we propose a novel idea, Magic Pyramid (MP), which reduces both width-wise and depth-wise computation via token pruning and early exiting for Transformer-based models, particularly BERT. The former saves computation by removing non-salient tokens, while the latter reduces computation by terminating inference before the final layer whenever the exiting condition is met. Our empirical studies demonstrate that, compared to previous state-of-the-art approaches, MP not only achieves speed-adjustable inference but also surpasses token pruning and early exiting alone, reducing giga floating point operations (GFLOPs) by up to 70% with less than 0.5% accuracy drop. Token pruning and early exiting exhibit distinct preferences for sequences of different lengths, yet MP achieves an average 8.06x speedup on two popular text classification tasks regardless of input size.
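To make the two mechanisms concrete, the sketch below (not the authors' implementation) shows how depth-wise early exiting and width-wise token pruning can be combined in a single forward pass over a BERT-style encoder. The per-layer exit classifiers, the entropy-based exit rule, and the norm-based saliency score are illustrative assumptions; the paper's actual exiting condition and pruning criterion may differ.

```python
# Minimal sketch of a Magic-Pyramid-style forward pass: prune tokens width-wise
# and exit depth-wise. Thresholds and scoring functions are hypothetical.
import torch
import torch.nn.functional as F


def magic_pyramid_forward(hidden, layers, exit_classifiers,
                          prune_ratio=0.2, exit_entropy=0.3):
    """hidden: (batch=1, seq_len, dim); layers / exit_classifiers: per-layer modules."""
    for layer, clf in zip(layers, exit_classifiers):
        hidden = layer(hidden)                         # standard Transformer layer

        # Depth-wise: exit early if the intermediate classifier is confident
        # (low prediction entropy), skipping all remaining layers.
        logits = clf(hidden[:, 0])                     # [CLS] token
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1)
        if entropy.item() < exit_entropy:
            return logits

        # Width-wise: drop the least salient tokens (hidden-state norm stands in
        # for a learned saliency score) so later layers see a shorter sequence.
        keep = max(1, int(hidden.size(1) * (1 - prune_ratio)))
        saliency = hidden.norm(dim=-1)                 # (1, seq_len)
        top = saliency.topk(keep, dim=1).indices.sort(dim=1).values
        hidden = hidden.gather(1, top.unsqueeze(-1).expand(-1, -1, hidden.size(-1)))

    return exit_classifiers[-1](hidden[:, 0])          # fall through to the last layer


# Toy usage with generic PyTorch layers standing in for BERT blocks.
dim, classes, n_layers = 64, 2, 4
layers = [torch.nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
          for _ in range(n_layers)]
clfs = [torch.nn.Linear(dim, classes) for _ in range(n_layers)]
logits = magic_pyramid_forward(torch.randn(1, 16, dim), layers, clfs)
```

Easy inputs tend to trigger the exit check after a few layers, while harder or longer inputs still benefit from the shrinking sequence length, which is the intuition behind MP's length-robust speedup.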

Authors (7)
  1. Xuanli He (43 papers)
  2. Iman Keivanloo (3 papers)
  3. Yi Xu (302 papers)
  4. Xiang He (62 papers)
  5. Belinda Zeng (16 papers)
  6. Santosh Rajagopalan (2 papers)
  7. Trishul Chilimbi (22 papers)
Citations (16)