
CascadeBERT: Accelerating Inference of Pre-trained Language Models via Calibrated Complete Models Cascade (2012.14682v2)

Published 29 Dec 2020 in cs.CL

Abstract: Dynamic early exiting aims to accelerate the inference of pre-trained language models (PLMs) by emitting predictions in internal layers without passing through the entire model. In this paper, we empirically analyze the working mechanism of dynamic early exiting and find that it faces a performance bottleneck under high speed-up ratios. On one hand, the PLMs' representations in shallow layers lack high-level semantic information and thus are not sufficient for accurate predictions. On the other hand, the exiting decisions made by internal classifiers are unreliable, leading to wrongly emitted early predictions. We instead propose a new framework for accelerating the inference of PLMs, CascadeBERT, which dynamically selects proper-sized and complete models in a cascading manner, providing comprehensive representations for predictions. We further devise a difficulty-aware objective, encouraging the model to output a class probability that reflects the real difficulty of each instance, for a more reliable cascading mechanism. Experimental results show that CascadeBERT achieves an overall 15% improvement under 4× speed-up compared with existing dynamic early exiting methods on six classification tasks, yielding more calibrated and accurate predictions.
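The cascading mechanism described in the abstract can be illustrated with a minimal sketch: run a small complete model first, emit its prediction if the confidence clears a threshold, and otherwise fall back to a larger model. The stand-in model functions and the threshold value below are illustrative assumptions, not the paper's actual implementation.

```python
def cascade_predict(x, models, thresholds):
    """Confidence-based cascade over complete models ordered small -> large.

    models: list of callables, each returning {label: probability}.
    thresholds: confidence cutoffs for all but the last model
    (the largest model always answers if no earlier one is confident).
    """
    for model, tau in zip(models, thresholds):
        probs = model(x)
        label, conf = max(probs.items(), key=lambda kv: kv[1])
        if conf >= tau:  # confident enough: exit the cascade early
            return label, conf, model.__name__
    # no early exit: the largest model makes the final prediction
    probs = models[-1](x)
    label, conf = max(probs.items(), key=lambda kv: kv[1])
    return label, conf, models[-1].__name__

# Toy stand-ins for a small and a full-size classifier.
def small_model(x):
    return {"pos": 0.95, "neg": 0.05} if "great" in x else {"pos": 0.55, "neg": 0.45}

def large_model(x):
    return {"pos": 0.10, "neg": 0.90}

print(cascade_predict("great movie", [small_model, large_model], [0.9]))
print(cascade_predict("it was fine", [small_model, large_model], [0.9]))
```

Easy instances exit at the small model; harder ones pay the cost of the larger model, which is the trade-off the paper's difficulty-aware objective is designed to calibrate.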

Authors (7)
  1. Lei Li (1293 papers)
  2. Yankai Lin (125 papers)
  3. Deli Chen (20 papers)
  4. Shuhuai Ren (30 papers)
  5. Peng Li (390 papers)
  6. Jie Zhou (687 papers)
  7. Xu Sun (194 papers)
Citations (47)
