MOYU: A Theoretical Study on Massive Over-activation Yielded Uplifts in LLMs (2406.12569v2)

Published 18 Jun 2024 in cs.LG

Abstract: Massive Over-activation Yielded Uplifts (MOYU) is an inherent property of LLMs, and dynamic activation (DA) based on the MOYU property is a clever yet under-explored strategy designed to accelerate inference in these models. Existing methods that utilize MOYU often face a significant 'Impossible Trinity': struggling to simultaneously maintain model performance, enhance inference speed, and extend applicability across various architectures. Due to the theoretical ambiguities surrounding MOYU, this paper elucidates the root cause of the MOYU property and outlines the mechanisms behind two primary limitations encountered by current DA methods: 1) history-related activation uncertainty, and 2) semantic-irrelevant activation inertia. Our analysis not only underscores the limitations of current dynamic activation strategies within large-scale LLaMA models but also proposes opportunities for refining the design of future sparsity schemes.
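
To make the idea of dynamic activation concrete, the following is a minimal sketch (not taken from the paper) of a threshold-based DA scheme in a gated MLP block: neurons whose gate activations fall below a magnitude threshold are treated as inactive and contribute nothing, which is the general mechanism DA methods exploit to skip computation. The class name, the threshold value `tau`, and the masking approach are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch of threshold-based dynamic activation (assumed, not the
# paper's algorithm): gate activations below a magnitude threshold are masked
# out; an efficient kernel would skip those neurons' computation entirely.
import torch
import torch.nn as nn


class ThresholdDynamicMLP(nn.Module):
    def __init__(self, d_model: int, d_ff: int, tau: float = 0.1):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_ff, bias=False)
        self.up_proj = nn.Linear(d_model, d_ff, bias=False)
        self.down_proj = nn.Linear(d_ff, d_model, bias=False)
        self.act = nn.SiLU()
        self.tau = tau  # hypothetical magnitude threshold for keeping a neuron active

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = self.act(self.gate_proj(x))
        # Dynamic activation: zero out neurons whose gate magnitude is below tau.
        mask = gate.abs() >= self.tau
        hidden = (gate * mask) * self.up_proj(x)
        return self.down_proj(hidden)


# Usage example: masked neurons contribute nothing to the output, so a real
# implementation could avoid their up- and down-projection work altogether.
x = torch.randn(2, 16, 512)
mlp = ThresholdDynamicMLP(d_model=512, d_ff=2048)
print(mlp(x).shape)  # torch.Size([2, 16, 512])
```

The paper's two limitations map naturally onto such a scheme: which neurons clear the threshold can depend on the preceding context (history-related activation uncertainty), and some neurons stay active regardless of the input's meaning (semantic-irrelevant activation inertia).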

Authors (5)
  1. Chi Ma (15 papers)
  2. Mincong Huang (7 papers)
  3. Chao Wang (555 papers)
  4. Yujie Wang (103 papers)
  5. Lei Yu (234 papers)