
Learning to Skip for Language Modeling (2311.15436v1)

Published 26 Nov 2023 in cs.CL

Abstract: Overparameterized large-scale LLMs show impressive generalization performance in in-context few-shot learning. However, most LLMs allocate the same amount of parameters or computation to each token, disregarding the complexity or importance of the input data. We argue that in LLM pretraining, a variable amount of computation should be assigned to different tokens, and this can be efficiently achieved via a simple routing mechanism. Different from conventional early stopping techniques, where tokens can exit only at early layers, we propose a more general method that dynamically skips the execution of a layer (or module) for any input token using a binary router. In our extensive evaluation across 24 NLP tasks, we demonstrate that the proposed method can significantly improve 1-shot performance compared to other competitive baselines, at only a mild extra cost for inference.
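
The per-token layer skipping described in the abstract can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' implementation: the module names (`SkipRouter`, `SkippableLayer`), the sigmoid gate, and the straight-through estimator are assumptions made for the sake of a runnable example.

```python
import torch
import torch.nn as nn

class SkipRouter(nn.Module):
    """Per-token binary router: decides whether a token executes the layer
    or bypasses it. A sketch under assumed design choices, not the paper's exact router."""
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> (batch, seq, 1) keep/skip decision
        probs = torch.sigmoid(self.gate(x))
        hard = (probs > 0.5).float()
        # Straight-through estimator: hard 0/1 in the forward pass,
        # gradients flow through the soft probabilities.
        return hard + probs - probs.detach()

class SkippableLayer(nn.Module):
    """Wraps a Transformer block; tokens routed to 0 pass through unchanged."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.router = SkipRouter(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        keep = self.router(x)   # (batch, seq, 1), 1 = execute the layer
        out = self.block(x)     # illustrative only: a real implementation would
                                # gather just the kept tokens to actually save compute
        return keep * out + (1.0 - keep) * x

# Usage sketch
if __name__ == "__main__":
    layer = SkippableLayer(d_model=64)
    tokens = torch.randn(2, 16, 64)
    print(layer(tokens).shape)  # torch.Size([2, 16, 64])
```

In this sketch the skipped tokens are carried forward via an identity path, which is what allows any layer (not only the final ones) to be bypassed, in contrast to early-exit schemes.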

Citations (7)
