Layer-wise Pruning of Transformer Attention Heads for Efficient Language Modeling (2110.03252v1)

Published 7 Oct 2021 in cs.CL

Abstract: While Transformer-based models have shown impressive language modeling performance, their large computation cost is often prohibitive for practical use. Attention head pruning, which removes unnecessary attention heads in the multi-head attention, is a promising technique to solve this problem. However, it does not evenly reduce the overall load, because the heavy feedforward module is not affected by head pruning. In this paper, we apply layer-wise attention head pruning to the All-attention Transformer so that the entire computation and the number of parameters can be reduced proportionally to the number of pruned heads. While the architecture has the potential to fully utilize head pruning, we propose three training methods that are especially helpful to minimize performance degradation and stabilize the pruning process. Our pruned model shows consistently lower perplexity than Transformer-XL at a comparable parameter size on the WikiText-103 language modeling benchmark.
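
The abstract describes pruning whole attention heads so that compute and parameters shrink with the number of removed heads. As a rough illustration of the general gating approach often used for head pruning, here is a minimal PyTorch sketch: each head is scaled by a learnable gate, a sparsity penalty pushes some gates toward zero, and heads with near-zero gates can then be removed. This is a generic sketch under assumed names (`GatedMultiHeadAttention`, `gate_sparsity_penalty`), not the paper's All-attention Transformer or its three proposed training methods.

```python
# Minimal sketch of gated attention head pruning (illustrative, not the
# authors' implementation). Each head gets a scalar gate; heads whose gate
# is driven to ~0 during training can be dropped, shrinking the projections
# proportionally.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedMultiHeadAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One learnable gate per head; a sparsity penalty on these gates
        # encourages some of them toward zero during training.
        self.head_gates = nn.Parameter(torch.ones(n_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, time, d_head).
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
                   for z in (q, k, v))
        attn = F.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        ctx = attn @ v  # (batch, heads, time, d_head)
        # Scale each head's output by its gate; a zero gate silences the head.
        ctx = ctx * self.head_gates.view(1, -1, 1, 1)
        return self.out(ctx.transpose(1, 2).reshape(b, t, -1))


def gate_sparsity_penalty(module: GatedMultiHeadAttention) -> torch.Tensor:
    # L1 penalty pushing gates toward zero; after training, heads whose gate
    # magnitude falls below a chosen threshold are pruned.
    return module.head_gates.abs().sum()
```

The point the abstract makes is that in a standard Transformer block this only removes attention compute, leaving the feedforward module untouched, whereas in an All-attention layer (which folds the feedforward capacity into attention) removing a head removes a proportional share of the whole layer.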

Authors (4)
  1. Kyuhong Shim (26 papers)
  2. Iksoo Choi (3 papers)
  3. Wonyong Sung (33 papers)
  4. Jungwook Choi (28 papers)
Citations (9)