SparseBERT: Rethinking the Importance Analysis in Self-attention (2102.12871v3)

Published 25 Feb 2021 in cs.LG

Abstract: Transformer-based models are widely used in NLP, and their core component, self-attention, has attracted widespread interest. A direct way to understand the self-attention mechanism is to visualize the attention maps of a pre-trained model, and a series of efficient Transformers with different sparse attention masks have been proposed based on the patterns observed. From a theoretical perspective, the universal approximability of Transformer-based models has also recently been proved. However, both lines of analysis of self-attention rely on an already pre-trained model. To rethink the importance analysis in self-attention, we study the significance of different positions in the attention matrix during pre-training. A surprising result is that the diagonal elements of the attention map are the least important compared with other attention positions, and we provide a proof that these diagonal elements can indeed be removed without deteriorating model performance. Furthermore, we propose a Differentiable Attention Mask (DAM) algorithm, which further guides the design of SparseBERT. Extensive experiments verify these findings and illustrate the effectiveness of the proposed algorithm.
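
The abstract's central empirical claim is that the diagonal of the attention map (each token attending to itself) can be dropped without hurting performance. The sketch below illustrates that idea only: it masks the diagonal of the attention scores before the softmax. The use of PyTorch, the function name `self_attention_no_diagonal`, and the (batch, sequence, hidden) tensor layout are illustrative assumptions, not taken from the paper, and this is not the paper's DAM algorithm.

```python
import torch
import torch.nn.functional as F


def self_attention_no_diagonal(q, k, v):
    """Scaled dot-product self-attention with the diagonal masked out,
    so that no token attends to itself (illustrative sketch)."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5        # (batch, seq, seq) attention scores
    seq_len = scores.size(-1)
    diag = torch.eye(seq_len, dtype=torch.bool, device=scores.device)
    scores = scores.masked_fill(diag, float("-inf"))   # remove diagonal entries
    attn = F.softmax(scores, dim=-1)                   # renormalize over off-diagonal positions
    return attn @ v


# Toy usage: a batch of 2 sequences, length 5, hidden size 16.
q = k = v = torch.randn(2, 5, 16)
out = self_attention_no_diagonal(q, k, v)
print(out.shape)  # torch.Size([2, 5, 16])
```
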

Authors (7)
  1. Han Shi (27 papers)
  2. Jiahui Gao (25 papers)
  3. Xiaozhe Ren (21 papers)
  4. Hang Xu (205 papers)
  5. Xiaodan Liang (318 papers)
  6. Zhenguo Li (195 papers)
  7. James T. Kwok (65 papers)
Citations (47)
