SparseBERT: Rethinking the Importance Analysis in Self-attention (2102.12871v3)
Abstract: Transformer-based models are widely used in NLP. Their core component, self-attention, has attracted widespread interest. A direct way to understand the self-attention mechanism is to visualize the attention maps of a pre-trained model, and a series of efficient Transformers with different sparse attention masks have been proposed based on the patterns observed. From a theoretical perspective, the universal approximability of Transformer-based models has also recently been proven. However, all of the above understanding and analysis of self-attention is based on pre-trained models. To rethink the importance analysis in self-attention, we study the significance of different positions in the attention matrix during pre-training. A surprising result is that the diagonal elements of the attention map are the least important of all attention positions. We provide a proof showing that these diagonal elements can indeed be removed without degrading model performance. Furthermore, we propose a Differentiable Attention Mask (DAM) algorithm, which further guides the design of SparseBERT. Extensive experiments verify these findings and illustrate the effectiveness of the proposed algorithm.
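To make the abstract's central claim concrete, here is a minimal PyTorch sketch of scaled dot-product self-attention with the diagonal positions masked out, i.e., each token attends to every position except itself. This is an illustration of the masking idea only, not the paper's implementation; the function name `self_attention_no_diagonal` and the single-head, unbatched-projection setup are assumptions for brevity.

```python
import torch
import torch.nn.functional as F

def self_attention_no_diagonal(q, k, v):
    """Scaled dot-product self-attention with diagonal positions removed.

    q, k, v: (batch, seq_len, d) tensors. The diagonal of the attention
    logits is set to -inf before the softmax, so each token distributes
    its attention over all positions except itself.
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5          # (batch, seq, seq)
    n = scores.size(-1)
    diag = torch.eye(n, dtype=torch.bool, device=scores.device)
    scores = scores.masked_fill(diag, float("-inf"))     # drop self-attention to self
    attn = F.softmax(scores, dim=-1)                     # rows still sum to 1
    return attn @ v
```

The DAM algorithm described in the abstract goes further: rather than fixing the mask by hand as above, it treats the attention mask itself as a learnable, differentiable object so that which positions to keep can be discovered during pre-training.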
- Han Shi
- Jiahui Gao
- Xiaozhe Ren
- Hang Xu
- Xiaodan Liang
- Zhenguo Li
- James T. Kwok