Repulsive Attention: Rethinking Multi-head Attention as Bayesian Inference (2009.09364v2)

Published 20 Sep 2020 in cs.LG and stat.ML

Abstract: The neural attention mechanism plays an important role in many natural language processing applications. In particular, multi-head attention extends single-head attention by allowing a model to jointly attend to information from different perspectives. Without explicit constraints, however, multi-head attention may suffer from attention collapse, an issue in which different heads extract similar attentive features, limiting the model's representation power. In this paper, for the first time, we provide a novel understanding of multi-head attention from a Bayesian perspective. Based on recently developed particle-optimization sampling techniques, we propose a non-parametric approach that explicitly improves the repulsiveness among heads in multi-head attention and consequently strengthens the model's expressiveness. Remarkably, our Bayesian interpretation provides theoretical insight into the not-well-understood questions of why and how one uses multi-head attention. Extensive experiments on various attention models and applications demonstrate that the proposed repulsive attention improves the diversity of learned features, leading to more informative representations with consistent performance improvements across tasks.
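
The abstract does not spell out the update rule, but the particle-optimization sampling it refers to is commonly realized with a Stein variational gradient descent (SVGD) style update, in which each attention head is treated as a particle and a kernel term pushes heads apart. The Python sketch below is a minimal illustration of that idea, not the authors' exact algorithm; the RBF kernel, the fixed bandwidth `h`, and the function names are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(theta, h):
    """RBF kernel matrix and its gradients over a set of particles.

    theta: (M, D) array, one flattened parameter vector per attention head.
    Returns K (M, M) with K[j, i] = k(theta_j, theta_i), and
    grad_K (M, M, D) with grad_K[j, i] = d k(theta_j, theta_i) / d theta_j.
    """
    diffs = theta[:, None, :] - theta[None, :, :]      # (M, M, D): theta_j - theta_i
    sq_dists = np.sum(diffs ** 2, axis=-1)             # (M, M)
    K = np.exp(-sq_dists / (2.0 * h ** 2))             # (M, M)
    grad_K = -diffs / (h ** 2) * K[:, :, None]         # (M, M, D)
    return K, grad_K

def svgd_head_update(theta, log_post_grad, h=1.0, step=1e-3):
    """One SVGD-style update treating the M attention heads as particles.

    log_post_grad: (M, D) per-head gradients of the log-posterior
    (in practice, the task-loss gradient plus a prior term).
    The kernel-weighted gradient term pulls heads toward high-posterior
    regions; the kernel-gradient term repels heads from one another,
    discouraging attention collapse.
    """
    M = theta.shape[0]
    K, grad_K = rbf_kernel(theta, h)
    # phi_i = (1/M) * sum_j [ k(theta_j, theta_i) * grad_j + d/d theta_j k(theta_j, theta_i) ]
    phi = (K.T @ log_post_grad + grad_K.sum(axis=0)) / M
    return theta + step * phi
```

In practice one would flatten each head's projection parameters into a row of `theta`, obtain `log_post_grad` from the training loss plus a prior, and interleave this update with ordinary optimization steps; the bandwidth `h` is often set with the median heuristic.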

Authors (9)
  1. Bang An (33 papers)
  2. Jie Lyu (5 papers)
  3. Zhenyi Wang (27 papers)
  4. Chunyuan Li (122 papers)
  5. Changwei Hu (11 papers)
  6. Fei Tan (25 papers)
  7. Ruiyi Zhang (98 papers)
  8. Yifan Hu (89 papers)
  9. Changyou Chen (108 papers)
Citations (25)