Talking-Heads Attention (2003.02436v1)

Published 5 Mar 2020 in cs.LG, cs.NE, cs.SD, eess.AS, and stat.ML

Abstract: We introduce "talking-heads attention" - a variation on multi-head attention which includes linear projections across the attention-heads dimension, immediately before and after the softmax operation. While inserting only a small number of additional parameters and a moderate amount of additional computation, talking-heads attention leads to better perplexities on masked language modeling tasks, as well as better quality when transfer-learning to language comprehension and question answering tasks.
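To make the mechanism concrete, here is a minimal NumPy sketch of the idea the abstract describes: two learned head-mixing matrices applied to the attention logits just before the softmax and to the attention weights just after it. The names (`P_logits`, `P_weights`) and the choice of a single shared head count are illustrative assumptions; the paper also allows the pre-softmax and post-softmax head counts to differ.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def talking_heads_attention(Q, K, V, P_logits, P_weights):
    # Q, K: [h, n, d_k]; V: [h, n, d_v]
    # P_logits, P_weights: [h, h] learned matrices that mix information
    # across the heads dimension (the "talking heads").
    logits = np.einsum('hnd,hmd->hnm', Q, K) / np.sqrt(Q.shape[-1])
    # Talking-heads step 1: project across heads immediately before softmax.
    logits = np.einsum('hnm,hg->gnm', logits, P_logits)
    weights = softmax(logits, axis=-1)
    # Talking-heads step 2: project across heads immediately after softmax.
    weights = np.einsum('gnm,gf->fnm', weights, P_weights)
    return np.einsum('fnm,fmd->fnd', weights, V)

# Usage example with illustrative shapes.
h, n, d = 4, 8, 16
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(h, n, d)) for _ in range(3))
P_l = rng.normal(size=(h, h)) / h
P_w = rng.normal(size=(h, h)) / h
out = talking_heads_attention(Q, K, V, P_l, P_w)
print(out.shape)  # (4, 8, 16)
```

Setting `P_logits` and `P_weights` to identity matrices recovers standard multi-head attention, which is why the extra parameter count (two h-by-h matrices) is small relative to the rest of the model.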

Citations (71)
