Do Transformer Attention Heads Provide Transparency in Abstractive Summarization? (1907.00570v2)

Published 1 Jul 2019 in cs.CL, cs.AI, cs.IR, and cs.LG

Abstract: Learning algorithms are becoming more powerful, often at the cost of increased complexity. In response, the demand for transparent algorithms is growing. In NLP tasks, the attention distributions learned by attention-based deep learning models are used to gain insights into the models' behavior. To what extent is this perspective valid for all NLP tasks? We investigate whether the distributions calculated by different attention heads in a Transformer architecture can be used to improve transparency in the task of abstractive summarization. To this end, we present both a qualitative and a quantitative analysis of the behavior of the attention heads. We show that some attention heads indeed specialize towards syntactically and semantically distinct input. We propose an approach to evaluate to what extent the Transformer model relies on specifically learned attention distributions. We also discuss what this implies for using attention distributions as a means of transparency.

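The object of study here is the per-head attention distribution over source tokens. As a minimal sketch (not the authors' code), the snippet below shows how such distributions can be extracted for inspection from a Hugging Face seq2seq Transformer; the model name, the single-step decoder input, and the entropy heuristic for spotting specialized (sharply focused) heads are illustrative assumptions, not details from the paper.

```python
# Sketch: extract per-head cross-attention distributions from a seq2seq
# Transformer and score each head's focus via entropy over source tokens.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical model choice; the paper trains its own summarization Transformer.
model_name = "facebook/bart-large-cnn"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.eval()

article = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(article, return_tensors="pt", truncation=True)

with torch.no_grad():
    # A single decoder step is enough to materialize the attention tensors.
    outputs = model(
        **inputs,
        decoder_input_ids=inputs["input_ids"][:, :1],
        output_attentions=True,
    )

# outputs.cross_attentions: one tensor per layer,
# each of shape (batch, num_heads, target_len, source_len).
for layer_idx, layer_attn in enumerate(outputs.cross_attentions):
    probs = layer_attn[0]  # (num_heads, target_len, source_len)
    # Low entropy = the head concentrates on few source tokens,
    # a rough signal of the specialization the paper analyzes.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(-1)
    print(f"layer {layer_idx}: per-head attention entropy {entropy.tolist()}")
```

The same tensors can be visualized as heatmaps over source tokens, which is the usual qualitative way to judge whether a head attends to syntactically or semantically coherent spans.
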
Authors (5)
  1. Joris Baan (7 papers)
  2. Maartje ter Hoeve (21 papers)
  3. Marlies van der Wees (3 papers)
  4. Anne Schuth (5 papers)
  5. Maarten de Rijke (261 papers)
Citations (21)