Rethinking Attention-Model Explainability through Faithfulness Violation Test (2201.12114v3)

Published 28 Jan 2022 in cs.LG, cs.CL, and cs.CV

Abstract: Attention mechanisms are dominating the explainability of deep models. They produce probability distributions over the input, which are widely deemed as feature-importance indicators. However, in this paper, we find one critical limitation in attention explanations: weakness in identifying the polarity of feature impact. This can be misleading -- features with higher attention weights may not faithfully contribute to model predictions; instead, they can impose suppression effects. With this finding, we reflect on the explainability of current attention-based techniques, such as Attention$\odot$Gradient and LRP-based attention explanations. We first propose an actionable diagnostic methodology (henceforth the faithfulness violation test) to measure the consistency between explanation weights and the impact polarity. Through extensive experiments, we then show that most tested explanation methods are unexpectedly hindered by the faithfulness violation issue, especially raw attention. Empirical analyses on the factors affecting violation issues further provide useful observations for adopting explanation methods in attention models.
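The faithfulness violation test described above checks whether an explanation weight's sign agrees with the feature's actual effect on the prediction. A minimal sketch of this idea, using an illustrative masking-based impact measure and hypothetical function names (not the authors' implementation):

```python
# Hedged sketch of a faithfulness violation check: a feature with a positive
# explanation weight should not *suppress* the prediction when present.
# The masking baseline and helper names below are illustrative assumptions.

def feature_impact(predict, x, i, baseline=0.0):
    """Impact of feature i: drop in the model output when the feature
    is replaced by a baseline value (here, simple zero-masking)."""
    x_masked = list(x)
    x_masked[i] = baseline
    return predict(x) - predict(x_masked)

def violation_rate(predict, x, weights, baseline=0.0):
    """Fraction of features whose explanation weight disagrees in sign
    with the feature's measured impact on the prediction.

    Attention weights are non-negative, so any feature whose removal
    *raises* the output (negative impact) counts as a violation."""
    violations = 0
    for i, w in enumerate(weights):
        impact = feature_impact(predict, x, i, baseline)
        if w > 0 and impact < 0:
            violations += 1
    return violations / len(weights)
```

For example, with a toy linear model `predict = lambda x: 0.5*x[0] - 0.8*x[1] + 0.1*x[2]`, input `[1, 1, 1]`, and attention-like weights `[0.4, 0.4, 0.2]`, the second feature suppresses the output despite its positive weight, giving a violation rate of 1/3.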

Authors (6)
  1. Yibing Liu (12 papers)
  2. Haoliang Li (67 papers)
  3. Yangyang Guo (45 papers)
  4. Chenqi Kong (19 papers)
  5. Jing Li (621 papers)
  6. Shiqi Wang (163 papers)
Citations (37)
