XAI for Transformers: Better Explanations through Conservative Propagation (2202.07304v2)

Published 15 Feb 2022 in cs.LG

Abstract: Transformers have become an important workhorse of machine learning, with numerous applications. This necessitates the development of reliable methods for increasing their transparency. Multiple interpretability methods, often based on gradient information, have been proposed. We show that the gradient in a Transformer reflects the function only locally, and thus fails to reliably identify the contribution of input features to the prediction. We identify Attention Heads and LayerNorm as main reasons for such unreliable explanations and propose a more stable way for propagation through these layers. Our proposal, which can be seen as a proper extension of the well-established LRP method to Transformers, is shown both theoretically and empirically to overcome the deficiency of a simple gradient-based approach, and achieves state-of-the-art explanation performance on a broad range of Transformer models and datasets.
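The abstract summarizes the core idea: gradients through attention heads and LayerNorm are what make naive gradient-based attributions unreliable, and the proposed fix propagates relevance as if those components were locally constant. Below is a minimal, hypothetical PyTorch sketch of that "detach" view, in which the softmax attention weights and the LayerNorm normalization factor are treated as constants during the backward pass so that gradient-times-input behaves like a conservative LRP rule. All function names and implementation details here are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only: one way to realize conservative propagation
# through attention and LayerNorm, assuming PyTorch. Relevance is then
# obtained as input * input.grad after backpropagating the output score.

import torch
import torch.nn.functional as F

def attention_with_detached_weights(q, k, v):
    """Attention forward pass whose backward acts like an LRP rule.

    The softmax attention weights are detached, so gradients (and hence
    relevance) flow only through the value path; the head is treated as
    locally linear in v.
    """
    d_k = q.size(-1)
    attn = F.softmax(q @ k.transpose(-2, -1) / d_k**0.5, dim=-1)
    return attn.detach() @ v  # attention weights act as constants

def layernorm_with_detached_scale(x, weight, bias, eps=1e-5):
    """LayerNorm forward pass whose backward acts like an LRP rule.

    Mean-centering is kept (it is linear), but the variance-based
    rescaling is detached, so the layer behaves as a fixed linear map
    during relevance propagation.
    """
    mean = x.mean(dim=-1, keepdim=True)
    std = (x.var(dim=-1, unbiased=False, keepdim=True) + eps).sqrt()
    return (x - mean) / std.detach() * weight + bias
```

A model wired with these propagation rules can be explained by enabling gradients on the input embeddings, backpropagating the target logit, and reading off `input * input.grad` as the per-feature relevance.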

Authors (6)
  1. Ameen Ali (10 papers)
  2. Thomas Schnake (7 papers)
  3. Oliver Eberle (14 papers)
  4. Grégoire Montavon (50 papers)
  5. Klaus-Robert Müller (167 papers)
  6. Lior Wolf (217 papers)
Citations (73)