From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation (2206.03208v2)

Published 7 Jun 2022 in cs.LG and cs.AI

Abstract: The field of eXplainable Artificial Intelligence (XAI) aims to bring transparency to today's powerful but opaque deep learning models. While local XAI methods explain individual predictions in the form of attribution maps, thereby identifying where important features occur (but not providing information about what they represent), global explanation techniques visualize what concepts a model has generally learned to encode. Both types of methods thus only provide partial insights and leave the burden of interpreting the model's reasoning to the user. In this work we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives and thus allows answering both the "where" and "what" questions for individual predictions. We demonstrate the capability of our method in various settings, showcasing that CRP leads to more human-interpretable explanations and provides deep insights into the model's representation and reasoning through concept atlases, concept composition analyses, and quantitative investigations of concept subspaces and their role in fine-grained decision making.
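The core idea described in the abstract, conditioning a relevance backward pass on a single latent "concept" so that the resulting attribution map shows *where* that specific concept matters, can be illustrated with a minimal sketch. The following is not the authors' implementation: it uses a hypothetical two-layer ReLU network with random weights and a simplified LRP-epsilon rule, treating individual hidden units as candidate concepts.

```python
import numpy as np

# Minimal sketch of concept-conditional relevance propagation.
# The network, weights, and epsilon rule here are illustrative assumptions,
# not the paper's actual models or propagation composites.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 6))   # input (4 features) -> hidden "concepts" (6 units)
W2 = rng.normal(size=(6, 3))   # hidden -> output classes (3)

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Redistribute relevance R_out from a layer's outputs to its inputs
    (simplified LRP-epsilon rule): R_i = a_i * sum_j W_ij * R_j / (z_j + eps)."""
    z = a @ W                               # pre-activations of the layer
    s = R_out / (z + eps * np.sign(z))      # stabilized relevance ratios
    return a * (W @ s)                      # relevance of the layer's inputs

x = rng.normal(size=4)
h = np.maximum(x @ W1, 0)                   # hidden activations (candidate concepts)
y = h @ W2

# Standard local explanation: start from the predicted class and
# propagate all relevance back to the hidden layer.
R_y = np.zeros(3)
R_y[y.argmax()] = y.max()
R_h = lrp_epsilon(h, W2, R_y)

# CRP-style conditioning: keep only the relevance flowing through one
# chosen hidden unit ("concept" c), zeroing the rest before propagating
# further. The result answers "where does concept c matter in the input?".
c = int(np.abs(R_h).argmax())
R_h_cond = np.zeros_like(R_h)
R_h_cond[c] = R_h[c]
R_x_concept = lrp_epsilon(x, W1, R_h_cond)  # concept-conditional attribution map

print("concept:", c, "attribution:", R_x_concept)
```

Because the epsilon rule is approximately conservative, the concept-conditional input attributions sum to roughly the relevance assigned to the chosen hidden unit; repeating the conditioned pass for each unit decomposes one prediction into per-concept attribution maps.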

Authors (7)
  1. Reduan Achtibat (8 papers)
  2. Maximilian Dreyer (15 papers)
  3. Ilona Eisenbraun (1 paper)
  4. Sebastian Bosse (11 papers)
  5. Thomas Wiegand (29 papers)
  6. Wojciech Samek (144 papers)
  7. Sebastian Lapuschkin (66 papers)
Citations (99)
