Towards better understanding of gradient-based attribution methods for Deep Neural Networks (1711.06104v4)

Published 16 Nov 2017 in cs.LG and stat.ML

Abstract: Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gained increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective. What is more, no exhaustive empirical comparison has been performed in the past. In this work, we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them. By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation. Finally, we propose a novel evaluation metric, called Sensitivity-n, and test the gradient-based attribution methods, alongside a simple perturbation-based attribution method, on several datasets in the domains of image and text classification, using various network architectures.
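The abstract refers to gradient-based attribution methods; one of the simplest, Gradient*Input, scores each input feature by the element-wise product of the input and the gradient of the output with respect to it. A minimal sketch on a hypothetical toy model (a bias-free ReLU layer chosen so the gradient can be written by hand; not the paper's networks or code):

```python
import numpy as np

# Hypothetical toy network: 3 inputs -> 4 ReLU units -> scalar output,
# with no biases, so gradients are easy to write analytically.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # input-to-hidden weights
v = rng.normal(size=4)        # hidden-to-output weights

def f(x):
    """Scalar model output for input x."""
    return v @ np.maximum(W.T @ x, 0.0)

def grad_f(x):
    """Analytic gradient of f at x (ReLU derivative is a 0/1 mask)."""
    mask = (W.T @ x > 0).astype(float)
    return W @ (v * mask)

x = np.array([1.0, -0.5, 2.0])
# Gradient*Input attribution: element-wise product of input and gradient.
attribution = x * grad_f(x)
print(attribution)
```

Because this toy model is piecewise linear with no biases, the attributions sum exactly to the output f(x); for general networks and methods, deviations of this kind are what an evaluation metric such as Sensitivity-n measures.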

Authors (4)
  1. Marco Ancona (7 papers)
  2. Enea Ceolini (5 papers)
  3. Markus Gross (67 papers)
  4. Cengiz Öztireli (12 papers)
Citations (142)
