Improving Feature Attribution through Input-specific Network Pruning (1911.11081v2)

Published 25 Nov 2019 in cs.CV

Abstract: Attributing the output of a neural network to the contribution of given input elements is a way of shedding light on the black-box nature of neural networks. Due to the complexity of current network architectures, gradient-based attribution methods often provide very noisy or coarse results. We propose to prune a neural network for a given single input, keeping only the neurons that contribute strongly to the prediction. We show that with input-specific pruning, network gradients change from reflecting local (noisy) importance information to global importance. Our proposed method is efficient and generates fine-grained attribution maps. We further provide a theoretical justification of the pruning approach, relating it to perturbations, and validate it through a novel experimental setup. Our method is evaluated on multiple benchmarks: sanity checks, pixel perturbation, and Remove-and-Retrain (ROAR). These benchmarks evaluate the method from different perspectives, and our method performs better than other methods across all evaluations.
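The core idea in the abstract (prune the network per input to keep only high-contribution neurons, then read attributions off the gradients of the pruned network) can be sketched as follows. This is a minimal illustration using a toy two-layer ReLU network and a simple top-k contribution criterion; the paper's actual pruning criterion, architecture, and attribution pipeline differ.

```python
import numpy as np

# Toy two-layer ReLU network: y = W2 . relu(W1 x)
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 4))   # hidden (8) x input (4)
W2 = rng.standard_normal(8)        # output weights over hidden units

x = rng.standard_normal(4)         # a single input

# Forward pass
pre = W1 @ x
h = np.maximum(pre, 0.0)           # hidden activations
y = W2 @ h                         # scalar prediction

# Contribution of each hidden neuron to the output for THIS input
# (hypothetical criterion: magnitude of weighted activation)
contrib = np.abs(W2 * h)

# Input-specific pruning: keep only the top-k contributing neurons
k = 3
mask = np.zeros_like(h)
mask[np.argsort(contrib)[-k:]] = 1.0

# Gradient of the pruned network w.r.t. the input:
# dy/dx = sum over kept, active neurons j of W2[j] * W1[j, :]
active = (pre > 0).astype(float) * mask
attribution = (W2 * active) @ W1   # one importance score per input element

print(attribution)                 # fine-grained per-input-element attribution
```

The gradient of the pruned network ignores the many weakly contributing neurons that, in the full network, inject local noise into the attribution map, which is the intuition the abstract describes.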

Authors (6)
  1. Ashkan Khakzar (28 papers)
  2. Soroosh Baselizadeh (6 papers)
  3. Saurabh Khanduja (2 papers)
  4. Christian Rupprecht (90 papers)
  5. Seong Tae Kim (42 papers)
  6. Nassir Navab (459 papers)
Citations (11)
