Attribution in Scale and Space (2004.03383v2)

Published 3 Apr 2020 in cs.CV, cs.LG, and cs.NE

Abstract: We study the attribution problem [28] for deep networks applied to perception tasks. For vision tasks, attribution techniques attribute the prediction of a network to the pixels of the input image. We propose a new technique called \emph{Blur Integrated Gradients}. This technique has several advantages over other methods. First, it can tell at what scale a network recognizes an object. It produces scores in the scale/frequency dimension, that we find captures interesting phenomena. Second, it satisfies the scale-space axioms [14], which imply that it employs perturbations that are free of artifact. We therefore produce explanations that are cleaner and consistent with the operation of deep networks. Third, it eliminates the need for a 'baseline' parameter for Integrated Gradients [31] for perception tasks. This is desirable because the choice of baseline has a significant effect on the explanations. We compare the proposed technique against previous techniques and demonstrate application on three tasks: ImageNet object recognition, Diabetic Retinopathy prediction, and AudioSet audio event identification.

Authors (3)
  1. Shawn Xu (6 papers)
  2. Subhashini Venugopalan (35 papers)
  3. Mukund Sundararajan (27 papers)
Citations (67)

Summary

  • The paper's main contribution is BlurIG, which extends Integrated Gradients by capturing feature attributions across both spatial and frequency scales.
  • It employs Gaussian blurs iteratively to ensure artifact-free perturbations and eliminate baseline dependency, enhancing explanation reliability.
  • BlurIG demonstrates superior interpretability across diverse tasks, including ImageNet classification, medical image analysis, and audio event recognition.

Overview of "Attribution in Scale and Space"

The paper "Attribution in Scale and Space" presents a novel technique, Blur Integrated Gradients (BlurIG), designed to advance the field of feature attribution for deep networks applied to vision tasks. Attribution techniques are pivotal for interpreting the predictions of neural networks by attributing model outputs to input features, such as image pixels.

Contributions

Blur Integrated Gradients builds on the established Integrated Gradients (IG) method. Key innovations include:

  1. Scale and Frequency Localization: BlurIG extends explanations beyond the spatial dimensions into the scale/frequency domain. This makes it possible to identify the scale at which the network recognizes features, separating coarse cues (e.g., the outline of a steel-arch bridge) from the fine-grained detail needed to tell dog breeds apart.
  2. Artifact-Free Perturbations: Because Gaussian blurring at increasing scales satisfies the scale-space axioms, BlurIG's perturbations do not introduce artifacts that could falsely influence attributions. This contrasts with methods that rely on arbitrary baselines, which may add spurious features.
  3. Elimination of Baseline Dependency: BlurIG obviates the need to select a baseline image, a choice that can significantly affect the coherence and reliability of explanations in IG. Instead, it integrates gradients over a series of scale-parameterized blurs (sketched in the formula after this list), facilitating consistent and meaningful attributions.
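
Concretely, the blur-path attribution can be written as follows. This formula is a reconstruction from the description above, with F denoting the network's target-class output and L(x, y, σ) the Gaussian-blurred (scale-space) version of the input at scale σ; it is a sketch in our notation, not a verbatim quote of the paper.

```latex
\mathrm{BlurIG}(x, y) = \int_{\sigma=\infty}^{0}
    \frac{\partial F\left(L(\cdot, \cdot, \sigma)\right)}{\partial L(x, y, \sigma)}
    \cdot \frac{\partial L(x, y, \sigma)}{\partial \sigma} \, d\sigma
```

Integrating from a maximally blurred image (σ → ∞) down to the unblurred input (σ = 0) plays the role that the baseline image plays in standard IG, which is why no separate baseline needs to be chosen.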

Methodology

  • The BlurIG method iteratively blurs the input image using Gaussian filters with increasing scale parameters. Gradients are computed at each step, capturing feature importance across scales (see the code sketch after this list).
  • The method leverages properties of Gaussian filters—symmetry, the semi-group property—and their adherence to scale-space axioms to ensure smooth, artifact-free transitions between scales.
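
To make the procedure concrete, the snippet below is a minimal, framework-agnostic sketch (not the authors' released implementation). It assumes a user-supplied grad_fn(image) that returns the gradient of the target-class score with respect to the input image, and it approximates the blur-path integral with a finite Riemann sum; max_sigma and steps are illustrative assumptions, not settings taken from the paper.

```python
# Minimal sketch of a Blur-IG style attribution (illustrative, not reference code).
# Assumptions: `image` is an (H, W, C) float array and `grad_fn(image)` returns
# dF/d(image) for the target class, with the same shape as `image`.

import numpy as np
from scipy.ndimage import gaussian_filter


def blur_ig(image, grad_fn, max_sigma=50.0, steps=100):
    """Approximate Blur Integrated Gradients with a Riemann sum over a
    sequence of progressively less-blurred copies of `image`."""
    sigmas = np.linspace(max_sigma, 0.0, steps + 1)  # heavy blur -> original input

    def blur(img, sigma):
        # Blur only the spatial axes (H, W), not the channel axis.
        return gaussian_filter(img, sigma=(sigma, sigma, 0)) if sigma > 0 else img

    # Pre-compute the blur path; the sigma = max_sigma end acts like IG's baseline.
    path = [blur(image, s) for s in sigmas]

    attribution = np.zeros_like(image, dtype=np.float64)
    for k in range(steps):
        grads = grad_fn(path[k])                        # dF/dL at this scale
        attribution += grads * (path[k + 1] - path[k])  # ~ (dL/dsigma) * dsigma
    return attribution
```

Increasing steps (and, to a point, max_sigma) tightens the approximation to the continuous integral along the blur path, at the cost of more gradient evaluations.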

Comparative Analysis and Results

The paper reports a comparative evaluation of BlurIG against other attribution methods like GradCAM and standard IG across three tasks: ImageNet classification, Diabetic Retinopathy prediction, and AudioSet audio event identification.

  • ImageNet and Vision Tasks: BlurIG provides cleaner and more interpretable explanations, free from the coarse granularity and false attributions typical of other methods. It is particularly adept at identifying the image regions that contribute meaningfully to class predictions.
  • Diabetic Retinopathy: BlurIG marginally outperforms existing methods in discerning relevant pathological features. It is better suited for medical image tasks demanding high precision and interpretability.
  • Audio Recognition: Application to audio spectrograms reveals its strength in capturing frequency-specific information. BlurIG successfully explains class predictions by attributing them to distinct audio features.

Implications and Future Work

BlurIG substantially advances the interpretability of deep networks in vision and auditory tasks by addressing prevalent limitations in existing methods. Its implications span improved model debugging, enhanced trust in AI predictions, and support for cross-disciplinary applications in healthcare and audio analysis.

Future research could extend BlurIG to other domains, such as natural language processing, and explore its adaptability to other neural architectures. Further empirical studies could strengthen the quantitative evaluation, solidifying its position in practical applications and theoretical understanding. Additionally, exploring hybrid approaches combining BlurIG with other visualization techniques could yield synergies to enhance interpretability in complex models.
