CASE: Contrastive Activation for Saliency Estimation (2506.07327v3)

Published 8 Jun 2025 in cs.CV and cs.LG

Abstract: Saliency methods are widely used to visualize which input features are deemed relevant to a model's prediction. However, their visual plausibility can obscure critical limitations. In this work, we propose a diagnostic test for class sensitivity: a method's ability to distinguish between competing class labels on the same input. Through extensive experiments, we show that many widely used saliency methods produce nearly identical explanations regardless of the class label, calling into question their reliability. We find that class-insensitive behavior persists across architectures and datasets, suggesting the failure mode is structural rather than model-specific. Motivated by these findings, we introduce CASE, a contrastive explanation method that isolates features uniquely discriminative for the predicted class. We evaluate CASE using the proposed diagnostic and a perturbation-based fidelity test, and show that it produces faithful and more class-specific explanations than existing methods.

Summary

  • The paper introduces CASE, which enhances the class specificity of saliency maps by contrasting activations for the predicted class against those of a contrast class.
  • It embeds a novel contrast mechanism within the CAM framework and validates its effectiveness using statistical tests and a confidence-drop metric across different architectures.
  • The method reduces feature overlap between class explanations, thereby improving the interpretability and reliability of model explanations in high-stakes applications.

Overview of "CASE: Contrastive Activation for Saliency Estimation"

"CASE: Contrastive Activation for Saliency Estimation," authored by Williamson et al., introduces a novel approach to enhancing the class specificity of saliency maps generated by neural networks. The paper identifies a significant limitation in current saliency methods, such as Grad-CAM and its derivatives, whereby these techniques often produce identical saliency maps for different class predictions on the same input. This lack of distinction among class labels can compromise the reliability of saliency methods, particularly in safety-critical applications like medical diagnostics.

The authors propose, evaluate, and validate CASE, a contrastive mechanism applied within the framework of Class Activation Mapping (CAM) to address class insensitivity. The method aims to isolate features that uniquely support the model's predicted class by leveraging differences between the activations attributed to the predicted class and those attributed to a commonly confused contrast class.
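As a rough illustration of the contrastive idea, the sketch below computes a CAM-style map weighted by the difference between the classifier weights of the predicted class and a contrast class (here simply the runner-up prediction). This is an assumption-laden approximation, not the paper's exact CASE formulation; it presumes a model whose classifier is a linear layer over globally average-pooled feature maps, such as torchvision's ResNet-50.

```python
# Rough sketch of a contrastive CAM-style map; the exact CASE formulation may
# differ. Assumes a model whose classifier is a linear layer over globally
# average-pooled feature maps (e.g., torchvision's ResNet-50).
import torch
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()

@torch.no_grad()
def contrastive_cam(x):
    """x: (1, 3, H, W) normalized image tensor; returns a (h, w) saliency map."""
    feats = {}

    def hook(module, inputs, output):
        feats["maps"] = output.detach()      # (1, C, h, w) last-block features

    handle = model.layer4.register_forward_hook(hook)
    logits = model(x)
    handle.remove()

    # Predicted class and a simple choice of contrast class: the runner-up logit.
    pred, contrast = torch.topk(logits, k=2, dim=1).indices[0]

    A = feats["maps"][0]                     # (C, h, w) feature maps
    W = model.fc.weight                      # (num_classes, C) classifier weights
    w_diff = W[pred] - W[contrast]           # contrastive per-channel weights

    cam = torch.relu((w_diff[:, None, None] * A).sum(dim=0))
    cam = cam / (cam.max() + 1e-8)           # normalize to [0, 1]
    return cam, pred.item(), contrast.item()
```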

Methodology

CASE operates by contrasting the activations that support the predicted class with those that support a contrast class, suppressing attributions the two classes share. This strengthens the class specificity of explanations without modifying the underlying model architecture. The paper also formalizes a class-sensitivity diagnostic that measures the top-k feature overlap between saliency maps produced for competing class labels on the same input. A one-sided Wilcoxon signed-rank test is used to check whether the median overlap is significantly below a threshold, which would indicate that the method effectively distinguishes between classes.
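The sketch below shows how such a diagnostic could be implemented. It assumes that "top-k feature overlap" means the fraction of shared indices among the k highest-scoring pixels of two maps; the values of k and the agreement threshold are placeholders rather than the paper's settings.

```python
# Sketch of the class-sensitivity diagnostic, assuming top-k overlap is the
# fraction of shared indices among the k highest-scoring pixels of two maps.
import numpy as np
from scipy.stats import wilcoxon

def topk_overlap(map_a, map_b, k):
    """Fraction of shared indices among the top-k pixels of two saliency maps."""
    top_a = set(np.argsort(map_a.ravel())[-k:])
    top_b = set(np.argsort(map_b.ravel())[-k:])
    return len(top_a & top_b) / k

def class_sensitivity_test(pairs, k=500, threshold=0.5):
    """pairs: list of (saliency_for_predicted, saliency_for_contrast) arrays.
    Tests whether the median overlap is significantly below `threshold`."""
    overlaps = np.array([topk_overlap(a, b, k) for a, b in pairs])
    # One-sided Wilcoxon signed-rank test: is the median of (overlap - threshold) < 0?
    stat, p_value = wilcoxon(overlaps - threshold, alternative="less")
    return overlaps.mean(), p_value
```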

The authors assess CASE on multiple benchmark datasets and popular model architectures, comparing its performance against other widely used saliency techniques on both class sensitivity and explanation fidelity. To measure fidelity, they calculate the drop in model confidence when the top-k salient features identified by a method are perturbed.
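A minimal version of such a confidence-drop check might look like the following. It assumes the perturbation replaces the top-k most salient pixels with the image mean; the paper's actual perturbation scheme and choice of k may differ.

```python
# Sketch of a perturbation-based confidence-drop check, assuming the top-k most
# salient pixels are replaced with the image mean.
import torch

@torch.no_grad()
def confidence_drop(model, x, saliency, k=500):
    """x: (1, 3, H, W) input; saliency: (H, W) map upsampled to the input size."""
    logits = model(x)
    pred = logits.argmax(dim=1).item()
    p_before = torch.softmax(logits, dim=1)[0, pred]

    # Build a mask that zeroes out the k most salient pixels, shared across channels.
    flat = saliency.flatten()
    top_idx = torch.topk(flat, k).indices
    mask = torch.ones_like(flat)
    mask[top_idx] = 0.0
    mask = mask.view(1, 1, *saliency.shape)

    # Replace masked pixels with the image mean and re-evaluate the model.
    x_perturbed = x * mask + x.mean() * (1 - mask)
    p_after = torch.softmax(model(x_perturbed), dim=1)[0, pred]
    return (p_before - p_after).item()
```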

Results and Implications

CASE demonstrated superior class specificity across architectures such as DenseNet, VGG, ResNet, and ConvNeXt when compared to existing methods. In all configurations, CASE consistently achieved lower top-k feature overlap scores than the baseline methods, indicating more distinct saliency maps for different classes. Furthermore, the fidelity results, measured via confidence drop, show that CASE is comparable to other methods, indicating that its gains in class specificity do not come at the cost of alignment with decision-critical features.

This contrastive method enriches our understanding of model behavior by ensuring that the highlighted features are not only salient but also uniquely discriminative for the target class. The authors suggest that the internal competition mechanism within CASE could be broadly applicable and beneficial for resolving ambiguities in high-stakes domains.

Conclusion and Future Directions

The paper makes a significant contribution to interpretable AI, providing a robust approach for enhancing the class specificity of saliency maps without adding complexity to the underlying model architecture. The insights highlighted by this work set the stage for future studies on further optimizing the contrastive mechanism and evaluating its effectiveness in more complex and varied settings, such as multi-label classification and multimodal tasks.

Beyond immediate methodological improvements, CASE draws attention to the broader role of architectural characteristics in explanation methods, advocating for a more nuanced consideration of layer-specific attributes when constructing saliency maps. Future work might explore adaptive selection of the contrast class and reduce computational cost to extend the breadth and utility of CASE.
