
SUNY: A Visual Interpretation Framework for Convolutional Neural Networks from a Necessary and Sufficient Perspective (2303.00244v3)

Published 1 Mar 2023 in cs.CV and cs.AI

Abstract: Researchers have proposed various methods for visually interpreting Convolutional Neural Networks (CNNs) via saliency maps, with Class-Activation-Map (CAM) based approaches as a leading family. However, in terms of internal design logic, existing CAM-based approaches often overlook the causal perspective that answers the core "why" question and helps humans understand the explanation. Additionally, current CNN explanations lack consideration of both necessity and sufficiency, two complementary sides of a desirable explanation. This paper presents a causality-driven framework, SUNY, designed to rationalize the explanations toward better human understanding. Using the CNN model's input features or internal filters as hypothetical causes, SUNY generates explanations through bi-directional quantification of both the necessary and sufficient perspectives. Extensive evaluations show that SUNY not only produces more informative and convincing explanations from the angles of necessity and sufficiency, but also achieves performance competitive with other approaches across different CNN architectures on large-scale datasets, including ILSVRC2012 and CUB-200-2011.
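
To make the necessity/sufficiency framing concrete, the sketch below scores a single candidate image region (a hypothetical cause) from both directions using occlusion-style perturbations. This is only an illustrative approximation, not the paper's actual SUNY estimator: the binary region mask, zero-image baseline, softmax-probability scores, and the helper names class_prob and necessity_sufficiency are assumptions made here for illustration, and SUNY additionally treats internal filters, not only input regions, as hypothetical causes.

```python
# Minimal sketch of bi-directional (necessity/sufficiency) scoring for one
# candidate region. NOT the paper's exact SUNY procedure; the masking scheme,
# baseline choice, and score definitions are illustrative assumptions.
import torch
import torch.nn.functional as F

def class_prob(model, image, target_class):
    """Softmax probability of the target class for a single (C, H, W) image."""
    with torch.no_grad():
        logits = model(image.unsqueeze(0))           # shape: (1, num_classes)
        return F.softmax(logits, dim=1)[0, target_class].item()

def necessity_sufficiency(model, image, region_mask, target_class, baseline=None):
    """
    image:       (C, H, W) float tensor
    region_mask: (H, W) float tensor of 0/1 values, 1 inside the hypothetical cause
    Returns (necessity, sufficiency) scores relative to the unperturbed prediction.
    """
    if baseline is None:
        baseline = torch.zeros_like(image)           # assumed baseline: black image

    p_full = class_prob(model, image, target_class)

    # Necessity: remove only the candidate region. A large drop in the target-class
    # probability suggests the region was necessary for the prediction.
    removed = image * (1 - region_mask) + baseline * region_mask
    p_removed = class_prob(model, removed, target_class)
    necessity = max(p_full - p_removed, 0.0) / max(p_full, 1e-8)

    # Sufficiency: keep only the candidate region. A well-preserved target-class
    # probability suggests the region alone was sufficient for the prediction.
    kept = image * region_mask + baseline * (1 - region_mask)
    p_kept = class_prob(model, kept, target_class)
    sufficiency = p_kept / max(p_full, 1e-8)

    return necessity, sufficiency
```

Intuitively, a region scores as necessary when removing it collapses the target-class probability, and as sufficient when keeping only that region preserves it; a full saliency explanation would repeat such bi-directional scoring over many candidate causes.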

Authors (5)
  1. Xiwei Xuan (9 papers)
  2. Ziquan Deng (4 papers)
  3. Hsuan-Tien Lin (43 papers)
  4. Zhaodan Kong (20 papers)
  5. Kwan-Liu Ma (80 papers)
Citations (1)