Multiple Different Black Box Explanations for Image Classifiers (2309.14309v4)

Published 25 Sep 2023 in cs.CV and cs.AI

Abstract: Existing explanation tools for image classifiers usually give only a single explanation for an image's classification. For many images, however, image classifiers admit more than one explanation for the image label. These explanations are useful for analyzing the decision process of the classifier and for detecting errors. Thus, restricting the number of explanations to just one severely limits insight into the behavior of the classifier. In this paper, we describe an algorithm and a tool, MultEX, for computing multiple explanations of the output of a black-box image classifier for a given image. Our algorithm uses a principled approach based on actual causality. We analyze its theoretical complexity and evaluate MultEX against the state-of-the-art across three different models and three different datasets. We find that MultEX produces more explanations and that these explanations are of higher quality.
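
The abstract describes MultEX only at a high level, so the sketch below illustrates the general flavor of a black-box, occlusion-based explanation search: partition the image into regions, query the classifier on occluded variants, and keep a small set of regions that alone preserves the original label. This is a minimal greedy illustration under assumed names (`greedy_sufficient_region`, `grid`, `baseline`, and the toy classifier are all hypothetical), not the paper's actual-causality algorithm, and it returns a single candidate explanation rather than the multiple explanations MultEX computes.

```python
import numpy as np


def greedy_sufficient_region(image, classify, grid=8, baseline=0.0):
    """Greedy search for a small set of grid cells that alone preserve the
    classifier's top label when the rest of the image is occluded.

    `classify` is any black-box callable mapping an image (H, W, C) to a
    vector of class scores; the model internals are never inspected.
    Illustrative sketch only, not the MultEX algorithm from the paper.
    """
    h, w = image.shape[:2]
    target = int(np.argmax(classify(image)))   # label to be explained
    ch, cw = h // grid, w // grid              # cell size in pixels

    def cell_mask(cells):
        """Binary mask keeping only the listed (row, col) cells visible."""
        mask = np.zeros((h, w, 1), dtype=image.dtype)
        for r, c in cells:
            mask[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw] = 1
        return mask

    def occlude(cells):
        """Image with everything outside `cells` replaced by the baseline."""
        m = cell_mask(cells)
        return image * m + baseline * (1 - m)

    all_cells = [(r, c) for r in range(grid) for c in range(grid)]
    chosen = []
    # Greedily add the cell that most increases the target score until the
    # occluded image is classified with the same label as the original.
    while len(chosen) < len(all_cells):
        remaining = [c for c in all_cells if c not in chosen]
        best = max(remaining, key=lambda c: classify(occlude(chosen + [c]))[target])
        chosen.append(best)
        if int(np.argmax(classify(occlude(chosen)))) == target:
            break
    return chosen  # grid cells forming one candidate explanation


if __name__ == "__main__":
    # Toy stand-in for a real classifier: two "classes" scored by the mean
    # brightness of the top vs. bottom half of the image.
    def toy_classify(img):
        top = img[: img.shape[0] // 2].mean()
        bottom = img[img.shape[0] // 2:].mean()
        return np.array([top, bottom])

    img = np.zeros((64, 64, 3))
    img[:32, :, :] = 1.0  # bright top half -> class 0
    print(greedy_sufficient_region(img, toy_classify, grid=4))
```

One way such a sketch could, in principle, be extended toward multiple explanations is to rerun the search while excluding cells used by previously found explanations; how MultEX actually enumerates and ranks its explanations is described in the paper itself.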

