Interpretable Network Visualizations: A Human-in-the-Loop Approach for Post-hoc Explainability of CNN-based Image Classification (2405.03301v1)

Published 6 May 2024 in cs.LG and cs.CV

Abstract: Transparency and explainability in image classification are essential for establishing trust in machine learning models and for detecting biases and errors. State-of-the-art explainability methods generate saliency maps that show where a specific class is identified, without providing a detailed explanation of the model's decision process. To address this gap, we introduce a post-hoc method that explains the entire feature extraction process of a Convolutional Neural Network. These explanations include a layer-wise representation of the features the model extracts from the input. Such features are represented as saliency maps generated by clustering and merging similar feature maps, to which we assign a weight obtained by generalizing Grad-CAM to the proposed methodology. To further enhance these explanations, we include a set of textual labels collected through a gamified crowdsourcing activity and processed using NLP techniques and Sentence-BERT. Finally, we present an approach for generating global explanations by aggregating labels across multiple images.
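
The abstract compresses the pipeline into a few clauses; the sketch below unpacks only the feature-extraction half on a concrete network. It is a rough illustration, assuming a pretrained VGG-16 from torchvision, a fixed target layer, and K-means over flattened feature maps. The way cluster weights generalize Grad-CAM here (summing per-channel Grad-CAM coefficients) is an assumption rather than the paper's exact formulation, and the crowdsourced labeling and Sentence-BERT steps are omitted.

```python
# Minimal sketch (not the authors' implementation): cluster the feature maps of one
# VGG-16 layer, merge each cluster into a saliency map, and attach a Grad-CAM-style
# weight. The layer choice, K-means clustering, and weighting rule are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models
from sklearn.cluster import KMeans

model = models.vgg16(weights="IMAGENET1K_V1").eval()
layer = model.features[28]  # assumed target layer: last conv layer of VGG-16

acts, grads = {}, {}
layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

def layer_explanations(image, target_class, n_clusters=8):
    """Return (saliency_map, weight) pairs describing one layer's extracted features."""
    model(image)[0, target_class].backward()      # image: (1, 3, H, W), normalized
    fmap = acts["a"][0].detach()                  # (C, h, w) activations
    alpha = grads["g"][0].mean(dim=(1, 2))        # per-channel Grad-CAM coefficients

    # Group channels whose activation patterns are similar.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        fmap.reshape(fmap.size(0), -1).numpy())

    explanations = []
    for k in range(n_clusters):
        idx = torch.tensor(labels == k)
        if idx.sum() == 0:
            continue
        merged = F.relu(fmap[idx].mean(dim=0))    # merged saliency map for the cluster
        merged = F.interpolate(merged[None, None], size=image.shape[-2:],
                               mode="bilinear", align_corners=False)[0, 0]
        weight = alpha[idx].sum().item()          # assumed generalization of Grad-CAM weights
        explanations.append((merged / (merged.max() + 1e-8), weight))
    return explanations
```

Running this per layer and stacking the results yields the kind of layer-wise representation the abstract describes; the textual labels would then be attached to each cluster in a separate, human-in-the-loop step.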

Authors (4)
  1. Matteo Bianchi (20 papers)
  2. Antonio De Santis (6 papers)
  3. Andrea Tocchetti (5 papers)
  4. Marco Brambilla (17 papers)