
On the Interpretability of Part-Prototype Based Classifiers: A Human Centric Analysis (2310.06966v1)

Published 10 Oct 2023 in cs.CV, cs.AI, cs.HC, and cs.LG

Abstract: Part-prototype networks have recently attracted interest as an interpretable alternative to many current black-box image classifiers. However, the interpretability of these methods from the perspective of human users has not been sufficiently explored. In this work, we devise a framework for evaluating the interpretability of part-prototype-based models from a human perspective. The proposed framework consists of three actionable metrics and experiments. To demonstrate its usefulness, we performed an extensive set of experiments using Amazon Mechanical Turk. These experiments not only demonstrate our framework's ability to assess the interpretability of various part-prototype-based models, but also constitute, to the best of our knowledge, the most comprehensive evaluation of such methods in a unified framework.
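For readers unfamiliar with the model family the paper evaluates: a part-prototype classifier (in the ProtoPNet, "this looks like that" style) scores an image by comparing every spatial patch of its convolutional feature map against a set of learned part prototypes, then feeds the per-prototype activations into a linear layer. The sketch below is an illustrative NumPy rendition of that scoring step under assumed shapes and the common log-ratio similarity; the function names and dimensions are our own, not drawn from this paper.

```python
import numpy as np

def prototype_scores(feature_map, prototypes, eps=1e-4):
    """ProtoPNet-style prototype activations (illustrative sketch).

    feature_map: (H, W, D) conv features for one image.
    prototypes:  (P, D) learned part prototypes.
    Returns a (P,) vector: for each prototype, the similarity of its
    closest spatial patch, using log((d^2 + 1) / (d^2 + eps)), which is
    large when the squared distance d^2 is near zero.
    """
    H, W, D = feature_map.shape
    patches = feature_map.reshape(H * W, D)               # all spatial patches
    # squared L2 distance from every prototype to every patch: shape (P, H*W)
    d2 = ((prototypes[:, None, :] - patches[None, :, :]) ** 2).sum(axis=-1)
    sim = np.log((d2 + 1.0) / (d2 + eps))
    return sim.max(axis=1)                                # best-matching patch

def classify(feature_map, prototypes, class_weights):
    """Linear layer over prototype activations -> class logits.

    class_weights: (C, P), one weight per (class, prototype) pair.
    """
    return class_weights @ prototype_scores(feature_map, prototypes)
```

The interpretability claim such models make is that each activation in `prototype_scores` points at a concrete image patch ("this part of the bird looks like that prototype"); the paper's framework asks whether human users actually find those patch-to-prototype correspondences meaningful.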
