Interaction as Explanation: A User Interaction-based Method for Explaining Image Classification Models (2404.09828v2)
Abstract: In computer vision, explainable AI (xAI) methods seek to mitigate the 'black-box' problem by making the decision-making processes of deep learning models more interpretable and transparent. Traditional xAI methods concentrate on visualizing the input features that influence model predictions, yielding insights primarily suited to experts. In this work, we present an interaction-based xAI method that enhances users' comprehension of image classification models through direct interaction. To this end, we developed a web-based prototype that lets users modify images by painting and erasing, and observe the resulting changes in classification output. Our approach enables users to discern the critical features influencing the model's decision-making process, aligning their mental models with the model's logic. Experiments conducted with five images demonstrate the method's potential to reveal feature importance through user interaction. Our work contributes a novel perspective to xAI by centering on end-user engagement and understanding, paving the way for more intuitive and accessible explainability in AI systems.
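The core loop the abstract describes, editing an image and re-querying the classifier to see how the prediction shifts, can be illustrated with a minimal sketch. The snippet below assumes a standard pretrained torchvision ResNet-50 (the paper uses PyTorch and a ResNet model, but the function names, the rectangular "erase" stand-in for freehand strokes, and the file path here are illustrative, not taken from the authors' prototype):

```python
# Minimal sketch of the edit-then-reclassify interaction loop.
# Assumes torchvision >= 0.13; names and the rectangular erase region
# are hypothetical stand-ins for the prototype's freehand painting tools.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

def classify(img: Image.Image, top_k: int = 3):
    """Return the model's top-k (label, probability) pairs for an image."""
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
        probs = logits.softmax(dim=1).squeeze(0)
    top = probs.topk(top_k)
    return [(categories[i], p.item()) for p, i in zip(top.values, top.indices)]

def erase_region(img: Image.Image, box: tuple[int, int, int, int]) -> Image.Image:
    """Simulate a user's 'erase' stroke by blanking a rectangular region."""
    edited = img.copy()
    edited.paste((128, 128, 128), box)  # neutral gray fill over the region
    return edited

# Compare predictions before and after the user's edit; a large drop in the
# original class's probability suggests the erased region was important.
original = Image.open("dog.jpg").convert("RGB")   # hypothetical input image
edited = erase_region(original, (50, 50, 200, 200))  # hypothetical user stroke
print("before:", classify(original))
print("after: ", classify(edited))
```

In the web prototype this loop runs interactively: each paint or erase stroke triggers a new classification, so the user builds up an understanding of which regions drive the prediction by trial and error rather than by reading a saliency map.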