Incorporating Explanations into Human-Machine Interfaces for Trust and Situation Awareness in Autonomous Vehicles (2404.07383v1)
Abstract: Autonomous vehicles often make complex decisions via machine learning-based predictive models applied to collected sensor data. While this combination of methods provides a foundation for real-time actions, self-driving behavior largely remains opaque to end users. In this sense, explainability of real-time decisions is a crucial and natural requirement for building trust in autonomous vehicles. Moreover, as autonomous vehicles still cause serious traffic accidents for various reasons, timely conveyance of upcoming hazards to road users can help improve scene understanding and prevent potential risks. Hence, there is also a need to supply autonomous vehicles with user-friendly interfaces for effective human-machine teaming. Motivated by this problem, we study the joint role of explainable AI and the human-machine interface in building trust in vehicle autonomy. We first present a broad context of explanatory human-machine systems with the "3W1H" (what, whom, when, how) approach. Based on these findings, we then present a situation awareness framework for calibrating users' trust in self-driving behavior. Finally, we perform an experiment on our framework, conduct a user study on it, and validate the empirical findings with hypothesis testing.
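The abstract states that the user-study findings are validated with hypothesis testing but does not specify the test or the data. As a minimal illustrative sketch of that validation step, the snippet below assumes hypothetical Likert-scale trust ratings from two conditions (rides with and without explanations) and applies a one-sided two-sample t-test via SciPy; the paper's actual groups, sample sizes, and test statistic may differ.

```python
# Illustrative only: hypothetical data and test choice, not the paper's.
from scipy import stats

# Hypothetical 1-5 trust ratings for two user-study conditions.
with_explanations = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5]
without_explanations = [3, 2, 4, 3, 3, 2, 3, 4, 2, 3]

# H0: explanations do not increase mean trust; H1: they do.
t_stat, p_value = stats.ttest_ind(
    with_explanations, without_explanations, alternative="greater"
)
print(f"t = {t_stat:.2f}, one-sided p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: explanations are associated with higher trust ratings.")
else:
    print("Fail to reject H0 at the 0.05 level.")
```

Since Likert ratings are ordinal rather than interval data, a non-parametric alternative such as the Mann-Whitney U test (`stats.mannwhitneyu`) would be a more conservative choice for the same comparison.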