Effects of Explanation Specificity on Passengers in Autonomous Driving (2307.00633v1)

Published 2 Jul 2023 in cs.RO, cs.AI, cs.CY, cs.HC, and cs.LG

Abstract: The nature of explanations provided by explainable AI algorithms has been a topic of interest in the explainable AI and human-computer interaction communities. In this paper, we investigate the effects of the specificity of natural language explanations on passengers in autonomous driving. We extended an existing data-driven, tree-based explainer algorithm with a rule-based option for explanation generation. We generated auditory natural language explanations at two levels of specificity (abstract and specific) and tested them in a within-subject user study (N=39) using an immersive physical driving simulation setup. Our results showed that both abstract and specific explanations had similar positive effects on passengers' perceived safety and feelings of anxiety. However, the specific explanations influenced passengers' desire to take over driving control from the autonomous vehicle (AV), whereas the abstract explanations did not. We conclude that auditory natural language explanations are useful for passengers in autonomous driving, and that their specificity can influence how much in-vehicle participants wish to be in control of the driving activity.
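The contrast between abstract and specific explanations can be pictured with a toy rule-based generator. This is a hypothetical sketch, not the authors' implementation: the `explain` function, its templates, and the example action/cause strings are illustrative assumptions only.

```python
# Hypothetical sketch of a rule-based explanation generator with two
# specificity levels. An abstract explanation states only the vehicle's
# action; a specific explanation also states the cause behind it.

def explain(action: str, cause: str, specificity: str) -> str:
    """Render a natural language explanation for a driving action.

    specificity: "abstract" omits the cause; "specific" includes it.
    """
    templates = {
        "abstract": "The vehicle is {action}.",
        "specific": "The vehicle is {action} because {cause}.",
    }
    return templates[specificity].format(action=action, cause=cause)

print(explain("stopping", "a pedestrian is crossing ahead", "abstract"))
# -> The vehicle is stopping.
print(explain("stopping", "a pedestrian is crossing ahead", "specific"))
# -> The vehicle is stopping because a pedestrian is crossing ahead.
```

In a real pipeline these templates would be selected by rules fired on the vehicle's perception and planning state; the sketch only shows how a single specificity switch changes the rendered utterance.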

Authors (5)
  1. Daniel Omeiza (17 papers)
  2. Raunak Bhattacharyya (7 papers)
  3. Nick Hawes (38 papers)
  4. Marina Jirotka (12 papers)
  5. Lars Kunze (40 papers)
Citations (1)