People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior (2403.08828v2)
Abstract: Cognitive science can help us understand which explanations people expect and in which format they frame them, whether causal, counterfactual, or teleological (i.e., purpose-oriented). Understanding the relevance of these concepts is crucial for building good explainable AI (XAI) that offers recourse and actionability. Focusing on autonomous driving, a complex decision-making domain, we report empirical data from two surveys on (i) how people explain the behavior of autonomous vehicles in 14 unique scenarios (N1=54), and (ii) how they perceive these explanations in terms of complexity, quality, and trustworthiness (N2=356). Participants rated teleological explanations as significantly higher in quality than counterfactual ones, with perceived teleology being the best predictor of perceived quality and trustworthiness. Neither perceived teleology nor quality was affected by whether the car was an autonomous vehicle or driven by a person, indicating that people use teleology to evaluate information not just about other people but also about autonomous vehicles. Taken together, our findings highlight the importance of explanations framed in terms of purpose rather than only, as is standard in XAI, the causal mechanisms involved. We publicly release the 14 scenarios and more than 1,300 elicited explanations as the Human Explanations for Autonomous Driving Decisions (HEADD) dataset.
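To make the "best predictor" claim concrete, below is a minimal sketch of the kind of analysis implied: regressing perceived explanation quality on per-explanation ratings of teleology, counterfactual content, and causal content. The file name and column names are hypothetical illustrations, not the documented layout of the released HEADD dataset.

```python
# Hypothetical sketch of a "best predictor" analysis for perceived quality.
# The CSV file name and column names below are assumptions for illustration;
# consult the HEADD release for the actual data format.
import pandas as pd
import statsmodels.formula.api as smf

# One row per rated explanation, with Likert-style scores per dimension.
ratings = pd.read_csv("headd_ratings.csv")

# Standardize predictors and outcome so coefficient magnitudes are comparable.
cols = ["perceived_quality", "teleology", "counterfactual", "causal"]
ratings[cols] = (ratings[cols] - ratings[cols].mean()) / ratings[cols].std()

model = smf.ols(
    "perceived_quality ~ teleology + counterfactual + causal",
    data=ratings,
).fit()
print(model.summary())  # the largest standardized coefficient is the "best predictor"
```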
- Balint Gyevnar
- Stephanie Droop
- Tadeg Quillien
- Shay B. Cohen
- Neil R. Bramley
- Christopher G. Lucas
- Stefano V. Albrecht