Explanations in Autonomous Driving: A Survey
The paper "Explanations in Autonomous Driving: A Survey" authored by Daniel Omeiza, Helena Webb, Marina Jirotka, and Lars Kunze provides a detailed examination into the significance of explainability in the field of autonomous vehicles (AVs). As the advancement of autonomous driving technology progresses, the comprehension, trustworthiness, and acceptance of autonomous vehicles by society have become imperative. This paper addresses the pivotal role of explanations in fostering transparency, accountability, and trust in AVs, which are necessary for their widespread adoption.
Overview
The survey systematically reviews the literature on explainable AI (XAI) in the context of autonomous driving. The analysis covers the main dimensions of autonomous vehicle operation, including perception, localisation, planning, control, and system management, and consolidates the motivations for providing explanations, chief among them transparency, accountability, and trust.
Stakeholders and Explanation Needs
The authors categorize stakeholders into three broad classes:
- Class A: End-users, including passengers, pedestrians, and other road participants;
- Class B: Developers and technicians involved in AV development and maintenance;
- Class C: Regulators, system auditors, accident investigators, and insurers.
Explanation requirements vary significantly across these groups, highlighting the need for intelligible justifications tailored to each audience. End-users benefit from explanations that clarify AV decisions and behaviors in plain terms, while developers require technical insight into AV subsystems for debugging and improvement; a brief sketch of such audience-tailored explanations follows this paragraph.
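To make this concrete, below is a minimal, hypothetical Python sketch (not taken from the paper) of how an explanation generator might tailor the same driving event to the three stakeholder classes. All class names, fields, and message wording are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical stakeholder grouping following the survey's Class A/B/C scheme.
class Stakeholder(Enum):
    END_USER = "A"      # passengers, pedestrians, other road users
    DEVELOPER = "B"     # engineers and technicians
    REGULATOR = "C"     # auditors, investigators, insurers

@dataclass
class DrivingEvent:
    """A simplified record of one AV decision (illustrative fields only)."""
    action: str          # e.g. "hard brake"
    cause: str           # e.g. "a pedestrian was detected in the crosswalk"
    module: str          # e.g. "planning"
    confidence: float    # detection confidence of the triggering percept

def explain(event: DrivingEvent, audience: Stakeholder) -> str:
    """Return an explanation whose content and detail match the audience."""
    if audience is Stakeholder.END_USER:
        # Plain-language justification of observable behaviour.
        return f"The vehicle performed a {event.action} because {event.cause}."
    if audience is Stakeholder.DEVELOPER:
        # Technical detail useful for debugging and tuning.
        return (f"[{event.module}] action={event.action!r}, trigger={event.cause!r}, "
                f"detector_confidence={event.confidence:.2f}")
    # Regulators, investigators, insurers: audit-oriented summary.
    return (f"Logged decision: {event.action}; documented cause: {event.cause}; "
            f"responsible module: {event.module}.")

event = DrivingEvent("hard brake", "a pedestrian was detected in the crosswalk",
                     "planning", 0.93)
print(explain(event, Stakeholder.END_USER))
print(explain(event, Stakeholder.DEVELOPER))
```

The point of the sketch is simply that the same underlying event record can back several different explanation renderings, one per stakeholder class, rather than a single one-size-fits-all message.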
Explanation Methodologies
The paper explores several methodologies for generating explanations:
- Unvalidated Guidelines (UG) rely on heuristic rules and influence scores without empirical justification.
- Empirically Derived (ED) explanations are informed by user studies and surveys to determine user needs.
- Psychological Constructs from Formal Theories (PC) incorporate cognitive and philosophical models to structure explanation frameworks.
Key Dimensions of Explanation
Explanations are categorized by intelligibility type and functional style, including factual, contrastive, counterfactual, input-influence, and sensitivity explanations, among others. The survey underscores the importance of interactive and customizable explanations in supporting diverse stakeholder needs; the sketch below illustrates how the first three styles differ in form.
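As a rough illustration of how these intelligibility types differ, here is a small hypothetical sketch applying three of them to the same braking decision. The wording, fields, and function names are assumptions for illustration, not the paper's definitions.

```python
# Hypothetical illustration of factual, contrastive, and counterfactual
# explanation styles for one stopping decision.

decision = {
    "action": "stop",
    "observed_cause": "traffic light is red",
    "foil_action": "continue",                       # the alternative a rider might expect
    "counterfactual_condition": "the light were green",
}

def factual(d):
    # "What happened and why": states the cause of the chosen action.
    return f"The vehicle chose to {d['action']} because the {d['observed_cause']}."

def contrastive(d):
    # "Why P rather than Q": explains the action against the expected alternative (the foil).
    return (f"The vehicle chose to {d['action']} rather than {d['foil_action']} "
            f"because the {d['observed_cause']}.")

def counterfactual(d):
    # "What would have to change": gives the condition under which the decision would differ.
    return (f"The vehicle would have chosen to {d['foil_action']} "
            f"if {d['counterfactual_condition']}.")

for style in (factual, contrastive, counterfactual):
    print(style(decision))
```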
Challenges and Recommendations
The paper acknowledges significant challenges in the field, such as the lack of regulation specifically targeting AV explainability, the nascent state of interdisciplinary work, and biases inherited from existing datasets that may undermine explanation faithfulness. It proposes a conceptual framework for integrating explainability into AV systems, ensuring that explanations cover perception, decision-making, and actions while remaining accessible for real-time assessment and stakeholder inquiry; a sketch of such a record-and-query structure follows.
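The following is a minimal sketch, in the spirit of that framework, of an in-vehicle explanation log that links percepts, decisions, and actions and can later be queried by a stakeholder. The record fields and query interface are assumptions, not the paper's specification.

```python
from dataclasses import dataclass
from typing import List
import time

@dataclass
class ExplanationRecord:
    """One logged decision, tying perception to decision-making and action."""
    timestamp: float
    percepts: List[str]   # e.g. ["cyclist ahead, 12 m", "speed limit 30 km/h"]
    decision: str         # e.g. "reduce speed to 20 km/h"
    action: str           # e.g. "brake applied at 0.2 g"
    rationale: str        # human-readable justification

class ExplanationLog:
    """Append-only store of explanation records with simple keyword lookup."""

    def __init__(self) -> None:
        self._records: List[ExplanationRecord] = []

    def record(self, percepts, decision, action, rationale) -> None:
        self._records.append(
            ExplanationRecord(time.time(), list(percepts), decision, action, rationale))

    def query(self, keyword: str) -> List[ExplanationRecord]:
        """Return records whose rationale mentions the keyword (stakeholder inquiry)."""
        return [r for r in self._records if keyword.lower() in r.rationale.lower()]

log = ExplanationLog()
log.record(["cyclist ahead, 12 m"], "reduce speed to 20 km/h", "brake applied at 0.2 g",
           "Slowed down because a cyclist was detected ahead.")
print(log.query("cyclist")[0].decision)
```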
Implications
The paper has implications for both practical AV development and theoretical work on the intelligibility of AI-driven systems. It emphasizes the future role of explainable systems in earning public trust, aiding accident investigations, and refining AV technologies through transparent operation logs. By addressing these elements, the paper outlines potential advances in ethical AI practice and anticipates increased regulatory attention that could enable greater integration of AVs into society.
Overall, this survey offers a foundational resource for researchers interested in the explainability of autonomous vehicles, suggesting directions for future research and development in this critical field.