Explanations in Autonomous Driving: A Survey (2103.05154v4)

Published 9 Mar 2021 in cs.HC, cs.AI, cs.CY, cs.LG, and cs.RO

Abstract: The automotive industry has witnessed an increasing level of development in the past decades; from manufacturing manually operated vehicles to manufacturing vehicles with a high level of automation. With the recent developments in AI, automotive companies now employ blackbox AI models to enable vehicles to perceive their environments and make driving decisions with little or no input from a human. With the hope to deploy autonomous vehicles (AV) on a commercial scale, the acceptance of AV by society becomes paramount and may largely depend on their degree of transparency, trustworthiness, and compliance with regulations. The assessment of the compliance of AVs to these acceptance requirements can be facilitated through the provision of explanations for AVs' behaviour. Explainability is therefore seen as an important requirement for AVs. AVs should be able to explain what they have 'seen', done, and might do in environments in which they operate. In this paper, we provide a comprehensive survey of the existing body of work around explainable autonomous driving. First, we open with a motivation for explanations by highlighting and emphasising the importance of transparency, accountability, and trust in AVs; and examining existing regulations and standards related to AVs. Second, we identify and categorise the different stakeholders involved in the development, use, and regulation of AVs and elicit their explanation requirements for AV. Third, we provide a rigorous review of previous work on explanations for the different AV operations (i.e., perception, localisation, planning, control, and system management). Finally, we identify pertinent challenges and provide recommendations, such as a conceptual framework for AV explainability. This survey aims to provide the fundamental knowledge required of researchers who are interested in explainability in AVs.

Explanations in Autonomous Driving: A Survey

The paper "Explanations in Autonomous Driving: A Survey" authored by Daniel Omeiza, Helena Webb, Marina Jirotka, and Lars Kunze provides a detailed examination into the significance of explainability in the field of autonomous vehicles (AVs). As the advancement of autonomous driving technology progresses, the comprehension, trustworthiness, and acceptance of autonomous vehicles by society have become imperative. This paper addresses the pivotal role of explanations in fostering transparency, accountability, and trust in AVs, which are necessary for their widespread adoption.

Overview

The survey methodically evaluates the existing literature on explainable AI (XAI) in the context of autonomous driving. The analysis covers the main dimensions of autonomous vehicle operation, including perception, localisation, planning, control, and system management, and consolidates the motivations for providing explanations by emphasizing the importance of transparency, accountability, and trust.

Stakeholders and Explanation Needs

The authors categorize stakeholders into three broad classes:

  1. Class A: End-users, including passengers, pedestrians, and other road participants;
  2. Class B: Developers and technicians involved in AV development and maintenance;
  3. Class C: Regulators, system auditors, accident investigators, and insurers.

Their explanation requirements vary significantly, highlighting the need for personalized and intelligible justifications tailored to each group. End-users may benefit from explanations that elucidate AV decisions and behaviors, while developers require technical insights into AV systems for debugging and improving functionality.
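To make these differing requirements concrete, the following Python sketch represents the three stakeholder classes and their explanation needs as simple data structures. The names (`StakeholderClass`, `ExplanationRequirement`, `technical_depth`) and the example values are illustrative assumptions for exposition, not an API or taxonomy values prescribed by the paper.

```python
from dataclasses import dataclass
from enum import Enum


class StakeholderClass(Enum):
    """Stakeholder classes as categorised in the survey."""
    END_USER = "A"    # passengers, pedestrians, other road participants
    DEVELOPER = "B"   # developers and technicians
    REGULATOR = "C"   # regulators, auditors, investigators, insurers


@dataclass
class ExplanationRequirement:
    """Hypothetical pairing of a stakeholder class with the level of
    technical depth and the purpose its explanations should serve."""
    audience: StakeholderClass
    technical_depth: str  # e.g. "low" for end-users, "high" for developers
    purpose: str


# Illustrative requirements only; the survey does not prescribe these values.
REQUIREMENTS = [
    ExplanationRequirement(StakeholderClass.END_USER, "low",
                           "understand AV decisions and behaviour"),
    ExplanationRequirement(StakeholderClass.DEVELOPER, "high",
                           "debug and improve AV subsystems"),
    ExplanationRequirement(StakeholderClass.REGULATOR, "medium",
                           "audit compliance and investigate incidents"),
]
```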

Explanation Methodologies

The paper explores several methodologies for generating explanations:

  • Unvalidated Guidelines (UG) rely on heuristic rules and influence scores without empirical justification.
  • Empirically Derived (ED) explanations are informed by user studies and surveys to determine user needs.
  • Psychological Constructs from Formal Theories (PC) incorporate cognitive and philosophical models to structure explanation frameworks.

Key Dimensions of Explanation

Explanations are categorized by their intelligibility types and functional styles, including factual, contrastive, counterfactual, input influence, sensitivity, and more. The importance of interactive and customizable explanations is underscored to support diverse stakeholder needs.
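One way to see the difference between these intelligibility types is to render the same driving decision in several styles. The sketch below is a hypothetical illustration: the `render_explanation` helper and its templates are assumptions made for exposition, not constructs from the survey.

```python
from enum import Enum, auto


class ExplanationStyle(Enum):
    """Intelligibility types discussed in the survey."""
    FACTUAL = auto()          # what the AV did and why
    CONTRASTIVE = auto()      # why action A rather than action B
    COUNTERFACTUAL = auto()   # what would have happened otherwise
    INPUT_INFLUENCE = auto()  # which inputs drove the decision
    SENSITIVITY = auto()      # how much inputs must change to flip it


def render_explanation(style: ExplanationStyle, decision: str,
                       cause: str, alternative: str) -> str:
    """Hypothetical templates framing one decision in different styles."""
    templates = {
        ExplanationStyle.FACTUAL:
            f"The vehicle {decision} because {cause}.",
        ExplanationStyle.CONTRASTIVE:
            f"The vehicle {decision} rather than {alternative} "
            f"because {cause}.",
        ExplanationStyle.COUNTERFACTUAL:
            f"Had {cause} not held, the vehicle would have {alternative}.",
    }
    return templates.get(style, f"The vehicle {decision}.")


# Example: the same braking decision, framed contrastively.
print(render_explanation(ExplanationStyle.CONTRASTIVE, "braked",
                         "a pedestrian entered the crossing", "continued"))
```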

Challenges and Recommendations

The paper acknowledges significant challenges in the field, such as inadequate regulation specifically targeting AV explainability, the nascent state of interdisciplinary exploration, and inherent biases from existing datasets that may affect explanation faithfulness. It proposes a conceptual framework for integrating explainability into AV systems, ensuring that explanations cover perception, decision-making, and actions while being accessible for real-time assessment and stakeholder inquiry.
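A minimal sketch of what such a framework might look like in code appears below: explanations from each pipeline stage (perception, planning, control) are timestamped and logged so that stakeholders can query them afterwards. All names (`ExplainabilityLogger`, `ExplanationRecord`) are hypothetical; the paper proposes a conceptual framework, not a concrete implementation.

```python
import time
from dataclasses import dataclass, field
from typing import List


@dataclass
class ExplanationRecord:
    """One timestamped entry in the AV's explanation log."""
    timestamp: float
    stage: str    # "perception", "planning", or "control"
    summary: str  # human-readable account of what the stage did and why


@dataclass
class ExplainabilityLogger:
    """Hypothetical logger threading explanations through the AV pipeline,
    so stakeholders can query what the vehicle 'saw', decided, and did."""
    records: List[ExplanationRecord] = field(default_factory=list)

    def log(self, stage: str, summary: str) -> None:
        self.records.append(ExplanationRecord(time.time(), stage, summary))

    def query(self, stage: str) -> List[ExplanationRecord]:
        """Retrieve all explanations recorded for a given pipeline stage."""
        return [r for r in self.records if r.stage == stage]


# Example: a planning decision logged for later stakeholder inquiry.
log = ExplainabilityLogger()
log.log("perception", "Detected pedestrian at crossing, confidence 0.94")
log.log("planning", "Selected 'yield' because pedestrian has right of way")
print([r.summary for r in log.query("planning")])
```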

Implications

The paper has implications both for practical AV development and for theoretical frameworks concerning the intelligibility of AI-driven systems. It emphasizes the future role of explainable systems in earning public trust, aiding accident investigations, and refining AV technologies through transparent operation logs. By addressing these elements, the paper outlines potential advances in ethical AI practice and anticipates increased regulatory attention, enabling greater integration of AVs into society.

Overall, this survey offers a foundational resource for researchers interested in the explainability of autonomous vehicles, suggesting directions for future research and development in this critical field.
