
A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts (2105.07190v4)

Published 15 May 2021 in cs.LG and cs.AI

Abstract: In the meantime, a wide variety of terminologies, motivations, approaches, and evaluation criteria have been developed within the research field of explainable artificial intelligence (XAI). With the amount of XAI methods vastly growing, a taxonomy of methods is needed by researchers as well as practitioners: To grasp the breadth of the topic, compare methods, and to select the right XAI method based on traits required by a specific use-case context. Many taxonomies for XAI methods of varying level of detail and depth can be found in the literature. While they often have a different focus, they also exhibit many points of overlap. This paper unifies these efforts and provides a complete taxonomy of XAI methods with respect to notions present in the current state of research. In a structured literature analysis and meta-study, we identified and reviewed more than 50 of the most cited and current surveys on XAI methods, metrics, and method traits. After summarizing them in a survey of surveys, we merge terminologies and concepts of the articles into a unified structured taxonomy. Single concepts therein are illustrated by more than 50 diverse selected example methods in total, which we categorize accordingly. The taxonomy may serve both beginners, researchers, and practitioners as a reference and wide-ranging overview of XAI method traits and aspects. Hence, it provides foundations for targeted, use-case-oriented, and context-sensitive future research.

A Systematic Survey on Explainable Artificial Intelligence Methods

The paper by G. Schwalbe and B. Finzel, "A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts," develops a structured taxonomy for the rapidly evolving field of Explainable Artificial Intelligence (XAI). The authors conduct a meta-study of more than 50 surveys on XAI, examining the methodologies, metrics, and concepts prevalent in this research area. The primary objective is to give researchers and practitioners a unified understanding of XAI methods that assists in selecting appropriate techniques for specific use-case scenarios.

Overview of the Taxonomy

The taxonomy proposed by the authors divides XAI into several key categories to streamline comprehension and practical application:

  1. Problem Definition: This involves understanding the task type and the data input type for the XAI methodology. It's essential to align the explanation method with the nature of the task, whether it be classification, regression, or clustering, and the type of data it processes, such as tabular, textual, or image data.
  2. Interpretability of Models:
    • Intrinsic and Blended Models: Models that offer inherent interpretability by design, such as decision trees, linear models, or models augmented with symbolically interpretable components.
    • Self-Explaining Models: These models generate explanations inherently during processing, potentially providing attention maps or feature relevance scores alongside conventional outputs.
    • Post-Hoc Methods: Approaches applied after model training to probe a model's decisions and inner workings, often via surrogate models or feature-attribution techniques (a minimal surrogate sketch follows this list).
  3. Explanator Characteristics: These include the explanator's input requirements, its portability across model types, and the locality (global or local scope) of the explanation. Factors such as interactivity and presentation format are also considered, since they affect user engagement and comprehension.
  4. Metrics for Evaluation:
    • Functionally grounded metrics assess the mathematical properties of explanations, such as fidelity, coverage, and complexity.
    • Human-grounded metrics consider the interpretability and effectiveness from the explainee's perspective, factoring in cognitive load and ease of understanding.
    • Application-grounded metrics evaluate the practical utility of explanations in real-world settings, including aspects of trust, usability, and impact on decision-making.
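
To make the post-hoc and functionally grounded categories above concrete, below is a minimal sketch of a global surrogate explanation with a fidelity check. The choice of scikit-learn, a random-forest black box, and the Iris dataset are illustrative assumptions on our part; the paper itself is method-agnostic and prescribes no particular toolkit.

```python
# Minimal sketch: post-hoc global surrogate with a functionally
# grounded fidelity metric. Toolkit and dataset are assumptions,
# not prescriptions of the surveyed paper.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. The opaque "black-box" model whose decisions we want to explain.
black_box = RandomForestClassifier(n_estimators=100, random_state=0)
black_box.fit(X_train, y_train)

# 2. Post-hoc surrogate: a shallow, intrinsically interpretable tree is
#    trained to imitate the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Functionally grounded metric: fidelity = how often the surrogate
#    agrees with the black box on unseen data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2%}")

# The surrogate's rules then serve as a global explanation of the black box.
print(export_text(surrogate, feature_names=load_iris().feature_names))
```

Note that fidelity here measures agreement between the surrogate and the black box rather than accuracy against ground-truth labels; restricting the evaluation to the neighborhood of a single instance would turn the same construction into a local explanation.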

Key Contributions

  • Unified Taxonomy: The authors provide a comprehensive taxonomy that spans various dimensions of XAI, aiming to standardize the categorization and evaluation of explainability methods.
  • Extensive Review: By collating insights from numerous surveys, the paper constructs a robust framework that encapsulates the breadth and diversity of XAI methodologies.
  • Practical Guidance: It serves as a foundational reference for selecting XAI techniques based on specific needs, fostering targeted research and development in AI explainability (a toy trait-based lookup follows this list).
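
As a toy illustration of how such trait-based selection could work in practice, one might index candidate methods by the taxonomy's dimensions. The trait names and the small catalogue below are our own hypothetical example, drawing on well-known methods from the XAI literature rather than the paper's full categorization.

```python
# Toy trait-based lookup: taxonomy dimensions as keys, candidate XAI
# methods as values. Trait names and catalogue entries are illustrative
# assumptions, not the paper's own catalogue.
from typing import NamedTuple

class Traits(NamedTuple):
    task: str         # e.g. "classification", "regression"
    data: str         # e.g. "tabular", "image", "text"
    scope: str        # "local" or "global"
    portability: str  # "model-agnostic" or "model-specific"

CATALOGUE = {
    Traits("classification", "tabular", "local", "model-agnostic"): ["LIME", "SHAP"],
    Traits("classification", "image", "local", "model-specific"): ["Grad-CAM"],
    Traits("classification", "tabular", "global", "model-agnostic"): ["surrogate tree", "partial dependence"],
}

def suggest(traits: Traits) -> list[str]:
    """Return candidate XAI methods matching the requested traits."""
    return CATALOGUE.get(traits, [])

print(suggest(Traits("classification", "tabular", "local", "model-agnostic")))
# -> ['LIME', 'SHAP']
```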

Implications for Future Research

The proposed framework offers a basis for advancing XAI research, encouraging the exploration of novel explanation strategies that could enhance human-AI collaboration across diverse domains. As AI continues to permeate critical areas such as healthcare, automotive, and finance, the need for transparent, interpretable solutions becomes increasingly urgent. Future developments may focus on refining the measurement metrics for explainability, enriching interactive explanation interfaces, and promoting interdisciplinary approaches to tackle complex explainability challenges. The unified taxonomy and systematic review provided by this paper stand to influence both the theoretical grounding and practical application of XAI technologies.

Authors (2)
  1. Gesina Schwalbe (17 papers)
  2. Bettina Finzel (7 papers)
Citations (146)