From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI (2201.08164v3)

Published 20 Jan 2022 in cs.AI

Abstract: The rising popularity of explainable artificial intelligence (XAI) to understand high-performing black boxes raised the question of how to evaluate explanations of ML models. While interpretability and explainability are often presented as a subjectively validated binary property, we consider it a multi-faceted concept. We identify 12 conceptual properties, such as Compactness and Correctness, that should be evaluated for comprehensively assessing the quality of an explanation. Our so-called Co-12 properties serve as categorization scheme for systematically reviewing the evaluation practices of more than 300 papers published in the last 7 years at major AI and ML conferences that introduce an XAI method. We find that 1 in 3 papers evaluate exclusively with anecdotal evidence, and 1 in 5 papers evaluate with users. This survey also contributes to the call for objective, quantifiable evaluation methods by presenting an extensive overview of quantitative XAI evaluation methods. Our systematic collection of evaluation methods provides researchers and practitioners with concrete tools to thoroughly validate, benchmark and compare new and existing XAI methods. The Co-12 categorization scheme and our identified evaluation methods open up opportunities to include quantitative metrics as optimization criteria during model training in order to optimize for accuracy and interpretability simultaneously.

Systematic Review on Evaluating Explainable AI

The paper "From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI" by Nauta et al. undertakes a substantial examination of the methods used to evaluate explainable artificial intelligence (XAI). Over the past few years, the field of XAI has gained traction due to the increasing complexity and opacity of ML models, necessitating methods to make these models more interpretable to human stakeholders. This narrative examines the paper's methodology, analysis, and conclusions concerning the evaluation of XAI methods, with attention to their practical and theoretical implications.

Overview

The authors collected and reviewed 606 papers from major AI conferences published between 2014 and 2020, of which 361 papers fit their inclusion criteria. Their analysis examines multiple dimensions of XAI, ranging from input data types and explanation types to evaluation practices. Importantly, 312 of these papers introduced a new XAI method, enabling the authors to thoroughly analyze evaluation practices pertinent to those contributions.

Evaluation Practices

One of the key findings is that 33% of the reviewed papers evaluate XAI methods purely with anecdotal evidence, while 58% employ quantitative metrics. Furthermore, 22% of the studies involve human subjects, and only a small fraction of those (23%) conduct application-grounded user studies with domain experts. This indicates a gradual shift toward more rigorous evaluation practices in recent years, with a growing emphasis on quantification.

Co-12 Properties

A significant contribution of this review is the proposal of the Co-12 properties, which provide a comprehensive set of criteria for evaluating explanations. Among them are Correctness, Completeness, Consistency, Continuity, and Coherence, together forming a multi-dimensional framework upon which XAI evaluations can be based. The Co-12 properties encapsulate the critical aspects of explanation quality, offering a structured approach to assessing both the qualitative and quantitative dimensions of interpretability.

Evaluation Methods

The paper categorizes various functionally-grounded evaluation methods covering the Co-12 properties. Highlighted methods for assessing Correctness include the Model Parameter Randomization Check and the Controlled Synthetic Data Check, while Continuity is evaluated via Stability for Slight Variations. Collectively, these methods reduce the reliance on anecdotal evaluations and forge a path toward more standardized assessment practices.
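
To make a Continuity-style check concrete, the following is a minimal sketch, not a method prescribed by the paper: the `explain` callable, the noise level, and the use of Spearman rank correlation are illustrative assumptions standing in for any feature-attribution XAI method and any choice of similarity measure.

```python
# Minimal sketch of a Continuity check ("Stability for Slight Variations").
# `explain` is a hypothetical placeholder for any feature-attribution XAI
# method (it maps an input array to an attribution array); it is not an API
# defined in the paper.
import numpy as np
from scipy.stats import spearmanr

def continuity_score(explain, x, noise_std=0.01, n_trials=20, seed=0):
    """Mean Spearman rank correlation between the explanation of `x` and the
    explanations of slightly perturbed copies of `x` (higher = more stable)."""
    rng = np.random.default_rng(seed)
    base_attr = np.asarray(explain(x)).ravel()
    scores = []
    for _ in range(n_trials):
        x_noisy = x + rng.normal(0.0, noise_std, size=x.shape)
        noisy_attr = np.asarray(explain(x_noisy)).ravel()
        rho, _ = spearmanr(base_attr, noisy_attr)
        scores.append(rho)
    return float(np.mean(scores))
```

Checks in the Correctness family follow an analogous pattern: the Model Parameter Randomization Check, for instance, compares explanations before and after randomizing the model's weights instead of perturbing the input, expecting the explanation to change when the model does.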

Implications and Future Developments

This work underscores a critical transition in the field, advocating for evaluations that holistically examine the multi-faceted nature of explanations rather than singular aspects. This is essential not only for building trust with stakeholders in AI-driven environments but also for enhancing the understanding of model mechanisms.

Furthermore, the paper points to a promising direction where evaluation metrics can be integrated into the model training process, potentially optimizing models for interpretability alongside predictive performance. This opens novel research avenues in optimizing the accuracy-interpretability trade-off, a known challenge in contemporary machine learning paradigms.
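
As a rough illustration of that direction, the snippet below sketches one possible way to fold a Continuity-style criterion into a training objective. The gradient-based penalty, the `lam` weight, and the noise scale are assumptions made for illustration, not the paper's recipe; any differentiable interpretability metric could take the penalty's place.

```python
# One possible way (not prescribed by the paper) to include an interpretability
# criterion in training: penalize instability of input gradients, a simple
# proxy for the Continuity property. `model` is any differentiable classifier.
import torch
import torch.nn.functional as F

def loss_with_continuity_penalty(model, x, y, noise_std=0.01, lam=0.1):
    x = x.clone().requires_grad_(True)
    task_loss = F.cross_entropy(model(x), y)

    # Input-gradient "explanation" for the clean and the perturbed input.
    grad_clean = torch.autograd.grad(task_loss, x, create_graph=True)[0]
    x_noisy = (x + noise_std * torch.randn_like(x)).requires_grad_(True)
    noisy_loss = F.cross_entropy(model(x_noisy), y)
    grad_noisy = torch.autograd.grad(noisy_loss, x_noisy, create_graph=True)[0]

    # Penalize explanations that change a lot under a tiny input perturbation.
    penalty = (grad_clean - grad_noisy).pow(2).mean()
    return task_loss + lam * penalty
```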

Conclusion

In sum, the paper by Nauta et al. is a commendable effort to systematically categorize and advance evaluation practices for XAI. By moving beyond purely anecdotal assessment, it provides essential insights and tools for researchers aiming to develop robust, trustworthy, and user-aligned AI systems. The work not only enriches the theoretical groundwork but also offers actionable pathways for objective evaluation, bridging human interpretability and technical rigor in machine learning.

Authors (9)
  1. Meike Nauta (9 papers)
  2. Jan Trienes (9 papers)
  3. Shreyasi Pathak (4 papers)
  4. Elisa Nguyen (7 papers)
  5. Michelle Peters (1 paper)
  6. Yasmin Schmitt (1 paper)
  7. Jörg Schlötterer (35 papers)
  8. Christin Seifert (46 papers)
  9. Maurice Van Keulen (9 papers)
Citations (295)