
OpenXAI: Towards a Transparent Evaluation of Model Explanations (2206.11104v5)

Published 22 Jun 2022 in cs.LG and cs.AI

Abstract: While several types of post hoc explanation methods have been proposed in recent literature, there is very little work on systematically benchmarking these methods. Here, we introduce OpenXAI, a comprehensive and extensible open-source framework for evaluating and benchmarking post hoc explanation methods. OpenXAI comprises the following key components: (i) a flexible synthetic data generator and a collection of diverse real-world datasets, pre-trained models, and state-of-the-art feature attribution methods, and (ii) open-source implementations of eleven quantitative metrics for evaluating faithfulness, stability (robustness), and fairness of explanation methods, in turn providing comparisons of several explanation methods across a wide variety of metrics, models, and datasets. OpenXAI is easily extensible, as users can readily evaluate custom explanation methods and incorporate them into our leaderboards. Overall, OpenXAI provides an automated end-to-end pipeline that not only simplifies and standardizes the evaluation of post hoc explanation methods, but also promotes transparency and reproducibility in benchmarking these methods. While the first release of OpenXAI supports only tabular datasets, the explanation methods and metrics that we consider are general enough to be applicable to other data modalities. OpenXAI datasets and models, implementations of state-of-the-art explanation methods and evaluation metrics, are publicly available at this GitHub link.

OpenXAI: Enhanced Benchmarking for Post hoc Model Explanations

The paper "OpenXAI: Towards a Transparent Evaluation of Post hoc Model Explanations" presents a comprehensive open-source framework designed to systematically benchmark post hoc explanation methods. This framework, OpenXAI, aims to address current gaps in the evaluation and comparison of ML model explanations, an area of growing importance as ML models are increasingly deployed in critical domains such as healthcare and finance.

OpenXAI consists of several core components: a synthetic data generator, a collection of real-world datasets, implementations of state-of-the-art feature attribution methods, and metrics for evaluating faithfulness, stability, and fairness. The framework is designed to be extensible, so researchers can integrate custom explanation methods and models and benchmark them against established metrics.
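To make the shape of this pipeline concrete, here is a minimal, self-contained sketch of the loop that OpenXAI automates: fit a model, compute feature attributions with competing methods, and score each method with a metric. Everything named below is an illustrative stand-in (a toy coefficient-based explainer, a random control, and a prediction-gap-style faithfulness score), not the OpenXAI API; the actual entry points live in the GitHub repository.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = LogisticRegression().fit(X, y)

# Two toy "explainers": an informed one and a random control.
def coef_explainer(model, X):
    # For a linear model, the coefficients double as per-feature attributions.
    return np.tile(model.coef_[0], (len(X), 1))

def random_explainer(model, X):
    return rng.normal(size=X.shape)

# One toy faithfulness metric in the spirit of a "prediction gap on important
# features": perturb each input's top-k attributed features and measure how
# far the predicted probability moves (larger gap = more faithful top features).
def prediction_gap(model, X, attributions, k=2):
    gaps = []
    for x, attr in zip(X, attributions):
        top = np.argsort(-np.abs(attr))[:k]           # indices of top-k features
        x_pert = x.copy()
        x_pert[top] += rng.normal(scale=1.0, size=k)  # perturb only those features
        p = model.predict_proba(np.vstack([x, x_pert]))[:, 1]
        gaps.append(abs(p[0] - p[1]))
    return float(np.mean(gaps))

for name, explain in {"coef": coef_explainer, "random": random_explainer}.items():
    score = prediction_gap(model, X[:100], explain(model, X[:100]))
    print(f"{name:>6} prediction-gap@2: {score:.3f}")
```

In the full framework this inner loop runs over many pretrained models, datasets, and explanation methods, and over all eleven metrics; the resulting scores are what populate the public leaderboards.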

Key Contributions

The significant contributions of OpenXAI to the field of explainable AI (XAI) include:

  1. Synthetic Data Generation: The framework introduces a novel data generator, SynthGauss, which ensures feature independence and clearly defined feature influence. Because the label-generating mechanism is known by construction, models trained on these datasets can be checked against ground-truth explanations, an assurance that previous synthetic benchmarks did not provide (a minimal illustration follows this list).
  2. Extensive Dataset and Model Collection: OpenXAI includes seven real-world and several synthetic datasets that span different domains. This diversity ensures robust benchmarking across various data types and model configurations, thereby making the framework particularly suitable for practical application assessments.
  3. Broad Range of Evaluation Metrics: The framework implements eleven quantitative metrics to evaluate post hoc explanation methods, covering faithfulness, stability, and fairness, all of which are crucial for reliable AI systems. Faithfulness metrics measure how well explanations reflect the model's actual behavior; stability metrics evaluate robustness to small perturbations of the input; and fairness metrics assess whether explanation quality differs across subgroups (see the sketch after this list).
  4. Systematic Benchmarking and Insights: OpenXAI benchmarks six leading feature attribution methods, including LIME, SHAP, and several gradient-based methods. The analysis identifies the strengths and limitations of each method with respect to specific metrics and datasets. Notably, performance varies substantially across metrics, and no single method dominates, underscoring the necessity of a comprehensive evaluation suite when assessing XAI methods.
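The following sketch makes items 1 and 3 concrete. It is an illustrative assumption of how a SynthGauss-style generator behaves (independent Gaussian features labeled by a known linear mechanism), not the paper's reference implementation, paired with a simple ground-truth faithfulness score in the spirit of OpenXAI's feature-agreement metric.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_gauss(n=1000, d=5):
    """Toy SynthGauss-style generator (illustrative assumption): independent
    Gaussian features and a label produced by a known linear mechanism, so the
    ground-truth feature influences w_true exist by construction."""
    X = rng.normal(size=(n, d))         # independent features
    w_true = rng.normal(size=d)         # known ground-truth influences
    y = (X @ w_true > 0).astype(int)    # labels follow the known mechanism
    return X, y, w_true

def feature_agreement(attr, w_true, k=3):
    """Fraction of the top-k features (by |attribution|) that an explanation
    shares with the ground-truth top-k, as in OpenXAI's feature agreement."""
    top_attr = set(np.argsort(-np.abs(attr))[:k])
    top_true = set(np.argsort(-np.abs(w_true))[:k])
    return len(top_attr & top_true) / k

X, y, w_true = synth_gauss()
# Stand-in "explanations" for a sanity check: the ground truth plus small
# noise should score near 1.0, while a random attribution typically should not.
noisy = w_true + rng.normal(scale=0.1, size=w_true.shape)
random_attr = rng.normal(size=w_true.shape)
print(f"noisy  explanation: {feature_agreement(noisy, w_true):.2f}")
print(f"random explanation: {feature_agreement(random_attr, w_true):.2f}")
```

Because the label mechanism is known, scores computed against w_true are unambiguous, which is precisely the property that makes benchmarking on synthetic data trustworthy.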

Practical and Theoretical Implications

The practical implications of OpenXAI are substantial. By providing a standardized and reproducible benchmarking pipeline, it elevates the rigor in evaluating post hoc model explanations, which is essential for fostering trust in ML models. This framework enables practitioners to effectively compare explanation methods and choose the most reliable one for their specific application domain.

Theoretically, OpenXAI promotes a deeper understanding of the interplay between different aspects of explanation reliability. Insights gleaned from systematic benchmarking may guide the development of new, more robust explanation methods, thereby contributing to the advancement of the XAI field.

Future Directions and the Role of OpenXAI

OpenXAI sets a solid foundation for future research in XAI by providing tools and metrics necessary for rigorous evaluation. As the framework evolves, it is poised to incorporate more complex datasets and support additional data modalities such as text and images. Furthermore, expanding the suite of benchmarked methods to include newer explanation techniques will enhance OpenXAI's utility and relevance. By fostering transparency and reproducibility, OpenXAI stands to significantly influence the trajectory of XAI research and deployment.

In conclusion, OpenXAI represents a pivotal step towards reliable and transparent evaluation of post hoc model explanations. It fills a critical void in the XAI landscape and offers a platform that researchers and practitioners can leverage to advance the field of interpretable and accountable AI.

Authors (9)
  1. Chirag Agarwal (39 papers)
  2. Satyapriya Krishna (27 papers)
  3. Eshika Saxena (5 papers)
  4. Martin Pawelczyk (21 papers)
  5. Nari Johnson (10 papers)
  6. Isha Puri (7 papers)
  7. Marinka Zitnik (79 papers)
  8. Himabindu Lakkaraju (88 papers)
  9. Dan Ley (11 papers)
Citations (118)