
Evaluating Explainability in Machine Learning Predictions through Explainer-Agnostic Metrics (2302.12094v3)

Published 23 Feb 2023 in cs.LG and cs.AI

Abstract: The rapid integration of AI into various industries has introduced new challenges in governance and regulation, particularly regarding the understanding of complex AI systems. A critical demand from decision-makers is the ability to explain the results of machine learning models, which is essential for fostering trust and ensuring ethical AI practices. In this paper, we develop six distinct model-agnostic metrics designed to quantify the extent to which model predictions can be explained. These metrics measure different aspects of model explainability, spanning local importance, global importance, and surrogate predictions, allowing for a comprehensive evaluation of how models generate their outputs. Furthermore, by computing our metrics, we can rank models in terms of explainability criteria such as importance concentration and consistency, prediction fluctuation, and surrogate fidelity and stability, offering a valuable tool for selecting models based not only on accuracy but also on transparency. We demonstrate the practical utility of these metrics on classification and regression tasks, and integrate these metrics into an existing Python package for public use.
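To make the "importance concentration" idea concrete, here is a minimal sketch of one plausible way to quantify it: the Gini coefficient of a model's absolute feature importances, where 0 means importance is spread evenly across features and values near 1 mean a few features dominate. This is an illustrative stand-in, not the paper's exact metric; the function name and formula are our own.

```python
import numpy as np

def importance_concentration(importances):
    """Gini coefficient of absolute feature importances.

    0.0  -> importance spread evenly across features
    ->1  -> importance concentrated in a few features
    Illustrative stand-in for a concentration-style explainability metric.
    """
    x = np.sort(np.abs(np.asarray(importances, dtype=float)))  # ascending
    n = x.size
    if x.sum() == 0:
        return 0.0
    # Standard Gini formula for sorted non-negative values
    return float((2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum()))

# Evenly spread importances -> no concentration
print(importance_concentration([0.25, 0.25, 0.25, 0.25]))  # 0.0
# One dominant feature -> high concentration
print(importance_concentration([0.97, 0.01, 0.01, 0.01]))
```

A metric of this shape is explainer-agnostic in the paper's sense: the input can be SHAP values, permutation importances, or any other attribution vector, so models can be ranked on the same scale regardless of which explainer produced the importances.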

Authors (4)
  1. Cristian Munoz (7 papers)
  2. Kleyton da Costa (3 papers)
  3. Bernardo Modenesi (5 papers)
  4. Adriano Koshiyama (18 papers)
Citations (3)