
Explaining the Predictions of Any Image Classifier via Decision Trees (1911.01058v2)

Published 4 Nov 2019 in cs.LG, cs.AI, and stat.ML

Abstract: Despite their outstanding contribution to the progress of AI, deep learning models remain largely black boxes that offer little explainability for their reasoning process or prediction results. Explainability is not only a gateway between AI and society but also a powerful tool for detecting flaws in the model and biases in the data. Local Interpretable Model-agnostic Explanations (LIME) is a recent approach that uses an interpretable model to form a local explanation for an individual prediction. The current implementation of LIME adopts linear regression as its interpretable function. However, because they are so restricted and tend to over-simplify relationships, linear models fail when nonlinear associations and interactions exist between features and prediction results. This paper implements a decision-tree-based LIME approach (Tree-LIME), which uses a decision tree model to form an interpretable representation that is locally faithful to the original model. Tree-LIME can capture nonlinear interactions among features and produces plausible explanations. Various experiments show that the Tree-LIME explanations of multiple black-box models achieve more reliable performance in terms of understandability, fidelity, and efficiency.
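To make the core idea concrete, the sketch below shows what swapping LIME's linear surrogate for a shallow decision tree can look like. This is not the authors' implementation: it uses a tabular dataset and Gaussian perturbations for simplicity (the paper targets image classifiers, where perturbations would toggle superpixels), and all model choices, noise scales, and the proximity kernel are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): explain one prediction of a
# black-box model by fitting a local decision-tree surrogate, in the
# spirit of Tree-LIME. Dataset, noise scale, and kernel are assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier      # stand-in black box
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                   # instance to explain
rng = np.random.default_rng(0)

# Perturb the instance with Gaussian noise around it (an image version
# would instead turn superpixels on and off).
Z = x0 + rng.normal(scale=X.std(axis=0) * 0.5, size=(1000, X.shape[1]))
f_Z = black_box.predict_proba(Z)[:, 1]      # black-box outputs on perturbations

# Weight perturbed samples by proximity to x0, as LIME does, so the
# surrogate is only asked to be faithful locally.
dist = np.linalg.norm((Z - x0) / X.std(axis=0), axis=1)
weights = np.exp(-(dist ** 2) / 2.0)

# The interpretable surrogate: a shallow decision tree instead of LIME's
# default linear model, so local nonlinearities and feature interactions
# can be represented.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(Z, f_Z, sample_weight=weights)

# The printed tree is the human-readable local explanation.
print(export_text(surrogate))
```

The design point the paper argues for is visible here: the tree's split structure can express "if feature A is high and feature B is low, the prediction rises" patterns that a weighted linear model cannot, while remaining small enough to read.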

Authors (3)
  1. Sheng Shi (8 papers)
  2. Xinfeng Zhang (44 papers)
  3. Wei Fan (160 papers)
Citations (8)