
Select Wisely and Explain: Active Learning and Probabilistic Local Post-hoc Explainability (2108.06907v2)

Published 16 Aug 2021 in cs.LG

Abstract: Despite tremendous performance improvements in designing complex AI systems in data-intensive domains, the black-box nature of these systems leads to a lack of trustworthiness. Post-hoc interpretability methods explain the prediction of a black-box ML model for a single instance, and such explanations are being leveraged by domain experts to diagnose the underlying biases of these models. Despite their efficacy in providing valuable insights, existing approaches fail to deliver consistent and reliable explanations. In this paper, we propose an active learning-based technique called UnRAvEL (Uncertainty driven Robust Active Learning Based Locally Faithful Explanations), which consists of a novel acquisition function that is locally faithful and uses uncertainty-driven sampling based on the posterior distribution over the probabilistic locality using Gaussian process regression (GPR). We present a theoretical analysis of UnRAvEL by treating it as a local optimizer and analyzing its regret in terms of instantaneous regrets over a global optimizer. We demonstrate the efficacy of the local samples generated by UnRAvEL by incorporating different kernels, such as the Matern and linear kernels, in GPR. Through a series of experiments, we show that UnRAvEL outperforms the baselines with respect to stability and local fidelity on several real-world models and datasets. We show that UnRAvEL is an efficient surrogate dataset generator by deriving importance scores on this surrogate dataset using sparse linear models. We also showcase the sample efficiency and flexibility of the developed framework on the ImageNet dataset using a pre-trained ResNet model.
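To make the described pipeline concrete, below is a minimal, illustrative sketch assuming scikit-learn: a Gaussian process regressor with a Matern kernel models the black box around the instance, candidate perturbations are selected by a generic uncertainty-driven rule (predictive standard deviation weighted by proximity to the instance), and a sparse linear model fit to the collected samples yields importance scores. The acquisition rule here is a stand-in, not UnRAvEL's actual acquisition function, and the names explain_instance, black_box, and sigma are hypothetical.

# A minimal, illustrative sketch of uncertainty-driven local sampling with GPR
# followed by a sparse linear surrogate for importance scores. The acquisition
# rule (predictive std weighted by locality) is a generic stand-in, not the
# exact UnRAvEL objective; names such as `black_box` are placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.linear_model import Lasso

def explain_instance(black_box, x0, n_queries=30, n_candidates=500, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    d = x0.shape[0]
    # Start the surrogate dataset from the instance being explained.
    X, y = [x0.copy()], [black_box(x0)]
    gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_queries):
        gpr.fit(np.asarray(X), np.asarray(y))
        # Candidate perturbations drawn from a Gaussian locality around x0.
        cands = x0 + sigma * rng.standard_normal((n_candidates, d))
        _, std = gpr.predict(cands, return_std=True)
        # Acquisition: prefer uncertain candidates that stay close to x0.
        weight = np.exp(-np.linalg.norm(cands - x0, axis=1) ** 2 / (2 * sigma ** 2))
        pick = cands[np.argmax(std * weight)]
        X.append(pick)
        y.append(black_box(pick))
    # Sparse linear surrogate on the local samples gives per-feature importance scores.
    surrogate = Lasso(alpha=0.01).fit(np.asarray(X) - x0, np.asarray(y))
    return surrogate.coef_

# Example: explain a toy black-box score at a point of interest.
black_box = lambda x: x[0] ** 2 + 3 * x[1]  # stand-in for a trained model's output
print(explain_instance(black_box, np.array([1.0, -2.0, 0.5])))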

Citations (12)


