
Defining Locality for Surrogates in Post-hoc Interpretablity (1806.07498v1)

Published 19 Jun 2018 in cs.LG, cs.AI, and stat.ML

Abstract: Local surrogate models, which approximate the local decision boundary of a black-box classifier, are one approach to generating explanations for the rationale behind an individual prediction made by the black-box. This paper highlights the importance of defining the right locality, i.e. the neighborhood on which a local surrogate is trained, in order to accurately approximate the local black-box decision boundary. Unfortunately, as shown in this paper, this issue is not merely a matter of parameter choice or sampling distribution: it has a major impact on the relevance and quality of the approximation of the local black-box decision boundary, and thus on the meaning and accuracy of the generated explanation. To overcome the identified problems, quantified with an adapted measure and procedure, we propose to generate surrogate-based explanations for individual predictions from a sampling centered on a particular region of the decision boundary, relevant for the prediction to be explained, rather than on the prediction itself as is classically done. We evaluate the novel approach against state-of-the-art methods and a straightforward improvement thereof on four UCI datasets.
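The contrast the abstract draws can be sketched in code: fit a linear surrogate on points sampled around the instance itself (the classical choice) versus around a nearby point on the black-box decision boundary. This is a minimal illustration of the general idea only, not the paper's procedure; the black-box function, the Gaussian sampling, the least-squares surrogate, and the hand-picked boundary point are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box classifier standing in for any opaque model
# (the quadratic decision rule is illustrative, not from the paper).
def black_box(X):
    return (X[:, 0] ** 2 + X[:, 1] > 1.0).astype(int)

def fit_local_surrogate(center, sigma=0.3, n=500):
    """Fit a linear surrogate on points drawn around `center`.

    The Gaussian sample defines the 'locality' the paper discusses;
    least squares is a stand-in for whatever interpretable model is used.
    """
    X = center + sigma * rng.standard_normal((n, center.size))
    y = black_box(X)
    A = np.hstack([X, np.ones((n, 1))])  # add intercept column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    # Local fidelity: agreement between surrogate and black-box labels.
    fidelity = float(np.mean((A @ w > 0.5) == (y == 1)))
    return w, fidelity

x = np.array([0.0, 0.5])  # instance whose prediction we want to explain

# Classical choice: sample around the instance itself.
_, fid_pred = fit_local_surrogate(x)

# The paper's idea, sketched: sample around a nearby point on the
# black-box decision boundary instead. Here that point is found by
# inspection; the paper proposes a procedure to locate it.
x_boundary = np.array([0.0, 1.0])
_, fid_bnd = fit_local_surrogate(x_boundary)

print(f"fidelity around instance: {fid_pred:.2f}, "
      f"around boundary: {fid_bnd:.2f}")
```

Comparing the two fidelities on a given black-box makes concrete why the choice of neighborhood, and not just its size, matters for the quality of the surrogate.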

Authors (5)
  1. Thibault Laugel (18 papers)
  2. Xavier Renard (14 papers)
  3. Marie-Jeanne Lesot (22 papers)
  4. Christophe Marsala (9 papers)
  5. Marcin Detyniecki (41 papers)
Citations (72)
