On the overlooked issue of defining explanation objectives for local-surrogate explainers (2106.05810v1)

Published 10 Jun 2021 in cs.LG, cs.AI, and stat.ML

Abstract: Local surrogate approaches for explaining machine learning model predictions have appealing properties, such as being model-agnostic and flexible in their modelling. Several methods exist that fit this description and share this goal. However, despite their shared overall procedure, they set out different objectives, extract different information from the black box, and consequently produce diverse explanations that are, in general, incomparable. In this work we review the similarities and differences amongst multiple methods, with a particular focus on what information they extract from the model, as this has a large impact on the output: the explanation. We discuss the implications that this lack of agreement and clarity amongst the methods' objectives has for the research and practice of explainability.
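To make the shared procedure concrete, here is a minimal LIME-style sketch in Python (assuming numpy and scikit-learn). The function name, Gaussian neighbourhood sampling, and exponential proximity kernel are illustrative assumptions rather than any single method's specification; the step that queries the black box is exactly where the surveyed methods diverge.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate_explanation(black_box_predict, x, n_samples=500,
                                sigma=1.0, kernel_width=0.75, seed=0):
    """Generic local-surrogate sketch: sample a neighbourhood around x,
    query the black box, and fit a weighted linear model whose
    coefficients serve as the local explanation."""
    rng = np.random.default_rng(seed)
    # 1. Sample a neighbourhood around the instance being explained
    #    (here: Gaussian perturbations; methods differ on this choice).
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.shape[0]))
    # 2. Extract information from the black box. Here we request a
    #    scalar output (e.g. a class probability) per sample; other
    #    methods query labels or decision values instead, which is one
    #    source of the divergence the paper discusses.
    y = black_box_predict(Z)
    # 3. Weight samples by proximity to x with an exponential kernel,
    #    so the surrogate is faithful locally rather than globally.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # 4. Fit an interpretable surrogate on the weighted samples.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=w)
    return surrogate.coef_  # per-feature local importance scores
```

Varying what step 2 requests (labels, probabilities, or decision values), how the neighbourhood in step 1 is sampled, and how proximity is weighted in step 3 yields the incomparable explanations the abstract describes, even when the surrounding procedure is identical.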

Authors (5)
  1. Rafael Poyiadzi (14 papers)
  2. Xavier Renard (14 papers)
  3. Thibault Laugel (18 papers)
  4. Raul Santos-Rodriguez (70 papers)
  5. Marcin Detyniecki (41 papers)
Citations (6)