
From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence (2104.13299v2)

Published 27 Apr 2021 in cs.AI and cs.LG

Abstract: We take inspiration from the study of human explanation to inform the design and evaluation of interpretability methods in machine learning. First, we survey the literature on human explanation in philosophy, cognitive science, and the social sciences, and propose a list of design principles for machine-generated explanations that are meaningful to humans. Using the concept of weight of evidence from information theory, we develop a method for generating explanations that adhere to these principles. We show that this method can be adapted to handle high-dimensional, multi-class settings, yielding a flexible framework for generating explanations. We demonstrate that these explanations can be estimated accurately from finite samples and are robust to small perturbations of the inputs. We also evaluate our method through a qualitative user study with machine learning practitioners, where we observe that the resulting explanations are usable despite some participants struggling with background concepts like prior class probabilities. Finally, we conclude by surfacing design implications for interpretability tools in general.
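The information-theoretic quantity the abstract refers to is Good's weight of evidence: the log likelihood ratio of the evidence under a hypothesis versus its negation, woe(h : e) = log[P(e|h) / P(e|¬h)], which also equals the log posterior odds minus the log prior odds. The sketch below illustrates that standard definition only; it is not the paper's implementation, and the probabilities are made-up numbers.

```python
import math

def weight_of_evidence(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Good's weight of evidence in favor of hypothesis h given evidence e:
    woe(h : e) = log( P(e | h) / P(e | not h) ).

    Positive values mean e supports h; negative values mean e counts
    against h; zero means e is uninformative about h.
    """
    return math.log(p_e_given_h / p_e_given_not_h)

# Hypothetical example: the evidence is three times as likely under h
# as under its negation, so the weight of evidence is log(3) ≈ 1.0986 nats.
woe = weight_of_evidence(0.6, 0.2)
print(round(woe, 4))
```

Additivity is what makes the quantity attractive for explanation: the weights of evidence of conditionally independent pieces of evidence simply sum, so each feature's contribution to the posterior log-odds can be reported separately.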

Authors (5)
  1. David Alvarez-Melis (48 papers)
  2. Harmanpreet Kaur (3 papers)
  3. Hanna Wallach (48 papers)
  4. Jennifer Wortman Vaughan (52 papers)
  5. Hal Daumé III (76 papers)
Citations (23)
