An Evaluation of the Human-Interpretability of Explanation (1902.00006v2)

Published 31 Jan 2019 in cs.LG and stat.ML

Abstract: Recent years have seen a boom in interest in machine learning systems that can provide a human-understandable rationale for their predictions or decisions. However, exactly what kinds of explanation are truly human-interpretable remains poorly understood. This work advances our understanding of what makes explanations interpretable under three specific tasks that users may perform with machine learning systems: simulation of the response, verification of a suggested response, and determining whether the correctness of a suggested response changes under a change to the inputs. Through carefully controlled human-subject experiments, we identify regularizers that can be used to optimize for the interpretability of machine learning systems. Our results show that the type of complexity matters: cognitive chunks (newly defined concepts) affect performance more than variable repetitions, and these trends are consistent across tasks and domains. This suggests that there may exist some common design principles for explanation systems.
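
The abstract's central idea, an objective that trades predictive accuracy against explanation complexity, where newly defined concepts (cognitive chunks) are penalized more heavily than variable repetitions, can be sketched as follows. This is a minimal illustration under assumed names and weights (`Explanation`, `interpretability_loss`, `lam_chunks`, `lam_reps`), not the authors' actual regularizers or experimental setup.

```python
# Hypothetical sketch of a complexity-penalized explanation selector,
# reflecting the paper's finding that cognitive chunks (newly defined
# concepts) hurt human performance more than variable repetitions.
# All names and weights below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Explanation:
    rules: list[str]        # human-readable rules shown to the user
    accuracy: float         # predictive accuracy of the underlying model
    num_chunks: int         # newly defined intermediate concepts
    num_repetitions: int    # repeated uses of input variables

def interpretability_loss(e: Explanation,
                          lam_chunks: float = 1.0,
                          lam_reps: float = 0.2) -> float:
    # Chunks are weighted more heavily than repetitions (assumed
    # weights), so explanations introducing new concepts pay more.
    return -e.accuracy + lam_chunks * e.num_chunks + lam_reps * e.num_repetitions

def pick_explanation(candidates: list[Explanation]) -> Explanation:
    # Choose the candidate with the best accuracy/complexity trade-off.
    return min(candidates, key=interpretability_loss)
```

In this sketch the regularizer is just a weighted count of complexity features; the paper's contribution is identifying, via human-subject experiments, which such features (chunk count versus repetition count) most affect task performance.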

Authors (7)
  1. Isaac Lage (9 papers)
  2. Emily Chen (16 papers)
  3. Jeffrey He (4 papers)
  4. Menaka Narayanan (2 papers)
  5. Been Kim (54 papers)
  6. Sam Gershman (4 papers)
  7. Finale Doshi-Velez (134 papers)
Citations (141)