
Human-in-the-Loop Interpretability Prior (1805.11571v2)

Published 29 May 2018 in stat.ML and cs.LG

Abstract: We often desire our models to be interpretable as well as accurate. Prior work on optimizing models for interpretability has relied on easy-to-quantify proxies for interpretability, such as sparsity or the number of operations required. In this work, we optimize for interpretability by directly including humans in the optimization loop. We develop an algorithm that minimizes the number of user studies needed to find models that are both predictive and interpretable, and we demonstrate our approach on several datasets. Our human-subjects results show trends toward different proxy notions of interpretability on different datasets, which suggests that different proxies are preferred for different tasks.
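The core idea in the abstract — selecting a model that is both predictive and interpretable while querying humans as rarely as possible — can be sketched as a bandit-style search over a candidate pool. The sketch below is illustrative only and is not the paper's actual algorithm: it assumes precomputed accuracies for each candidate, treats a user study as a noisy scalar `query_human` oracle (a hypothetical stand-in), and uses a simple UCB rule to spend a limited query budget on the most promising models.

```python
import math
import random

def select_model(accuracies, query_human, budget, c=1.0):
    """Return the index of the candidate model maximizing
    accuracy + estimated human interpretability score, calling the
    expensive `query_human` oracle at most `budget` times."""
    n = len(accuracies)
    counts = [0] * n
    means = [0.0] * n
    # Initialize with one human query per model (assumes budget >= n).
    for i in range(n):
        means[i] = query_human(i)
        counts[i] = 1
    for t in range(n, budget):
        # UCB score: exploit the current estimate, but keep exploring
        # models whose interpretability has rarely been rated.
        ucb = [accuracies[i] + means[i]
               + c * math.sqrt(math.log(t + 1) / counts[i])
               for i in range(n)]
        i = max(range(n), key=lambda j: ucb[j])
        # Run one more (simulated) user study and update the running mean.
        score = query_human(i)
        counts[i] += 1
        means[i] += (score - means[i]) / counts[i]
    return max(range(n), key=lambda i: accuracies[i] + means[i])

# Toy demo: three candidate models, a noisy simulated "user study".
random.seed(0)
true_interp = [0.2, 0.9, 0.5]       # hidden ground-truth interpretability
accuracies = [0.85, 0.80, 0.83]     # precomputed predictive accuracies
oracle = lambda i: true_interp[i] + random.gauss(0, 0.05)
best = select_model(accuracies, oracle, budget=30)
print(best)  # model 1: slightly less accurate but far more interpretable
```

The design choice this illustrates is the trade-off the paper targets: because human studies dominate the cost, the loop allocates queries adaptively rather than rating every model an equal number of times.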

Authors (5)
  1. Isaac Lage (9 papers)
  2. Andrew Slavin Ross (10 papers)
  3. Been Kim (54 papers)
  4. Samuel J. Gershman (25 papers)
  5. Finale Doshi-Velez (134 papers)
Citations (120)
