
The Price of Interpretability (1907.03419v1)

Published 8 Jul 2019 in cs.LG and stat.ML

Abstract: When quantitative models are used to support decision-making on complex and important topics, understanding a model's "reasoning" can increase trust in its predictions, expose hidden biases, or reduce vulnerability to adversarial attacks. However, the concept of interpretability remains loosely defined and application-specific. In this paper, we introduce a mathematical framework in which machine learning models are constructed in a sequence of interpretable steps. We show that for a variety of models, a natural choice of interpretable steps recovers standard interpretability proxies (e.g., sparsity in linear models). We then generalize these proxies to yield a parametrized family of consistent measures of model interpretability. This formal definition allows us to quantify the "price" of interpretability, i.e., the tradeoff with predictive accuracy. We demonstrate practical algorithms to apply our framework on real and synthetic datasets.
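To make the "price of interpretability" idea concrete, the sketch below uses the sparsity proxy for linear models mentioned in the abstract: it compares a dense least-squares fit against greedily selected sparse fits and reports the accuracy gap at each sparsity level. This is a minimal illustration under assumed synthetic data and a simple forward-selection heuristic, not the paper's framework or algorithms.

```python
# Illustrative sketch only: sparsity as an interpretability proxy for linear
# models, with the extra error of the sparse fit (relative to the dense fit)
# standing in for a crude "price" of interpretability. The greedy heuristic
# and all data here are assumptions, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)
n, p, k_true = 200, 10, 3

# Synthetic data: only the first k_true features actually matter.
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:k_true] = [3.0, -2.0, 1.5]
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def lstsq_mse(X_sub, y):
    """Least-squares fit on a feature subset; return in-sample MSE."""
    beta, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
    return np.mean((y - X_sub @ beta) ** 2)

full_mse = lstsq_mse(X, y)  # accuracy of the dense (less interpretable) model

# Greedy forward selection: at each sparsity level k, report the accuracy
# given up by restricting to k features.
selected = []
remaining = list(range(p))
for k in range(1, p + 1):
    best_j = min(remaining, key=lambda j: lstsq_mse(X[:, selected + [j]], y))
    selected.append(best_j)
    remaining.remove(best_j)
    sparse_mse = lstsq_mse(X[:, selected], y)
    print(f"k={k:2d}  MSE={sparse_mse:.4f}  price vs dense={sparse_mse - full_mse:.4f}")
```

Running this prints how the accuracy gap shrinks as more features are allowed; the paper's contribution is to formalize and generalize this kind of tradeoff beyond the sparsity proxy.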

Authors (4)
  1. Dimitris Bertsimas (96 papers)
  2. Arthur Delarue (9 papers)
  3. Patrick Jaillet (100 papers)
  4. Sebastien Martin (7 papers)
Citations (31)