Why Interpretability in Machine Learning? An Answer Using Distributed Detection and Data Fusion Theory (1806.09710v1)

Published 25 Jun 2018 in stat.ML, cs.IT, cs.LG, and math.IT

Abstract: As artificial intelligence is increasingly affecting all parts of society and life, there is growing recognition that human interpretability of machine learning models is important. It is often argued that accuracy or other similar generalization performance metrics must be sacrificed in order to gain interpretability. Such arguments, however, fail to acknowledge that the overall decision-making system is composed of two entities: the learned model and a human who fuses together model outputs with his or her own information. As such, the relevant performance criteria should be for the entire system, not just for the machine learning component. In this work, we characterize the performance of such two-node tandem data fusion systems using the theory of distributed detection. In doing so, we work in the population setting and model interpretable learned models as multi-level quantizers. We prove that under our abstraction, the overall system of a human with an interpretable classifier outperforms one with a black box classifier.
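The abstract's core claim — a human who fuses a finely quantized (interpretable) model output with their own evidence can outperform a human who fuses a one-bit (black-box) decision — can be illustrated with a small Monte Carlo sketch. The Gaussian observation model, quantile-based quantizer, and MAP fusion rule below are illustrative assumptions chosen for simplicity, not the paper's construction, which works in the population setting of distributed detection theory.

```python
import numpy as np

# Toy two-node tandem fusion system (illustrative, not the paper's model):
# hypothesis H ~ Bernoulli(1/2); the machine sees x_m ~ N(H, 1) and the
# human sees x_h ~ N(H, 1), conditionally independent given H.
rng = np.random.default_rng(0)
n = 200_000
H = rng.integers(0, 2, n)
x_m = H + rng.normal(size=n)
x_h = H + rng.normal(size=n)

# For a unit-variance Gaussian mean shift, the log-likelihood ratio is
# log p(x | H=1) / p(x | H=0) = x - 1/2.
llr_m = x_m - 0.5
llr_h = x_h - 0.5

def system_error(levels):
    """Quantize the machine's LLR into `levels` cells (the interpretable
    model as a multi-level quantizer), replace each cell by its empirical
    LLR, fuse with the human's own LLR, and decide by the MAP rule."""
    edges = np.quantile(llr_m, np.linspace(0, 1, levels + 1)[1:-1])
    cell = np.digitize(llr_m, edges)
    cell_llr = np.empty(levels)
    for c in range(levels):
        p1 = H[cell == c].mean()  # empirical P(H=1 | quantizer cell c)
        cell_llr[c] = np.log((p1 + 1e-9) / (1 - p1 + 1e-9))
    decision = (cell_llr[cell] + llr_h) > 0
    return (decision != H).mean()

print("human + black box (2-level quantizer):     %.4f" % system_error(2))
print("human + interpretable (8-level quantizer): %.4f" % system_error(8))
```

With a large sample, the 8-level system typically achieves a lower overall error probability than the 2-level one, since the finer quantizer preserves more of the machine's evidence for the human to fuse — mirroring, in this toy setting, the paper's qualitative conclusion.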

Authors (5)
  1. Kush R. Varshney (121 papers)
  2. Prashant Khanduri (29 papers)
  3. Pranay Sharma (26 papers)
  4. Shan Zhang (84 papers)
  5. Pramod K. Varshney (135 papers)
Citations (23)
