Teaching Meaningful Explanations (1805.11648v2)

Published 29 May 2018 in cs.AI

Abstract: The adoption of machine learning in high-stakes applications such as healthcare and law has lagged in part because predictions are not accompanied by explanations comprehensible to the domain user, who often holds the ultimate responsibility for decisions and outcomes. In this paper, we propose an approach to generate such explanations in which training data is augmented to include, in addition to features and labels, explanations elicited from domain users. A joint model is then learned to produce both labels and explanations from the input features. This simple idea ensures that explanations are tailored to the complexity expectations and domain knowledge of the consumer. Evaluation spans multiple modeling techniques on a game dataset, a (visual) aesthetics dataset, a chemical odor dataset and a Melanoma dataset showing that our approach is generalizable across domains and algorithms. Results demonstrate that meaningful explanations can be reliably taught to machine learning algorithms, and in some cases, also improve modeling accuracy.
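The core idea in the abstract, augmenting training data with user-elicited explanations and learning a joint model that outputs both a label and an explanation, can be illustrated with a minimal multi-task sketch. The sketch below is not the paper's implementation; it assumes explanations are encoded as a discrete set of explanation codes, and all names (e.g., JointLabelExplanationModel) are illustrative.

```python
import torch
import torch.nn as nn

class JointLabelExplanationModel(nn.Module):
    """Shared encoder with two heads: one predicts the task label,
    the other predicts a user-elicited explanation code."""
    def __init__(self, n_features, n_labels, n_explanations, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.label_head = nn.Linear(hidden, n_labels)
        self.explanation_head = nn.Linear(hidden, n_explanations)

    def forward(self, x):
        h = self.encoder(x)
        return self.label_head(h), self.explanation_head(h)

# Toy training step: the loss combines label and explanation cross-entropy,
# so the model is "taught" to reproduce the explanations users provided.
model = JointLabelExplanationModel(n_features=20, n_labels=2, n_explanations=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(32, 20)                 # input features
y = torch.randint(0, 2, (32,))          # task labels
e = torch.randint(0, 5, (32,))          # explanation codes elicited from users

logits_y, logits_e = model(x)
loss = ce(logits_y, y) + ce(logits_e, e)  # joint objective over labels and explanations
opt.zero_grad()
loss.backward()
opt.step()
```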

Authors (8)
  1. Noel C. F. Codella (12 papers)
  2. Michael Hind (25 papers)
  3. Karthikeyan Natesan Ramamurthy (68 papers)
  4. Murray Campbell (27 papers)
  5. Amit Dhurandhar (62 papers)
  6. Kush R. Varshney (121 papers)
  7. Dennis Wei (64 papers)
  8. Aleksandra Mojsilovic (20 papers)
Citations (7)
