Evaluating Explanations: How much do explanations from the teacher aid students? (2012.00893v2)

Published 1 Dec 2020 in cs.CL and cs.LG

Abstract: While many methods purport to explain predictions by highlighting salient features, what aims these explanations serve and how they ought to be evaluated often go unstated. In this work, we introduce a framework to quantify the value of explanations via the accuracy gains that they confer on a student model trained to simulate a teacher model. Crucially, the explanations are available to the student during training, but are not available at test time. Compared to prior proposals, our approach is less easily gamed, enabling principled, automatic, model-agnostic evaluation of attributions. Using our framework, we compare numerous attribution methods for text classification and question answering, and observe quantitative differences that are consistent (to a moderate to high degree) across different student model architectures and learning strategies.
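The protocol in the abstract — a student trained to simulate a teacher, with explanations available only at training time — can be illustrated with a toy sketch. This is not the paper's implementation; the keyword-labeling teacher, the co-occurrence-counting student, and all names below are hypothetical stand-ins chosen to make the idea runnable: a salient-token "explanation" focuses the student's training signal, and the value of the attribution method is measured as the student's simulation accuracy on unexplained test inputs.

```python
import random

# Hypothetical toy setting: a "teacher" labels sentences by a hidden keyword
# rule, and an attribution method highlights the salient token. A "student"
# learns token-label co-occurrence counts; with explanations it counts only
# the highlighted token. Explanations are used during training only.

random.seed(0)
KEYWORDS = {"great": 1, "awful": 0}
FILLER = ["the", "movie", "was", "quite", "really", "plot"]

def make_example():
    kw = random.choice(list(KEYWORDS))
    tokens = random.sample(FILLER, 4) + [kw]
    random.shuffle(tokens)
    return tokens, KEYWORDS[kw], kw  # (input, teacher label, explanation)

def train_student(n_examples, use_explanations):
    counts = {}  # token -> [count with label 0, count with label 1]
    for _ in range(n_examples):
        tokens, label, salient = make_example()
        focus = [salient] if use_explanations else tokens
        for tok in focus:
            counts.setdefault(tok, [0, 0])[label] += 1
    return counts

def predict(counts, tokens):
    # Vote by the label-count difference of each observed token.
    score = sum(counts.get(t, [0, 0])[1] - counts.get(t, [0, 0])[0]
                for t in tokens)
    return int(score > 0)

def simulation_accuracy(counts, n_test=200):
    # Explanations are withheld at test time, as in the paper's framework.
    correct = 0
    for _ in range(n_test):
        tokens, label, _ = make_example()
        correct += predict(counts, tokens) == label
    return correct / n_test

student_expl = train_student(50, use_explanations=True)
student_plain = train_student(50, use_explanations=False)
print(simulation_accuracy(student_expl), simulation_accuracy(student_plain))
```

In this toy, the explanation-guided student ignores filler tokens entirely, so its simulation accuracy is at least as high as the unguided student's; the gap between the two accuracies is the "value" the framework assigns to the attribution method.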

Authors (8)
  1. Danish Pruthi (28 papers)
  2. Rachit Bansal (9 papers)
  3. Bhuwan Dhingra (66 papers)
  4. Livio Baldini Soares (18 papers)
  5. Michael Collins (46 papers)
  6. Graham Neubig (342 papers)
  7. William W. Cohen (79 papers)
  8. Zachary C. Lipton (137 papers)
Citations (102)