Towards Benchmarking the Utility of Explanations for Model Debugging (2105.04505v1)

Published 10 May 2021 in cs.AI, cs.HC, and cs.LG

Abstract: Post-hoc explanation methods are an important class of approaches that help understand the rationale underlying a trained model's decision. But how useful are they for an end-user towards accomplishing a given task? In this vision paper, we argue the need for a benchmark to facilitate evaluations of the utility of post-hoc explanation methods. As a first step to this end, we enumerate desirable properties that such a benchmark should possess for the task of debugging text classifiers. Additionally, we highlight that such a benchmark facilitates not only assessing the effectiveness of explanations but also their efficiency.
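To make the notion of a post-hoc explanation concrete, below is a minimal illustrative sketch (not from the paper) of one common approach: leave-one-token-out (occlusion) attribution for a text classifier. The toy data, model choice, and the `occlusion_attributions` helper are all assumptions introduced for illustration; the paper itself proposes a benchmark for evaluating such methods rather than any particular explainer.

```python
# Illustrative sketch of occlusion-based post-hoc attribution for a text
# classifier. Classifier, data, and helper names are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy sentiment dataset (purely illustrative).
texts = ["great movie, loved it", "terrible plot, boring",
         "wonderful acting", "awful and dull"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

def occlusion_attributions(text, model, target_class=1):
    """Score each token by how much removing it changes the predicted probability."""
    tokens = text.split()
    base = model.predict_proba([text])[0][target_class]
    scores = []
    for i in range(len(tokens)):
        reduced = " ".join(tokens[:i] + tokens[i + 1:])
        p = model.predict_proba([reduced])[0][target_class]
        # Positive score: the token supports the target-class prediction.
        scores.append((tokens[i], base - p))
    return scores

print(occlusion_attributions("great acting but boring plot", clf))
```

In a debugging setting of the kind the authors envision, an end-user would inspect such token-level attributions to judge whether the classifier relies on spurious features; the proposed benchmark would measure how effectively and efficiently explanations support that task.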

Authors (4)
  1. Maximilian Idahl (5 papers)
  2. Lijun Lyu (6 papers)
  3. Ujwal Gadiraju (28 papers)
  4. Avishek Anand (81 papers)
Citations (17)
