
Logic Traps in Evaluating Attribution Scores (2109.05463v2)

Published 12 Sep 2021 in cs.LG, cs.AI, and cs.CL

Abstract: Modern deep learning models are notoriously opaque, which has motivated the development of methods for interpreting how deep models predict. This goal is usually approached with attribution methods, which assess the influence of individual features on model predictions. The central criterion for evaluating an attribution method is how accurately it reflects the actual reasoning process of the model (faithfulness). However, since the reasoning process of deep models is inaccessible, researchers have designed various evaluation methods to support their arguments. Crucial logic traps in these evaluation methods are ignored in most works, causing inaccurate evaluation and unfair comparison. This paper systematically reviews existing methods for evaluating attribution scores and summarizes the logic traps they contain. We further conduct experiments to demonstrate the existence of each logic trap. Through both theoretical and experimental analysis, we hope to draw attention to the inaccurate evaluation of attribution scores. Moreover, we suggest shifting effort away from improving performance under unreliable evaluation systems and toward reducing the impact of the identified logic traps.
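To make the notion of an attribution score concrete, here is a minimal sketch of occlusion-based attribution, one simple method of the kind the abstract describes: each feature's score is the change in model output when that feature is replaced with a baseline value. The toy linear "model" and the input values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical toy "model": a fixed linear scorer standing in for a deep model.
WEIGHTS = [0.5, -2.0, 1.0]

def model(x):
    """Return a scalar prediction for feature vector x."""
    return sum(w * xi for w, xi in zip(WEIGHTS, x))

def occlusion_attribution(x, baseline=None):
    """Score each feature by replacing it with a baseline value and
    measuring how much the model's output drops (occlusion attribution)."""
    if baseline is None:
        baseline = [0.0] * len(x)
    full = model(x)
    scores = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline[i]        # occlude feature i
        scores.append(full - model(occluded))
    return scores

print(occlusion_attribution([1.0, 1.0, 2.0]))  # [0.5, -2.0, 2.0]
```

Note that faithfulness evaluation, the paper's subject, asks whether scores like these truly mirror the model's internal reasoning, a question that a toy linear model sidesteps but deep models do not.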

Authors (6)
  1. Yiming Ju (11 papers)
  2. Yuanzhe Zhang (20 papers)
  3. Zhao Yang (75 papers)
  4. Zhongtao Jiang (6 papers)
  5. Kang Liu (207 papers)
  6. Jun Zhao (469 papers)
Citations (17)
