
Accelerating Shapley Explanation via Contributive Cooperator Selection (2206.08529v2)

Published 17 Jun 2022 in cs.LG, cs.AI, and cs.GT

Abstract: Although the Shapley value provides an effective explanation for a DNN model prediction, its computation relies on enumerating all possible coalitions of input features, which leads to exponentially growing complexity. To address this problem, we propose SHEAR, a novel method that significantly accelerates Shapley explanation for DNN models by involving only a few coalitions of input features in the computation. The feature coalitions are selected according to our proposed Shapley chain rule so as to minimize the absolute error from the ground-truth Shapley values, making the computation both efficient and accurate. To demonstrate its effectiveness, we comprehensively evaluate SHEAR across multiple metrics, including the absolute error from the ground-truth Shapley values, the faithfulness of the explanations, and running speed. The experimental results indicate that SHEAR consistently outperforms state-of-the-art baseline methods across the different evaluation metrics, demonstrating its potential in real-world applications where computational resources are limited.
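
For intuition about the coalition-based computation that SHEAR accelerates, the sketch below estimates Shapley values by sampling random feature permutations and masking absent features with a baseline input. This is a generic Monte Carlo estimator, not the paper's SHEAR algorithm or its Shapley chain rule for coalition selection; the masking scheme, function names, and toy model are illustrative assumptions.

```python
import random

import numpy as np


def sampled_shapley(model, x, baseline, num_permutations=32, seed=0):
    """Monte Carlo Shapley estimate via random feature permutations.

    Absent features are replaced by `baseline` values; each permutation
    contributes one marginal-contribution sample per feature. Exact Shapley
    values would require enumerating all 2^d coalitions, which is the cost
    SHEAR avoids by selecting only a few informative coalitions.
    """
    rng = random.Random(seed)
    d = len(x)
    phi = np.zeros(d)

    for _ in range(num_permutations):
        order = list(range(d))
        rng.shuffle(order)
        masked = np.array(baseline, dtype=float)  # start from the empty coalition
        prev = float(model(masked))
        for i in order:
            masked[i] = x[i]                      # add feature i to the coalition
            cur = float(model(masked))
            phi[i] += cur - prev                  # marginal contribution of feature i
            prev = cur

    return phi / num_permutations


if __name__ == "__main__":
    # Toy linear "model": the estimate should recover w * (x - baseline).
    w = np.array([1.0, -2.0, 0.5])
    model = lambda v: float(w @ v)
    x = np.array([1.0, 1.0, 1.0])
    baseline = np.zeros(3)
    print(sampled_shapley(model, x, baseline))    # approximately [1.0, -2.0, 0.5]
```

For a linear model the permutation average is exact, so this toy case can serve as a sanity check; for a DNN, the estimation error shrinks only as more coalitions are evaluated, which is the trade-off the paper's coalition selection targets.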

Authors (8)
  1. Guanchu Wang (33 papers)
  2. Yu-Neng Chuang (28 papers)
  3. Mengnan Du (90 papers)
  4. Fan Yang (878 papers)
  5. Quan Zhou (119 papers)
  6. Pushkar Tripathi (8 papers)
  7. Xuanting Cai (13 papers)
  8. Xia Hu (186 papers)
Citations (17)

