
Considerations When Learning Additive Explanations for Black-Box Models (1801.08640v4)

Published 26 Jan 2018 in stat.ML and cs.LG

Abstract: Many methods to explain black-box models, whether local or global, are additive. In this paper, we study global additive explanations for non-additive models, focusing on four explanation methods: partial dependence, Shapley explanations adapted to a global setting, distilled additive explanations, and gradient-based explanations. We show that different explanation methods characterize non-additive components in a black-box model's prediction function in different ways. We use the concepts of main and total effects to anchor additive explanations, and quantitatively evaluate additive and non-additive explanations. Even though distilled explanations are generally the most accurate additive explanations, non-additive explanations such as tree explanations that explicitly model non-additive components tend to be even more accurate. Despite this, our user study showed that machine learning practitioners were better able to leverage additive explanations for various tasks. These considerations should be taken into account when considering which explanation to trust and use to explain black-box models.
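To make the abstract's central point concrete, here is a minimal sketch (not from the paper) of partial dependence, one of the four additive explanation methods studied. The toy model `f` is an assumed pure-interaction function chosen to show how an additive explanation can mischaracterize non-additive components: when features are centered and independent, the partial dependence curve of either feature is flat, hiding the interaction entirely.

```python
import numpy as np

def f(X):
    """Toy non-additive black box: a pure interaction between features 0 and 1."""
    return X[:, 0] * X[:, 1]

def partial_dependence(model, X, j, grid):
    """PD_j(v): average model output over the data with feature j fixed to v."""
    pd = []
    for v in grid:
        Xv = X.copy()
        Xv[:, j] = v          # clamp feature j to the grid value
        pd.append(model(Xv).mean())
    return np.array(pd)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))          # centered, independent features
grid = np.linspace(-2.0, 2.0, 5)
pd0 = partial_dependence(f, X, 0, grid)
# Since f = x0 * x1 and E[x1] is approximately 0 here, pd0 is near-flat:
# the additive summary attributes almost no effect to x0 despite the interaction.
```

This is the kind of mismatch the paper quantifies: different additive methods absorb such interaction terms differently, while non-additive (e.g., tree) explanations can model them explicitly.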

Authors (5)
  1. Sarah Tan (21 papers)
  2. Giles Hooker (59 papers)
  3. Paul Koch (8 papers)
  4. Albert Gordo (18 papers)
  5. Rich Caruana (42 papers)
Citations (17)