
Shapley variable importance clouds for interpretable machine learning (2110.02484v1)

Published 6 Oct 2021 in cs.LG and cs.HC

Abstract: Interpretable machine learning has focused on explaining final models that optimize performance. The current state of the art is Shapley additive explanations (SHAP), which locally explain the impact of variables on individual predictions and have recently been extended to global assessments across the dataset. Recently, Dong and Rudin proposed extending the investigation to models from the same class as the final model that are "good enough", and identified a previous overclaim of variable importance based on a single model. However, their method does not directly integrate with existing Shapley-based interpretations. We close this gap by proposing a Shapley variable importance cloud that pools information across good models to avoid biased assessments in SHAP analyses of final models, and we communicate the findings via novel visualizations. We demonstrate the additional insights gained compared to conventional explanations and Dong and Rudin's method using criminal justice and electronic medical records data.
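The core idea can be illustrated with a short sketch: fit several candidate models, keep those whose performance is close enough to the best ("good" models, in the spirit of Dong and Rudin's Rashomon set), compute per-variable SHAP importances for each, and pool them so that the spread across models forms the "cloud" around each importance value. This is not the authors' reference implementation; the logistic models, the 5% tolerance, and the mean-absolute-SHAP summary below are illustrative assumptions.

```python
# Minimal sketch of pooling SHAP importances across "good" models.
# Assumes scikit-learn and the `shap` package; all modeling choices
# (logistic regression, 5% loss tolerance) are illustrative.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit candidate models; keep those whose test error is within 5% of
# the best ("good enough" models rather than a single final model).
candidates = [LogisticRegression(C=c, max_iter=1000).fit(X_tr, y_tr)
              for c in (0.01, 0.1, 1.0, 10.0)]
errors = np.array([1 - m.score(X_te, y_te) for m in candidates])
good = [m for m, e in zip(candidates, errors)
        if e <= errors.min() * 1.05]

# For each good model, summarize variable importance as mean |SHAP|,
# then pool: the spread across models is the importance "cloud".
importances = []
for m in good:
    explainer = shap.Explainer(m, X_tr)      # dispatches to a linear explainer
    sv = explainer(X_te).values              # shape: (n_samples, n_features)
    importances.append(np.abs(sv).mean(axis=0))
importances = np.array(importances)          # (n_good_models, n_features)

print("pooled importance (mean):", importances.mean(axis=0))
print("cloud width (std across models):", importances.std(axis=0))
```

A single-model SHAP analysis would report only one row of `importances`; pooling exposes variables whose apparent importance is an artifact of the particular final model chosen.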

Authors (7)
  1. Yilin Ning (24 papers)
  2. Marcus Eng Hock Ong (21 papers)
  3. Bibhas Chakraborty (30 papers)
  4. Benjamin Alan Goldstein (4 papers)
  5. Daniel Shu Wei Ting (17 papers)
  6. Roger Vaughan (4 papers)
  7. Nan Liu (140 papers)
Citations (56)
