Explaining individual predictions when features are dependent: More accurate approximations to Shapley values (1903.10464v3)

Published 25 Mar 2019 in stat.ML, cs.LG, and stat.ME

Abstract: Explaining complex or seemingly simple machine learning models is an important practical problem. We want to explain individual predictions from a complex machine learning model by learning simple, interpretable explanations. Shapley values is a game theoretic concept that can be used for this purpose. The Shapley value framework has a series of desirable theoretical properties, and can in principle handle any predictive model. Kernel SHAP is a computationally efficient approximation to Shapley values in higher dimensions. Like several other existing methods, this approach assumes that the features are independent, which may give very wrong explanations. This is the case even if a simple linear model is used for predictions. In this paper, we extend the Kernel SHAP method to handle dependent features. We provide several examples of linear and non-linear models with various degrees of feature dependence, where our method gives more accurate approximations to the true Shapley values. We also propose a method for aggregating individual Shapley values, such that the prediction can be explained by groups of dependent variables.

Authors (3)
  1. Kjersti Aas (14 papers)
  2. Martin Jullum (17 papers)
  3. Anders Løland (11 papers)
Citations (539)

Summary

Explaining Individual Predictions with Dependent Features: Enhancements to Shapley Value Approximations

The paper addresses a central problem in model interpretability: explaining individual predictions from machine learning models with Shapley values. Shapley values, a game-theoretic construct, attribute a model's output to its input features. However, existing approximation methods such as Kernel SHAP assume that the features are independent, which can produce misleading explanations when dependencies exist.
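
For reference, and using standard notation rather than anything quoted verbatim from the paper, the Shapley value of feature j for a prediction f(x*) and the coalition value v(S) at the heart of the discussion can be written as:

```latex
\phi_j \;=\; \sum_{S \subseteq \mathcal{M} \setminus \{j\}}
  \frac{|S|!\,(|\mathcal{M}| - |S| - 1)!}{|\mathcal{M}|!}
  \bigl( v(S \cup \{j\}) - v(S) \bigr),
\qquad
v(S) \;=\; \mathbb{E}\bigl[\, f(x) \mid x_S = x^*_S \,\bigr],
```

where \mathcal{M} is the set of all features. Kernel SHAP estimates v(S) by sampling the features outside S from their marginal distributions, which is only justified when the features are independent; the approaches described below instead approximate the conditional expectation directly.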

Core Contributions

The paper's primary contribution is an extension of Kernel SHAP that accounts for feature dependence, yielding more accurate Shapley value approximations when features are correlated. The authors propose several approaches:

  1. Gaussian Approach: Assumes a multivariate Gaussian distribution for the feature vector, enabling the computation of conditional expectations that account for dependencies (see the sketch after this list).
  2. Gaussian Copula: Utilizes a Gaussian copula with empirical marginals to capture the dependence structure separately from the marginal distributions.
  3. Empirical Conditional Distribution: A non-parametric method inspired by kernel density estimation to approximate conditional expectations directly from the data, avoiding matrix inversion issues in high dimensions.
  4. Combined Approach: Leverages both parametric and non-parametric methods, using the empirical method for low-dimensional subsets and parametric methods for higher-dimensional ones.
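
To make the first approach concrete, the sketch below shows, under the stated Gaussian assumption, how the conditional expectation v(S) = E[f(x) | x_S = x*_S] can be estimated by sampling the unobserved features from their conditional Gaussian distribution and averaging the model output. This is a minimal illustrative sketch, not the authors' implementation; the function names (conditional_gaussian, v_gaussian) and the vectorized model f are assumptions made here.

```python
import numpy as np

def conditional_gaussian(mu, Sigma, S, x_star):
    # Mean and covariance of the unobserved features given x_S = x_star[S],
    # under a joint multivariate Gaussian with mean mu and covariance Sigma.
    p = len(mu)
    S = np.asarray(S, dtype=int)
    Sbar = np.setdiff1d(np.arange(p), S)          # features to integrate out
    Sigma_SS = Sigma[np.ix_(S, S)]
    Sigma_bS = Sigma[np.ix_(Sbar, S)]
    Sigma_bb = Sigma[np.ix_(Sbar, Sbar)]
    mu_cond = mu[Sbar] + Sigma_bS @ np.linalg.solve(Sigma_SS, x_star[S] - mu[S])
    Sigma_cond = Sigma_bb - Sigma_bS @ np.linalg.solve(Sigma_SS, Sigma_bS.T)
    return Sbar, mu_cond, Sigma_cond

def v_gaussian(f, x_star, S, mu, Sigma, n_samples=1000, seed=0):
    # Monte Carlo estimate of v(S) = E[f(x) | x_S = x_star[S]] (Gaussian approach).
    # f is assumed to accept an (n, p) array and return n predictions.
    rng = np.random.default_rng(seed)
    mu, Sigma = np.asarray(mu, float), np.asarray(Sigma, float)
    x_star = np.asarray(x_star, float)
    p = len(mu)
    if len(S) == p:                    # full coalition: nothing to integrate out
        return float(f(x_star[None, :]))
    if len(S) == 0:                    # empty coalition: average over the full distribution
        return float(np.mean(f(rng.multivariate_normal(mu, Sigma, size=n_samples))))
    Sbar, mu_c, Sigma_c = conditional_gaussian(mu, Sigma, S, x_star)
    X = np.tile(x_star, (n_samples, 1))
    X[:, Sbar] = rng.multivariate_normal(mu_c, Sigma_c, size=n_samples)
    return float(np.mean(f(X)))
```

Estimates of v(S) produced this way would then be plugged into the Kernel SHAP weighted least-squares step (or the exact Shapley formula above) in place of the independence-based estimates.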

Numerical Results

The paper presents a robust evaluation using simulated data with varying levels of feature dependency and model complexity. Key results demonstrate that:

  • All proposed methods outperform the traditional Kernel SHAP method when features are dependent.
  • The Gaussian approach shows strong performance in scenarios where features exhibit high correlation, thanks to its parametric assumptions.
  • The empirical and combined approaches offer flexibility and improved accuracy across a wide range of feature distributions, including heavy-tailed and skewed data.

Theoretical and Practical Implications

The development of Shapley value frameworks that handle dependent features is critical for applications in domains where interpretability is paramount, such as finance and healthcare. By providing more accurate attributions, these methods enhance trust in automated decision-making systems, potentially addressing regulatory requirements like GDPR, which demand the explicability of model outcomes.

Future Directions

The research opens several avenues for further exploration:

  • Scalability: While the methods show promise in low to moderate dimensions, exploring techniques to reduce computational overhead in high-dimensional spaces is essential.
  • Categorical Data Handling: Extending these methods to effectively manage categorical variables and mixed data types could broaden their applicability.
  • Integration with Graph Structures: Further investigation into leveraging feature graph structures could optimize computational efficiency and improve the robustness of Shapley approximations in complex models.

Conclusion

This paper significantly advances the field of explainable AI by tackling a previously underappreciated problem in Shapley value computation. By enhancing the interpretability of complex models in the presence of dependent features, the proposed methods contribute to more transparent and accountable AI systems. These contributions are poised to be pivotal as machine learning models become increasingly pervasive across sensitive application areas.