
Prediction via Shapley Value Regression (2505.04775v1)

Published 7 May 2025 in cs.LG

Abstract: Shapley values have several desirable, theoretically well-supported, properties for explaining black-box model predictions. Traditionally, Shapley values are computed post-hoc, leading to additional computational cost at inference time. To overcome this, a novel method, called ViaSHAP, is proposed, that learns a function to compute Shapley values, from which the predictions can be derived directly by summation. Two approaches to implement the proposed method are explored; one based on the universal approximation theorem and the other on the Kolmogorov-Arnold representation theorem. Results from a large-scale empirical investigation are presented, showing that ViaSHAP using Kolmogorov-Arnold Networks performs on par with state-of-the-art algorithms for tabular data. It is also shown that the explanations of ViaSHAP are significantly more accurate than the popular approximator FastSHAP on both tabular data and images.

Summary

Prediction via Shapley Value Regression: A Summary for Researchers

The paper "Prediction via Shapley Value Regression" introduces an innovative approach to enhance both predictive accuracy and interpretability of machine learning models through the integration of Shapley value computation directly into the prediction mechanism. This proposed method, ViaSHAP, diverges from traditional practices where Shapley values are computed post-hoc, thus incurring additional computational overhead during inference.

Core Contributions

The ViaSHAP method encapsulates several key contributions:

  1. Simultaneous Prediction and Explanation: ViaSHAP learns a function to compute Shapley values, which are traditionally calculated separately, ensuring predictions and their explanations are inherently linked and derived from the same model.
  2. Architectural Exploration: Two foundational theories were explored for implementing ViaSHAP:
    • The Universal Approximation Theorem, which asserts that a sufficiently wide neural network can approximate any continuous function.
    • The Kolmogorov-Arnold Representation Theorem, which states that any multivariate continuous function can be written as a superposition of sums and compositions of univariate continuous functions, motivating more parameter-efficient network architectures.
  3. Empirical Validation: An extensive empirical evaluation demonstrates that ViaSHAP models, particularly those employing Kolmogorov-Arnold Networks (KANs), achieve predictive accuracy competitive with leading tabular algorithms such as XGBoost and random forests, while producing significantly more accurate Shapley value explanations than FastSHAP.
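The prediction-by-summation idea behind ViaSHAP can be sketched in a few lines: a network maps the input to one value per feature, and the prediction is obtained by summing those values (plus a base value), so the Shapley efficiency property holds by construction. The two-layer architecture, layer sizes, and sigmoid link below are illustrative assumptions for a binary classifier, not the paper's exact setup, and the training objective that makes these outputs approximate true Shapley values is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(n_features, hidden=16):
    """Random weights for a tiny two-layer network mapping an input
    vector to one value per feature (a hypothetical stand-in for the
    explainer network; the paper implements it with MLPs or KANs)."""
    return {
        "W1": rng.normal(scale=0.1, size=(n_features, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(scale=0.1, size=(hidden, n_features)),
        "b2": np.zeros(n_features),
    }

def viashap_forward(params, x, base_value=0.0):
    """One forward pass yields per-feature attributions AND the
    prediction: the prediction is the sum of the attributions plus
    a base value, so explanation costs nothing extra at inference."""
    h = np.tanh(x @ params["W1"] + params["b1"])
    phi = h @ params["W2"] + params["b2"]   # one Shapley-style value per feature
    logit = base_value + phi.sum()          # prediction by summation
    prob = 1.0 / (1.0 + np.exp(-logit))    # sigmoid link, assumed for illustration
    return phi, prob

params = init_params(n_features=5)
x = rng.normal(size=5)
phi, prob = viashap_forward(params, x)
# Efficiency holds by construction: attributions sum to the prediction logit.
assert np.isclose(phi.sum(), np.log(prob / (1 - prob)))
```

Because the link between attributions and prediction is structural rather than learned, the efficiency axiom is satisfied exactly for every input, regardless of how well the network is trained.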

Numerical Results and Findings

ViaSHAP models performed strongly across a diverse set of 25 tabular datasets, with AUC scores showing no statistically significant difference from those of tree ensemble models. This outcome underscores the robustness of the KAN-based implementation on tabular data. Furthermore, the Shapley value approximations produced by ViaSHAP frequently surpassed those generated by FastSHAP in accuracy, reinforcing ViaSHAP's efficacy in providing faithful feature attributions.

Theoretical and Practical Implications

The integration of Shapley value computation within the model training process opens several avenues for theoretical and practical advancements:

  • Enhanced Interpretability: By directly linking predictions to their Shapley values, the model offers clearer insights into feature contributions and interactions.
  • Reduced Computation Overhead: Eliminating the post-hoc calculation of Shapley values significantly lowers the inference time, making ViaSHAP particularly promising for deployment in time-sensitive applications.
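To make the avoided overhead concrete: exact post-hoc Shapley values require evaluating the model on every feature coalition, which is exponential in the number of features. The sketch below computes exact Shapley values for a tiny linear model, using a hypothetical baseline-replacement value function; the known closed form for linear models, w_j * (x_j - baseline_j), provides a correctness check. ViaSHAP replaces all of this with a single forward pass.

```python
from itertools import combinations
from math import comb
import numpy as np

def exact_shapley(model, x, baseline):
    """Exact post-hoc Shapley values by enumerating all 2^d coalitions,
    replacing absent features with baseline values. Requires O(2^d)
    model evaluations per explanation, hence the inference-time cost
    that amortized approaches avoid."""
    d = len(x)
    phi = np.zeros(d)

    def value(S):
        # Value of coalition S: present features take their real values,
        # absent features are replaced by the baseline.
        z = baseline.copy()
        z[list(S)] = x[list(S)]
        return model(z)

    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                w = 1.0 / (d * comb(d - 1, k))  # Shapley coalition weight
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

# Tiny linear model: Shapley values have the closed form w_j * (x_j - b_j).
w = np.array([1.0, -2.0, 0.5])
model = lambda z: float(w @ z)
x = np.array([1.0, 1.0, 2.0])
baseline = np.zeros(3)
phi = exact_shapley(model, x, baseline)
assert np.allclose(phi, w * x)  # matches the known linear-model result
```

Even at only 20 features, exact enumeration needs over a million model calls per explanation, which is why practical post-hoc methods sample coalitions and why folding the computation into the model itself is attractive for latency-sensitive deployment.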

Future Directions

Future research may explore adapting ViaSHAP to more complex data modalities, such as images in computer vision and sequences in natural language processing. Further refinement of the neural architectures could also address the limitations noted with marginal expectation approaches, and perhaps extend the method to non-neural prediction frameworks.

In summary, ViaSHAP represents a meaningful stride towards integrating interpretability deeply into model prediction, offering tangible benefits in both computational efficiency and explanation fidelity. With ongoing research, it holds potential to transform not only how predictions are generated but also how they are understood in machine learning contexts.
