
VARSHAP: Addressing Global Dependency Problems in Explainable AI with Variance-Based Local Feature Attribution (2506.07229v1)

Published 8 Jun 2025 in cs.LG

Abstract: Existing feature attribution methods like SHAP often suffer from global dependence, failing to capture true local model behavior. This paper introduces VARSHAP, a novel model-agnostic local feature attribution method which uses the reduction of prediction variance as the key importance metric of features. Building upon Shapley value framework, VARSHAP satisfies the key Shapley axioms, but, unlike SHAP, is resilient to global data distribution shifts. Experiments on synthetic and real-world datasets demonstrate that VARSHAP outperforms popular methods such as KernelSHAP or LIME, both quantitatively and qualitatively.

Summary

  • The paper introduces VARSHAP, a method that uses variance reduction to quantify local feature importance and mitigate global dependency issues.
  • The approach employs an instance-based perturbation strategy and satisfies essential axioms such as shift invariance to ensure local sensitivity.
  • Empirical tests show VARSHAP delivers consistent and reliable feature explanations, outperforming traditional methods like SHAP and LIME.

Analysis of "VARSHAP: Addressing Global Dependency Problems in Explainable AI with Variance-Based Local Feature Attribution"

The paper "VARSHAP: Addressing Global Dependency Problems in Explainable AI with Variance-Based Local Feature Attribution" introduces VARSHAP, a novel method for feature attribution in Explainable Artificial Intelligence (XAI). VARSHAP seeks to address limitations associated with global dependencies present in existing attribution methods like SHAP (SHapley Additive exPlanations). Standard SHAP-based methods often falter in accurately revealing localized model behavior due to their reliance on global data distributions. This paper posits variance reduction as a pivotal metric for determining feature importance, offering a fresh perspective on local feature attribution within models.

Theoretical Foundation and Methodology

The foundation of VARSHAP lies in an adaptation of Shapley values, a concept borrowed from cooperative game theory. While SHAP leverages the Shapley value framework to quantify each feature's contribution to a prediction, it generally relies on a global model of the data distribution, which can lead to non-local, and thus potentially misleading, explanations. VARSHAP departs from this by using an instance-based perturbation strategy, focusing purely on local model behavior and using variance reduction to explain feature importance at the level of individual instances.
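
One way to make this concrete (a formalization consistent with the abstract's description, not a formula reproduced from the paper) is to keep the standard Shapley combination while swapping in a payoff equal to the local prediction variance removed when a coalition of features is fixed to the explained instance's values:

$$\phi_i(x) = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F|-|S|-1)!}{|F|!}\,\bigl[v_x(S \cup \{i\}) - v_x(S)\bigr], \qquad v_x(S) = \operatorname{Var}_{z \sim \mathcal{N}_x}\!\bigl[f(z)\bigr] - \operatorname{Var}_{z \sim \mathcal{N}_x}\!\bigl[f(z_{\bar{S}}, x_S)\bigr],$$

where $F$ is the feature set and $\mathcal{N}_x$ is a local perturbation distribution centered on the explained instance $x$; the precise choice of $\mathcal{N}_x$ is the instance-based perturbation design described above.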

The key innovation of VARSHAP is the use of variance reduction to quantify a feature's local importance. The paper formally establishes variance as the sole function satisfying the essential axioms for local feature attribution, including shift invariance. This ensures that VARSHAP attributions remain sensitive to the specific local behavior of the model while staying robust to unrelated changes in the global data distribution.
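
The sketch below illustrates this computation under stated assumptions; it is not the authors' reference implementation. The Gaussian perturbation neighborhood, its width, the sample count, and the exact enumeration over a small feature set are choices made here purely for illustration.

```python
import itertools
import math

import numpy as np

def varshap_sketch(f, x, sigma=0.1, n_samples=2000, seed=0):
    """Variance-reduction Shapley attribution for one instance x (illustrative sketch).

    f: vectorized model mapping an (n, d) array to (n,) predictions.
    x: 1-D array of length d, the instance to explain.
    The payoff of a coalition S is the prediction variance removed by clamping
    the features in S to their values in x while the remaining features are
    perturbed locally around x (Gaussian neighborhood assumed for illustration).
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    Z = x + sigma * rng.standard_normal((n_samples, d))  # local neighborhood of x
    base_var = np.var(f(Z))

    def payoff(S):
        Zs = Z.copy()
        idx = list(S)
        Zs[:, idx] = x[idx]                # clamp coalition features to the instance
        return base_var - np.var(f(Zs))    # variance removed by fixing S

    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for r in range(d):
            for S in itertools.combinations(others, r):
                w = math.factorial(r) * math.factorial(d - r - 1) / math.factorial(d)
                phi[i] += w * (payoff(S + (i,)) - payoff(S))
    return phi

# Toy check: in f(x) = x0 * x1 the third feature is irrelevant and should get ~0.
model = lambda X: X[:, 0] * X[:, 1]
print(varshap_sketch(model, np.array([1.0, 2.0, 5.0])))
```

For realistic feature counts the exact enumeration over coalitions would be replaced by the usual sampling approximations of the Shapley sum.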

Empirical Evaluations

The empirical validation of VARSHAP involved comparisons with other model-agnostic methods like LIME (Local Interpretable Model-agnostic Explanations) and SHAP across various models and datasets—both synthetic and real-world. The experiments underscored VARSHAP's superiority in consistently explaining local model behavior while maintaining stability across differing datasets.

Notably, experiments on synthetic datasets designed to mimic healthcare applications demonstrated VARSHAP's ability to maintain consistent feature explanations even when global data distributions varied. This consistency was absent in SHAP, which notably produced differing attributions for identical local model behaviors due to its susceptibility to global distribution shifts. VARSHAP's resilience, attributed to its local perturbation method, suggests its applicability in high-stakes systems requiring transparency and precise local interpretations.
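
A toy two-feature example (not the paper's experimental setup; the model, the Gaussian backgrounds, and the shift are invented here for illustration) shows the mechanism behind this difference: a value function that averages over a background dataset, as interventional KernelSHAP-style explainers do, yields different attributions once the background distribution shifts, while a value function built purely from local perturbations around the explained instance is unaffected.

```python
import numpy as np

rng = np.random.default_rng(0)
model = lambda X: X[:, 0] * X[:, 1]       # simple non-additive toy model
x = np.array([1.0, 2.0])                  # instance being explained

def shapley_2d(payoff):
    """Exact Shapley values for two features from a coalition payoff function."""
    ve, v0, v1, v01 = payoff(()), payoff((0,)), payoff((1,)), payoff((0, 1))
    phi0 = 0.5 * (v0 - ve) + 0.5 * (v01 - v1)
    phi1 = 0.5 * (v1 - ve) + 0.5 * (v01 - v0)
    return round(phi0, 2), round(phi1, 2)

def background_payoff(background):
    """Value function that averages over a background set for unfixed features."""
    def v(S):
        Z = background.copy()
        Z[:, list(S)] = x[list(S)]
        return model(Z).mean()
    return v

def local_variance_payoff(sigma=0.1, n=5000):
    """Value function built only from perturbations around the explained instance."""
    Z = x + sigma * rng.standard_normal((n, 2))
    base_var = np.var(model(Z))
    def v(S):
        Zs = Z.copy()
        Zs[:, list(S)] = x[list(S)]
        return base_var - np.var(model(Zs))
    return v

bg_original = rng.normal(0.0, 1.0, size=(5000, 2))   # original data distribution
bg_shifted = rng.normal(3.0, 1.0, size=(5000, 2))    # globally shifted distribution

print("background-based, original:", shapley_2d(background_payoff(bg_original)))
print("background-based, shifted :", shapley_2d(background_payoff(bg_shifted)))   # changes
print("local variance-based      :", shapley_2d(local_variance_payoff()))         # does not
```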

Moreover, VARSHAP remained robust in scenarios involving non-linear features and injected noise, an area where many traditional methods falter. Where LIME incorrectly assigned importance to irrelevant features, VARSHAP correctly distinguished relevant from irrelevant features, in line with theoretical expectations.

Implications

The implications of VARSHAP are both practical and theoretical. Practically, VARSHAP's local focus ensures that it can provide explanations that are sensitive to individual cases, rather than being overly influenced by broad, potentially irrelevant data characteristics. This feature is particularly beneficial in domains like healthcare, where individual decisions can significantly impact outcomes.

Theoretically, VARSHAP enriches the discussion around the limitations of current attribution methods with respect to local-global trade-offs. It opens pathways for further research on feature attribution methodologies, particularly for applications that require stringent local sensitivity and accuracy.

Future Directions

While VARSHAP robustly addresses several existing limitations, it also paves the way for further exploration in enhancing perturbation mechanisms. The authors suggest investigating conditional probabilities to better simulate real-world scenarios and potentially increase resilience against adversarial attacks on explainability methods.

In summary, VARSHAP presents a significant advancement in local feature attribution, improving upon existing methodologies by reducing bias introduced by global distributions and enhancing the reliability of local explanations. Its innovative approach to balancing local precision with theoretical rigor makes it a valuable tool in the advancement of XAI, particularly for applications where individual prediction explanations are crucial.
