Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation (1903.10992v4)

Published 26 Mar 2019 in cs.LG and stat.ML

Abstract: The problem of explaining the behavior of deep neural networks has recently gained a lot of attention. While several attribution methods have been proposed, most come without strong theoretical foundations, which raises questions about their reliability. On the other hand, the literature on cooperative game theory suggests Shapley values as a unique way of assigning relevance scores such that certain desirable properties are satisfied. Unfortunately, the exact evaluation of Shapley values is prohibitively expensive, exponential in the number of input features. In this work, by leveraging recent results on uncertainty propagation, we propose a novel, polynomial-time approximation of Shapley values in deep neural networks. We show that our method produces significantly better approximations of Shapley values than existing state-of-the-art attribution methods.

Authors (3)
  1. Marco Ancona (7 papers)
  2. Markus Gross (67 papers)
  3. Cengiz Öztireli (12 papers)
Citations (208)

Summary

Deep Approximate Shapley Propagation: Evaluating and Approximating Shapley Values in Deep Neural Networks

The paper entitled "Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation" addresses a central challenge in Explainable AI: reliably attributing the output predictions of deep neural networks (DNNs) to their input features. While various attribution methods exist, they frequently lack robust theoretical underpinnings, which casts doubt on their reliability. Within this context, Shapley values, a construct from cooperative game theory, emerge as a principled method for distributing credit across input features, but exact computation requires evaluating every feature coalition, a cost exponential in the number of input features and thus infeasible for complex models with many inputs.
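
For reference, the Shapley value of feature i under a set function (game) v over n features is the weighted average of its marginal contributions over all coalitions that exclude it; the sum below has 2^(n-1) terms, which is the source of the exponential cost. The notation is the standard game-theoretic convention rather than the paper's exact symbols.

```latex
% Shapley value of feature i for a game v over the feature set N, |N| = n.
% The sum ranges over every coalition S that excludes i (2^{n-1} terms).
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
    \frac{|S|!\,(n - |S| - 1)!}{n!}
    \Bigl[ v\bigl(S \cup \{i\}\bigr) - v(S) \Bigr]
```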

The authors propose Deep Approximate Shapley Propagation (DASP), a novel approach that approximates Shapley values in a computationally feasible manner by leveraging uncertainty propagation techniques. DASP computes its approximations in polynomial time, produces significantly better approximations than previous biased attribution methods, and requires fewer model evaluations than unbiased sampling-based techniques to achieve comparable accuracy.
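
To make the evaluation-count comparison concrete, the sketch below shows the generic unbiased baseline (Shapley/permutation sampling) that DASP is compared against: each sampled permutation costs n+1 model evaluations, so tight estimates can require a very large number of forward passes. The `model` and `baseline` arguments and the helper name are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of unbiased Monte Carlo (permutation) Shapley estimation.
# Cost: n_samples * (n + 1) model evaluations for n input features.
import numpy as np

def shapley_sampling(model, x, baseline, n_samples=100, rng=None):
    """Estimate Shapley values of model(x) relative to a baseline input."""
    rng = rng or np.random.default_rng()
    n = x.shape[0]
    phi = np.zeros(n)
    for _ in range(n_samples):
        order = rng.permutation(n)
        masked = baseline.copy()      # all features start "absent"
        prev = model(masked)
        for i in order:               # reveal features one at a time
            masked[i] = x[i]
            curr = model(masked)
            phi[i] += curr - prev     # marginal contribution of feature i
            prev = curr
    return phi / n_samples
```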

Key Contributions and Methodology

This work makes three principal contributions:

  1. Theoretical Motivation: The authors rigorously argue for the superiority of Shapley values over existing attribution methods, based on a set of desirable axiomatic properties in the non-linear model regime. These include completeness, null player, symmetry, linearity, continuity, and implementation invariance. Shapley values uniquely satisfy all these axioms, making them an ideal candidate for reliable local explanations.
  2. Algorithm Design: DASP's innovation lies in approximating Shapley values without assuming linearity, using a method inspired by the sequential propagation of expected values through a deep network architecture. Each feature's marginal contribution to random coalitions is computed via a streamlined propagation of mean and variance statistics through Lightweight Probabilistic Networks, giving a practical way to navigate the intractably large coalition space (a minimal sketch of this moment propagation follows the list).
  3. Empirical Benchmarking: In evaluations on Parkinson's disability assessment, DNA sequence classification, and digit recognition (MNIST), DASP approximates Shapley values accurately. It consistently outperforms alternative biased methods and requires significantly fewer model evaluations than unbiased variants, such as Shapley sampling and KernelSHAP, to reach similar approximation fidelity.

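As referenced in the algorithm-design item above, the following is a minimal sketch of the kind of mean/variance (moment) propagation used by Lightweight Probabilistic Networks, shown for one dense layer followed by a ReLU under a Gaussian, independent-activation assumption. The layer shapes, helper names, and closed-form ReLU moments illustrate the general technique and are not the paper's code.

```python
# Sketch: moment (mean/variance) propagation through one dense layer + ReLU,
# in the spirit of Lightweight Probabilistic Networks. Assumes Gaussian,
# independent activations; W, b, and the example inputs are illustrative.
import numpy as np
from scipy.stats import norm

def linear_moments(mu, var, W, b):
    """Propagate mean/variance of independent Gaussian inputs through y = W x + b."""
    mu_out = W @ mu + b
    var_out = (W ** 2) @ var          # cross-terms vanish under independence
    return mu_out, var_out

def relu_moments(mu, var, eps=1e-12):
    """Closed-form mean/variance of max(0, X) for X ~ N(mu, var)."""
    sigma = np.sqrt(var + eps)
    alpha = mu / sigma
    cdf, pdf = norm.cdf(alpha), norm.pdf(alpha)
    m1 = mu * cdf + sigma * pdf                    # E[ReLU(X)]
    m2 = (mu ** 2 + var) * cdf + mu * sigma * pdf  # E[ReLU(X)^2]
    return m1, np.maximum(m2 - m1 ** 2, 0.0)

# Example: a single hidden layer with 3 inputs and 2 units.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(2, 3)), np.zeros(2)
mu_in, var_in = np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.3, 0.2])
mu_h, var_h = relu_moments(*linear_moments(mu_in, var_in, W, b))
```

Roughly speaking, DASP applies this style of propagation once per coalition size, replacing the exponential enumeration of coalitions with a polynomial number of probabilistic forward passes.
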
Implications and Future Directions

The introduction of DASP represents a meaningful advance toward interpretable DNNs by providing reliable, theoretically grounded explanations for their outputs. Beyond its theoretical appeal, the method scales well, promising practical utility in applications where transparency and accountability are increasingly mandated, notably in the finance and healthcare sectors, which are subject to regulations such as the European Union's right to explanation.

Future research can extend DASP by integrating further advances in uncertainty propagation, potentially covering recurrent neural networks and transfer-learning settings. Moreover, hybrid approaches that combine uncertainty propagation with sampling may further narrow the gap between exact theoretical guarantees and computationally feasible estimation, paving the way for robust, interpretability-focused AI tools.

In conclusion, the paper presents a significant step forward in explainable AI by detailing an efficient, reliable method for approximating Shapley values in DNNs, underscoring the importance of leveraging cooperative game theory principles to build trustworthy machine learning models.