
Second-Order Shapley Values

Updated 19 July 2025
  • Second-order Shapley values are a method to quantify pairwise feature interactions using cooperative game theory principles.
  • They extend classical Shapley values by accounting for both main effects and non-additive interaction effects in models.
  • Efficient algorithms like sampling and regression approaches make computing these values practical for complex datasets.

Second-order Shapley values extend the classical Shapley value framework from individual (first-order) feature attributions to interactions between pairs of features, providing a principled approach to discerning and quantifying the effects of feature interactions in cooperative games, statistical modeling, and explainable machine learning. These values play a crucial role in modern scientific and applied settings that demand rigorous explanations of both main and interaction effects.

1. Theoretical Foundations of Second-Order Shapley Values

The standard Shapley value distributes the value generated by a coalition of players (features) among individual contributors based on their average marginal contributions across all coalitions. Formally, for a set of features $N = \{1, \dots, d\}$ and a characteristic function $v: 2^N \rightarrow \mathbb{R}$, the Shapley value for feature $i$ is

$$\phi_{i}(v) = \frac{1}{d!} \sum_{\pi} \Big[v(S_i^\pi \cup \{i\}) - v(S_i^\pi)\Big],$$

where the sum is over all permutations $\pi$ and $S_i^\pi$ is the set of features preceding $i$ in $\pi$.
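
To make the permutation form concrete, here is a minimal Python sketch that enumerates all permutations for a small game. The function name `shapley_values` and the toy AND-style game are illustrative assumptions, not taken from the cited papers:

```python
from itertools import permutations

def shapley_values(v, d):
    """Exact Shapley values: average marginal contributions over all d! permutations."""
    phi = [0.0] * d
    perms = list(permutations(range(d)))
    for pi in perms:
        coalition = set()
        for i in pi:
            # Marginal contribution of feature i given its predecessors in pi.
            phi[i] += v(coalition | {i}) - v(coalition)
            coalition.add(i)
    return [p / len(perms) for p in phi]

# Toy game (an assumption for illustration): value 1 iff features 0 and 1 are both present.
v = lambda S: float({0, 1} <= set(S))
print(shapley_values(v, d=3))  # [0.5, 0.5, 0.0]; feature 2 is a dummy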

Second-order Shapley values generalize this idea to measure the joint (interaction) effect of feature pairs. Given a subset $S \subseteq N$ with $|S| = 2$, the second-order Shapley value (or index) for $S$ is constructed to isolate the effect that cannot be attributed to either feature alone or to less-than-pairwise interactions. This is formalized in frameworks such as n-Shapley values and faithful interaction indices, which provide closed-form expressions and ensure desirable axiomatic properties (Tsai et al., 2022; Bordt et al., 2022).

For the case $n = 2$, the attributions are defined as:

  • For a singleton $i$, the second-order value is

$$\Phi^{2}_{i} = \Delta_{\{i\}} - \frac{1}{2}\sum_{j \neq i} \Delta_{\{i, j\}}$$

  • For a pair $\{i, j\}$,

$$\Phi^{2}_{\{i,j\}} = \Delta_{\{i,j\}}$$

where $\Delta_{\{i\}}$ and $\Delta_{\{i, j\}}$ denote the first-order and second-order Shapley interaction indices, respectively.
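
For small $d$, these quantities can be computed exactly from the standard combinatorial formulas. The sketch below uses the Grabisch–Roubens weighting for the pairwise index and the classical Shapley weighting for singletons; the helper names are illustrative, and the cited papers may organize the computation differently:

```python
from itertools import combinations
from math import factorial

def subsets(xs):
    for r in range(len(xs) + 1):
        yield from (set(c) for c in combinations(xs, r))

def delta_single(v, d, i):
    """First-order index Delta_{i}: the classical Shapley value of feature i."""
    rest = [k for k in range(d) if k != i]
    return sum(
        factorial(d - len(T) - 1) * factorial(len(T)) / factorial(d)
        * (v(T | {i}) - v(T))
        for T in subsets(rest)
    )

def delta_pair(v, d, i, j):
    """Second-order index Delta_{i,j} from the pairwise discrete derivative."""
    rest = [k for k in range(d) if k not in (i, j)]
    return sum(
        factorial(d - len(T) - 2) * factorial(len(T)) / factorial(d - 1)
        * (v(T | {i, j}) - v(T | {i}) - v(T | {j}) + v(T))
        for T in subsets(rest)
    )

def second_order_values(v, d):
    """Assemble the n = 2 attributions from the formulas above."""
    pairs = {frozenset(p): delta_pair(v, d, *p) for p in combinations(range(d), 2)}
    singles = {
        i: delta_single(v, d, i)
        - 0.5 * sum(pairs[frozenset({i, j})] for j in range(d) if j != i)
        for i in range(d)
    }
    return singles, pairs
```

A useful sanity check: the returned singleton and pair values should sum to $v(N) - v(\emptyset)$, per the interaction efficiency property discussed in the next section.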

2. Axiomatic Principles and Interaction Indices

A key challenge is to extend the Shapley axioms—efficiency, symmetry, linearity, and dummy player—to interaction attributions. The Faithful Shapley Interaction (Faith-Shap) index achieves this by recasting the problem as a least-squares regression over all coalitions, seeking coefficients for all singleton and pair subsets so that

$$v(S) \approx \sum_{\substack{T \subseteq S \\ |T| \le 2}} E_T(v, 2).$$

Under appropriate weighting schemes, these coefficients yield a unique solution that satisfies the extended axioms and exactly distributes the total value among all main and interaction effects (Tsai et al., 2022).
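
To convey the regression formulation, here is a minimal sketch that fits an intercept plus singleton and pair coefficients (playing the role of the $E_T(v, 2)$ above) to $v(S)$ over all proper coalitions by weighted least squares. The KernelSHAP-style kernel below is a stand-in assumption; the exact Faith-Shap weights differ (see Tsai et al., 2022):

```python
import numpy as np
from itertools import combinations
from math import comb

def pairwise_regression_fit(v, d):
    """Weighted least-squares fit of main-effect and pairwise coefficients to v."""
    coalitions = [set(c) for r in range(1, d) for c in combinations(range(d), r)]
    singles = list(range(d))
    pairs = list(combinations(range(d), 2))
    X, y, w = [], [], []
    for S in coalitions:
        row = [1.0]                                          # intercept, ~ v(empty set)
        row += [float(i in S) for i in singles]              # singleton indicators
        row += [float(i in S and j in S) for i, j in pairs]  # pair indicators
        X.append(row)
        y.append(v(S))
        s = len(S)
        w.append((d - 1) / (comb(d, s) * s * (d - s)))       # KernelSHAP-style kernel
    X, y, sw = np.array(X), np.array(y), np.sqrt(np.array(w))
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return dict(zip(["bias"] + singles + pairs, coef))
```

Real estimators also enforce the values at $\emptyset$ and $N$ as constraints, which this sketch omits for brevity.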

Second-order values inherit properties such as:

  • Interaction Linearity: Attributions scale linearly with the value function.
  • Interaction Symmetry: Interaction indices are unchanged under feature relabeling if the value function is invariant.
  • Interaction Efficiency: The sum of all singleton and pairwise attributions equals $v(N) - v(\emptyset)$.
  • Interaction Dummy: Attributions involving "dummy" features (features whose marginal contribution is always zero) vanish.

Faith-Shap indices also provide closed-form combinatorial expressions for the highest-order terms, e.g.,

$$E_S^{\text{F-Shap}}(v, 2) = \sum_{T \subseteq N \setminus S} w(T)\, \Delta_S(v(T)),$$

where the $w(T)$ are combinatorial weights and $\Delta_S(v(T))$ is the discrete derivative of $v$ with respect to $S$ at $T$; for $S = \{i, j\}$, $\Delta_{\{i,j\}}(v(T)) = v(T \cup \{i, j\}) - v(T \cup \{i\}) - v(T \cup \{j\}) + v(T)$.

3. Computation and Algorithms

Naive computation of second-order Shapley values is combinatorially expensive, as it requires consideration of all possible feature subsets. Several approaches improve efficiency:

  • Sampling and Monte Carlo Methods: Approximate marginal contributions by drawing a manageable number of random coalitions; a minimal sampling sketch appears at the end of this section. The computational cost remains practical even when incorporating correlation corrections for combined effects (Basu et al., 2020).
  • Regression-based (Faithful) Methods: Reformulate the estimation as a constrained least-squares problem, exploiting closed-form solutions when the value function is limited to low-order interactions, or using polynomial-time algorithms in practice (Tsai et al., 2022).
  • Model Architecture Design: Architectures such as HarsanyiNet directly encode Harsanyi interactions into the layers, enabling the computation of exact first- and second-order Shapley values via a single forward pass (Chen et al., 2023).

In practice, second-order indices can be computed exactly for tabular or low-dimensional data. In high dimensions, typical strategies are to use approximation techniques or to restrict attention to potentially significant feature pairs.
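
As one concrete instance of the sampling route, the sketch below gives an unbiased Monte Carlo estimate of the pairwise index $\Delta_{\{i,j\}}$ under the Grabisch–Roubens weighting used in the exact sketch earlier: drawing the coalition size uniformly and then a uniform subset of that size reproduces those weights in expectation. This is a minimal illustration (names like `mc_delta_pair` are assumptions); the cited estimators add variance reduction and correlation corrections:

```python
import random

def mc_delta_pair(v, d, i, j, n_samples=2000, seed=0):
    """Unbiased Monte Carlo estimate of the pairwise interaction index Delta_{i,j}."""
    rng = random.Random(seed)
    rest = [k for k in range(d) if k not in (i, j)]
    total = 0.0
    for _ in range(n_samples):
        t = rng.randrange(d - 1)        # coalition size, uniform on {0, ..., d-2}
        T = set(rng.sample(rest, t))    # uniform coalition of that size
        # Pairwise discrete derivative at the sampled coalition T.
        total += v(T | {i, j}) - v(T | {i}) - v(T | {j}) + v(T)
    return total / n_samples
```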

4. Interpreting Second-Order Effects: Main Effects vs Interactions

Second-order Shapley values decompose the model output into additive contributions of individual features and their pairwise interactions:

$$f(x) = \sum_{i} \Phi_i^2(x) + \sum_{i < j} \Phi_{\{i, j\}}^2(x).$$

This makes it possible to distinguish whether model behavior is driven primarily by main effects (individual features) or by interactions.

For example, in the XOR problem, features $X_1$ and $X_2$ have zero individual association with the output, but the pair $(X_1, X_2)$ completely determines it. Second-order Shapley values capture and allocate this interaction, assigning nonzero pairwise importance and zero main effects, consistent with the underlying functional dependence (Fryer et al., 2020).
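
The $d = 2$ case can be worked out by hand, which makes the XOR behavior easy to verify in a few lines. The encoding below (a single input point with absent features set to a zero baseline) is an illustrative assumption, not the construction of Fryer et al. (2020):

```python
def v(S):
    """XOR of two binary features at the point (1, 1); absent features default to 0."""
    x1, x2 = int(0 in S), int(1 in S)
    return float(x1 != x2)

# First-order Shapley values vanish for both features:
phi_1 = 0.5 * ((v({0}) - v(set())) + (v({0, 1}) - v({1})))  # 0.0
phi_2 = 0.5 * ((v({1}) - v(set())) + (v({0, 1}) - v({0})))  # 0.0

# The pairwise discrete derivative carries the entire interaction:
delta_12 = v({0, 1}) - v({0}) - v({1}) + v(set())           # -2.0
```

The individual attributions are exactly zero while the pairwise index is nonzero, matching the functional dependence described above.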

5. Model-Independent and Model-Dependent Approaches

Second-order Shapley values can be constructed in both model-dependent and model-independent settings:

  • Model-dependent: Attributions are derived from a fitted model's output, relying on the model for the characteristic function (e.g., via expected prediction or residual dependence) (Bordt et al., 2022).
  • Model-independent: The attributions are based on statistical properties of the data, such as non-linear dependence measures (distance correlation, HSIC), capturing intrinsic (e.g., pairwise) feature-label dependencies, regardless of the form or presence of any fitted model (Fryer et al., 2020). This offers a diagnostic tool for detecting non-linear and interaction effects that models may fail to capture.
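
As a sketch of the model-independent route, the value of a coalition can be taken to be a non-linear dependence measure between the selected features and the labels. The distance-correlation construction below is an illustrative assumption (HSIC or other measures slot in the same way), and the resulting `v` can be plugged into any of the estimators sketched earlier:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def distance_correlation(a, b):
    """Empirical (plug-in) distance correlation between two samples (rows = observations)."""
    def centered(m):
        D = squareform(pdist(m))
        return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()
    A, B = centered(a), centered(b)
    dcov2 = max((A * B).mean(), 0.0)  # guard against tiny negative values
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return float(np.sqrt(dcov2 / denom)) if denom > 0 else 0.0

def make_value_function(X, y):
    """Characteristic function v(S) = dCor(X[:, S], y), with v(empty set) = 0."""
    y2 = np.asarray(y, dtype=float).reshape(-1, 1)
    def v(S):
        cols = sorted(S)
        return distance_correlation(X[:, cols], y2) if cols else 0.0
    return v

# Hypothetical usage:
#   X = np.random.rand(200, 5); y = X[:, 0] * X[:, 1]
#   v = make_value_function(X, y)
```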

6. Methodological Challenges and Extensions

Several methodological challenges and extensions accompany the use of second-order Shapley values:

  • Multicollinearity Correction: Naively adding individual Shapley values to estimate the combined effect of correlated features can lead to misleading results. Matrix-based correlation adjustment enables accurate attributions for interacting feature sets, maintaining computational efficiency (Basu et al., 2020).
  • Functional Limitations: Second-order attributions cannot perfectly reconstruct functions with significant three-way or higher interactions; the decomposition is exact only when the model contains interactions of order at most two (Bordt et al., 2022).
  • General Network Extensions: In cooperative networks, the second-order (and higher-order) distribution of value can be formalized through generalized edge flows and Hodge calculus, allowing attributions beyond the grand coalition and introducing new fairness paradigms (Lim, 2022).

7. Applications and Practical Tools

Second-order Shapley values facilitate advanced model interpretation and debugging in machine learning:

  • Explicitly identify and quantify pairwise interactions in black-box models.
  • Separate main effects and interaction contributions for fairness and accountability analyses.
  • Enable more faithful explanations in explainable AI, including for generalized additive models with interactions (GAMs with paired terms) (Bordt et al., 2022).
  • Provide tools for exploratory data analysis and robust model diagnostics by contrasting model-dependent and model-independent second-order attributions (Fryer et al., 2020).

Software implementations of second-order Shapley value estimation are available, for example, via the n-Shapley package (Bordt et al., 2022), supporting their practical use in contemporary explainability studies.


In summary, second-order Shapley values offer a principled, axiomatic, and computationally tractable mechanism for allocating credit to both individual features and their pairwise interactions in cooperative settings and statistical models. They are central to the modern theory of feature attribution, enabling a nuanced understanding of model behavior and intrinsic data dependencies.