
Partition SHAP: Group-wise Model Explanations

Updated 22 February 2026
  • Partition SHAP is a model explanation method that extends SHAP by attributing outputs to feature blocks, capturing both main effects and interactions.
  • It employs an optimization framework that minimizes reconstruction error and penalizes complexity using statistical interaction tests to form interpretable partitions.
  • Applications include time series, multiplicative models, and other structured domains, offering high fidelity and reduced computational cost relative to traditional SHAP.

Partition SHAP refers to a class of Shapley-value-based model explanation methods that assign attributions not only to individual features, but to partitions—i.e., blocks or groups—of features, typically to capture interaction effects, manage computational complexity, or align with structured data types. Partition SHAP methods include frameworks such as PartitionSHAP for interaction-aware explanations, mSHAP for two-part (multiplicative) models, and window-based partitioning for time series. Their key innovation is to yield interpretable, locally accurate, and sometimes interaction-aware additive explanations of black-box model predictions, spanning high-dimensional and structured modalities (Xu et al., 2024, Matthews et al., 2021, Nayebi et al., 2022). These approaches generalize classical SHAP, with the goal of retaining efficiency, interpretability, and fidelity in settings where classical, feature-wise SHAP is inadequate.

1. Mathematical Foundations of Partition SHAP

Partition SHAP is grounded in the Shapley value from cooperative game theory, extended to operate on groups ("blocks") of features. Consider a black-box model $f : \mathbb{R}^d \to \mathbb{R}$ and a particular observation $x \in \mathbb{R}^d$. Instead of only attributing model output to atomic features, Partition SHAP seeks an explanation in terms of a partition $\Pi = \{S_1, S_2, \dots, S_m\}$ of the feature indices: each $S_i \subseteq [d]$, $S_i \cap S_j = \emptyset$ for $i \neq j$, and $\bigcup_i S_i = [d]$.

A value function $v : 2^{[d]} \to \mathbb{R}$ quantifies the model output when a feature subset $S$ is "present" (e.g., $v(S) = \mathbb{E}[f(X) \mid X_S = x_S]$). The additive surrogate explanation takes the form

$$\hat{f}_\Pi(x) = \sum_{S \in \Pi} v(S) \approx f(x)$$

where each term $v(S)$ captures both main effects and interactions within block $S$ (Xu et al., 2024). The quality of a partition is then measured by a reconstruction error $E(\Pi) = \left(f(x) - \sum_{S \in \Pi} v(S)\right)^2$ and a complexity penalty $C(\Pi) = \sum_{S \in \Pi} \frac{|S|(|S|-1)}{2}$, which counts the number of explicit pairwise interactions within each block.

Partition SHAP thus generalizes atomic-feature SHAP explanations, subsuming both single-feature and set-based (e.g., nSHAP) explanations, but with a succinctness constraint imposed via penalization of block size and number (Xu et al., 2024).
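The quantities above can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation: it approximates $v(S)$ by averaging the model over a background sample with the features in $S$ held fixed at $x$, and centers block values at the empty-coalition baseline so they sum toward $f(x)$. The toy model `f` and the Gaussian background are assumptions for illustration only.

```python
import random

random.seed(0)

# Toy black-box model with an interaction between features 0 and 1 (assumption).
def f(x):
    return 2.0 * x[0] * x[1] + 0.5 * x[2]

# Background data used to "remove" features by marginal averaging,
# a common approximation of v(S) = E[f(X) | X_S = x_S].
background = [[random.gauss(0, 1) for _ in range(3)] for _ in range(200)]

def v(S, x):
    """Value of coalition S: average f over background with features in S fixed to x."""
    total = 0.0
    for b in background:
        z = [x[i] if i in S else b[i] for i in range(len(x))]
        total += f(z)
    return total / len(background)

def reconstruction_error(partition, x):
    """E(Pi): squared gap between f(x) and the sum of baseline-centered block values."""
    base = v(frozenset(), x)
    approx = base + sum(v(frozenset(S), x) - base for S in partition)
    return (f(x) - approx) ** 2

def complexity(partition):
    """C(Pi) = sum_S |S|(|S|-1)/2: pairwise interactions kept inside blocks."""
    return sum(len(S) * (len(S) - 1) // 2 for S in partition)

x = [1.0, -2.0, 0.5]
grouped = [{0, 1}, {2}]       # interacting pair kept together
singletons = [{0}, {1}, {2}]  # atomic, feature-wise partition
print(reconstruction_error(grouped, x), complexity(grouped))
print(reconstruction_error(singletons, x), complexity(singletons))
```

Keeping the interacting pair $\{0, 1\}$ in one block drives the reconstruction error to (near) zero at the cost of one penalized pairwise interaction, while the singleton partition misses the interaction entirely.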

2. Algorithms and Statistical Pruning for Partition Discovery

The central computational challenge is to select the partition $\Pi$ that yields an interpretable, representative, and succinct surrogate. In PartitionSHAP (Xu et al., 2024), this is formalized as the optimization

$$\Pi^* = \arg\min_{\Pi} \left\{ E(\Pi) + \lambda\, C(\Pi) \right\}$$

for regularization parameter $\lambda \ge 0$.

To manage the super-exponential search space of partitions, PartitionSHAP employs a statistical interaction test. For each pair of features $i, j$, an interaction index

$$I(i, j \mid S) = v(S \cup \{i\}) + v(S \cup \{j\}) - v(S \cup \{i, j\}) - v(S)$$

is averaged over randomly sampled contexts $S \subseteq [d] \setminus \{i, j\}$. Pairs with statistically significant interaction (by Welch's $t$-test at level $\alpha$) are joined via an edge in an interaction graph $G$. Only partitions in which each block forms a connected subgraph in $G$ are considered. This pruning dramatically shrinks the search space.
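The interaction-screening step can be sketched as follows. This is a hedged approximation: the paper applies Welch's $t$-test, whereas the snippet tests whether the mean interaction differs from zero using a normal approximation to the $t$ statistic; the toy value function `v` (with a single interacting pair) is an assumption for illustration.

```python
import math
import random

def interaction_samples(v, i, j, d, n_samples=50, rng=random):
    """Sample contexts S subset of [d]\\{i,j} and evaluate
    I(i, j | S) = v(S|{i}) + v(S|{j}) - v(S|{i,j}) - v(S)."""
    rest = [k for k in range(d) if k not in (i, j)]
    samples = []
    for _ in range(n_samples):
        S = frozenset(k for k in rest if rng.random() < 0.5)
        samples.append(v(S | {i}) + v(S | {j}) - v(S | {i, j}) - v(S))
    return samples

def significant_interaction(samples, alpha=0.05):
    """Crude test of mean(I) != 0 via a normal approximation to the
    t statistic (a stand-in for the Welch's t-test used in the paper)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    if var == 0.0:
        return abs(mean) > 0.0  # degenerate case: all samples identical
    t = mean / math.sqrt(var / n)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))
    return p < alpha

# Toy value function: features 0 and 1 interact, feature 2 is additive (assumption).
def v(S):
    return (3.0 if {0, 1} <= S else 0.0) + (1.0 if 2 in S else 0.0)

rng = random.Random(0)
print(significant_interaction(interaction_samples(v, 0, 1, d=3, rng=rng)))  # interacting pair
print(significant_interaction(interaction_samples(v, 0, 2, d=3, rng=rng)))  # no interaction
```

Significant pairs become edges of the interaction graph $G$; a practical implementation would use `scipy.stats` for the exact Welch test and multiple-testing control at level $\alpha$.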

Partition search is conducted by exact enumeration for $d \lesssim 15$, or by a greedy bottom-up merge: iteratively join the pair of blocks that most reduces the objective, restricted to merges allowed by $G$. Complexity is $O(d^4)$ for the greedy search, with further gains from cached value functions and graph sparsity (Xu et al., 2024).
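The greedy bottom-up merge can be sketched as below. This is a minimal sketch under assumptions: the toy objective makes the reconstruction error vanish exactly when the interacting pair $\{0, 1\}$ shares a block, whereas a real implementation would evaluate $E(\Pi) + \lambda C(\Pi)$ through the value function.

```python
import itertools

def greedy_partition(d, edges, objective, lam=0.1):
    """Greedy bottom-up merge: start from singletons and repeatedly merge the
    pair of blocks that most reduces objective(Pi, lam), restricted to block
    pairs joined by at least one edge of the interaction graph G."""
    partition = [frozenset([i]) for i in range(d)]
    best = objective(partition, lam)
    while True:
        candidate = None
        for a, b in itertools.combinations(partition, 2):
            # Only merge blocks connected in G (some edge crosses a-b).
            if not any((i, j) in edges or (j, i) in edges for i in a for j in b):
                continue
            merged = [S for S in partition if S not in (a, b)] + [a | b]
            score = objective(merged, lam)
            if score < best:
                best, candidate = score, merged
        if candidate is None:
            return partition  # no admissible merge improves the objective
        partition = candidate

def toy_objective(partition, lam):
    """E(Pi) + lam*C(Pi) with a synthetic error: zero iff {0,1} shares a block."""
    error = 0.0 if any({0, 1} <= S for S in partition) else 4.0
    penalty = sum(len(S) * (len(S) - 1) // 2 for S in partition)
    return error + lam * penalty

print(greedy_partition(4, {(0, 1)}, toy_objective, lam=0.1))
```

On this toy problem the search merges exactly the interacting pair and stops, since no further merge is licensed by the graph or pays for its complexity penalty.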

3. Partition SHAP in Structured Domains: Time Series and Multiplicative Models

Partition SHAP generalizes naturally to domains exhibiting structure or locality, such as time series and two-part models.

WindowSHAP for Time Series (Nayebi et al., 2022): For time series $X \in \mathbb{R}^{D \times L}$, treating each variable–timepoint pair as a feature is computationally intractable. WindowSHAP partitions the sequence into windows (fixed or adaptive), treats each window as a player, and computes Shapley values for these blocks. Three partitioning strategies are outlined:

  • Stationary: Non-overlapping windows of fixed length. Complexity drops from $O(2^{DL})$ to $O(2^{DL/\ell})$, with window length $\ell$ controlling the tradeoff.
  • Sliding: Overlapping windows, aggregating attributions for points occurring in multiple windows.
  • Dynamic: Successively subdivides windows with high attributions, focusing resolution adaptively.

Atomic-level attributions are recovered by uniform redistribution of window-level SHAP values. This method satisfies local accuracy and Shapley axioms at the partition level.
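The stationary-window partitioning and uniform redistribution steps can be sketched directly. The window-level Shapley values themselves would come from running SHAP with windows as players, so the `window_shap` inputs below are placeholder numbers, not outputs of a fitted model.

```python
def stationary_windows(length, window_len):
    """Partition time indices 0..length-1 into non-overlapping fixed-length windows."""
    return [list(range(s, min(s + window_len, length)))
            for s in range(0, length, window_len)]

def redistribute(window_shap, windows):
    """Uniformly spread each window-level SHAP value over its timepoints,
    recovering atomic attributions whose total is unchanged (local accuracy)."""
    atomic = {}
    for phi, w in zip(window_shap, windows):
        for t in w:
            atomic[t] = phi / len(w)
    return atomic

windows = stationary_windows(6, 3)          # [[0, 1, 2], [3, 4, 5]]
atomic = redistribute([0.9, -0.3], windows)  # placeholder window attributions
print(windows)
print(atomic)
```

Because each window value is divided evenly across its points, the atomic attributions sum to the same total as the window-level ones, which is exactly the local-accuracy property preserved by the redistribution; the cost, as noted below, is that within-window heterogeneity is flattened.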

mSHAP for Two-Part Models (Matthews et al., 2021): In multiplicative models $f(x) = f_1(x) \cdot f_2(x)$, each $f_i$ has its own SHAP decomposition. Direct computation of SHAP for $f$ is intractable; mSHAP constructs feature contributions by expanding the product algebraically, collecting main and cross terms, and applying a bias correction for the empirical-mean discrepancy. Simulation shows mSHAP approximates kernelSHAP closely but is orders of magnitude faster.
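A simplified sketch of the additive bookkeeping behind this expansion: given per-model decompositions $f_1 = \mu_1 + \sum_i \phi^{(1)}_i$ and $f_2 = \mu_2 + \sum_i \phi^{(2)}_i$, the product is expanded and each cross term $\phi^{(1)}_i \phi^{(2)}_j$ must be allocated to features. Splitting cross terms evenly between $i$ and $j$, as below, is an assumption chosen for simplicity; mSHAP's actual weighting and bias correction are more refined.

```python
def mshap_product(mu1, phi1, mu2, phi2):
    """Naive product expansion for f = f1*f2, given SHAP decompositions
    f1 = mu1 + sum(phi1) and f2 = mu2 + sum(phi2). Each off-diagonal cross
    term phi1[i]*phi2[j] is split evenly between features i and j."""
    d = len(phi1)
    # Main terms: each feature's contribution scaled by the other model's baseline.
    contrib = [mu2 * phi1[i] + mu1 * phi2[i] for i in range(d)]
    # Cross terms from expanding (sum phi1)*(sum phi2).
    for i in range(d):
        for j in range(d):
            if i == j:
                contrib[i] += phi1[i] * phi2[i]
            else:
                contrib[i] += 0.5 * phi1[i] * phi2[j]
                contrib[j] += 0.5 * phi1[i] * phi2[j]
    baseline = mu1 * mu2
    return baseline, contrib

# Placeholder decompositions for a 2-feature example (assumptions).
base, contrib = mshap_product(2.0, [0.5, -1.0], 3.0, [1.0, 0.2])
f1, f2 = 2.0 + 0.5 - 1.0, 3.0 + 1.0 + 0.2
# Local accuracy: baseline + sum of contributions reproduces f1(x)*f2(x).
print(abs(base + sum(contrib) - f1 * f2) < 1e-9)
```

Whatever the cross-term allocation rule, the contributions sum exactly to $f_1(x) f_2(x) - \mu_1 \mu_2$, which is why the expansion preserves local accuracy while avoiding any Shapley computation on the product model itself.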

4. Empirical Studies and Applications

Experiments on synthetic and real data demonstrate the practical advantages of Partition SHAP methods.

Representative Results

| Scenario | Method | Fidelity ($R^2$) | Runtime | Interpretability |
|---|---|---|---|---|
| Bikesharing (10 features) | PartitionSHAP | High | Seconds | Compact blocks |
| COVID-19 survival | PartitionSHAP | High | Seconds | Explicit interactions |
| Time series (D=62, L=120) | WindowSHAP | High | ~80% less CPU | Window groups |
| Auto insurance (20M samples) | mSHAP | Local accuracy | Scales to 20M | Splits frequency/severity |

PartitionSHAP outperforms nSHAP and SHAP in F1 score for recovering true interacting feature blocks and attains high surrogate fidelity ($R^2 > 0.9$) across linear and nonlinear model families (Xu et al., 2024). WindowSHAP outperforms kernelSHAP and TimeSHAP on perturbation metrics, doubling the loss increase when top-attributed time windows are perturbed in RNN clinical models, while reducing computation by over 80% (Nayebi et al., 2022). mSHAP achieves close agreement in sign and rank with kernelSHAP at a fraction of the cost (Matthews et al., 2021).

5. Interpretability, Complexity, and Tradeoffs

Partition SHAP methods explicitly balance explanation fidelity with complexity, formalized by the penalization of allowed interactions. Blocks with many features yield fewer terms but increase explanation complexity; singletons reduce interactions but may miss key effects. The optimal partition trades off these concerns, modulated by $\lambda$. Statistical interaction testing ensures that blocks represent genuine dependencies, not spurious co-influence.

Uniform redistribution of block-level attributions (as in WindowSHAP) preserves local accuracy but may obscure within-group heterogeneity—a plausible implication is that partition granularity should be tuned to the application and validation metrics.

Partition SHAP encompasses and generalizes special cases:

  • Group-SHAP: Treats pre-defined feature groups as units, without adaptively optimizing the partition.
  • nSHAP: Reports attributions for all $2^d$ subsets, leading to exponentially sized outputs.
  • TimeSHAP: Prunes or groups early history in time-series under heuristic assumptions about irrelevance; WindowSHAP generalizes by allowing arbitrary, possibly adaptive, partitions.

Partition SHAP retains the local accuracy, efficiency, symmetry, and linearity properties of the original Shapley framework at the partition level, as exact Shapley values are computed on the partitioned player set (Nayebi et al., 2022, Xu et al., 2024).

6. Implementation, Scalability, and Further Directions

Implementations of Partition SHAP methods are available for specific domains, notably the "mshap" R package for mSHAP (Matthews et al., 2021). PartitionSHAP's greedy merge algorithm handles hundreds of features within seconds; exact enumeration is feasible for $d \leq 15$ (Xu et al., 2024). In time-series settings, appropriate window length or block size must be tuned to balance computational efficiency and explanatory resolution.

A plausible implication is that Partition SHAP methodology is applicable to other structured domains with feature correlations or hierarchical organizations (e.g., images, graphs). The current statistical pruning approach could be extended to non-pairwise interactions, or to causal value functions. Validation by task-specific perturbation metrics remains essential to ensure explanations reflect the underlying model dependence structure.
