Interaction Tensor SHAP
- Interaction Tensor SHAP is an advanced explainable AI method that computes exact higher-order Shapley–Taylor interactions using tensor train contractions.
- The framework reformulates discrete derivatives into tensor network contractions, achieving polynomial time complexity and NC² parallelism to overcome exponential barriers.
- Its tensor train decomposition preserves core Shapley axioms while scaling efficiently to high-dimensional models through bounded TT-ranks.
Interaction Tensor SHAP (IT SHAP) is a framework for computing exact Shapley‐Taylor interaction indices of arbitrary order in high‐dimensional machine learning models, formulated as a tensor network contraction to achieve polynomial time and polylogarithmic depth complexity under tensor train (TT) assumptions. IT SHAP is designed to overcome the exponential computational barrier inherent in existing approaches for higher‐order feature interactions, while maintaining the axiomatic exactness of the Shapley family of attribution methods (Hasegawa et al., 5 Dec 2025).
1. Limitations of Existing Shapley Interaction Methods
Traditional feature attribution techniques in explainable AI largely rely on Shapley value–based formulations, which decompose a model’s output into contributions from individual features. The Shapley–Taylor Interaction Index (STII) generalizes Shapley values to quantify main effects and interactions of arbitrary order by applying discrete derivatives to the value function $v$. The central expressions are:
- Discrete derivative: $\Delta_S v(T) = \sum_{W \subseteq S} (-1)^{|S| - |W|}\, v(T \cup W)$
- Closed-form STII (top order, $|S| = k$): $I^{\mathrm{ST}}_k(S) = \dfrac{k}{n} \sum_{T \subseteq N \setminus S} \dfrac{\Delta_S v(T)}{\binom{n-1}{|T|}}$
However, enumerating all subsets $T \subseteq N \setminus S$ for a fixed interaction set $S$ requires exponentially many ($O(2^n)$) value-function evaluations for general order $k$, which is prohibitive in high-dimensional settings. Marginal SHAP Tensor (MST) methods recast first-order effects as tensor contractions under TT structure, but do not extend to higher-order interactions.
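To make this combinatorial cost concrete, the following sketch computes the top-order STII directly from the two formulas above for a toy value function; the function names and the toy model are illustrative, not taken from the original work.

```python
from itertools import combinations
from math import comb

def discrete_derivative(v, S, T):
    """Delta_S v(T) = sum over W subseteq S of (-1)^{|S|-|W|} * v(T ∪ W)."""
    S, T = set(S), set(T)
    total = 0.0
    for r in range(len(S) + 1):
        for W in combinations(S, r):
            total += (-1) ** (len(S) - len(W)) * v(T | set(W))
    return total

def stii_top_order(v, n, S):
    """Closed-form STII for |S| = k: (k/n) * sum_T Delta_S v(T) / C(n-1, |T|)."""
    k = len(S)
    rest = [i for i in range(n) if i not in S]
    total = 0.0
    for r in range(len(rest) + 1):
        for T in combinations(rest, r):   # 2^(n-k) subsets: exponential in n
            total += discrete_derivative(v, S, T) / comb(n - 1, len(T))
    return k / n * total

# Toy value function with an explicit pairwise interaction between features 0 and 1.
def toy_value(T):
    return 2.0 * (0 in T) + 1.0 * (1 in T) + 3.0 * (0 in T and 1 in T)

print(stii_top_order(toy_value, n=4, S=(0, 1)))   # 3.0: recovers the pairwise interaction weight
```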
2. Mathematical Formalism: Value and Weight Tensors
IT SHAP reformulates STII exactly in terms of tensor contractions involving two central objects:
- Value Tensor $V$ encodes the model output under feature interventions. For a routing index $r \in \{0,1\}^n$, which selects which features to keep or impute, the entry $V_r$ stores the (background-averaged) model output under the corresponding intervention pattern.
- Weight Tensor $W$, a modified weighted coalitional tensor (MWCT), assigns the appropriate combinatorial weights rolled out from the closed-form STII expression.
The contraction for order-$k$ interactions is
$$I^{\mathrm{ST}}_k(S) = \sum_{r} W^{(S)}_r \, V_r,$$
where $r$ indexes all routing choices over the $n$ features.
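A minimal dense (non-TT) sketch of this contraction for a small $n$ follows; it assumes a binary keep/impute routing per feature and a single baseline in place of a background distribution, and its weight-tensor entries are taken directly from the closed-form STII coefficients rather than the paper's MWCT construction. All names (f, x_explain, x_baseline) are illustrative.

```python
import numpy as np
from itertools import product
from math import comb

n, S = 3, (0, 1)                         # 3 features, pairwise interaction set {0, 1}
k = len(S)

def f(x):                                # toy model with a 4 * x0 * x1 interaction
    return x[0] + 2 * x[1] + 4 * x[0] * x[1] + 0.5 * x[2]

x_explain = np.array([1.0, 1.0, 1.0])    # instance being explained
x_baseline = np.array([0.0, 0.0, 0.0])   # single baseline standing in for a background distribution

# Value tensor: one entry per routing index r in {0,1}^n (1 = keep feature, 0 = impute).
V = np.zeros((2,) * n)
# Weight tensor (MWCT analogue): STII coefficient attached to each routing index.
W = np.zeros((2,) * n)
for r in product((0, 1), repeat=n):
    x = np.where(np.array(r) == 1, x_explain, x_baseline)
    V[r] = f(x)
    kept = [i for i in range(n) if r[i] == 1]
    w_in_S = sum(1 for i in kept if i in S)        # |A ∩ S|
    t_out_S = sum(1 for i in kept if i not in S)   # |A \ S|
    W[r] = (k / n) * (-1) ** (k - w_in_S) / comb(n - 1, t_out_S)

# Order-k interaction as a full tensor contraction: sum_r W[r] * V[r].
stii = np.tensordot(W, V, axes=n)
print(float(stii))   # ~4.0: the coefficient of the x0*x1 interaction in f
```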
3. Tensor Train Representation and Polynomial Complexity
A key advance of IT SHAP is showing that the weight tensor $W$ admits an exact TT decomposition provided its combinatorial weighting function is computable by a finite-state prefix recursion. Specifically, the TT format expresses an $n$-mode tensor $\mathcal{T}$ as a chain $\mathcal{T}_{i_1 \cdots i_n} = G_1[i_1]\, G_2[i_2] \cdots G_n[i_n]$, where each core $G_m[i_m] \in \mathbb{R}^{r_{m-1} \times r_m}$ with $r_0 = r_n = 1$, and the $r_m$ are the TT-ranks.
The finite-state prefix construction ensures that all TT-ranks of $W$ are bounded by the number of reachable prefix states, which grows only polynomially (Lemma 4.6), dramatically reducing complexity compared to the exponential scaling of naïve enumeration.
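The chain-product structure can be illustrated with a generic TT entry evaluation. This sketch is a standard TT evaluation routine, not the paper's specific finite-state core construction; the rank-1 example is illustrative.

```python
import numpy as np

def tt_entry(cores, index):
    """Evaluate one entry of a TT-format tensor.

    cores[m] has shape (r_{m-1}, 2, r_m) with r_0 = r_n = 1; `index` is the
    multi-index (i_1, ..., i_n). The entry is the chain product
    G_1[i_1] @ G_2[i_2] @ ... @ G_n[i_n], a product of small matrices,
    so the cost per entry is O(n * r^2) rather than touching 2^n entries.
    """
    vec = np.ones((1,))
    for core, i in zip(cores, index):
        vec = vec @ core[:, i, :]        # (1, r_{m-1}) @ (r_{m-1}, r_m)
    return float(vec[0])

# Rank-1 illustration: a separable tensor T[i1,...,in] = prod_m g_m[i_m].
rng = np.random.default_rng(0)
g = rng.random((4, 2))                              # per-mode factors, n = 4
cores = [g_m.reshape(1, 2, 1) for g_m in g]         # all TT-ranks equal to 1
idx = (1, 0, 1, 1)
assert np.isclose(tt_entry(cores, idx), np.prod([g[m, idx[m]] for m in range(4)]))
```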
4. Algorithmic Workflow for IT SHAP Computation
Under TT assumptions for the model, background distribution tensor, and weight tensor, IT SHAP computation proceeds by parallel contraction of per‐mode TT cores:
- Inputs: TT decompositions of the model, the background distribution tensor, and the weight tensor $W$.
- Precomputation: router cores that select or impute each feature.
- For each mode $m$, compute:
  - the mode-$m$ MST contraction of the model and background TT cores, and
  - the weighting step, contracting the routing index with the mode-$m$ TT core of $W$.
- Output: the chain of contracted per-mode cores is the TT representation of the interaction tensor.
Individual interaction terms are extracted via TT slicing and chain products, with cost polynomial in $n$ and the TT-ranks sequentially, or polylogarithmic depth (NC) in parallel.
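The per-mode contraction pattern can be sketched with a standard TT inner-product sweep, assuming the value tensor and the weight tensor are already given as TT cores of mode size two. This illustrates the polynomial-cost chain contraction but is not the paper's exact algorithm, which interleaves router, model, background, and weight cores.

```python
import numpy as np

def tt_inner_product(w_cores, v_cores):
    """Contract two TT tensors over all routing indices: sum_r W[r] * V[r].

    Each core has shape (r_prev, 2, r_next). One sweep over the modes carries
    a small (rank x rank) boundary matrix, so the cost is polynomial in n and
    the TT-ranks; the resulting chain of per-mode matrices can also be
    multiplied as a balanced tree for polylogarithmic parallel depth.
    """
    boundary = np.ones((1, 1))
    for Wc, Vc in zip(w_cores, v_cores):
        # Sum over this mode's routing index i and both incoming rank indices.
        boundary = np.einsum('ab,aic,bid->cd', boundary, Wc, Vc)
    return float(boundary[0, 0])

# Check against the dense contraction on a small random example (n = 5, ranks 3).
rng = np.random.default_rng(1)
n, rank = 5, 3

def random_tt(n, rank):
    shapes = [(1 if m == 0 else rank, 2, 1 if m == n - 1 else rank) for m in range(n)]
    return [rng.standard_normal(s) for s in shapes]

def tt_to_dense(cores):
    dense = cores[0]                      # shape (1, 2, r)
    for core in cores[1:]:
        dense = np.tensordot(dense, core, axes=([dense.ndim - 1], [0]))
    return dense.reshape((2,) * n)

W_tt, V_tt = random_tt(n, rank), random_tt(n, rank)
assert np.isclose(tt_inner_product(W_tt, V_tt),
                  np.sum(tt_to_dense(W_tt) * tt_to_dense(V_tt)))
```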
Complexity Table
| Item | STII (Naïve) | IT SHAP (TT) |
|---|---|---|
| Time complexity | exponential in $n$ | polynomial in $n$ and TT-ranks |
| Space complexity | exponential in $n$ | polynomial in $n$ and TT-ranks |
| TT-ranks | exponential in $n$ | polynomially bounded (weight tensor and overall) |
| Parallel complexity | exponential barrier | NC (polylogarithmic depth) |
| Essential operation | subset enumeration | TT contraction, rank‐dependent |
5. Axiomatic Exactness and Guarantees
IT SHAP retains the five core axiomatic properties of STII (linearity, dummy, symmetry, efficiency, and interaction distribution), ensuring rigorous attribution semantics. For first-order interactions ($k = 1$), it exactly matches the MST/Shapley value decomposition. The framework thus provides a unified formalization for both main and higher-order effects with exactness inherited from STII.
6. Illustrative Construction and Scaling Implications
For explicit illustration, consider $n = 3$ features and a pairwise interaction set $S$: the routing index then ranges over $2^3 = 8$ configurations. For this $S$, the MWCT expansion recovers the expected discrete derivative, and the TT-core construction yields small, bounded ranks. The finite state at position $m$ is determined by the running count of selected features and flags for membership in $S$, demonstrating polynomial TT-rank scaling.
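The sketch below enumerates the eight routing configurations for this small case and shows that the coefficient of each configuration depends only on a small, prefix-countable state. The weight formula used is the closed-form STII coefficient from Section 1, standing in for the paper's MWCT parameterization; the feature numbering is illustrative.

```python
from itertools import product
from math import comb

n, S = 3, (0, 1)          # illustrative choice: three features, interaction set {0, 1}
k = len(S)

for r in product((0, 1), repeat=n):                 # the 2^3 = 8 routing configurations
    # Prefix state: (# kept features outside S, # kept features inside S).
    # The coefficient of a configuration depends only on this final state, so a
    # left-to-right scan with bounded state suffices -- the source of the bounded
    # TT-ranks in the weight tensor's finite-state construction.
    kept_outside = sum(ri for i, ri in enumerate(r) if i not in S)
    kept_inside = sum(ri for i, ri in enumerate(r) if i in S)
    weight = (k / n) * (-1) ** (k - kept_inside) / comb(n - 1, kept_outside)
    print(r, "state", (kept_outside, kept_inside), "weight", round(weight, 3))
```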
A plausible implication is that, whenever empirical models and data distributions exhibit TT‐ranks in the low hundreds, IT SHAP is practical up to hundreds of dimensions on commodity hardware.
7. Practical Impact and Future Directions
By reformulating Shapley–Taylor interactions as TT-structured tensor contractions, IT SHAP renders tractable the analysis of higher-order interactions in large black-box models. The approach is foundational for scalable interaction-aware explainable AI (XAI), enabling a granular decomposition of feature interplay previously infeasible in deep and high-dimensional architectures. Theoretical results guarantee polynomial time and NC parallelism under TT-rank conditions (Theorem 4.8). For general tensor networks (TN), contraction is #P-hard, so the TT structure is essential for scalability.
Future research directions include characterizing TT-rank bounds for real model/distribution pairs, error analysis for background estimation, empirical benchmarking for large $n$, and extensions to alternative tensor formats such as Tucker and CP. Preliminary results indicate feasibility for dimensions in the hundreds, provided TT-ranks remain manageable.
In summary, Interaction Tensor SHAP establishes the first provably exact, scalable, and axiomatically consistent framework for high‐order Shapley–Taylor interactions, formulated as polynomial‐time tensor train contractions (Hasegawa et al., 5 Dec 2025).