
Tensorization & Network Decompositions

Updated 26 December 2025
  • Tensorization and tensor network decompositions are techniques that reshape dense matrices into high-order tensors, unlocking methods like TT, Tucker, and CP for efficient, interpretable modeling.
  • These decompositions leverage latent multilinear structures to achieve dramatic parameter compression while maintaining high performance in tasks like neural network optimization.
  • Advanced algorithms such as TT-SVD and ALS facilitate scalable implementations, offering practical solutions for compression, feature tracking, and quantum simulation applications.

Tensorization and tensor network decompositions form the mathematical and algorithmic backbone for compressing, analyzing, and interpreting high-dimensional data and neural networks. Tensorization refers to transforming dense matrices or vectors into higher-order tensors, thereby enabling the application of powerful tensor network (TN) methods such as Tensor Train (TT/MPS), Tucker, Canonical Polyadic (CP), and more general TN topologies. These decompositions capitalize on latent multilinear structure, achieving extreme parameter compression, inducing interpretable internal representations, and offering new algorithmic levers distinct from classical dense models (Hamreras et al., 26 May 2025).

1. Fundamentals of Tensorization and Network Decomposition

Tensorization is the process by which a dense weight matrix $W \in \mathbb{R}^{m \times n}$ is reshaped into a higher-order tensor $\mathcal{W} \in \mathbb{R}^{I_1 \times \cdots \times I_d}$ with $I_1 \cdots I_d = m \cdot n$, typically by splitting the row and column indices into multi-indices. This facilitates structured decomposition and allows representation via tensor networks: collections of low-order core tensors contracted over internal indices ("bonds") (Hamreras et al., 26 May 2025, Sengupta et al., 2022, Phan et al., 2016, Cichocki, 2014, Cichocki, 2014).
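As a concrete illustration, a minimal NumPy sketch of this reshaping step is shown below; the 1024×1024 layer size and the 8·16·8 index factorizations are assumed purely for illustration, not taken from any cited work.

```python
import numpy as np

# Minimal sketch: tensorize a dense 1024 x 1024 weight matrix into an
# order-6 tensor by splitting each index into three factors (assumed
# factorizations 1024 = 8*16*8 for both rows and columns).
m_factors = (8, 16, 8)   # I_1, I_2, I_3 for the row index
n_factors = (8, 16, 8)   # J_1, J_2, J_3 for the column index

W = np.random.randn(int(np.prod(m_factors)), int(np.prod(n_factors)))

# Reshape into (I_1, I_2, I_3, J_1, J_2, J_3), then interleave row/column
# factors into paired physical modes (I_1, J_1), (I_2, J_2), (I_3, J_3),
# one common convention for matrix product operator (MPO) layers.
W_tensor = W.reshape(*m_factors, *n_factors)
W_tensor = W_tensor.transpose(0, 3, 1, 4, 2, 5)

print(W_tensor.shape)  # (8, 8, 16, 16, 8, 8)
```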

Canonical decompositions include:

  • Tensor Train (TT/MPS):

$$\mathcal{W}_{i_1,\dots,i_d} = \sum_{\alpha_1=1}^{r_1}\cdots\sum_{\alpha_{d-1}=1}^{r_{d-1}} G^{(1)}_{i_1,\alpha_1}\, G^{(2)}_{\alpha_1,i_2,\alpha_2} \cdots G^{(d)}_{\alpha_{d-1}, i_d}$$

with TT-ranks $r_k$ and cores $G^{(k)}$, which are order-3 once the boundary ranks $r_0 = r_d = 1$ are made explicit; a reconstruction sketch follows this list.

  • Tucker:

$$\mathcal{W} = \mathcal{G} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_d U^{(d)}$$

with core $\mathcal{G}$ of shape $r_1 \times \cdots \times r_d$ and factor matrices $U^{(k)}$.

  • Canonical Polyadic (CP/PARAFAC):

$$\mathcal{W} = \sum_{r=1}^R \lambda_r\, u_r^{(1)} \circ u_r^{(2)} \circ \cdots \circ u_r^{(d)}$$

where each $u_r^{(k)} \in \mathbb{R}^{I_k}$ is a factor vector.

  • Tensor Ring (TR): a cyclic generalization of TT, defined as

$$X(i_1,\dots,i_d) = \mathrm{Tr}\!\left[ G_1(:, i_1, :)\, G_2(:, i_2, :) \cdots G_d(:, i_d, :) \right]$$

with cyclic contraction and invariance under cyclic permutation of the modes (Zhao et al., 2016).
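To make the TT format above concrete, the following minimal NumPy sketch contracts a list of TT cores back into the full tensor; the physical and bond dimensions are arbitrary illustrative choices.

```python
import numpy as np

def tt_to_full(cores):
    """Contract TT cores G_k of shape (r_{k-1}, I_k, r_k), with r_0 = r_d = 1,
    back into the full tensor of shape (I_1, ..., I_d)."""
    full = cores[0]                      # (1, I_1, r_1)
    for core in cores[1:]:
        # Contract the trailing bond of `full` with the leading bond of `core`.
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))    # drop the dummy boundary bonds

# Illustrative random TT with physical dims (4, 5, 6) and bond dims (1, 3, 2, 1).
dims, ranks = [4, 5, 6], [1, 3, 2, 1]
cores = [np.random.randn(ranks[k], dims[k], ranks[k + 1]) for k in range(3)]
X = tt_to_full(cores)
print(X.shape)  # (4, 5, 6)
```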

Tensor networks are visualized as graphs where nodes are tensors and edges are contracted indices. This graphical notation clarifies both structure and contraction sequence (Evenbly, 2022, Sengupta et al., 2022).

2. Bond Dimensions, Latent Spaces, and Internal Representations

Central to all TNs is the concept of bond indices, i.e., summed internal indices with associated bond dimensions $r_k$. In the TT format, each $r_k$ determines the correlation capacity between left and right groupings of tensor modes and induces a novel latent space not present in the original dense formulation. This introduces rich intermediate representations: every bond in the decomposition carries a latent feature vector through the network (Hamreras et al., 26 May 2025, Sengupta et al., 2022).

The mathematical structure of the TT network enables inspection of bond activation trajectories for input batches, allowing the study of progressive feature formation at various granularities. Gauge transformations (local basis changes along bonds) and variable matricizations (reshaping choices) provide multiple, equally valid but interpretively distinct decompositions, which are valuable for mechanistic interpretability (Hamreras et al., 26 May 2025, Phan et al., 2016).

In practice, the TT decomposition can be interpreted as a sequence ("stack") of sparse linear maps between bond spaces and the corresponding physical (data) indices, with bond activations representing latent evolution at each step.
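The following sketch illustrates this reading of a TT as a stack of linear maps: it contracts one core at a time against per-mode input vectors and records the intermediate bond-space activations. The per-mode input representation and all shapes are assumptions made for illustration.

```python
import numpy as np

def bond_trajectory(cores, mode_inputs):
    """Contract TT cores (r_{k-1}, I_k, r_k) with one input vector per physical
    mode, recording the intermediate bond-space activations, i.e. the latent
    vector carried along each bond."""
    message = np.ones(1)                 # boundary bond r_0 = 1
    activations = []
    for core, x in zip(cores, mode_inputs):
        # (r_{k-1},) x (r_{k-1}, I_k, r_k) x (I_k,) -> (r_k,)
        message = np.einsum('a,aib,i->b', message, core, x)
        activations.append(message.copy())
    return activations                   # last entry has shape (r_d,) = (1,)

dims, ranks = [4, 5, 6], [1, 3, 2, 1]
cores = [np.random.randn(ranks[k], dims[k], ranks[k + 1]) for k in range(3)]
inputs = [np.random.randn(d) for d in dims]
for k, a in enumerate(bond_trajectory(cores, inputs), start=1):
    print(f"bond {k}: shape {a.shape}")
```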

3. Parameter Compression and Model Scaling

Tensor network decompositions yield substantial parameter reductions:

  • Dense Layer: $m \cdot n$ parameters.
  • TT/MPO Layer: $\sum_{k=1}^d r_{k-1} I_k r_k$ parameters (with $r_0 = r_d = 1$).
  • Tucker Kernel with modes $I_x, I_y, I_w, I_h$: storage is the sum of the factor matrices and the core, $(I_x r_x + I_y r_y + I_w r_w + I_h r_h) + r_x r_y r_w r_h$.

For bond/rank values that are small relative to the full tensor dimensions, the resulting parameter count can be orders of magnitude smaller than that of the corresponding dense layer (Hamreras et al., 26 May 2025, Cichocki, 2014, Zhao et al., 2016).
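A worked count under assumed shapes (a 1024×1024 layer, row and column indices each split as 8·16·8, and a uniform internal bond dimension of 16) illustrates the scale of the reduction:

```python
# Illustrative parameter count under assumed shapes: a 1024 x 1024 dense
# layer versus a TT/MPO factorization in which core k carries physical
# dimension I_k * J_k and all internal bond dimensions are r = 16
# (with boundary ranks r_0 = r_3 = 1).
row_factors, col_factors = (8, 16, 8), (8, 16, 8)
ranks = (1, 16, 16, 1)

dense_params = 1024 * 1024
tt_params = sum(
    ranks[k] * row_factors[k] * col_factors[k] * ranks[k + 1]
    for k in range(3)
)
print(dense_params, tt_params, dense_params / tt_params)
# 1048576 vs 1024 + 65536 + 1024 = 67584, roughly a 15.5x reduction
```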

Tensorized layers admit unique scaling strategies:

  • Increase width via the physical dimensions $I_k$ or by adding cores (TT).
  • Increase correlation capacity via the bond dimensions $r_k$.
  • Increase depth by stacking separate TN layers, each with independent geometry.
  • Dynamic bond inflation (increasing $r_k$ mid-training as accuracy plateaus) is a lever absent in conventional architectures (Hamreras et al., 26 May 2025); see the sketch after this list.
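A minimal sketch of the bond-inflation lever mentioned above, assuming zero-padding of the two cores adjacent to the inflated bond, which leaves the represented tensor exactly unchanged (in practice small random values might be used instead so that the new directions receive gradient signal):

```python
import numpy as np

def inflate_bond(cores, k, new_rank):
    """Grow the bond between cores k and k+1 to `new_rank` by zero-padding.
    The extra slices contribute zero to the contraction, so the represented
    tensor is unchanged while trainable capacity is added."""
    left, right = cores[k], cores[k + 1]
    pad = new_rank - left.shape[-1]
    cores[k] = np.pad(left, ((0, 0), (0, 0), (0, pad)))       # (r_{k-1}, I, r_new)
    cores[k + 1] = np.pad(right, ((0, pad), (0, 0), (0, 0)))  # (r_new, I, r_{k+2})
    return cores

dims, ranks = [4, 5, 6], [1, 3, 2, 1]
cores = [np.random.randn(ranks[i], dims[i], ranks[i + 1]) for i in range(3)]
cores = inflate_bond(cores, 0, new_rank=5)   # grow the first internal bond from 3 to 5
print([c.shape for c in cores])
```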

4. Interpretability, Feature Tracking, and Inductive Bias

The presence of internal bonds and the modular decomposition structure enable tracking of the evolution of internal feature representations:

  • Bond-space trajectories: By recording intermediate bond activations across inputs, the emergence, bifurcation, and recombination of features can be studied in detail—an interpretability tool not available in standard dense architectures.
  • Gauge and ordering choices: Alternative gauge fixings and internal unfoldings enable different "time-lines" of feature decomposition, potentially correlating with meaningful algorithmic or semantic sub-processes (Hamreras et al., 26 May 2025).
  • Empirical studies: CNNs compressed with Tucker or CP factorizations typically retain >90% accuracy on ImageNet with 5–10× fewer parameters. TT-embedded Transformer models (e.g., TT-GPT) maintain low perplexity with high compression, and block-term decompositions yield 10–20× compression for LLM components with <1% performance loss (Hamreras et al., 26 May 2025, Singh et al., 21 Mar 2024).

Mechanistic insight is further enhanced by introducing gauge-obfuscating transformations (random orthogonal transformations on TT bonds) which decouple internal parameter structure from observed input-output behavior while maintaining output invariance—an asset for both interpretability and privacy (Monturiol et al., 10 Jan 2025).
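A sketch of this gauge freedom on a single TT bond, assuming a random orthogonal matrix $Q$ inserted as $Q Q^\top = I$ between two neighboring cores; the contracted tensor is unchanged while the individual cores are scrambled:

```python
import numpy as np

def tt_to_full(cores):
    """Contract TT cores (r_{k-1}, I_k, r_k) into the full tensor."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

def gauge_transform(cores, k, Q):
    """Absorb Q into core k and Q^T into core k+1 on the bond between them.
    For orthogonal Q this inserts Q Q^T = I, leaving the contraction unchanged."""
    cores = list(cores)
    cores[k] = np.einsum('aib,bc->aic', cores[k], Q)
    cores[k + 1] = np.einsum('cb,bid->cid', Q.T, cores[k + 1])
    return cores

dims, ranks = [4, 5, 6], [1, 3, 2, 1]
cores = [np.random.randn(ranks[i], dims[i], ranks[i + 1]) for i in range(3)]

Q, _ = np.linalg.qr(np.random.randn(3, 3))   # random orthogonal gauge on the first bond (r_1 = 3)
scrambled = gauge_transform(cores, 0, Q)

assert np.allclose(tt_to_full(cores), tt_to_full(scrambled))  # same function, different cores
```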

5. Algorithmic and Computational Properties

Standard algorithms underpinning tensor network decompositions include:

  • TT-SVD: Sequential truncated SVDs on mode unfoldings, extracting TT-cores recursively left-to-right or right-to-left; if each truncated SVD incurs error at most $\epsilon$, the overall approximation error is bounded by $\sqrt{d-1}\,\epsilon$ (Phan et al., 2016, Cichocki, 2014, Cichocki, 2014). A simplified implementation is sketched after this list.
  • ALS/DMRG: Iterative core (single or block) updates using least squares or SVDs, supporting global or local rank adaptivity, and offering robust convergence properties (Phan et al., 2016, Zhao et al., 2016).
  • Sampling-based ALS: Recent ALS algorithms utilize leverage-score sampling to reduce per-iteration cost below input size for arbitrary TN topologies, achieving sublinear scaling in data size, and matching deterministic ALS convergence rates under mild conditions (Malik et al., 2022).
  • TT Contraction Product: TT representations reduce the contraction of two high-order tensors along one mode from exponential to linear cost in dimension, independent of tensor order (Kisil et al., 2021).
  • Semi-Tensor Product Variants: Recent advances employ semi-tensor products to generalize mode products, yielding even more compact decompositions—e.g., semi-tensor train (STT) or semi-tensor ring (STR)—at negligible accuracy loss for deep networks (Zhao et al., 2021).
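As referenced in the TT-SVD item above, a simplified NumPy sketch follows; it truncates each SVD at a fixed maximum rank rather than at an error threshold $\epsilon$, which is a simplification of the usual formulation.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Minimal TT-SVD sketch: sweep left to right, unfolding the remainder and
    truncating each SVD at `max_rank`, returning cores of shape
    (r_{k-1}, I_k, r_k) with r_0 = r_d = 1."""
    dims = tensor.shape
    d = len(dims)
    cores, r_prev = [], 1
    rest = tensor.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(rest, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # Carry the remainder forward and expose the next physical mode.
        rest = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(rest.reshape(r_prev, dims[-1], 1))
    return cores

X = np.random.randn(4, 5, 6, 7)
cores = tt_svd(X, max_rank=3)
print([c.shape for c in cores])  # [(1, 4, 3), (3, 5, 3), (3, 6, 3), (3, 7, 1)]
```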

The impact on memory and runtime is dramatic: for example, TT/CP/Tucker decompositions can routinely reduce neural net weights by 5–100×, with matching or superior memory–FLOP–accuracy trade-offs compared to standard dense compression methods (Hamreras et al., 26 May 2025, Monturiol et al., 10 Jan 2025, Singh et al., 21 Mar 2024).

6. Generalizations, Advanced Topologies, and Connections

Tensor network decompositions admit a host of extensions for quantum, statistical, and machine learning applications:

  • Tensor Ring (TR): Removes the TT endpoint constraint $r_0 = r_d = 1$, is invariant under cyclic permutation of the modes, can represent every TT-expressible model (and more), and often yields better parameter efficiency and compression under permutation or noise (Zhao et al., 2016); an element-wise evaluation is sketched after this list.
  • Fully Connected TN (FCTN) and Latent Matrix TN (LMTN): The FCTN allows full inter-mode coupling at the cost of exponential parameter scaling; LMTN introduces latent-mode reduction matrices to achieve parameter and computation reduction while preserving the expressive power of FCTN (Yang et al., 2022).
  • Subset/Interaction Degree Decompositions: Interaction decomposition of polynomial feature maps enables explicit control over which degrees (feature monomials) contribute—supporting network design that eschews over-parameterization in favor of concise, informative subspaces (Convy et al., 2022).
  • Quantum/Entangled Topologies: Tensor network architectures underpin many quantum codes and maximally entangled state constructions (Pozsgay et al., 2023), and are widely used in quantum simulation (MPS, PEPS, MERA).
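As referenced in the TR item above, the trace-based element evaluation from Section 1 can be sketched directly; the dimensions below are arbitrary illustrative choices.

```python
import numpy as np

def tr_entry(cores, index):
    """Tensor-ring evaluation: X(i_1,...,i_d) is the trace of the product of
    the index-selected core slices G_k(:, i_k, :)."""
    prod = np.eye(cores[0].shape[0])
    for core, i in zip(cores, index):
        prod = prod @ core[:, i, :]
    return np.trace(prod)

# Random TR with physical dims (4, 5, 6) and uniform bond dimension 3
# (the cyclic bond closes the last core back onto the first).
dims, r = [4, 5, 6], 3
cores = [np.random.randn(r, d, r) for d in dims]
print(tr_entry(cores, (1, 2, 3)))
```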

Algorithmic frameworks generalize seamlessly to other TN topologies (tree, hierarchical, PEPS, MPO) under mild contraction and sample-tractability assumptions (Malik et al., 2022).

7. Challenges and Future Directions

Despite these advantages, tensorization and tensor network decompositions face practical and theoretical challenges:

  • Hardware and Software Bottlenecks: Mainstream libraries and accelerators (e.g., GPU BLAS) are optimized for dense and simple sparse patterns, with general TN contractions often bottlenecked by unoptimized einsum routines (Hamreras et al., 26 May 2025).
  • Model Selection and Hyperparameter Proliferation: Design space is combinatorially large (topology, ordering, core sizes, bond dimensions), with current practice relying on expensive heuristic exploration.
  • Theory of Inductive Bias: The circumstances under which TN inductive bias confers generalization benefit for specific modalities remain insufficiently characterized.
  • Integration with Quantization/Pruning: Standard quantization and pruning schemes are not directly compatible with tensorized weights, requiring co-designed algorithms (Hamreras et al., 26 May 2025).
  • End-to-End Tensorized Architectures: Achieving fully tensorized forward passes, with activations, nonlinearities, normalization, and even token streams remaining in TN form, demands new activation and normalization layer designs, local truncation-stable nonlinear operations, and TN-native hardware.

Open research directions include automated format/rank selection, co-designed hardware-software stacks for TN contraction, and the translation of theoretical insights regarding latent spaces, correlation structure, and information-theoretic compressibility into deployable frameworks (Hamreras et al., 26 May 2025).



The works cited above form the foundational and contemporary basis for tensorization and tensor network decomposition research, detailing both the underlying mathematical architectures and the practical considerations for modern machine learning and large-scale data analysis.
