
TR-ALSAR Algorithm Overview

Updated 2 December 2025
  • TR-ALSAR is a family of algorithms that fits low-rank tensor ring decompositions to multi-dimensional data using efficient, numerically stable ALS routines.
  • It mitigates issues such as intermediate data explosion and instability by incorporating QR-based techniques and chain contractions in the solution process.
  • Empirical evaluations demonstrate that variants like TR-ALS-SC and TR-ALS-QR offer faster convergence and robust performance on large-scale, ill-conditioned datasets.

Tensor Ring Alternating Least Squares with Advanced Reduction (TR-ALSAR) algorithms form a family of practical, numerically stable routines for fitting low-rank tensor ring (TR) decompositions to multi-dimensional data. They address computational challenges inherent to classical TR-ALS—including intermediate data explosion and numerical instability—by exploiting algebraic structure and QR-based stabilization. This approach yields efficient, scalable solutions applicable to large-scale and ill-conditioned tensor decomposition problems (Yu et al., 2022).

1. Mathematical Formulation of Tensor Ring Decomposition

Given an $N$th-order tensor $X \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, the TR decomposition expresses $X$ as the trace over a product of $N$ third-order core tensors:

$$X(i_1, \ldots, i_N) = \operatorname{Trace}\left[ G_1(i_1) \, G_2(i_2) \cdots G_N(i_N) \right]$$

where $G_n \in \mathbb{R}^{R_n \times I_n \times R_{n+1}}$ for $n = 1, \ldots, N$ with $R_{N+1} = R_1$, and $G_n(i_n) := G_n(:, i_n, :)$.
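As a concrete illustration of this definition, the sketch below evaluates entries of a TR decomposition by chaining the lateral core slices and taking the trace. The function names and the brute-force `tr_full` helper are our own, and the full reconstruction is only practical for small tensors:

```python
import numpy as np

def tr_entry(cores, idx):
    """Evaluate one entry X(i_1, ..., i_N) of a tensor ring decomposition.

    `cores` is a list of third-order arrays G_n of shape (R_n, I_n, R_{n+1})
    with R_{N+1} = R_1; `idx` is a tuple of mode indices.
    """
    # Chain the lateral slices G_n(i_n) = G_n[:, i_n, :] and take the trace.
    M = cores[0][:, idx[0], :]
    for G, i in zip(cores[1:], idx[1:]):
        M = M @ G[:, i, :]
    return np.trace(M)

def tr_full(cores):
    """Reconstruct the full tensor from its TR cores (small sizes only)."""
    shape = tuple(G.shape[1] for G in cores)
    X = np.empty(shape)
    for idx in np.ndindex(*shape):
        X[idx] = tr_entry(cores, idx)
    return X
```

With all TR ranks equal to 1, each slice is a $1 \times 1$ matrix, so the decomposition reduces to an outer product of vectors, which gives a quick sanity check.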

The goal is to minimize the Frobenius reconstruction error:

$$\min_{G_1, \ldots, G_N} \left\| \operatorname{TR}(\{G_n\}) - X \right\|_F^2$$

Alternating Least Squares (ALS) cyclically updates one core $G_n$ at a time, keeping the others fixed, via a least-squares subproblem formulated on appropriate unfoldings of $X$ and subchains $G^{\ne n}$ of the remaining cores.

2. Normal Equations and Subproblem Structure

For core $G_n$, the subproblem in unfolded form is:

$$\min_{G_{n(2)}} \left\| X_{[n]} - G_{n(2)} \, (G^{\ne n}_{[2]})^T \right\|_F^2$$

where $X_{[n]}$ is the mode-$n$ unfolding of $X$ and $G_{n(2)}$ is the mode-2 unfolding of $G_n$.

Setting the derivative to zero yields the normal equations:

$$G_{n(2)} \left( (G^{\ne n}_{[2]})^T G^{\ne n}_{[2]} \right) = X_{[n]} \, G^{\ne n}_{[2]}$$

Solving this $(R_n R_{n+1}) \times (R_n R_{n+1})$ linear system is efficient for small $R$, but direct formation of $G^{\ne n}_{[2]}$ is generally prohibitive for large-scale tensors.
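Assuming the subchain unfolding has already been formed, the normal-equations update is a small dense solve. The sketch below (names are illustrative, not the paper's code) shows the shape of that computation, with `A` standing in for $G^{\ne n}_{[2]}$:

```python
import numpy as np

def core_update_normal_eq(X_n, A):
    """One core update via the normal equations (illustrative sketch).

    Solves min_G || X_n - G @ A.T ||_F, where A stands in for the subchain
    unfolding G^{!=n}_[2] (shape: prod. of other mode sizes x R_n*R_{n+1})
    and X_n for the mode-n unfolding of X.
    """
    gram = A.T @ A    # small (R_n R_{n+1}) x (R_n R_{n+1}) Gram matrix
    rhs = X_n @ A     # the MTTSP-style right-hand side X_[n] G^{!=n}_[2]
    # G @ gram = rhs; gram is symmetric, so solve gram @ G.T = rhs.T.
    return np.linalg.solve(gram, rhs.T).T
```

When the data are exactly consistent and `A` has full column rank, the update recovers the generating core exactly, which makes a convenient correctness check.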

3. Coefficient Matrix Simplification: TR-ALS-SC

TR-ALS-SC leverages TR algebraic structure to factorize and contract the Gram matrices required in the normal equations, thus avoiding explicit computation of large unfoldings and their associated data explosion.

  • For each $j \ne n$, define the Gram tensor:

$$P_j := \sum_{i_j=1}^{I_j} G_j(i_j)^T \circ G_j(i_j)^T$$

with $P_j \in \mathbb{R}^{R_{j+1} \times R_{j+1} \times R_j \times R_j}$, where $\circ$ denotes the matrix outer product.

  • The cumulative Gram matrix is built via chains of contractions:

$$M_n = \left( P_{n-1} \times_{2,4}^{1,3} P_{n-2} \cdots \times_{2,4}^{1,3} P_{n+1} \right)_{\langle 2 \rangle}$$

with contraction operator $\times_{2,4}^{1,3}$ merging the matching $R$-modes.

The right-hand side of the normal equations is computed implicitly by means of a Matricized-Tensor-Times-Subchain Product (MTTSP), implemented as a sequence of small matrix multiplications. All steps avoid forming intermediate objects larger than the input tensor $X$ or the cores. The resulting system

$$G_{n(2)} \, M_n = \mathrm{RHS}_n$$

is solved per core update.
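The benefit of chaining small Gram-like objects can be checked on a closely related identity: the squared Frobenius norm of a TR tensor equals the trace of the chained per-mode transfer matrices, so it can be computed without ever forming the full tensor. The sketch below illustrates this contraction pattern; it is not the exact TR-ALS-SC routine, and the function name is ours:

```python
import numpy as np

def tr_norm_sq_by_contraction(cores):
    """||TR({G_n})||_F^2 via chained contractions of small Gram objects.

    Uses tr(A) * tr(B) = tr(A kron B): contracting the per-mode transfer
    matrices E_n = sum_i G_n(i) kron G_n(i) around the ring gives the
    squared Frobenius norm; every intermediate is at most R^2 x R^2.
    """
    E = None
    for G in cores:
        # E_n has shape (R_n^2, R_{n+1}^2), built from I_n small Kroneckers.
        En = sum(np.kron(G[:, i, :], G[:, i, :]) for i in range(G.shape[1]))
        E = En if E is None else E @ En
    return np.trace(E)
```

The same mixed-product trick is what lets the $P_j$ chain replace the explicit subchain Gram matrix in the SC variant.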

4. QR-Based Numerical Stabilization: TR-ALS-QR

For ill-conditioned or collinear-core scenarios, TR-ALS-QR stabilizes the ALS subproblems by casting them in orthogonal bases via QR factorizations:

  • Compute a mode-2 QR of each core: $G_n = R_n \times_2 Q_n$, where $Q_n$ is orthonormal and $R_n$ is triangular in its mode-2 unfolding.
  • Construct the subchain $V_n$ of concatenated $R_j$ (excluding $n$), then obtain its QR factorization $V_n = R_0 \times_2 Q_0$.
  • Form the projected tensor $Y$ by mode-wise multiplying $X$ with the conjugate transposes $Q_j^T$ for all $j \ne n$.
  • The TR-ALS-QR update for $G_{n(2)}$ solves the triangular system:

$$G_{n(2)} \, R_{0[2]}^T = Y_{[n]} \, Q_0$$

This approach guarantees that the least-squares solves are well-conditioned. The computational overhead from QR factorizations is moderate in practice, especially when $I \gg R^2$ and explicit formation of $V_n$ is avoided.
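The conditioning advantage comes from solving the subproblem in an orthogonal basis rather than through the Gram matrix, which squares the condition number. A minimal sketch of the idea, with `A` again standing in for the subchain matrix (names are illustrative):

```python
import numpy as np

def core_update_qr(X_n, A):
    """One core update via a thin QR of the subchain matrix (sketch).

    Solves min_G || X_n - G @ A.T ||_F without forming A.T @ A, so the
    condition number of the solve is cond(A) rather than cond(A)^2.
    """
    Q, R0 = np.linalg.qr(A)   # thin QR: A = Q @ R0, Q orthonormal
    B = X_n @ Q               # project the unfolded data onto range(A)
    # Optimality condition: G @ R0.T = B, a small triangular system.
    return np.linalg.solve(R0, B.T).T
```

For consistent data this returns the same core as the normal-equations update, but it remains accurate when `A` is nearly rank-deficient.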

A fourth, hybrid variant, TR-ALS-QRNE, combines the coefficient simplification of SC with QR stabilization by interleaving the two strategies within each update, further improving speed and stability.

5. Algorithmic Descriptions

The following table summarizes key steps in the three principal TR-ALSAR variants:

| Variant | Gram Construction | Solve Type | Stabilization |
|---|---|---|---|
| TR-ALS | Explicit unfolding | Normal equations | None |
| TR-ALS-SC | Chain contraction of $P_j$ | Normal equations | None |
| TR-ALS-QR | (Not needed) | Triangular system | QR orthogonalization |

Pseudocode for each variant cycles over the cores, forming the appropriate subchain data (either explicitly, via chain contractions, or QR bases), computes the right-hand side by MTTSP, performs a solve (normal equations or triangular system), and updates the core.
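A minimal order-3 version of this cycle, with explicit subchain formation and a dense least-squares solve standing in for the SC/QR machinery (our own sketch, suitable only for small tensors):

```python
import numpy as np

def tr_als_3(X, cores, n_sweeps=10):
    """Minimal ALS sweep structure for a 3rd-order tensor ring (sketch).

    `cores` holds G_n of shape (R_n, I_n, R_{n+1}) with R_4 = R_1. Each
    update forms the subchain of the two fixed cores explicitly and solves
    a dense least-squares problem; the TR-ALSAR variants replace exactly
    these steps with chain contractions or QR-based triangular solves.
    """
    cores = [G.copy() for G in cores]
    for _ in range(n_sweeps):
        for n in range(3):
            Gnext, Gprev = cores[(n + 1) % 3], cores[(n + 2) % 3]
            # Cyclic permutation keeps the TR model form (trace is cyclic).
            Xn = X.transpose(n, (n + 1) % 3, (n + 2) % 3).reshape(X.shape[n], -1)
            # Subchain: A[(j,k),(a,b)] = sum_c Gnext[b,j,c] * Gprev[c,k,a].
            A = np.einsum('bjc,cka->jkab', Gnext, Gprev)
            A = A.reshape(-1, Gprev.shape[2] * Gnext.shape[0])
            # Least-squares update of core n (its mode-2 unfolding).
            Gmat = np.linalg.lstsq(A, Xn.T, rcond=None)[0].T
            Ra, Rb = cores[n].shape[0], cores[n].shape[2]
            cores[n] = Gmat.reshape(X.shape[n], Ra, Rb).transpose(1, 0, 2)
    return cores
```

Because every core update exactly minimizes the objective over that core, the reconstruction error is non-increasing across sweeps.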

6. Computational Complexity and Memory

Three main cost components are considered: upfront initialization, per-iteration update cost, and memory footprint. Let $N$ be the tensor order, $I$ the uniform mode size, and $R$ the uniform TR rank.

Time complexity per sweep:

| Part / Method | TR-ALS | TR-ALS-SC | TR-ALS-QR |
|---|---|---|---|
| Upfront init | $O(NIR^2)$ | $O(NIR^2 + NIR^4)$ | $O(NIR^2 + NIR^4)$ |
| MTTSP | $O(NI^N R^2)$ | $O(NI^N R^2)$ | $O(NI^N R^2)$ |
| Gram construction | $O(NI^{N-1} R^4)$ | $O(NR^6)$ | n/a |
| System solve | $O(NIR^6)$ | $O(NIR^6)$ | $O(NIR^4)$ |
| QR factorization | n/a | n/a | $O(NR^{2N+2})^*$ |
| Other | $O(NI^{N-1} R^3)$ | $O(NR^6 + NI^{N-1} R^3)$ | $O(NR^{2N+1})$ |

$^*$Typically avoided for large $I$ by implicit computation.

Memory footprint:

  • Data tensor $X$: $O(I^N)$
  • Cores: $O(NIR^2)$
  • Gram-tensors $P_n$: $O(NR^4)$
  • Temporaries: $O(I^{N-1} R^2)$ (baseline), $O(R^{2N})$ (QR).
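Plugging in illustrative values $N = 4$, $I = 100$, $R = 10$ (our own example, not taken from the source) shows that both the baseline and the QR temporaries can rival the data tensor itself in size:

```python
# Illustrative element counts for N = 4, I = 100, R = 10 (our own example).
N, I, R = 4, 100, 10

data = I**N                       # X: 100,000,000 elements
cores = N * I * R**2              # all cores: 40,000 elements
grams = N * R**4                  # all Gram-tensors P_n: 40,000 elements
tmp_baseline = I**(N - 1) * R**2  # baseline temporaries: 100,000,000
tmp_qr = R**(2 * N)               # QR temporaries: 100,000,000

print(data, cores, grams, tmp_baseline, tmp_qr)
```

The cores and Gram-tensors are negligible next to the data, so the temporaries dominate whenever they are formed explicitly.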

7. Empirical Properties and Trade-Offs

Extensive experiments on synthetic and real data reveal the following characteristics:

  • TR-ALS-SC reduces per-iteration cost by roughly half compared to baseline TR-ALS, converges in the same number of iterations, and can achieve 2–3× faster wall-clock time for large $I$ and $N$.
  • TR-ALS-QR exhibits significant stability advantages on ill-conditioned or collinear-core problems, where TR-ALS and TR-ALS-SC can stagnate or produce inaccurate solutions, while TR-ALS-QR maintains robust convergence and lower errors.
  • On real imaging and video datasets (e.g., DC-Mall hyperspectral, "Park Bench," "Tabby Cat"), all methods yield identical reconstruction errors for $R = 3, \ldots, 10$, while TR-ALS-SC and TR-ALS-QRNE run 5–10× faster than TR-ALS; TR-ALS-QRNE is the fastest stable variant.

The selection of algorithmic variant is thus dictated by the desired balance of speed and numerical robustness:

  • TR-ALS-SC is optimal for well-conditioned data and maximizes speed.
  • TR-ALS-QR (and QRNE) ensure numerical stability, crucial when data are noisy or core collinearity is high, with only moderate computational overhead.
  • The hybrid QRNE provides near-SC speed and QR stability, without formation of large intermediate tensors (Yu et al., 2022).