
Rank-Adaptive HOOI Algorithm

  • The paper introduces an adaptive HOOI framework that automatically selects the minimal multilinear ranks to ensure the tensor approximation meets a specified relative error bound.
  • The method integrates an SVD-based per-mode rank truncation into the alternating update process, optimizing factor matrix estimation without prior rank knowledge.
  • The algorithm guarantees monotonic, finitely terminating convergence of the rank tuples and achieves computational performance competitive with classical HOOI and ALS approaches.

The rank-adaptive Higher-Order Orthogonal Iteration (HOOI) algorithm is a methodology for computing the truncated Tucker decomposition of higher-order tensors, enforcing a user-specified relative error bound and automatically selecting the minimal multilinear ranks necessary for the prescribed accuracy. The fundamental advance of the rank-adaptive HOOI approach is its ability to determine, at each iteration, the smallest mode-wise ranks that guarantee the approximation error remains within tolerance, without requiring prior specification of these ranks. This technique builds upon and extends the classical HOOI paradigm by embedding an SVD-based adaptive rank-truncation mechanism within the alternating update process (Xiao et al., 2021).

1. Problem Formulation: Truncated Tucker Decomposition with Error Control

Given a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, the objective is to construct factor matrices $U^{(n)} \in \mathbb{R}^{I_n \times r_n}$ (with $(U^{(n)})^\top U^{(n)} = I_{r_n}$) and a core tensor $\mathcal{G} \in \mathbb{R}^{r_1 \times \cdots \times r_N}$ so that the reconstructed Tucker tensor

$$\hat{\mathcal{X}} = \mathcal{G} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)}$$

obeys the prescribed relative Frobenius norm error bound:

$$\|\mathcal{X} - \hat{\mathcal{X}}\|_F \leq \varepsilon \|\mathcal{X}\|_F,$$

for a given tolerance $\varepsilon \in (0,1)$. Crucially, neither the multilinear ranks $r_1, \dots, r_N$ nor the factor matrices are fixed in advance; instead, the algorithm adaptively determines the minimal ranks such that the approximation satisfies the error constraint (Xiao et al., 2021).
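
Before turning to the algorithms, note that the feasibility test itself amounts to one reconstruction and one Frobenius-norm comparison. Below is a minimal numpy sketch (the helper names are illustrative assumptions, not code from the paper):

```python
import numpy as np

def tucker_reconstruct(G, factors):
    """Contract the core G with each factor matrix U^(n) along mode n."""
    X_hat = G
    for n, U in enumerate(factors):
        # mode-n product: contract the columns of U with mode n of X_hat
        X_hat = np.moveaxis(np.tensordot(U, X_hat, axes=(1, n)), 0, n)
    return X_hat

def satisfies_bound(X, G, factors, eps):
    """Relative Frobenius-norm error test from the problem formulation."""
    X_hat = tucker_reconstruct(G, factors)
    return np.linalg.norm(X - X_hat) <= eps * np.linalg.norm(X)
```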

2. Classical HOOI and the Need for Rank Adaptivity

The classical HOOI algorithm alternates updates of each mode-$n$ factor matrix, fixing all other factors and solving an orthogonality-constrained maximization problem to obtain the optimal $U^{(n)}$ for a specified rank $r_n$. The update seeks

$$U^{(n)} \leftarrow \operatorname{SVD}_{r_n}\left(B_{(n)}\right),$$

where $B_{(n)}$ is the mode-$n$ unfolding of the projected tensor after contracting $\mathcal{X}$ along all but the $n$-th mode with the respective factor matrices. This operation retains only the leading $r_n$ singular vectors of $B_{(n)}$. After a sweep updating all modes, the core tensor is recomputed by contracting $\mathcal{X}$ with the transpose of all updated mode matrices. In classical HOOI, the tuple $(r_1, \dots, r_N)$ is held fixed throughout the iterations; thus, guaranteeing a target error level relies on either overestimating these ranks or prior knowledge, both of which are often unavailable or inefficient (Xiao et al., 2021).
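
For concreteness, one fixed-rank mode-$n$ update can be written as the following numpy sketch (a minimal rendering of the update above; the function names are assumptions, not the paper's reference code):

```python
import numpy as np

def mode_unfold(T, n):
    """Mode-n unfolding: rows indexed by mode n, columns by the other modes."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def hooi_update_mode(X, factors, n, r_n):
    """Classical HOOI mode-n update with a fixed rank r_n."""
    # project X onto the current subspaces of every mode except n
    B = X
    for m, U in enumerate(factors):
        if m == n:
            continue
        B = np.tensordot(U.T, B, axes=(1, m))  # contract mode m with U^(m)^T
        B = np.moveaxis(B, 0, m)
    # keep the r_n leading left singular vectors of the mode-n unfolding
    U_left, _, _ = np.linalg.svd(mode_unfold(B, n), full_matrices=False)
    return U_left[:, :r_n]
```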

3. Rank Selection Mechanism in Adaptive HOOI

Rank-adaptive HOOI augments the standard alternating update scheme by introducing a per-mode rank-selection phase rooted in the SVD of the mode-unfolded projected tensor. For each mode $n$ and at each iteration $k$, before updating $U^{(n)}$, the following steps are performed:

  • Form the projected tensor $\mathcal{B}$ by contracting $\mathcal{X}$ along all modes except $n$ with the current factors.
  • Compute the SVD of its mode-$n$ unfolding $B_{(n)}$ to obtain singular values $\{\sigma_i\}$.
  • Select the minimal rank $r$ such that

$$\sum_{i>r} \sigma_i^2 \leq \|\mathcal{B}\|_F^2 - (1-\varepsilon^2)\|\mathcal{X}\|_F^2.$$

  • Set $r_n^{(k+1)} = r$, and update $U^{(n)}$ to be the first $r$ left singular vectors.

This mechanism ensures that the reconstructed tensor remains feasible for the original error constraint, while mode-wise ranks are non-increasing and automatically adapt to the data (Xiao et al., 2021).
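
The selection step reduces to a scan over the tail energies of the singular values. A minimal sketch, assuming `sigma` holds the singular values of $B_{(n)}$ and the squared Frobenius norms of $\mathcal{B}$ and $\mathcal{X}$ are available:

```python
import numpy as np

def select_rank(sigma, normB_sq, normX_sq, eps):
    """Smallest rank r whose discarded tail energy fits the error budget."""
    # slack available for the discarded tail energy
    budget = normB_sq - (1.0 - eps ** 2) * normX_sq
    # tails[i] = sum of sigma_j^2 over j > i (0-based), so keeping r = i + 1
    # singular values discards exactly tails[i]
    tails = np.concatenate([np.cumsum((sigma ** 2)[::-1])[::-1][1:], [0.0]])
    feasible = tails <= budget
    if not feasible.any():  # no rank satisfies the bound (infeasible iterate)
        return len(sigma)
    return int(np.argmax(feasible)) + 1
```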

4. Convergence Properties and Theoretical Guarantees

The rank-adaptive HOOI algorithm possesses two key theoretical properties:

  • Local Optimality of Rank Selection: For each mode $n$, with the other factors fixed, the rank choice above yields the smallest mode-$n$ rank that still achieves the global Frobenius-norm error bound.
  • Monotonic Convergence of Ranks: The rank tuples $R^{(k)} = (r_1^{(k)}, \dots, r_N^{(k)})$ observed after each full sweep over all modes satisfy

$$R^{(k+1)} \leq R^{(k)}$$

componentwise, and the sequence stabilizes in finitely many iterations. The underpinning arguments rely on orthogonal-invariance of the Frobenius norm and the Eckart–Young theorem, which shows that the adaptive rank selection corresponds to the minimal-rank truncation for feasibility at each step (Xiao et al., 2021).
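
As a sketch of the reasoning (standard identities for orthonormal Tucker factors, not a reproduction of the paper's proof), the per-mode rule is exactly the global feasibility condition rewritten in terms of singular values:

```latex
% With orthonormal factors, the approximation error decomposes as
\|\mathcal{X} - \hat{\mathcal{X}}\|_F^2 = \|\mathcal{X}\|_F^2 - \|\mathcal{G}\|_F^2,
% and in the mode-n subproblem the retained core energy equals
\|\mathcal{G}\|_F^2 = \sum_{i \le r} \sigma_i^2
                    = \|\mathcal{B}\|_F^2 - \sum_{i > r} \sigma_i^2.
% Hence \|\mathcal{X} - \hat{\mathcal{X}}\|_F \le \varepsilon\,\|\mathcal{X}\|_F
% is equivalent to
\sum_{i > r} \sigma_i^2 \;\le\; \|\mathcal{B}\|_F^2 - (1 - \varepsilon^2)\,\|\mathcal{X}\|_F^2,
% which is the selection rule; Eckart--Young gives minimality of the chosen r.
```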

5. Algorithmic Workflow

The rank-adaptive HOOI algorithm proceeds as follows:

  1. Initialization: Start with initial $U_0^{(n)}$ and $r_n^{0}$ (e.g., from t-HOSVD or randomly).
  2. Core Formation: Compute the initial core as $\mathcal{G}_0 = \mathcal{X} \times_1 (U_0^{(1)})^\top \cdots \times_N (U_0^{(N)})^\top$.
  3. Iteration: While $\|\mathcal{G}_k\|_F > \sqrt{1 - \varepsilon^2}\,\|\mathcal{X}\|_F$:
    • For each mode $n = 1$ to $N$:
      • Compute the contracted tensor $\mathcal{B}$ using the latest factors.
      • Unfold $\mathcal{B}$ in mode $n$ and compute its SVD.
      • Select the minimal $r_n$ such that the truncated tail energy in the singular values satisfies the feasibility constraint.
      • Update $U_{k+1}^{(n)}$ with the leading singular vectors.
    • Update the core tensor.
    • Increment iteration counter.
  4. Termination: When the core norm criterion holds, output all factors, ranks, and core.

The combination of adaptive rank shrinkage and SVD-based subproblem solution at each step yields an algorithm that is both locally optimal in its rank assignment and globally efficient in convergence to a feasible truncated Tucker representation (Xiao et al., 2021).
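
Putting the pieces together, the following self-contained numpy sketch mirrors the workflow above. It initializes with HOSVD-style factors and, for simplicity, stops when the rank tuple stabilizes rather than testing the core-norm criterion; all names are illustrative assumptions, not the paper's reference implementation:

```python
import numpy as np

def mode_product(T, A, n):
    """Mode-n product T x_n A, where A has shape (k, T.shape[n])."""
    return np.moveaxis(np.tensordot(A, T, axes=(1, n)), 0, n)

def mode_unfold(T, n):
    """Mode-n unfolding: rows indexed by mode n, columns by the other modes."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def rank_adaptive_hooi(X, eps, max_sweeps=50):
    N = X.ndim
    normX_sq = np.linalg.norm(X) ** 2
    # initialization: HOSVD-style factors from each mode unfolding
    factors = [np.linalg.svd(mode_unfold(X, n), full_matrices=False)[0]
               for n in range(N)]
    prev_ranks = tuple(U.shape[1] for U in factors)
    for _ in range(max_sweeps):
        for n in range(N):
            # contract X with all current factors except mode n
            B = X
            for m in range(N):
                if m != n:
                    B = mode_product(B, factors[m].T, m)
            U, sigma, _ = np.linalg.svd(mode_unfold(B, n), full_matrices=False)
            # smallest rank whose discarded tail energy fits the error budget
            budget = np.sum(sigma ** 2) - (1.0 - eps ** 2) * normX_sq
            tails = np.concatenate([np.cumsum((sigma ** 2)[::-1])[::-1][1:], [0.0]])
            r = int(np.argmax(tails <= budget)) + 1  # assumes a feasible iterate
            factors[n] = U[:, :r]
        ranks = tuple(U.shape[1] for U in factors)
        if ranks == prev_ranks:  # ranks stabilized; stop sweeping
            break
        prev_ranks = ranks
    # core tensor: contract X with the transposed factors
    G = X
    for n in range(N):
        G = mode_product(G, factors[n].T, n)
    return G, factors, prev_ranks
```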

6. Computational Complexity and Storage

Each iteration involves, for every mode $n$:

  • Tensor Contractions: $O(I_n J_n \bar{r})$ per mode, where $I = \prod_n I_n$, $J_n = I/I_n$, and $\bar{r}$ is a typical rank.
  • SVD Computations: For the mode-$n$ unfolding of size $I_n \times J_n$, the cost is $O(\min\{I_n, J_n\}\,\max\{I_n, J_n\}^2)$ per mode.
  • As the algorithm proceeds and ranks decrease monotonically, the computation per sweep becomes cheaper.
  • Storage Requirement: Dominated by either the full tensor $\mathcal{X}$ (if dense) or the compressed form:

$$O\left(r_1 r_2 \cdots r_N + \sum_n I_n r_n\right)$$

depending on representation (Xiao et al., 2021).
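
As a quick back-of-the-envelope check of the compressed-storage formula, with illustrative dimensions and ranks (not values from the paper's experiments):

```python
import math

dims = (200, 200, 200)   # I_1, I_2, I_3 (illustrative)
ranks = (20, 20, 20)     # r_1, r_2, r_3 (illustrative)

dense = math.prod(dims)                                       # entries of X
compressed = math.prod(ranks) + sum(I * r for I, r in zip(dims, ranks))
print(dense, compressed, dense / compressed)                  # 8000000 20000 400.0
```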

7. Comparative Analysis with Classical HOOI and ALS

The table below summarizes the distinctions between the classical HOOI, the Alternating Least Squares (ALS) method, and the rank-adaptive HOOI:

| Algorithm | Rank Selection Method | Orthonormality Constraint | Adaptivity to Error Tolerance |
|---|---|---|---|
| Classical HOOI | Fixed in advance | Enforced via SVD | No |
| ALS | Not enforced per iteration | Not strictly enforced | No |
| Rank-adaptive HOOI | SVD-based, per mode per iteration | Enforced via SVD | Yes |

Unlike classical HOOI, which operates with static preassigned ranks, the rank-adaptive variant dynamically minimizes mode-wise ranks while maintaining feasibility of the error bound. In contrast to ALS, which solves unconstrained least-squares subproblems and may lack strict orthogonality or principled in situ rank adaptation, adaptive HOOI performs its rank trimming directly on the SVD of each orthogonality-constrained subproblem. Empirical results indicate that the rank-adaptive strategy produces (i) smaller final multilinear ranks, (ii) closer approximation to the input tensor for a given $\varepsilon$, (iii) monotonic non-increasing updates of the tensor ranks, and (iv) timing performance that is competitive with or superior to t-HOSVD, s-HOSVD, greedy HOSVD, and classical ALS across the tested benchmarks (Xiao et al., 2021).

In summary, rank-adaptive HOOI extends the orthogonality-preserving and convergence properties of HOOI to the regime where error tolerance is specified and ranks must be minimized, providing a provably locally optimal and monotonically convergent framework for truncated Tucker decomposition (Xiao et al., 2021).
