Rank-Adaptive HOOI Algorithm
- The paper introduces an adaptive HOOI framework that automatically selects the minimal multilinear ranks to ensure the tensor approximation meets a specified relative error bound.
- The method integrates an SVD-based per-mode rank truncation into the alternating update process, optimizing factor matrix estimation without prior rank knowledge.
- The algorithm guarantees monotonic convergence of ranks and competitive computational performance compared to classical HOOI and ALS approaches.
The rank-adaptive Higher-Order Orthogonal Iteration (HOOI) algorithm is a methodology for computing the truncated Tucker decomposition of higher-order tensors, enforcing a user-specified relative error bound and automatically selecting the minimal multilinear ranks necessary for the prescribed accuracy. The fundamental advance of the rank-adaptive HOOI approach is its ability to determine, at each iteration, the smallest mode-wise ranks that guarantee the approximation error remains within tolerance, without requiring prior specification of these ranks. This technique builds upon and extends the classical HOOI paradigm by embedding an SVD-based adaptive rank-truncation mechanism within the alternating update process (Xiao et al., 2021).
1. Problem Formulation: Truncated Tucker Decomposition with Error Control
Given a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, the objective is to construct factor matrices $U_n \in \mathbb{R}^{I_n \times r_n}$ (with $U_n^\top U_n = I_{r_n}$) and a core tensor $\mathcal{G} \in \mathbb{R}^{r_1 \times \cdots \times r_N}$ so that the Tucker reconstructed tensor
$$\hat{\mathcal{A}} = \mathcal{G} \times_1 U_1 \times_2 U_2 \cdots \times_N U_N$$
obeys the prescribed relative Frobenius-norm error bound
$$\|\mathcal{A} - \hat{\mathcal{A}}\|_F \le \varepsilon \, \|\mathcal{A}\|_F$$
for a given tolerance $\varepsilon > 0$. Crucially, neither the multilinear ranks $(r_1, \ldots, r_N)$ nor the factor matrices are fixed in advance; instead, the algorithm adaptively determines the minimal ranks such that the approximation satisfies the error constraint (Xiao et al., 2021).
2. Classical HOOI and the Need for Rank Adaptivity
The classical HOOI algorithm alternates the updates of each mode-$n$ factor matrix, fixing all other factors and solving an orthogonality-constrained maximization problem to obtain the optimal $U_n$ for a specified rank $r_n$. The update seeks
$$U_n = \arg\max_{U^\top U = I_{r_n}} \left\| U^\top B_{(n)} \right\|_F,$$
where $B_{(n)}$ is the mode-$n$ unfolding of the projected tensor $\mathcal{B} = \mathcal{A} \times_{m \neq n} U_m^\top$, obtained by contracting along all but the $n$-th mode with the respective factor matrices. This operation retains only the leading $r_n$ left singular vectors of $B_{(n)}$. After a sweep updating all modes, the core tensor is recomputed by contracting $\mathcal{A}$ with the transposes of all updated mode matrices. In classical HOOI, the rank tuple $(r_1, \ldots, r_N)$ is held fixed throughout the iterations; thus, guaranteeing a target error level relies on either overestimating these ranks or prior knowledge, both of which are often unavailable or inefficient (Xiao et al., 2021).
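The orthogonality-constrained subproblem above reduces to a thin SVD of the projected unfolding. The following minimal numpy sketch of one mode update is illustrative (the function names `mode_n_unfold` and `hooi_mode_update` are ours, not from the paper):

```python
import numpy as np

def mode_n_unfold(T, n):
    """Unfold tensor T along mode n into a matrix of shape (I_n, prod of other dims)."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def hooi_mode_update(A, factors, n, r_n):
    """One classical HOOI mode-n update: contract A with the transposes of all
    other factors, then keep the leading r_n left singular vectors."""
    B = A
    for m, U in enumerate(factors):
        if m == n:
            continue
        # contract mode m with U_m^T (project onto the current mode-m subspace)
        B = np.moveaxis(np.tensordot(B, U, axes=(m, 0)), -1, m)
    Un, _, _ = np.linalg.svd(mode_n_unfold(B, n), full_matrices=False)
    return Un[:, :r_n]
```

Because the update is an SVD truncation, the returned factor automatically has orthonormal columns, which is exactly the constraint the maximization enforces.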
3. Rank Selection Mechanism in Adaptive HOOI
Rank-adaptive HOOI augments the standard alternating update scheme by introducing a per-mode rank-selection phase rooted in the SVD of the mode-unfolded projected tensor. For each mode $n$ and at each iteration $k$, before updating $U_n$, the following steps are performed:
- Form the projected tensor $\mathcal{B} = \mathcal{A} \times_{m \neq n} U_m^\top$ by contracting along all modes except $n$ with the current factors.
- Compute the SVD of its mode-$n$ unfolding $B_{(n)}$ to obtain singular values $\sigma_1 \ge \sigma_2 \ge \cdots$.
- Select the minimal rank $r_n$ such that
$$\sum_{i=1}^{r_n} \sigma_i^2 \;\ge\; (1 - \varepsilon^2)\,\|\mathcal{A}\|_F^2.$$
- Set $r_n^{(k)} = r_n$, and update $U_n$ to be the first $r_n$ left singular vectors of $B_{(n)}$.
This mechanism ensures that the reconstructed tensor remains feasible for the original error constraint, while mode-wise ranks are non-increasing and automatically adapt to the data (Xiao et al., 2021).
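The selection step itself is a one-pass scan over the singular values. The numpy helper below is an illustrative sketch under the energy criterion that the retained leading energy must reach $(1 - \varepsilon^2)\|\mathcal{A}\|_F^2$ (our reading of the feasibility condition; `select_rank` is not the paper's code):

```python
import numpy as np

def select_rank(singular_values, normA2, eps):
    """Smallest rank r such that the retained leading energy keeps the global
    relative error within eps, i.e. sum_{i<=r} sigma_i^2 >= (1 - eps^2) * normA2,
    where normA2 = ||A||_F^2 and singular_values is sorted descending."""
    energy = np.cumsum(np.square(singular_values))
    target = (1.0 - eps**2) * normA2
    # searchsorted returns the first index whose cumulative energy reaches target
    r = int(np.searchsorted(energy, target)) + 1
    return min(r, len(singular_values))
```

If the tolerance is unattainable with the available singular values, the helper falls back to the full rank of the unfolding rather than failing.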
4. Convergence Properties and Theoretical Guarantees
The rank-adaptive HOOI algorithm possesses two key theoretical properties:
- Local Optimality of Rank Selection: For each mode $n$, with the other factors fixed, the rank choice above yields the smallest mode-$n$ rank that still achieves the global Frobenius-norm error bound.
- Monotonic Convergence of Ranks: The rank tuples $(r_1^{(k)}, \ldots, r_N^{(k)})$ observed after each full sweep over all modes satisfy
$$r_n^{(k+1)} \le r_n^{(k)}, \qquad n = 1, \ldots, N,$$
componentwise, and the sequence stabilizes in finitely many iterations. The underpinning arguments rely on the orthogonal invariance of the Frobenius norm and the Eckart–Young theorem, which shows that the adaptive rank selection corresponds to the minimal-rank truncation for feasibility at each step (Xiao et al., 2021).
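The feasibility of the per-mode truncation rests on a Pythagorean identity; the short derivation below is standard for Tucker approximations with orthonormal factors (not quoted verbatim from the paper) and makes the reasoning step explicit:

```latex
% With orthonormal factors, \hat{\mathcal{A}} is the orthogonal projection of
% \mathcal{A} onto the product of the factor column spaces, so
\|\mathcal{A} - \hat{\mathcal{A}}\|_F^2
  \;=\; \|\mathcal{A}\|_F^2 - \|\mathcal{G}\|_F^2 .
% Hence the bound \|\mathcal{A} - \hat{\mathcal{A}}\|_F \le \varepsilon \|\mathcal{A}\|_F
% is equivalent to
\|\mathcal{G}\|_F^2 \;\ge\; (1 - \varepsilon^2)\,\|\mathcal{A}\|_F^2 ,
% and after the mode-n update \|\mathcal{G}\|_F^2 = \sum_{i \le r_n} \sigma_i^2,
% so the smallest feasible r_n is exactly the adaptive rank choice.
```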
5. Algorithmic Workflow
The rank-adaptive HOOI algorithm proceeds as follows:
- Initialization: Start with initial factors $U_n^{(0)}$ and ranks $r_n^{(0)}$ (e.g., from t-HOSVD or randomly).
- Core Formation: Compute the initial core as $\mathcal{G} = \mathcal{A} \times_1 (U_1^{(0)})^\top \cdots \times_N (U_N^{(0)})^\top$.
- Iteration: While the core-norm stopping criterion is not met:
  - For each mode $n = 1$ to $N$:
    - Compute the contracted tensor $\mathcal{B} = \mathcal{A} \times_{m \neq n} U_m^\top$ using the latest factors.
    - Unfold $\mathcal{B}$ in mode $n$ and compute its SVD.
    - Select the minimal $r_n$ such that the truncated tail energy in the singular values satisfies the feasibility constraint.
    - Update $U_n$ with the $r_n$ leading left singular vectors.
  - Update the core tensor.
  - Increment the iteration counter.
- Termination: When the core-norm criterion holds, output all factors, ranks, and the core.
The combination of adaptive rank shrinkage and SVD-based subproblem solution at each step yields an algorithm that is both locally optimal in its rank assignment and globally efficient in convergence to a feasible truncated Tucker representation (Xiao et al., 2021).
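Putting the pieces together, the workflow above can be sketched compactly in numpy. This is an illustrative implementation, not the paper's reference code: it stops once the rank tuple stabilizes across a sweep, a simplification of the core-norm criterion, and initializes factors from the unfoldings' SVDs.

```python
import numpy as np

def unfold(T, n):
    """Mode-n unfolding: move axis n to the front and flatten the rest."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def rank_adaptive_hooi(A, eps, max_sweeps=20):
    """Illustrative rank-adaptive HOOI: per mode, keep the smallest rank whose
    retained energy reaches (1 - eps^2) * ||A||_F^2; stop when ranks stabilize."""
    N = A.ndim
    target = (1.0 - eps**2) * np.sum(A**2)  # required retained core energy
    # initialize each factor from the SVD of the corresponding unfolding
    factors = [np.linalg.svd(unfold(A, n), full_matrices=False)[0]
               for n in range(N)]
    for _ in range(max_sweeps):
        old_ranks = tuple(U.shape[1] for U in factors)
        for n in range(N):
            # project A onto the other modes' current subspaces
            B = A
            for m in range(N):
                if m != n:
                    B = np.moveaxis(np.tensordot(B, factors[m], axes=(m, 0)), -1, m)
            U, s, _ = np.linalg.svd(unfold(B, n), full_matrices=False)
            energy = np.cumsum(s**2)
            r = min(int(np.searchsorted(energy, target)) + 1, len(s))
            factors[n] = U[:, :r]  # adaptive truncation to the minimal rank
        ranks = tuple(U.shape[1] for U in factors)
        if ranks == old_ranks:
            break
    # final core: contract A with the transposes of all factors
    G = A
    for n in range(N):
        G = np.moveaxis(np.tensordot(G, factors[n], axes=(n, 0)), -1, n)
    return G, factors, ranks
```

On a tensor of exact multilinear rank $(2, 2, 2)$ with a tight tolerance, this sketch recovers the true ranks and reconstructs the input to machine precision, illustrating the monotone rank shrinkage from the initial full ranks.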
6. Computational Complexity and Storage
Each iteration involves, for every mode $n$:
- Tensor Contractions: $O(I^N r)$ operations per mode, where $I = \max_n I_n$ and $r = \max_n r_n$ is a typical rank (the first contraction dominates; subsequent ones act on already-reduced tensors).
- SVD Computations: For the mode-$n$ unfolding of size $I_n \times \prod_{m \neq n} r_m$, the cost is $O\big(I_n \prod_{m \neq n} r_m \cdot \min(I_n, \prod_{m \neq n} r_m)\big)$ per mode.
- As the algorithm proceeds and the ranks decrease monotonically, the computation per sweep becomes cheaper.
- Storage Requirement: Dominated by either the full tensor (if dense), $O(\prod_n I_n)$, or the compressed form, $O(\prod_n r_n + \sum_n I_n r_n)$, depending on representation (Xiao et al., 2021).
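To make the storage trade-off concrete, consider a hypothetical $500 \times 500 \times 500$ tensor compressed to multilinear ranks $(20, 20, 20)$ (illustrative numbers, not taken from the paper): the compressed form needs only the core plus three thin factor matrices.

```python
# Storage for a hypothetical 500 x 500 x 500 tensor compressed to multilinear
# ranks (20, 20, 20): core r^N entries plus N factor matrices of I x r each.
I, r, N = 500, 20, 3
dense = I**N                   # entries in the full dense tensor
compressed = r**N + N * I * r  # core plus factors
ratio = dense / compressed     # compression factor
```

Here `dense` is 125,000,000 entries against 38,000 for the compressed form, a reduction of more than three orders of magnitude.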
7. Comparative Analysis with Classical HOOI and ALS
The table below summarizes the distinctions between the classical HOOI, the Alternating Least Squares (ALS) method, and the rank-adaptive HOOI:
| Algorithm | Rank Selection Method | Orthonormality Constraint | Adaptivity to Error Tolerance |
|---|---|---|---|
| Classical HOOI | Fixed in advance | Enforced via SVD | No |
| ALS | Not enforced per iteration | Not strictly enforced | No |
| Rank-adaptive HOOI | SVD-based, per mode per iteration | Enforced via SVD | Yes |
Unlike classical HOOI, which operates with static preassigned ranks, the rank-adaptive variant dynamically minimizes mode-wise ranks while maintaining feasibility of the error bound. In contrast to ALS, which solves unconstrained least-squares subproblems and may lack strict orthogonality or principled in situ rank adaptation, adaptive HOOI obtains its SVD-based rank trimming by recasting each mode update as an orthogonality-constrained subproblem. Empirical results indicate that the rank-adaptive strategy produces (i) smaller final multilinear ranks, (ii) closer approximation to the input tensor for a given tolerance $\varepsilon$, (iii) monotonic non-increasing updates of the tensor ranks, and (iv) timing performance that is competitive with or superior to t-HOSVD, s-HOSVD, greedy HOSVD, and classical ALS across tested benchmarks (Xiao et al., 2021).
In summary, rank-adaptive HOOI extends the orthogonality-preserving and convergence properties of HOOI to the regime where error tolerance is specified and ranks must be minimized, providing a provably locally optimal and monotonically convergent framework for truncated Tucker decomposition (Xiao et al., 2021).