Hankel-Structured Tensor Completion
- Hankel-structured tensor completion models exploit tensors whose entries depend only on the sum of their indices, enabling recovery of multidimensional signals through low-rank factorization and Hankel regularization.
- They integrate techniques such as CP, BTD, Tucker, and SOS to address applications in spectroscopy, wireless channel estimation, and spectral compressed sensing.
- Optimization algorithms like ADMM and scaled gradient methods ensure convergence and robustness, efficiently recovering data from incomplete or noisy observations.
Hankel-structured tensor completion models are a class of approaches that exploit the specific algebraic and spectral properties of Hankel tensors—tensors whose entries depend only on the sum of their indices—for the completion of multidimensional arrays from partial or noisy observations. These models harness combinations of low-rank tensor factorizations and explicit Hankel structure regularization (typically via nuclear norm or related convex surrogates) to recover structured signals in applications such as signal processing, spectroscopy, traffic state estimation, and spectral compressed sensing.
1. Mathematical Formulation of Hankel-structured Tensors
A Hankel tensor is a multidimensional array in which each entry is a function of the sum of its indices. For an $m$-way, $n$-dimensional real symmetric tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$, there exists a generating vector $\mathbf{v} \in \mathbb{R}^{m(n-1)+1}$ such that

$$a_{i_1 i_2 \cdots i_m} = v_{i_1 + i_2 + \cdots + i_m},$$

where $i_j \in \{0, 1, \dots, n-1\}$ for all $j = 1, \dots, m$. The Hankel property encodes shift-invariance and is ubiquitous in systems characterized by exponential or sinusoidal dynamics, including multidimensional harmonic retrieval and time–frequency analysis (Li et al., 2014).
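As a concrete illustration, the following is a minimal NumPy sketch (not drawn from the cited papers; the function name `hankel_tensor` is illustrative) that builds a symmetric Hankel tensor from a generating vector, matching the definition above:

```python
import numpy as np

def hankel_tensor(v, n, order=3):
    """Build an order-m, n-dimensional symmetric Hankel tensor
    T[i1,...,im] = v[i1 + ... + im] from a generating vector v
    of length m*(n-1) + 1 (indices starting at 0)."""
    v = np.asarray(v)
    assert len(v) == order * (n - 1) + 1
    # Sum-of-indices grid: idx[i1,...,im] = i1 + ... + im
    grids = np.meshgrid(*([np.arange(n)] * order), indexing="ij")
    return v[sum(grids)]

# A 3rd-order, 4-dimensional Hankel tensor needs 3*(4-1)+1 = 10 generating entries.
v = np.arange(10.0)
T = hankel_tensor(v, n=4, order=3)
print(T[1, 2, 3], T[3, 2, 1], T[2, 2, 2])  # all equal v[6] = 6.0
```

Note that any permutation of indices with the same sum hits the same generating entry, which is exactly the symmetry and shift-invariance the definition encodes.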
For generic tensors (not restricted to symmetry), Hankelization operators map one or more tensor modes to Hankel matrices (or higher-order analogs), facilitating low-rank constraints compatible with harmonic or exponential signal priors (Ying et al., 2016, Li et al., 7 Jul 2025).
2. Model Classes and Factorizations
CP and BTD-based Completion
Early models focused on the CANDECOMP/PARAFAC (CP) decomposition, representing a tensor as a sum of rank-1 components. For multidimensional exponential signal completion, the factor vectors are themselves structured as sampled exponentials, and thus their Hankelizations are (approximately) rank-1 (Ying et al., 2016). The completion objective then imposes low (CP) rank and nuclear-norm regularization on the Hankel matrices of factor vectors, schematically of the form

$$\min_{\mathcal{X},\,\{\mathbf{a}_r^{(n)}\}} \ \sum_{r=1}^{R} \sum_{n=1}^{N} \big\| \mathcal{H}\big(\mathbf{a}_r^{(n)}\big) \big\|_* + \frac{\lambda}{2} \big\| \mathcal{P}_\Omega(\mathcal{X}) - \mathcal{Y} \big\|_F^2 \quad \text{s.t.} \quad \mathcal{X} = \sum_{r=1}^{R} \mathbf{a}_r^{(1)} \circ \cdots \circ \mathbf{a}_r^{(N)},$$

where $\mathcal{H}$ denotes vector Hankelization, $\mathcal{P}_\Omega$ restricts to the observed entries $\mathcal{Y}$, and $\circ$ is the vector outer product.
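The structural claim underlying this model—that the Hankelization of a sampled exponential is rank-1—can be checked directly. A small NumPy sketch (window size and frequencies chosen only for illustration; `hankelize` is a hypothetical helper implementing $\mathcal{H}$):

```python
import numpy as np

def hankelize(a, p):
    """Map a length-N vector to its p x (N-p+1) Hankel matrix H[i,j] = a[i+j]."""
    a = np.asarray(a)
    return a[np.arange(p)[:, None] + np.arange(len(a) - p + 1)[None, :]]

# A sampled exponential a[t] = z**t Hankelizes to a rank-1 matrix,
# since H(a)[i,j] = z**(i+j) = z**i * z**j is an outer product.
N, p = 64, 24
z = 0.97 * np.exp(1j * 2 * np.pi * 0.13)
a = z ** np.arange(N)
print(np.linalg.matrix_rank(hankelize(a, p)))  # 1

# A sum of R exponentials Hankelizes to a rank-R matrix, which is what
# the nuclear-norm penalty on H(a_r) promotes in the CP-Hankel model.
b = a + (0.9 * np.exp(1j * 2 * np.pi * 0.31)) ** np.arange(N)
print(np.linalg.matrix_rank(hankelize(b, p)))  # 2
```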
Block Term Decomposition (BTD) models generalize this structure by allowing each component to be a multilinear block (rather than a rank-one outer product), capturing more sophisticated one-to-many and groupwise harmonic correspondences across modes (Wang et al., 25 Jan 2025). In BTD-based Hankel tensor completion, the objective jointly fits the observed entries, minimizes the nuclear norm of Hankelized components, and includes Frobenius-norm regularization; schematically,

$$\min_{\mathcal{X},\,\{U_r\}} \ \frac{1}{2}\big\|\mathcal{P}_\Omega(\mathcal{X}) - \mathcal{Y}\big\|_F^2 + \lambda \sum_{r} \big\|\mathcal{H}(U_r)\big\|_* + \frac{\mu}{2} \sum_{r} \|U_r\|_F^2,$$

where $\mathcal{X}$ is constrained to be the sum of block terms assembled from the factors $U_r$.
Tucker-based and SOS Models
When additional constraints are important (e.g., positive semidefiniteness in sum-of-squares contexts), models are formulated as optimization problems over generating vectors and Hankel matrices, with nuclear norm (“trace norm”) surrogates for rank and cone constraints for PSD property (Li et al., 2014). Tucker-based approaches, as in multi-measurement spectral compressed sensing, impose low multilinear (Tucker) rank on Hankel lifts of signal collections (Li et al., 7 Jul 2025).
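To make the multilinear-rank statement concrete, the following is an illustrative NumPy sketch (the exact lifting in the cited work may differ in detail): several measurement vectors sharing the same exponential modes are each Hankelized and stacked along a third mode, and the Tucker (multilinear) ranks of the lift equal the ranks of its mode unfoldings.

```python
import numpy as np

def hankelize(a, p):
    """p x (N-p+1) Hankel matrix H[i,j] = a[i+j] of a length-N vector."""
    a = np.asarray(a)
    return a[np.arange(p)[:, None] + np.arange(len(a) - p + 1)[None, :]]

rng = np.random.default_rng(0)
N, p, K, R = 64, 24, 5, 2                        # length, window, signals, modes
z = 0.95 * np.exp(1j * 2 * np.pi * np.array([0.11, 0.29]))
V = z[None, :] ** np.arange(N)[:, None]          # N x R basis of shared exponentials
X = V @ rng.standard_normal((R, K))              # N x K collection of measurements

# Hankel lift: stack the K Hankel matrices along mode 3.
T = np.stack([hankelize(X[:, k], p) for k in range(K)], axis=2)

# Multilinear (Tucker) ranks = ranks of the three mode unfoldings.
for mode in range(3):
    unf = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    print(f"mode-{mode + 1} rank:", np.linalg.matrix_rank(unf))
# Expected: (R, R, min(K, R)) = (2, 2, 2), far below the ambient sizes.
```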
3. Optimization Algorithms
Most state-of-the-art Hankel-structured tensor completion algorithms are based on variants of the Alternating Direction Method of Multipliers (ADMM) or scaled/accelerated gradient methods adapted to tensor and Hankel algebra.
- ADMM for CP/BTD models: Auxiliary variables are introduced for Hankelized factors, with nuclear norm penalization and dual multipliers enforcing consistency. Updates cycle via:
- Least-squares or ALS/Gauss–Newton optimization of decomposition factors
- Singular value thresholding (SVT) on Hankelized auxiliary variables (a minimal SVT sketch follows this list)
- Multiplier (dual variable) and (optional) penalty parameter updates (Ying et al., 2016, Wang et al., 25 Jan 2025, Wang et al., 2021).
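The SVT step referenced above is the proximal operator of the nuclear norm. A minimal NumPy sketch, generic rather than tied to any one paper's notation (the update rule in the comment is the standard ADMM form, with `rho` a penalty parameter):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm.
    Shrinks each singular value of M by tau, zeroing those below tau."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

# Schematically, inside ADMM the Hankelized auxiliary variable Z for a
# factor vector a is updated as Z <- svt(H(a) + D / rho, lam / rho),
# where D is the dual multiplier; this enforces low rank on H(a).
```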
- Scaled gradient methods for Hankel-Tucker models: Preconditioned gradient descent steps with efficient multilinear projection and fast Hankel operations exploiting block FFTs or specialized projection properties. Each update minimizes a penalized data-fidelity term plus a Hankel-structure mismatch, with per-iteration cost that scales near-linearly in the ambient dimensions (Li et al., 7 Jul 2025).
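The "fast Hankel operations" rest on a standard identity: a Hankel matrix–vector product is a correlation with the generating vector, so it can be evaluated with FFTs in $O(L \log L)$ time without ever forming the matrix. A NumPy sketch under that identity (the function name is illustrative):

```python
import numpy as np

def hankel_matvec_fft(v, x, p):
    """Compute H @ x without forming H, where H[i,j] = v[i+j] is the
    p x (len(v)-p+1) Hankel matrix generated by v. Uses the identity
    (H x)[i] = sum_j v[i+j] x[j], a correlation evaluated via FFT."""
    n = len(v) - p + 1
    assert len(x) == n
    L = len(v) + n - 1                    # full linear-convolution length
    y = np.fft.ifft(np.fft.fft(v, L) * np.fft.fft(x[::-1], L))
    return y[n - 1:n - 1 + p]

# Check against the explicit Hankel matrix.
rng = np.random.default_rng(1)
v, p = rng.standard_normal(40), 16
x = rng.standard_normal(40 - p + 1)
H = v[np.arange(p)[:, None] + np.arange(len(v) - p + 1)[None, :]]
print(np.allclose(H @ x, hankel_matvec_fft(v, x, p).real))  # True
```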
The following table contrasts representative approaches:
| Model Class | Structure Exploited | Main Algorithm |
|---|---|---|
| CP + Hankel Nuclear | Exponential factor vectors | ADMM w/ SVT (Ying et al., 2016) |
| BTD + Hankel Nuclear | Groupwise harmonic blocks | ADMM w/ ALS/GN (Wang et al., 25 Jan 2025) |
| Hankel-Tucker | Multilinear (Tucker) rank | Scaled GD (Li et al., 7 Jul 2025) |
| SOS-Hankel | PSD symmetric Hankel | Convex ADMM (Li et al., 2014) |
| Spatiotemporal Hankel | Local spatiotemporal corr. | ADMM/SVT (Wang et al., 2021) |
4. Theoretical Guarantees and Convergence Properties
Convergence guarantees are derived where the relaxation yields a convex or two-block convex subproblem. For instance, in the SOS-Hankel PSD completion, two-block ADMM converges to the global minimizer, with per-iteration complexity dominated by linear system solves and eigendecompositions (Li et al., 2014). In CP-Hankel and BTD-Hankel models, ADMM converges to KKT points under standard conditions, even in nonconvex regimes (Ying et al., 2016, Wang et al., 25 Jan 2025).
For Hankel-Tucker models, the ScalHT algorithm in (Li et al., 7 Jul 2025) is proven to achieve linear convergence and exact recovery under incoherence, spectral initialization, and appropriate sample complexity. The main results show:
- Linear contraction rate per iteration with high probability
- Robustness to sub-Gaussian noise (explicit recovery bounds)
- Storage and computation scale linearly or near-linearly in the ambient dimensions and measurement count
A plausible implication is that, as Hankel-structured models advance, rigorous non-asymptotic guarantees can be expected for broader classes of tensor factorizations and applications, provided identifiability and incoherence conditions hold.
5. Empirical Performance and Applications
Hankel-structured tensor completion methods have been validated across multiple application domains:
- Spectroscopy and NMR: CP-Hankel approaches recover N-dimensional exponential signals in magnetic resonance with >90% reduction in acquisition time and robustness to rank overestimation, sharply outperforming matrix and plain tensor-nuclear-norm baselines (Ying et al., 2016).
- Wireless Channel Estimation: BTD-Hankel models realize 30–50% reductions in reconstruction error or sampling fraction for multidimensional harmonic signals and "space–time–frequency" CSI in Sub-6 GHz settings, dominating CP and non-harmonic competitors, especially at low SNR and low sampling rates (Wang et al., 25 Jan 2025).
- Spectral Compressed Sensing: ScalHT achieves substantial multi-fold improvements in per-iteration runtime and memory over atomic norm minimization and state-of-the-art matrix-lifted approaches, with empirical success in traffic data and sparse DOA estimation (Li et al., 7 Jul 2025).
- Spatiotemporal Data: Delay-embedding Hankelization coupled with truncated-nuclear-norm minimization (STH-LRTC) yields superior performance for traffic speed estimation, recovering global and local patterns from heavily incomplete data (e.g., 88% missing rate) and outperforming both matrix and tensor competitors in RMSE and runtime (Wang et al., 2021); a minimal delay-embedding sketch follows this list.
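For intuition on the delay-embedding step in STH-LRTC, the following is a minimal sketch (the function name, window length, and toy data are illustrative, not the paper's settings): a (locations × time) speed matrix is lifted to a third-order tensor of overlapping temporal windows, on which low-rank completion then operates.

```python
import numpy as np

def delay_embed(X, tau):
    """Lift a (locations x T) matrix into a (locations x tau x (T-tau+1))
    tensor of overlapping length-tau temporal windows, i.e. Hankelization
    along the time mode."""
    n_loc, T = X.shape
    idx = np.arange(tau)[:, None] + np.arange(T - tau + 1)[None, :]
    return X[:, idx]                       # shape (n_loc, tau, T-tau+1)

X = np.arange(3 * 10.0).reshape(3, 10)     # toy 3-location, 10-step series
T3 = delay_embed(X, tau=4)
print(T3.shape)                            # (3, 4, 7)
print(T3[0, :, 0])                         # first window of location 0: [0. 1. 2. 3.]
```

After completion, the lift is typically inverted by averaging the repeated entries along each anti-diagonal (diagonal averaging), returning the estimate to the original (locations × time) domain.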
6. Structural Considerations and Model Flexibility
An essential distinction among Hankel-structured completion models is the granularity and type of algebraic structure enforced:
- Per-factor nuclear norm promotes exponentials in CP/BTD factors, directly leveraging physical signal structure (e.g., superpositions of sinusoids) (Ying et al., 2016, Wang et al., 25 Jan 2025).
- Block term decompositions allow more complex harmonic relationships and one-to-many mappings not captured by CP models (Wang et al., 25 Jan 2025).
- Global multilinear low-rank enforcement through Hankel lifting and Tucker decompositions targets spectral sparsity and cross-observation correlations (Li et al., 7 Jul 2025).
- Sum-of-squares and PSD constraints are critical for instances where convexity, nonnegativity, or robust polynomial structure is necessary (Li et al., 2014).
- Spatiotemporal patch unfoldings explicitly balance local windowing and location, trading off expressivity and computational tractability (Wang et al., 2021).
This variety suggests that model choice should be tuned to domain priors, sampling regime, anticipated noise, and available computational resources.
7. Limitations and Open Problems
Despite significant progress, some theoretical questions remain unresolved:
- The existence of positive semidefinite (PSD) Hankel tensors that are not sum-of-squares (SOS) is open; a negative answer would imply tractable PSD verification for even-order Hankel tensors (Li et al., 2014).
- Identifiability and sample complexity bounds for general BTD-Hankel models require further study, particularly under heavy under-sampling or nonexpansive harmonic mappings.
- Scalability for extremely high-order tensors or for asymmetric, nonuniformly sampled data may benefit from new factorization strategies or distributed optimization.
Overall, Hankel-structured tensor completion provides an expressive and computationally viable framework for structured recovery in high-dimensional, incomplete data scenarios, with application-driven developments continually informing advances in both modeling and optimization theory.