
Quaternion Tensor Completion

Updated 3 November 2025
  • Quaternion tensor completion is a method that extends tensor completion to the quaternion algebra, effectively capturing color channel correlations in multidimensional data.
  • It integrates global low-rank modeling via QTT decomposition with local sparsity enforced through transform-domain regularization and weighted nuclear norms.
  • Empirical results demonstrate superior PSNR and SSIM in image and video inpainting tasks, showing its effectiveness relative to state-of-the-art methods.

Quaternion tensor completion is a class of methodologies for recovering multidimensional data (typically color images and videos) encoded as tensors over the quaternion algebra, with missing or corrupted entries. This framework leverages quaternion-valued representations to exploit color channel correlation and spatial-temporal structure, with completion accuracy enhanced by the integration of global low-rank modeling and local sparsity constraints. The following article provides a technical exposition of the mathematical foundations, decomposition strategies, optimization models, and implementation details, together with a comparison of empirical performance and state-of-the-art results.

1. Mathematical Foundations: Quaternions, Tensors, and TT Decomposition

Quaternions form a four-dimensional non-commutative division algebra $\mathbb{H}$, with basis $\{1, \mathbf{i}, \mathbf{j}, \mathbf{k}\}$ and multiplication rules $\mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = \mathbf{i}\mathbf{j}\mathbf{k} = -1$. Pixel-wise color information is naturally encoded in a pure quaternion by mapping the RGB channels onto the imaginary parts: $q = t_r\,\mathbf{i} + t_g\,\mathbf{j} + t_b\,\mathbf{k}$.
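As a concrete illustration, the RGB encoding and the non-commutative multiplication rules can be sketched in a few lines of NumPy, with quaternions stored as length-4 arrays `[w, x, y, z]`. This is a minimal sketch for intuition, not the paper's implementation:

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of two quaternions stored as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,   # real part
        pw*qx + px*qw + py*qz - pz*qy,   # i component
        pw*qy - px*qz + py*qw + pz*qx,   # j component
        pw*qz + px*qy - py*qx + pz*qw,   # k component
    ])

def encode_rgb(r, g, b):
    """Encode an RGB pixel as a pure quaternion q = r*i + g*j + b*k."""
    return np.array([0.0, r, g, b])
```

Note that `hamilton_product(i, j)` yields `k` while `hamilton_product(j, i)` yields `-k`, which is exactly the non-commutativity that distinguishes quaternion linear algebra from the real and complex cases.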

Given an $N$-th order quaternion tensor $\dot{\mathcal{X}} \in \mathbb{H}^{I_1 \times \cdots \times I_N}$, the quaternion tensor train (QTT) decomposition generalizes the classical TT decomposition to the quaternion domain. Each tensor entry is factored as

$$\dot{x}_{i_1,\ldots,i_N} = \dot{\mathcal{G}}_1(:,i_1,:)\, \dot{\mathcal{G}}_2(:,i_2,:) \cdots \dot{\mathcal{G}}_N(:,i_N,:)$$

where $\dot{\mathcal{G}}_n \in \mathbb{H}^{r_{n-1} \times I_n \times r_n}$ are third-order quaternion cores, with $r_0 = r_N = 1$.
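The entry-wise factorization can be illustrated with a real-valued TT analogue; for quaternion cores, the matrix products below would become Hamilton (quaternion) matrix products. A sketch, not the authors' code:

```python
import numpy as np

def tt_entry(cores, idx):
    """Evaluate one entry of a tensor given in TT format.

    cores: list of arrays G_n of shape (r_{n-1}, I_n, r_n), r_0 = r_N = 1.
    idx:   tuple of mode indices (i_1, ..., i_N).
    """
    v = cores[0][:, idx[0], :]           # slice of first core, shape (1, r_1)
    for G, i in zip(cores[1:], idx[1:]):
        v = v @ G[:, i, :]               # chain the core slices left to right
    return v.item()                      # final product has shape (1, 1)
```

For a rank-1 example, cores built from two vectors reproduce the outer product entry-by-entry, which is the simplest sanity check of the format.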

The QTT-rank is defined as the tuple

$$\mathrm{rank}_{\mathrm{QTT}}(\dot{\mathcal{X}}) = \big(\mathrm{rank}(\dot{\mathbf{X}}_{[1]}), \ldots, \mathrm{rank}(\dot{\mathbf{X}}_{[N-1]})\big)$$

with $\dot{\mathbf{X}}_{[n]}$ the mode-$n$ canonical unfolding, grouping the first $n$ modes against the remaining $N-n$ modes. The QTT-rank quantifies the minimal size of the inter-core links, reflecting global structural dependencies more effectively than the Tucker rank, which is based on mode-wise unfoldings.
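For intuition, the canonical unfoldings and the resulting rank tuple are straightforward to compute for a real-valued analogue by reshaping (the quaternion case would replace `matrix_rank` with a QSVD-based rank):

```python
import numpy as np

def canonical_ranks(X):
    """Ranks of the canonical unfoldings X_[n], n = 1..N-1.

    X_[n] groups the first n modes as rows and the remaining
    N-n modes as columns (row-major ordering assumed).
    """
    dims = X.shape
    ranks = []
    for n in range(1, X.ndim):
        rows = int(np.prod(dims[:n]))
        ranks.append(int(np.linalg.matrix_rank(X.reshape(rows, -1))))
    return ranks
```

A rank-1 tensor (an outer product of vectors) has every canonical rank equal to 1, mirroring the claim that the QTT-rank captures global, inter-group dependencies.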

2. Model Formulation: Joint Low-Rank and Sparse Regularization

Visual data (color images/videos) manifest both global redundancies and localized structure, motivating models that combine global low-rankness and local sparsity. The general quaternion tensor completion model,

$$\begin{split} &\min_{\dot{\mathcal{X}}} \ \mathrm{rank}_{\mathrm{QTT}}(\dot{\mathcal{X}}) + \lambda \| \mathfrak{T}(\dot{\mathcal{X}}) \|_1 \\ &\text{s.t.}\quad P_\Omega(\dot{\mathcal{X}}) = P_\Omega(\dot{\mathcal{T}}) \end{split}$$

incorporates:

  • QTT-rank minimization for global structure,
  • $\ell_1$-norm regularization in a general transformed domain $\mathfrak{T}$ (e.g., QDFT, QDCT, QDWHT) to promote local sparsity.

Since direct rank minimization is intractable, the quaternion weighted nuclear norm (QWNN) is used as a convex surrogate:

$$\sum_{k=1}^{N-1} \alpha_k \| \dot{\mathcal{M}}_{k[k]} \|_{w,*} + \lambda_k \| \dot{\mathcal{E}}_k \|_1$$

where $\| \dot{\mathcal{M}}_{k[k]} \|_{w,*}$ is the weighted nuclear norm of the mode-$k$ canonical unfolding in the quaternion domain, computed from quaternion SVD (QSVD) singular values, and $\| \dot{\mathcal{E}}_k \|_1$ enforces sparsity of the transform-domain coefficients.
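The $\ell_1$ term is handled by soft-thresholding the transform coefficients. The sketch below uses the ordinary unitary FFT as a stand-in for the quaternion transforms (QDFT/QDCT/QDWHT), and thresholds real and imaginary parts componentwise, a simplification of the exact complex prox, which shrinks coefficient magnitudes:

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft-thresholding: the prox of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_transform_l1(X, tau):
    """Approximate prox of tau * ||T(X)||_1 for a unitary transform T.

    The real FFT (norm='ortho') stands in for the quaternion
    transforms used in the paper; this is an illustrative sketch.
    """
    C = np.fft.fftn(X, norm="ortho")                       # forward transform
    C = soft_threshold(C.real, tau) + 1j * soft_threshold(C.imag, tau)
    return np.fft.ifftn(C, norm="ortho").real              # back to signal domain
```

Because the transform is unitary, shrinking coefficients directly shrinks the transform-domain $\ell_1$ norm of the result, which is what the regularizer asks for.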

3. Optimization Framework: ADMM in Quaternion Space

The completion problem is solved using an alternating minimization strategy, specifically the quaternion-based ADMM framework. Key points include:

  • Variable-splitting introduces auxiliary variables for each QWNN and sparsity term, enabling efficient separable subproblem updates.
  • The nuclear norm minimization of quaternion matrices employs quaternion singular value thresholding (QSVT):

    • For an unfolding $\dot{\mathbf{X}}_{[n]} = \dot{\mathbf{U}} \mathbf{\Sigma} \dot{\mathbf{V}}^H$ (QSVD), soft-thresholding is applied to the singular values,

    $$\mathfrak{S}_\xi(\dot{\mathbf{X}}_{[n]}) = \dot{\mathbf{U}} \operatorname{diag}\big(\max(\sigma_k - \xi, 0)\big) \dot{\mathbf{V}}^H$$

  • Transform-domain 1\ell_1 regularization supports various quaternion transforms for enhanced flexibility in capturing sparsity.
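The QSVT step can be sketched with the ordinary SVD as a real/complex stand-in; for quaternion matrices, the decomposition below would be replaced by the QSVD:

```python
import numpy as np

def svt(X, xi):
    """Singular value thresholding (real/complex analogue of QSVT).

    Computes the prox of xi * ||.||_* : soft-threshold the singular
    values and reassemble the matrix.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - xi, 0.0)          # shrink singular values toward zero
    return (U * s) @ Vh                  # scale columns of U, recombine
```

Thresholding kills small singular values outright, which is how the operator simultaneously denoises and reduces the rank of each unfolding.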

The ADMM iterations consist of:

  • Update $\dot{\mathcal{X}}$ consistently with the current auxiliary variables and the observed entries $P_\Omega$,
  • Mode-wise nuclear norm minimization for each unfolding,
  • Transform-domain sparsity enforcement via soft-thresholding of quaternion transform coefficients,
  • Multiplier and penalty update for convergence acceleration.
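The iteration structure above can be sketched as a minimal real-valued ADMM for nuclear-norm matrix completion on a single unfolding; the full quaternion method additionally loops over all canonical unfoldings and adds the transform-domain sparsity prox. A sketch under those simplifying assumptions, not the authors' implementation:

```python
import numpy as np

def soft_svt(M, tau):
    """Prox of tau * ||.||_* via singular value soft-thresholding."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def admm_complete(T, mask, tau=1.0, rho=1.0, iters=200):
    """Minimal ADMM for  min tau*||M||_*  s.t.  X = M,  P_Omega(X) = P_Omega(T).

    T:    array with (at least) the observed entries filled in.
    mask: boolean array marking observed entries.
    """
    X = np.where(mask, T, 0.0)
    Y = np.zeros_like(X)                  # scaled dual variable
    for _ in range(iters):
        M = soft_svt(X + Y, tau / rho)    # low-rank update (QSVT step)
        X = M - Y                         # consensus update ...
        X[mask] = T[mask]                 # ... then enforce observed entries
        Y = Y + X - M                     # dual ascent
    return X
```

The three updates map directly onto the bullets above: a nuclear-norm prox per unfolding, a data-consistency step on $P_\Omega$, and a multiplier update that drives the splitting variables to agree.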

4. Preprocessing: Quaternion Ket Augmentation (QKA)

The "ket augmentation" (KA) technique reshapes lower-order tensors into higher-order structures; originally devised for TT-based methods on real tensors, it is generalized here to the quaternionic context (QKA). For a color image (originally $M \times N$), QKA produces an 8th-order quaternion tensor with more balanced mode sizes, while color videos (originally $M \times N \times F$) are augmented into 5th-order tensors with the frame index as an additional mode.
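For a dyadic image, ket augmentation amounts to interleaving the bits of the row and column indices so that each new mode indexes one $2 \times 2$ spatial scale. A real-valued, single-channel sketch (the quaternion version applies the same reshaping to quaternion-valued pixels):

```python
import numpy as np

def ket_augment(img):
    """Ket augmentation of a 2^n x 2^n image into an n-th order tensor
    with every mode of size 4.

    Row index i = (i_1 ... i_n)_2 and column index j = (j_1 ... j_n)_2
    are interleaved so that mode t carries k_t = 2*i_t + j_t.
    """
    n = int(np.log2(img.shape[0]))
    assert img.shape == (2**n, 2**n), "expects a square dyadic image"
    T = img.reshape((2,) * (2 * n))                 # axes: i_1..i_n, j_1..j_n
    perm = [ax for t in range(n) for ax in (t, n + t)]
    return T.transpose(perm).reshape((4,) * n)      # pair (i_t, j_t) per mode
```

A 256 x 256 channel thus becomes an 8th-order tensor of mode size 4, matching the balanced mode sizes described above.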

This augmentation substantially exposes and exploits global correlations, thereby enhancing the efficacy of the QTT decomposition and the corresponding completion accuracy.

5. Empirical Performance: Benchmarks and Evaluation

The QTT-SRTD method is evaluated on standard color image and video inpainting tasks featuring random and structural missing-data scenarios. Comparative analysis covers state-of-the-art techniques spanning the real, complex, and quaternion domains: t-SVD, SiLRTC-TT, TMac-TT, LRQA-2, LRQMC, TQLNA, and LRC-QT.

Experiments use peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) as evaluation metrics. Across varying known-entry ratios (as low as 10%), QTT-SRTD consistently achieves superior PSNR and SSIM compared to competitors, with visual fidelity maintained in challenging cases. Recovery of images and videos is noted for preservation of detail and smoothness.

6. Contextual Significance and Extensions

The integration of TT decomposition concepts with quaternion algebra enables principled exploitation of color channel coupling and higher-order correlations. The approach generalizes previous tensor train completion methods, and the ADMM-based optimization ensures scalability and practical tractability. The QTT framework and kernel transform regularization support extensibility to other quaternion or hypercomplex representations.

Advancing beyond channel-wise or scalar-based completion, quaternion tensor completion serves applications in color image/video restoration, 3D/4D signal processing, and multi-component sensor data imputation. The flexibility of the transform domain further accommodates a spectrum of local structural priors.

7. Summary Table: Core Components

| Component | Mathematical Notation | Functional Role |
| --- | --- | --- |
| QTT decomposition | $\dot{x}_{i_1,\ldots,i_N} = \prod_n \dot{\mathcal{G}}_n(:,i_n,:)$ | Global structure modeling |
| QTT-rank | $(r_1,\ldots,r_{N-1}) = (\mathrm{rank}(\dot{\mathbf{X}}_{[1]}),\ldots)$ | Minimal representation, rank measure |
| QWNN | $\| \dot{\mathcal{M}}_{k[k]} \|_{w,*}$ (QSVD) | Convex surrogate for QTT-rank |
| Transform-domain sparsity | $\| \mathfrak{T}(\dot{\mathcal{X}}) \|_1$ | Local structure, noise regularization |
| QKA | High-order tensorization via ket augmentation | Reveals high-order global correlations |

Quaternion tensor completion by QTT-rank and transform-domain sparsity minimization establishes a rigorous and empirically validated approach that maximizes both global and local structure recovery in color image and video inpainting problems (Miao et al., 2022).
