
Quaternion Nuclear Norm Over Frobenius (QNOF)

  • Quaternion Nuclear Norm Over Frobenius (QNOF) is a parameter-free, scale-invariant regularizer for quaternion matrices, combining the nuclear and Frobenius norms to enforce low-rank structure via an $L_1/L_2$ penalty.
  • It reformulates quaternion matrix recovery problems as a singular-value $L_1/L_2$ optimization, improving performance in applications such as matrix completion, robust PCA, and color image classification.
  • Empirical studies show that QNOF delivers superior PSNR, SSIM, and computational efficiency compared to conventional nuclear-norm and tensor-based methods.

Quaternion Nuclear Norm Over Frobenius (QNOF) is a parameter-free, scale-invariant regularizer defined for quaternion matrices, designed to induce low-rank structure in problems such as matrix completion, robust principal component analysis, and color image classification. QNOF combines the sum of quaternion singular values (the quaternion nuclear norm) with the spectral energy (the quaternion Frobenius norm) and has the crucial property of reducing the rank surrogate to a singular-value $L_1/L_2$ problem. The QNOF penalty is typically combined with data fidelity terms and solved via ADMM or block-coordinate minimization, with empirical superiority over state-of-the-art nuclear-norm and tensor-based methods in both performance and computational efficiency (Guo et al., 30 Apr 2025, Miao et al., 2019, Chen et al., 9 Dec 2025, Miao et al., 2020).

1. Mathematical Definition and Structural Properties

Let $\dot X \in \mathbb{H}^{m \times n}$ be a quaternion matrix, admitting a quaternion singular value decomposition (QSVD):

$$\dot X = \dot U \,\mathrm{diag}(\sigma_1,\ldots,\sigma_r)\, \dot V^*,$$

where $\sigma_1 \geq \cdots \geq \sigma_r > 0$ and $r = \mathrm{rank}(\dot X)$. The quaternion nuclear norm is:

$$\|\dot X\|_* = \sum_{i=1}^r \sigma_i,$$

and the quaternion Frobenius norm is:

$$\|\dot X\|_F = \sqrt{\sum_{i=1}^r \sigma_i^2}.$$

The QNOF norm is then defined as:

$$\|\dot X\|_{\rm QNOF} := \frac{\|\dot X\|_*}{\|\dot X\|_F}.$$

Key properties include scale invariance ($\|c\,\dot X\|_{\rm QNOF} = \|\dot X\|_{\rm QNOF}$ for all $c \neq 0$), unitary invariance, and boundedness:

$$1 \leq \|\dot X\|_{\rm QNOF} \leq \sqrt{r} \leq \min\{\sqrt{m}, \sqrt{n}\}.$$

This norm serves as a continuous, nonconvex, parameter-free surrogate for $\mathrm{rank}(\dot X)$, promoting low-rank solutions by penalizing the $L_1/L_2$ ratio of the singular value spectrum (Guo et al., 30 Apr 2025).
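In practice, QNOF can be evaluated with any routine that produces the quaternion singular values. Below is a minimal numerical sketch, assuming the standard complex adjoint embedding of a quaternion matrix; the helper names `quat_adjoint` and `qnof` are hypothetical and not from the referenced papers:

```python
import numpy as np

def quat_adjoint(X0, X1, X2, X3):
    """Complex adjoint of the quaternion matrix X = X0 + X1*i + X2*j + X3*k,
    where X0..X3 are real (m, n) arrays."""
    Xa = X0 + 1j * X1
    Xb = X2 + 1j * X3
    return np.block([[Xa, Xb], [-Xb.conj(), Xa.conj()]])

def qnof(X0, X1, X2, X3, eps=1e-12):
    """QNOF = (sum of quaternion singular values) / (their l2 norm).
    The adjoint's singular values occur in equal pairs; taking every
    second one recovers the quaternion spectrum. Assumes X != 0."""
    s = np.linalg.svd(quat_adjoint(X0, X1, X2, X3), compute_uv=False)
    sigma = s[::2]                 # one singular value per pair
    sigma = sigma[sigma > eps]     # discard numerical zeros
    return sigma.sum() / np.linalg.norm(sigma)
```

By scale invariance, `qnof(2*X0, 2*X1, 2*X2, 2*X3)` returns the same value as `qnof(X0, X1, X2, X3)`, which is a quick sanity check for the implementation.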

2. Reduction to Singular-Value $L_1/L_2$ Optimization and Justification

In matrix recovery and completion problems, the QNOF penalty reduces the optimization over quaternion matrices to an equivalent $L_1/L_2$ minimization on their singular value vectors. For the problem:

$$\min_{\dot X} \tfrac12\|\dot Y - \dot X\|_F^2 + \lambda \|\dot X\|_{\rm QNOF},$$

the optimal alignment of singular bases ensures an equivalent reformulation:

$$\min_{\sigma \geq 0} \left\{ \tfrac12\|\sigma^Y - \sigma\|_2^2 + \lambda \frac{\|\sigma\|_1}{\|\sigma\|_2} \right\},$$

where $\sigma^Y$ collects the singular values of the observed $\dot Y$. This reduction exploits the von Neumann trace inequality in the quaternion setting. The $L_1/L_2$ ratio is a sharper, more discriminating surrogate of rank than either the nuclear or the Frobenius norm alone, and it is parameter-free and continuous (Guo et al., 30 Apr 2025).
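The scalar subproblem above has no elementary closed form, but a common way to approach $L_1/L_2$-penalized problems is a fixed-point scheme that freezes the denominator, turning each step into nonnegative soft-thresholding. The sketch below is a heuristic illustration under that assumption, not the exact solver analyzed in (Guo et al., 30 Apr 2025); `prox_l1_over_l2` is a hypothetical name:

```python
import numpy as np

def prox_l1_over_l2(sigma_y, lam, iters=50, tol=1e-10):
    """Heuristic for  min_{sigma >= 0}  0.5*||sigma_y - sigma||_2^2
                                        + lam * ||sigma||_1 / ||sigma||_2.
    Each iteration freezes c = ||sigma||_2 and solves the resulting
    L1-penalized problem by soft-thresholding at level lam / c."""
    sigma = np.asarray(sigma_y, dtype=float).copy()
    for _ in range(iters):
        c = np.linalg.norm(sigma)
        if c < tol:                          # all mass thresholded away
            break
        new = np.maximum(sigma_y - lam / c, 0.0)
        if np.linalg.norm(new - sigma) < tol:
            sigma = new
            break
        sigma = new
    return sigma
```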

3. QNOF in Low-Rank Quaternion Matrix Completion and Inpainting

In color image completion, a color image with missing pixels is modeled as a pure quaternion matrix $\mathbb{T} \in \mathbb{H}^{M \times N}$; the QNOF-based completion formulation is:

$$\min_{\dot X} ~ \tfrac12\|\mathbb{T}_{\Omega} - \dot X_{\Omega}\|_F^2 + \lambda\, \frac{\|\dot X\|_*}{\|\dot X\|_F},$$

where $\Omega$ indexes the observed entries. This objective can be solved efficiently via block-coordinate descent or ADMM, often by mapping to an equivalent complex representation and leveraging the Frobenius penalty on the low-rank factors. Alternating minimization is standard:

  • Update quaternion factors via linear solves;
  • Threshold singular values in the nuclear norm step;
  • Project onto observed/missing pixels in the reconstruction (Miao et al., 2019, Miao et al., 2020).

Empirical results show PSNR and SSIM gains of 2–4 dB and 0.05–0.10, respectively, over tensor-based approaches, with running times competitive with or superior to t-SVD, owing to the small-scale linear algebra involved (Miao et al., 2019). A minimal sketch of the threshold-and-project loop appears below.
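The following sketch is written for a generic complex matrix (for a quaternion image it would be applied to the adjoint embedding); `svt` and `complete` are illustrative names, and the plain singular value thresholding step stands in for the papers' QSVD-based updates:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal map of tau*||.||_*."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def complete(T, mask, tau=1.0, iters=200):
    """Alternate a low-rank shrinkage step with re-imposing the
    observed pixels. T holds the observed data; mask is a boolean
    array marking Omega."""
    X = T.copy()
    for _ in range(iters):
        X = svt(X, tau)        # shrink the singular value spectrum
        X[mask] = T[mask]      # project back onto the observations
    return X
```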

4. Extensions: Robust Completion and Sparse Noise Models

QNOF has been adapted for robust quaternion matrix completion (RMC), accommodating both missing entries and sparse corruption:

$$\min_{\dot X, \dot Z}\ \lambda \frac{\|\dot X\|_*}{\|\dot X\|_F} + \rho \|\dot Z\|_1 \quad \text{s.t.}\quad \mathcal{P}_{\Omega}(\dot X + \dot Z) = \mathcal{P}_{\Omega}(\dot Y).$$

ADMM is employed, alternating minimization steps for $\dot X$ (via singular value thresholding and the $L_1/L_2$ reduction) and for $\dot Z$ (soft-thresholding of the corruption), with dual updates enforcing the masking constraints. Convergence to stationary points is achieved if the penalty parameters diverge and the iterates remain bounded away from zero (Guo et al., 30 Apr 2025). In simulations, QNOF exactly recovers low-rank matrices and achieves top PSNR/SSIM on benchmark color images with up to 80% missing entries. Robust PCA and joint missing-plus-noise recovery scenarios similarly show marked improvements in fidelity (Guo et al., 30 Apr 2025).
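A simplified real-valued stand-in for this scheme alternates singular value thresholding for the low-rank part with elementwise soft-thresholding for the sparse part; the names `soft` and `robust_complete` are hypothetical, and the paper's exact update order and dual variables are omitted:

```python
import numpy as np

def soft(V, t):
    """Elementwise soft-thresholding: the proximal map of t*||.||_1."""
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def robust_complete(Y, mask, tau=1.0, rho=0.1, iters=100):
    """Split the observed data Y into low-rank X plus sparse Z.
    mask marks Omega; real-valued for simplicity."""
    X = np.zeros_like(Y)
    Z = np.zeros_like(Y)
    for _ in range(iters):
        # Low-rank update: observed entries see Y - Z, the rest keep X.
        U, s, Vh = np.linalg.svd(np.where(mask, Y - Z, X),
                                 full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vh
        # Sparse update: absorb the residual on Omega only.
        Z = np.where(mask, soft(Y - X, rho), 0.0)
    return X, Z
```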

5. QNOF as a Surrogate for Low-Rank Classification

In the context of color image classification, QNOF can serve as a penalty in support quaternion matrix machines (SQMM), yielding:

$$\min_{W, b, \xi \geq 0}\ \tfrac12\|W\|_F^2 + \lambda \|W\|_*^{Q} + C\sum_{i=1}^N \xi_i,$$

subject to hinge-loss constraints on the sample labels and trace inner products. The sum $\|W\|_F^2 + \lambda \|W\|_*^{Q}$ enforces both margin regularization and strict low-rankness, exploiting the coupling of the RGB channels (Chen et al., 9 Dec 2025). Classification accuracy, robustness to Gaussian noise, and computational efficiency are improved compared to Frobenius-only regularization and tensor field machines. On six real-world color datasets, LSQMM achieves superior accuracy and noise resistance, converging in roughly 10 ADMM iterations with per-iteration costs dominated by a quaternion SVD of moderate scale (Chen et al., 9 Dec 2025).
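In an ADMM splitting of this objective, the $W$-subproblem couples the Frobenius term, the nuclear norm, and a quadratic proximity term; completing the square reduces it to a single singular value thresholding. Below is a sketch of that closed-form update on a real matrix stand-in for the quaternion $W$ (`w_update` is a hypothetical name, and the actual LSQMM updates operate on quaternion matrices):

```python
import numpy as np

def w_update(V, lam, mu):
    """Minimize 0.5*||W||_F^2 + lam*||W||_* + (mu/2)*||W - V||_F^2.
    Completing the square gives (1+mu)/2 * ||W - mu*V/(1+mu)||_F^2
    plus a constant, so the minimizer is singular value thresholding
    of mu*V/(1+mu) at level lam/(1+mu)."""
    B = (mu / (1.0 + mu)) * V
    U, s, Vh = np.linalg.svd(B, full_matrices=False)
    return (U * np.maximum(s - lam / (1.0 + mu), 0.0)) @ Vh
```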

6. Bilinear Factorization Variants: Q-FNN and Schatten-$p$ Approximations

Several extensions of QNOF are built on bilinear factorizations. Notably, the quaternion Frobenius/nuclear norm (Q-FNN) is defined as:

$$\|\dot X\|_{\text{Q-FNN}} := \min_{\dot X = \dot U \dot V^H} \left( \frac{\|\dot U\|_F^2 + 2\|\dot V\|_*}{3} \right)^{3/2}$$

and is shown to reproduce the Schatten-$2/3$ quasi-norm:

$$\|\dot X\|_{\text{Q-FNN}} = \left( \sum_i \sigma_i^{2/3} \right)^{3/2}.$$

This penalty is nonconvex yet tighter than the convex QNN, more aggressively shrinking small singular values and promoting low rank. Optimization proceeds through ADMM, alternating updates of the factors and nuclear terms, with only the small factor matrices requiring a QSVD at each step, yielding dramatic computational gains for high-dimensional image data (Miao et al., 2020). Empirical parameter guidelines include regularization $\lambda = 0.05\sqrt{\max(M,N)}$, a penalty parameter $\mu$ increased from $10^{-3}$ upward, and adaptive rank selection by spectral gap.
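The quoted identity makes the attained value easy to compute directly from the spectrum. A short check of the Schatten-$2/3$ quasi-norm on an ordinary complex matrix (`schatten_q` is an illustrative name; for quaternion data one would use the quaternion singular values):

```python
import numpy as np

def schatten_q(M, p=2.0 / 3.0):
    """Schatten-p quasi-norm (sum_i sigma_i**p)**(1/p); with p = 2/3
    this equals the value the Q-FNN factorization penalty attains at
    its minimizer, per the identity above."""
    s = np.linalg.svd(M, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)
```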

7. Empirical Performance and Practical Impact

Extensive evaluations demonstrate that QNOF and its factorized nonconvex surrogates attain state-of-the-art results in color image completion, robust PCA, and classification. Across a wide range of missing-entry rates, noise levels, and image sizes, QNOF consistently outperforms conventional nuclear-norm, Schatten-$p$, and tensor-based methods in PSNR, SSIM, and runtime (Guo et al., 30 Apr 2025, Miao et al., 2019, Miao et al., 2020, Chen et al., 9 Dec 2025). Visual reconstructions exhibit sharper edges and reduced artifacts, while classification models demonstrate superior generalization and stability under noise. A plausible implication is that QNOF's parameter-free, scale-invariant design and efficient spectral reduction provide key advantages in the multidimensional data modeling regimes characteristic of quaternion domains.

Table: Quaternion Norms in Matrix Regularization (as defined in the referenced literature)

| Norm/Model | Definition | Surrogate for rank |
|---|---|---|
| QNOF ($\|\cdot\|_* / \|\cdot\|_F$) | $\sum_i \sigma_i / \sqrt{\sum_i \sigma_i^2}$ | $L_1/L_2$ ratio (sharp, nonconvex) |
| Q-FNN | $\min_{\dot X = \dot U \dot V^H} \left( \frac{\|\dot U\|_F^2 + 2\|\dot V\|_*}{3} \right)^{3/2}$ | Schatten-$2/3$ quasi-norm |
| QNN ($\|\cdot\|_*$) | $\sum_i \sigma_i$ | Convex envelope of rank |
| Frobenius ($\|\cdot\|_F$) | $\sqrt{\sum_{i=1}^r \sigma_i^2}$ | Scale control, not low-rank |

QNOF and its computational variants form a rapidly consolidating paradigm for high-fidelity, efficient low-rank modeling over quaternion-valued data, with key impacts in color image processing and robust data analysis (Guo et al., 30 Apr 2025, Miao et al., 2019, Miao et al., 2020, Chen et al., 9 Dec 2025).
