Quaternion Nuclear Norm Over Frobenius (QNOF)
- Quaternion Nuclear Norm Over Frobenius (QNOF) is a parameter-free, scale-invariant regularizer for quaternion matrices, combining nuclear and Frobenius norms to enforce low-rank structure via an L1/L2 penalty.
- It reformulates quaternion matrix recovery problems into a singular-value L1/L2 optimization, thereby improving performance in applications like matrix completion, robust PCA, and color image classification.
- Empirical studies show that QNOF leads to superior PSNR, SSIM, and computational efficiency compared to conventional nuclear norm and tensor-based methods.
Quaternion Nuclear Norm Over Frobenius (QNOF) is a parameter-free, scale-invariant regularizer defined for quaternion matrices, designed to induce low-rank structure in problems such as matrix completion, robust principal component analysis, and color image classification. QNOF divides the sum of quaternion singular values (the quaternion nuclear norm) by the root spectral energy (the quaternion Frobenius norm), and has the crucial property that the resulting rank-surrogate minimization reduces to a problem on the singular values alone. The QNOF penalty is typically combined with data-fidelity terms and solved via ADMM or block-coordinate minimization, with reported empirical superiority over state-of-the-art nuclear-norm and tensor-based methods in both recovery quality and computational efficiency (Guo et al., 30 Apr 2025, Miao et al., 2019, Chen et al., 9 Dec 2025, Miao et al., 2020).
1. Mathematical Definition and Structural Properties
Let $\mathbf{X} \in \mathbb{H}^{m \times n}$ be a quaternion matrix, admitting a quaternion singular value decomposition (QSVD):

$$\mathbf{X} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^H,$$

where $\mathbf{U} \in \mathbb{H}^{m \times m}$ and $\mathbf{V} \in \mathbb{H}^{n \times n}$ are unitary quaternion matrices and $\boldsymbol{\Sigma} = \operatorname{diag}(\sigma_1, \dots, \sigma_{\min(m,n)})$ with $\sigma_1 \ge \cdots \ge \sigma_{\min(m,n)} \ge 0$. The quaternion nuclear norm is:

$$\|\mathbf{X}\|_* = \sum_i \sigma_i(\mathbf{X}),$$

and the quaternion Frobenius norm is:

$$\|\mathbf{X}\|_F = \Big(\sum_i \sigma_i^2(\mathbf{X})\Big)^{1/2}.$$

The QNOF norm is then defined as:

$$\|\mathbf{X}\|_{N/F} = \frac{\|\mathbf{X}\|_*}{\|\mathbf{X}\|_F}.$$

Key properties include scale-invariance ($\|c\,\mathbf{X}\|_{N/F} = \|\mathbf{X}\|_{N/F}$ for any real $c \neq 0$), unitary invariance, and boundedness:

$$1 \le \|\mathbf{X}\|_{N/F} \le \sqrt{\operatorname{rank}(\mathbf{X})}.$$
This norm serves as a continuous, nonconvex, parameter-free surrogate for rank, promoting low-rank solutions by penalizing the ratio of the singular value spectrum (Guo et al., 30 Apr 2025).
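The definition and properties above can be checked numerically. The following minimal sketch computes quaternion singular values via the standard complex-adjoint representation of $\mathbf{Q} = \mathbf{A} + \mathbf{B}\,j$ (with $\mathbf{A}, \mathbf{B}$ complex), in which each quaternion singular value appears twice; the helper names are ours, not from the cited papers.

```python
import numpy as np

def quat_singular_values(A, B):
    """Singular values of the quaternion matrix Q = A + B*j, via its
    2m x 2n complex adjoint. Each singular value of Q appears twice
    in the adjoint's spectrum, so we keep one copy of each pair."""
    chi = np.block([[A, B], [-B.conj(), A.conj()]])
    s = np.linalg.svd(chi, compute_uv=False)  # sorted descending; pairs adjacent
    return s[::2]

def qnof(A, B):
    """QNOF = quaternion nuclear norm / quaternion Frobenius norm."""
    s = quat_singular_values(A, B)
    return s.sum() / np.sqrt((s ** 2).sum())

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4)) + 1j * rng.standard_normal((5, 4))
B = rng.standard_normal((5, 4)) + 1j * rng.standard_normal((5, 4))
v = qnof(A, B)

# Scale invariance: QNOF(cQ) = QNOF(Q) for real c != 0.
assert abs(qnof(3.0 * A, 3.0 * B) - v) < 1e-10
# Boundedness: 1 <= QNOF(Q) <= sqrt(rank(Q)) <= sqrt(min(m, n)) = 2.
assert 1.0 - 1e-9 <= v <= 2.0 + 1e-9
```

The adjoint trick keeps the sketch self-contained; a dedicated quaternion linear-algebra library would compute the QSVD directly.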
2. Reduction to Singular-Value Optimization and Justification
In matrix recovery and completion problems, the QNOF penalty reduces the optimization over quaternion matrices to an equivalent minimization on their singular value vectors. For the problem:

$$\min_{\mathbf{X}} \; \frac{\|\mathbf{X}\|_*}{\|\mathbf{X}\|_F} + \frac{\lambda}{2}\,\|\mathbf{X} - \mathbf{Y}\|_F^2,$$

the optimal alignment of singular bases ensures an equivalent reformulation:

$$\min_{\boldsymbol{\sigma} \ge 0} \; \frac{\|\boldsymbol{\sigma}\|_1}{\|\boldsymbol{\sigma}\|_2} + \frac{\lambda}{2}\,\|\boldsymbol{\sigma} - \boldsymbol{\tau}\|_2^2,$$

where $\boldsymbol{\tau}$ are the singular values of the observed $\mathbf{Y}$. This reduction exploits the von Neumann trace inequality in the quaternion setting. The ratio $\|\boldsymbol{\sigma}\|_1 / \|\boldsymbol{\sigma}\|_2$ is a sharper, more discriminating surrogate of rank than either the nuclear or the Frobenius norm alone, and is parameter-free and continuous (Guo et al., 30 Apr 2025).
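The discriminating power of the singular-value $\ell_1/\ell_2$ ratio is easy to illustrate on toy spectra: at fixed Frobenius energy it attains its lower bound 1 on a rank-one spectrum and $\sqrt{r}$ on a flat rank-$r$ spectrum, whereas the nuclear norm alone is not scale-invariant. A short sketch (helper name ours):

```python
import numpy as np

def l1_over_l2(sigma):
    """Singular-value form of QNOF: ||sigma||_1 / ||sigma||_2."""
    sigma = np.asarray(sigma, dtype=float)
    return sigma.sum() / np.sqrt((sigma ** 2).sum())

# Two spectra with identical Frobenius energy but different rank.
low_rank  = np.array([2.0, 0.0, 0.0, 0.0])   # rank 1
full_rank = np.array([1.0, 1.0, 1.0, 1.0])   # rank 4

# The ratio orders spectra by effective rank: 1 vs sqrt(4) = 2.
assert np.isclose(l1_over_l2(low_rank), 1.0)
assert np.isclose(l1_over_l2(full_rank), 2.0)

# Scale invariance of the ratio (the nuclear norm alone scales with the data).
assert np.isclose(l1_over_l2(10 * full_rank), l1_over_l2(full_rank))
```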
3. QNOF in Low-Rank Quaternion Matrix Completion and Inpainting
In color image completion, a color image with missing pixels is modeled as a pure quaternion matrix $\mathbf{Y} = Y_r\,i + Y_g\,j + Y_b\,k$; the QNOF-based completion formulation is:

$$\min_{\mathbf{X}} \; \frac{\|\mathbf{X}\|_*}{\|\mathbf{X}\|_F} \quad \text{s.t.} \quad P_{\Omega}(\mathbf{X}) = P_{\Omega}(\mathbf{Y}),$$

where $\Omega$ indexes the observed entries. This objective can be solved efficiently via block-coordinate descent or ADMM, often by mapping to an equivalent complex representation and leveraging the Frobenius penalty of low-rank factors. Alternating minimization is standard:
- Update quaternion factors via linear solves;
- Threshold singular values in nuclear norm step;
- Project to observed/missing pixels in the reconstruction (Miao et al., 2019, Miao et al., 2020).

Empirical results show PSNR and SSIM gains of 2–4 dB and 0.05–0.10, respectively, over tensor-based approaches, with running times competitive or superior to t-SVD, due to small-scale linear algebra (Miao et al., 2019).
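The alternating scheme above can be sketched in a few lines. This toy version uses real matrices as a stand-in for quaternions and a hard singular-value truncation in place of the papers' quaternion thresholding step; all names and parameter values are illustrative.

```python
import numpy as np

def truncate(M, r):
    """Keep the r leading singular values (hard singular-value thresholding)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def complete(Y, mask, r, n_iter=500):
    """Minimal alternating sketch of low-rank completion:
    low-rank step on the singular values, then re-project onto observed pixels."""
    X = np.where(mask, Y, 0.0)
    for _ in range(n_iter):
        X = truncate(X, r)           # singular-value step
        X = np.where(mask, Y, X)     # keep observed entries exact
    return X

rng = np.random.default_rng(1)
L = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 30))  # rank-4 truth
mask = rng.random((30, 30)) < 0.7                                # ~70% observed
X = complete(L, mask, r=4)
rel_err = np.linalg.norm(X - L) / np.linalg.norm(L)
assert rel_err < 0.1   # the missing pixels are recovered to small relative error
```

In the quaternion setting each SVD above becomes a QSVD, and the thresholding rule is the one induced by the QNOF subproblem rather than a fixed-rank truncation.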
4. Extensions: Robust Completion and Sparse Noise Models
QNOF has been adapted for robust quaternion matrix completion (RMC), accommodating both missing entries and sparse corruption:

$$\min_{\mathbf{X}, \mathbf{S}} \; \frac{\|\mathbf{X}\|_*}{\|\mathbf{X}\|_F} + \lambda \|\mathbf{S}\|_1 \quad \text{s.t.} \quad P_{\Omega}(\mathbf{X} + \mathbf{S}) = P_{\Omega}(\mathbf{Y}).$$

ADMM is employed, alternating minimization steps for $\mathbf{X}$ (via singular value thresholding and the $\ell_1/\ell_2$ reduction) and for $\mathbf{S}$ (soft-thresholding for corruption), with dual updates enforcing masking constraints. Convergence to stationary points is achieved if penalty parameters diverge and iterates remain bounded away from zero (Guo et al., 30 Apr 2025). In simulation, QNOF exactly recovers low-rank matrices and achieves top PSNR/SSIM on benchmark color images even at high missing-entry rates. Robust PCA and joint missing+noise recovery scenarios similarly show marked improvements in fidelity (Guo et al., 30 Apr 2025).
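The ADMM alternation just described has the same shape as classical robust PCA, which serves as a real-valued stand-in here: singular value thresholding for the low-rank block, soft-thresholding for the sparse block, and a dual update on the coupling constraint. The defaults for `lam` and `mu` follow common RPCA practice and are assumptions, not values from the QNOF papers.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Entrywise soft-thresholding: prox of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca_admm(Y, lam=None, mu=None, n_iter=300):
    """ADMM sketch for  min ||X||_* + lam*||S||_1  s.t.  X + S = Y."""
    m, n = Y.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(Y).sum()
    X = np.zeros_like(Y); S = np.zeros_like(Y); Z = np.zeros_like(Y)
    for _ in range(n_iter):
        X = svt(Y - S + Z / mu, 1.0 / mu)       # low-rank step
        S = shrink(Y - X + Z / mu, lam / mu)    # sparse-corruption step
        Z = Z + mu * (Y - X - S)                # dual update on X + S = Y
    return X, S

rng = np.random.default_rng(2)
L0 = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))  # rank-2 truth
S0 = np.zeros((40, 40))
spikes = rng.random((40, 40)) < 0.05                              # 5% gross errors
S0[spikes] = 10 * rng.standard_normal(spikes.sum())
X, S = rpca_admm(L0 + S0)
assert np.linalg.norm(X - L0) / np.linalg.norm(L0) < 0.05
```

In the QNOF variant the `svt` step is replaced by the prox of the singular-value $\ell_1/\ell_2$ ratio, and the masking operator $P_\Omega$ restricts the dual update to observed entries.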
5. QNOF as a Surrogate for Low-Rank Classification
In the context of color image classification, QNOF can serve as a penalty in support quaternion matrix machines (SQMM), yielding an objective of the form:

$$\min_{\mathbf{W},\, b,\, \boldsymbol{\xi}} \; \frac{1}{2}\|\mathbf{W}\|_F^2 + \gamma\,\frac{\|\mathbf{W}\|_*}{\|\mathbf{W}\|_F} + C \sum_{i=1}^{N} \xi_i,$$
subject to hinge-loss constraints on sample labels and trace inner products. The sum enforces both margin regularization and strict low-rankness to exploit the coupling of RGB channels (Chen et al., 9 Dec 2025). Classification accuracy, robustness to Gaussian noise, and computational efficiency are improved compared to Frobenius-only regularization and tensor field machines. For six real-world color datasets, LSQMM achieves superior accuracy and noise resistance, with convergence in 10 ADMM iterations and per-iteration costs dominated by quaternion SVD of moderate scale (Chen et al., 9 Dec 2025).
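For concreteness, the combined objective can be evaluated on toy data as below, with real matrices standing in for quaternion samples and the hinge losses written in penalized (unconstrained) form; the function and parameter names are illustrative, not the paper's API.

```python
import numpy as np

def qnof(W):
    """Singular-value l1/l2 ratio of W (real stand-in for the quaternion case)."""
    s = np.linalg.svd(W, compute_uv=False)
    return s.sum() / np.sqrt((s ** 2).sum())

def sqmm_objective(W, b, samples, labels, gamma=1.0, C=10.0):
    """Hedged sketch of an SQMM-style objective: Frobenius margin term,
    QNOF low-rank penalty, and hinge losses over trace inner products."""
    scores = np.array([np.trace(W.T @ Xi) + b for Xi in samples])
    hinge = np.maximum(0.0, 1.0 - labels * scores).sum()
    return 0.5 * np.linalg.norm(W) ** 2 + gamma * qnof(W) + C * hinge

rng = np.random.default_rng(4)
W = np.outer(rng.standard_normal(8), rng.standard_normal(8))  # rank-1 "classifier"
samples = [rng.standard_normal((8, 8)) for _ in range(5)]
labels = np.array([1.0, -1.0, 1.0, 1.0, -1.0])

# A rank-1 W attains the QNOF lower bound of 1, i.e. strict low-rankness.
assert np.isclose(qnof(W), 1.0)
obj = sqmm_objective(W, 0.0, samples, labels)
assert obj > 0
```

The low-rank penalty couples the RGB channels through the shared singular structure of $\mathbf{W}$, which is the property the classification experiments exploit.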
6. Bilinear Factorization Variants: Q-FNN and Schatten-$2/3$ Approximations
Several extensions of QNOF are built on bilinear factorizations. Notably, the quaternion Frobenius/nuclear norm (Q-FNN) defines:

$$\|\mathbf{X}\|_{F/N} = \min_{\mathbf{U},\, \mathbf{V}:\; \mathbf{X} = \mathbf{U}\mathbf{V}^H} \; \frac{1}{3}\left(\|\mathbf{U}\|_F^2 + 2\|\mathbf{V}\|_*\right),$$

and is shown to reproduce the Schatten-$2/3$ quasi-norm:

$$\|\mathbf{X}\|_{F/N} = \|\mathbf{X}\|_{S_{2/3}}^{2/3} = \sum_i \sigma_i^{2/3}(\mathbf{X}).$$
This penalty is nonconvex yet tighter than the convex QNN, more aggressively shrinking small singular values and promoting low rank. Optimization proceeds through ADMM, alternating updates of factors and nuclear terms, with only the small factor matrices requiring QSVD at each step—yielding dramatic computational gains for high-dimensional image data (Miao et al., 2020). Empirical parameter guidelines include a moderate regularization weight, a penalty parameter increased geometrically from a small initial value, and adaptive rank selection by spectral gap.
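The Schatten-$2/3$ identity can be verified numerically (real stand-in for the quaternion case): for $\mathbf{X} = \mathbf{P}\,\mathrm{diag}(s)\,\mathbf{Q}^H$, the factors $\mathbf{U} = \mathbf{P}\,\mathrm{diag}(s)^{1/3}$ and $\mathbf{V} = \mathbf{Q}\,\mathrm{diag}(s)^{2/3}$ attain the claimed value; that this choice is also the minimizer is the cited result, not re-proved here.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))  # rank-3 matrix
P, s, Qt = np.linalg.svd(X, full_matrices=False)

# Optimal bilinear factors built from fractional powers of the spectrum.
U = P @ np.diag(s ** (1 / 3))
V = Qt.T @ np.diag(s ** (2 / 3))
assert np.allclose(U @ V.T, X)           # valid factorization X = U V^T

# (||U||_F^2 + 2||V||_*) / 3 equals sum_i s_i^{2/3}, the Schatten-2/3 value.
fnn = (np.linalg.norm(U) ** 2 + 2 * np.linalg.svd(V, compute_uv=False).sum()) / 3
assert np.isclose(fnn, (s ** (2 / 3)).sum())
```

Because only the thin factors are decomposed, each ADMM step touches matrices of size $m \times r$ and $n \times r$ rather than $m \times n$, which is the source of the reported speedups.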
7. Empirical Performance and Practical Impact
Extensive evaluations demonstrate that QNOF and its factorized nonconvex surrogates attain state-of-the-art results in color image completion, robust PCA, and classification. Across a wide range of missing entry rates, noise levels, and image sizes, QNOF consistently outperforms the conventional nuclear norm, Schatten-$p$ surrogates, and tensor-based methods in PSNR, SSIM, and runtime (Guo et al., 30 Apr 2025, Miao et al., 2019, Miao et al., 2020, Chen et al., 9 Dec 2025). Visual reconstructions exhibit sharper edges and reduced artifacts, while classification models demonstrate superior generalization and stability under noise. A plausible implication is that QNOF's parameter-free, scale-invariant design and efficient spectral reduction provide key advantages in multidimensional data modeling regimes characteristic of quaternion domains.
Table: Quaternion Norms in Matrix Regularization (as defined in the referenced literature)
| Norm/Model | Definition | Surrogate for Rank |
|---|---|---|
| QNOF | $\sum_i \sigma_i \,\big/\, \big(\sum_i \sigma_i^2\big)^{1/2}$ | $\ell_1/\ell_2$ ratio (sharp, nonconvex) |
| Q-FNN | $\min_{\mathbf{X}=\mathbf{U}\mathbf{V}^H} \tfrac{1}{3}\big(\lVert\mathbf{U}\rVert_F^2 + 2\lVert\mathbf{V}\rVert_*\big)$ | Schatten-$2/3$ quasi-norm |
| QNN | $\sum_i \sigma_i$ | Convex envelope of rank |
| Frobenius | $\big(\sum_i \sigma_i^2\big)^{1/2}$ | Scale control, not low-rank |
QNOF and its computational variants form a rapidly consolidating paradigm for high-fidelity, efficient low-rank modeling over quaternion-valued data, with key impacts in color image processing and robust data analysis (Guo et al., 30 Apr 2025, Miao et al., 2019, Miao et al., 2020, Chen et al., 9 Dec 2025).