
Quaternion Nuclear Norm Minus Frobenius Norm

Updated 19 December 2025
  • QNMF is a regularization framework that uses quaternions to jointly process RGB channels, effectively preserving inter-channel correlations in color images.
  • It employs a hybrid non-convex penalty combining the quaternion nuclear norm and a scaled Frobenius norm, closely approximating true rank minimization.
  • The ADMM-based optimization framework in QNMF delivers state-of-the-art performance in denoising, deblurring, inpainting, and impulse noise removal with strong theoretical guarantees.

Quaternion Nuclear Norm Minus Frobenius Norm (QNMF) is a regularization framework designed to improve color image reconstruction by leveraging quaternion algebra to jointly process RGB channels. QNMF addresses inter-channel correlation and enforces low-rank structure through a non-convex penalty. Its formulation enables accurate recovery in tasks such as denoising, deblurring, inpainting, and random impulse noise removal, consistently demonstrating state-of-the-art results in both synthetic and real-world scenarios (Guo et al., 12 Sep 2024).

1. Quaternion Representation of Color Images

Quaternion algebra offers a natural mechanism for encoding color images holistically. A quaternion is written as

$$\dot a = a_0 + a_1\,\mathbf{i} + a_2\,\mathbf{j} + a_3\,\mathbf{k}$$

with the imaginary units satisfying $\mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = \mathbf{i}\mathbf{j}\mathbf{k} = -1$. The conjugate is $\dot a^* = a_0 - a_1\,\mathbf{i} - a_2\,\mathbf{j} - a_3\,\mathbf{k}$, and the modulus is $|\dot a| = \sqrt{a_0^2 + a_1^2 + a_2^2 + a_3^2}$.
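As a concrete illustration of these rules, here is a minimal NumPy sketch of quaternion arithmetic using the component-order convention $(a_0, a_1, a_2, a_3)$; the function names are illustrative, not part of any published implementation:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions p = (p0, p1, p2, p3) and q = (q0, q1, q2, q3)."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([
        p0*q0 - p1*q1 - p2*q2 - p3*q3,   # real part
        p0*q1 + p1*q0 + p2*q3 - p3*q2,   # i component
        p0*q2 - p1*q3 + p2*q0 + p3*q1,   # j component
        p0*q3 + p1*q2 - p2*q1 + p3*q0,   # k component
    ])

def qconj(q):
    """Conjugate: negate the three imaginary components."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def qmod(q):
    """Modulus |q| = sqrt(q0^2 + q1^2 + q2^2 + q3^2)."""
    return np.sqrt(np.sum(np.asarray(q, dtype=float)**2))
```

Note that the product is non-commutative ($\mathbf{i}\mathbf{j} = \mathbf{k}$ but $\mathbf{j}\mathbf{i} = -\mathbf{k}$), and $\dot a\,\dot a^* = |\dot a|^2$.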

A quaternion matrix $\dot X \in \mathbb Q^{m\times n}$ expands as

$$\dot X = X_0 + X_1\,\mathbf{i} + X_2\,\mathbf{j} + X_3\,\mathbf{k}$$

with $X_\ell \in \mathbb R^{m\times n}$. The trace inner product is $\langle \dot X, \dot Y \rangle = \Re\,\mathrm{tr}(\dot X^*\dot Y)$, where $\dot X^*$ denotes the conjugate transpose, and the Frobenius norm is $\|\dot X\|_F = \sqrt{\sum_{i,j}|\dot X_{ij}|^2} = \sqrt{\sum_{k=1}^{\min\{m,n\}}\sigma_k(\dot X)^2}$.

RGB images are encoded as pure quaternion matrices: $\dot X_{RGB} = X_R\,\mathbf{i} + X_G\,\mathbf{j} + X_B\,\mathbf{k}$, where $X_R, X_G, X_B$ denote the red, green, and blue channel matrices, respectively, and the real part is zero.
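This encoding can be sketched directly: an $m \times n \times 3$ RGB array becomes a 4-component array whose real slice is zero, and the quaternion Frobenius norm reduces to the ordinary Euclidean norm over all channel entries (function names are illustrative):

```python
import numpy as np

def encode_rgb(img):
    """Encode an (m, n, 3) RGB array as a pure quaternion matrix,
    stored as a (4, m, n) real array: (real, i, j, k) components."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return np.stack([np.zeros_like(r), r, g, b])  # real part is zero

def qfro(X):
    """Frobenius norm of a quaternion matrix in component form:
    sqrt of the sum of squares over all four components."""
    return np.sqrt(np.sum(X**2))
```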

2. The QNMF Regularizer: Formulation and Properties

The QNMF regularization term penalizes the difference between the quaternion nuclear norm and a scaled Frobenius norm: $$R(\dot X) = \|\dot X\|_* - \alpha \|\dot X\|_F, \qquad \alpha > 0.$$ Here the quaternion nuclear norm is the sum of singular values, $\|\dot X\|_* = \sum_k \sigma_k(\dot X)$, and the Frobenius norm satisfies $\|\dot X\|_F = \sqrt{\sum_k \sigma_k(\dot X)^2}$.

This penalty is non-convex, constructed as a difference of convex functions. The nuclear norm component encourages singular value sparsity (lower effective rank), while the negative Frobenius norm selectively preserves large singular values, yielding a closer approximation to true rank minimization compared to the nuclear norm alone.
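One convenient way to evaluate $R(\dot X)$ numerically is via the standard complex adjoint representation: writing $\dot X = A + B\mathbf{j}$ with complex blocks $A = X_0 + X_1\mathbf{i}$ and $B = X_2 + X_3\mathbf{i}$, the $2m \times 2n$ complex matrix $\begin{pmatrix} A & B \\ -\bar B & \bar A \end{pmatrix}$ carries each quaternion singular value with multiplicity two. The sketch below assumes this construction; it is not code from the paper:

```python
import numpy as np

def qsvd_vals(X0, X1, X2, X3):
    """Singular values of X0 + X1 i + X2 j + X3 k via the complex adjoint.
    Each quaternion singular value appears twice in the adjoint's spectrum,
    so taking every other sorted value recovers them once each."""
    A = X0 + 1j * X1
    B = X2 + 1j * X3
    chi = np.block([[A, B], [-B.conj(), A.conj()]])
    return np.linalg.svd(chi, compute_uv=False)[::2]

def qnmf_penalty(X0, X1, X2, X3, alpha):
    """R(X) = ||X||_* - alpha * ||X||_F, computed from the singular values."""
    s = qsvd_vals(X0, X1, X2, X3)
    return s.sum() - alpha * np.sqrt((s**2).sum())
```

For $\alpha = 1$ the penalty is non-negative, since the $\ell_1$ norm of the singular-value vector dominates its $\ell_2$ norm; it vanishes exactly on rank-one matrices, which is the sense in which the difference approximates rank minimization.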

3. Optimization and Algorithmic Framework

Color image reconstruction under QNMF is formulated as a regularized inverse problem. For denoising, the minimization is $$\min_{\dot X}\ \frac{1}{2}\|\dot Y-\dot X\|_F^2 + \lambda\left(\|\dot X\|_* - \alpha\|\dot X\|_F\right),$$ and for general linear inverse problems (e.g., deblurring),

$$\min_{\dot X}\ \frac{\gamma}{2}\|\dot A \dot X - \dot Y\|_F^2 + \lambda\left(\|\dot X\|_* - \alpha\|\dot X\|_F\right).$$

An ADMM splitting is employed, introducing an auxiliary variable $\dot Z$ with the constraint $\dot X = \dot Z$. The augmented Lagrangian is

$$\mathcal L(\dot X, \dot Z, \dot\eta) = \frac{\gamma}{2}\|\dot A \dot X - \dot Y\|_F^2 + \lambda\left(\|\dot Z\|_* - \alpha\|\dot Z\|_F\right) + \frac{\beta}{2}\|\dot X - \dot Z\|_F^2 + \langle\dot\eta,\, \dot X - \dot Z\rangle$$

where variables are updated as follows:

  • X-subproblem: Closed-form update via quaternion FFT,

$$\dot X^{k+1} = (\gamma \dot A^* \dot A + \beta I)^{-1}(\gamma \dot A^* \dot Y + \beta \dot Z^k - \dot\eta^k)$$

  • Z-subproblem (QNMF proximal step):

Given the QSVD $\dot W = \dot X^{k+1} + \dot\eta^k/\beta = \dot U\,\mathrm{diag}(\sigma)\,\dot V^*$, the singular values are updated by

$$\tilde\sigma_i = \begin{cases} 0, & \sigma_i \le \lambda/\beta \\ K\left(\sigma_i - \dfrac{\lambda}{2\beta}\right), & \lambda/\beta < \sigma_i \le \dfrac{K\lambda/\beta}{2(K-1)} \\ \sigma_i, & \sigma_i > \dfrac{K\lambda/\beta}{2(K-1)} \end{cases}$$

with $K = 1 + \dfrac{\alpha\lambda/\beta}{\|\max(\sigma-\lambda/\beta,\,0)\|_2}$. The new iterate is $\dot Z^{k+1} = \dot U\,\mathrm{diag}(\tilde\sigma)\,\dot V^*$.

  • Multiplier and penalty updates: $\dot\eta^{k+1}=\dot\eta^k+\beta(\dot X^{k+1}-\dot Z^{k+1})$ and $\beta^{k+1}=\mu\beta^k$ with $\mu>1$.

Convergence is guaranteed under the monotonic penalty update ($\beta^k\to\infty$): the Z-subproblem admits a global solution, and the iterates satisfy $\|\dot X^{k+1}-\dot Z^{k+1}\|_F \to 0$ and $\|\dot X^{k+1}-\dot X^k\|_F \to 0$ (Guo et al., 12 Sep 2024).
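The update loop above can be sketched in a minimal real-valued prototype. To keep it self-contained, a plain real matrix (a degenerate quaternion with zero imaginary parts) stands in for the quaternion data, so NumPy's ordinary SVD replaces the QSVD, and $\dot A$ is taken as the identity with $\gamma = 1$ (the denoising case). Function names and default parameters are illustrative, not the paper's:

```python
import numpy as np

def qnmf_shrink(sigma, lam_beta, alpha):
    """Singular-value update for the Z-subproblem; direct transcription of
    the three-regime rule above, with lam_beta = lambda / beta."""
    t = np.maximum(sigma - lam_beta, 0.0)
    nt = np.linalg.norm(t)
    if nt == 0.0:
        return np.zeros_like(sigma)
    K = 1.0 + alpha * lam_beta / nt
    if K == 1.0:  # alpha == 0 degenerates to plain soft-thresholding
        return t
    upper = K * lam_beta / (2.0 * (K - 1.0))
    mid = K * (sigma - lam_beta / 2.0)
    return np.where(sigma <= lam_beta, 0.0, np.where(sigma <= upper, mid, sigma))

def qnmf_denoise(Y, lam=0.5, alpha=1.0, beta=1.0, mu=1.1, iters=50):
    """ADMM sketch for min_X 0.5||Y - X||_F^2 + lam(||X||_* - alpha ||X||_F)."""
    X, Z, eta = Y.copy(), Y.copy(), np.zeros_like(Y)
    for _ in range(iters):
        # X-subproblem (A = I, gamma = 1): closed-form quadratic solve
        X = (Y + beta * Z - eta) / (1.0 + beta)
        # Z-subproblem: SVD of W = X + eta/beta, then QNMF shrinkage
        U, s, Vt = np.linalg.svd(X + eta / beta, full_matrices=False)
        Z = (U * qnmf_shrink(s, lam / beta, alpha)) @ Vt
        # Multiplier and monotonic penalty updates
        eta = eta + beta * (X - Z)
        beta *= mu
    return X
```

The shrinkage rule zeroes small singular values, scales mid-range ones by $K > 1$, and leaves large ones untouched, which is how the negative Frobenius term preserves dominant structure while suppressing noise.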

4. Parameterization and Theoretical Guarantees

Parameter selection is data- and task-dependent. For denoising, recommended settings are a patch size $m$, a number of similar patches $n$ chosen according to the noise standard deviation $\sigma$, $\lambda=2c$ with $c = \sqrt{5\sqrt{2n}\,\sigma}$, and $\alpha = 4$. For deblurring, the penalty and fidelity weights ($\gamma$, $\beta$) are tuned per blur kernel. In inpainting and RPCA, $\alpha$ is kept fixed and $\rho$ is adapted for the $\ell_1$ error term. The framework is theoretically supported by optimality results for the Z-subproblem and general non-convex recovery guarantees.
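Transcribing the stated denoising rule for $\lambda$ (the exact grouping inside $c$ follows the text above and may differ in the paper's own notation):

```python
import numpy as np

def denoising_lambda(n, sigma):
    """lambda = 2c with c = sqrt(5 * sqrt(2n) * sigma), as stated above.
    n: number of similar patches; sigma: noise standard deviation."""
    c = np.sqrt(5.0 * np.sqrt(2.0 * n) * sigma)
    return 2.0 * c
```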

QNMF is non-convex but structured as a difference of convex functions, enabling tractable optimization. The singular value shrinkage in the Z-step admits closed-form evaluation.

5. Empirical Evaluation and Comparative Results

The efficacy of QNMF is established across multiple benchmarks:

  • Synthetic Gaussian Denoising: On CSet12, McMaster, and Kodak datasets, QNMF achieves average PSNR/SSIM improvements over CBM3D, McWNNM, SV-TV, QLRMA, QWNNM, and QWSNM across all noise levels. For CSet12, QNMF yields 31.36/0.8764 vs. QWNNM 31.25/0.8748 and QWSNM 31.30/0.8715.
  • Real Image Denoising: On CC, PolyU, and SIDD, QNMF attains leading performance, e.g., 36.53 dB/0.9166 (SIDD).
  • Deblurring: QNMF gives best or equivalent PSNR/SSIM for uniform, Gaussian, and motion blur settings, visually minimizing ringing artifacts and producing sharper edges.
  • Matrix Completion (MC): At 80% missing, QNMF (patch-based) secures PSNR 31.80/SSIM 0.9397 compared to nearest baseline 25.47/0.6993.
  • RPCA: With 10% impulse noise, QNMF-G records 29.34/0.8895 vs. TRPCA 28.80/0.9170 (a trade-off in SSIM).
  • Runtime Considerations: For denoising $256 \times 256$ images, QNMF runs in ≈620 s versus 580 s for QWNNM and 955 s for QWSNM; for deblurring, QNMF (≈2155 s) is notably faster than QWNNM (3831 s). For matrix completion, QNMF (≈10.8 s) is substantially faster than QMC (168 s) (Guo et al., 12 Sep 2024).

6. Strengths, Limitations, and Potential Extensions

QNMF's principal strength lies in its quaternion-based joint processing of RGB channels, preserving color structure and minimizing channel-specific artifacts. The hybrid nuclear–Frobenius penalty approximates rank more effectively than conventional convex relaxations, and the ADMM framework accommodates a range of low-level vision tasks with mathematically guaranteed convergence to stationary points.

However, the approach is constrained by the high computational complexity of QSVD and requires manual parameter tuning (patch size, $\lambda$, etc.). Potential future directions include the development of QSVD-free quaternion factorization techniques, exploration of additional hybrid norms (e.g., other nuclear/Frobenius norm combinations), and integration of quaternion low-rank priors into deep neural network architectures.

7. Context and Research Impact

QNMF advances low-rank color image modeling by embedding the intrinsic correlation of RGB channels via quaternion algebra and leveraging a non-convex low-rank surrogate. The method attains state-of-the-art quantitative and visual outcomes across a comprehensive array of color restoration tasks while upholding rigorous mathematical guarantees (Guo et al., 12 Sep 2024). Its modular optimization scheme and adaptability across different degradation modalities position QNMF as a significant development in color image reconstruction research.
