
Truncated Quaternion Tensor Nuclear Norm

Updated 3 November 2025
  • The QT-RNN is a truncated nuclear norm for 3-way quaternion tensors that preserves dominant RGB components in color video data.
  • It integrates TQt-SVD with ADMM and QTDCT-based sparsity penalties to efficiently handle high missing data rates and noise.
  • Empirical results demonstrate improved PSNR and structural similarity, validating its effectiveness over conventional tensor completion methods.

The Truncated Quaternion Tensor Nuclear Norm (QT-RNN) is a non-convex prior for low-rank recovery in quaternion-valued tensor data, principally color video and multidimensional visual signals. It extends the truncated nuclear norm concept into the quaternion tensor regime, enabling algorithms that preserve inter-channel (RGB) structure while favoring reconstructions dominated by essential, non-noisy components. QT-RNN is optimized in conjunction with quaternion transform sparsity penalties, typically via augmented Lagrangian and ADMM frameworks, and has demonstrated empirical superiority over conventional nuclear norm approaches for the completion of color videos with high missing data rates.

1. Mathematical Formulation and Properties

QT-RNN is defined for a 3-way quaternion tensor $\dot{\mathcal{T}} \in \mathbb{H}^{I_1 \times I_2 \times I_3}$, where each pixel is encoded as a pure quaternion:

$$\dot{t} = 0 + t_r\, i + t_g\, j + t_b\, k$$

with $t_r, t_g, t_b$ the red, green, and blue channel values. Each color video frame is stacked along the third dimension, yielding the tensor representation $\dot{\mathcal{T}}$.
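The pure-quaternion encoding of an RGB video can be sketched with a 4-component real array standing in for the quaternion components; this is a minimal NumPy illustration, and the function name `encode_rgb_as_quaternion` is illustrative, not from the cited work:

```python
import numpy as np

def encode_rgb_as_quaternion(frames):
    """Stack RGB video frames of shape (F, H, W, 3) into a pure-quaternion
    tensor of shape (H, W, F, 4), the last axis holding the quaternion
    components (real, i, j, k) = (0, r, g, b)."""
    F, H, W, _ = frames.shape
    qt = np.zeros((H, W, F, 4))
    # The real part stays zero (pure quaternions); the three imaginary
    # parts carry the color channels.
    qt[..., 1:] = np.moveaxis(frames, 0, 2)  # (F, H, W, 3) -> (H, W, F, 3)
    return qt
```

A dedicated quaternion library would replace the 4-component axis with a true quaternion dtype, but the layout is the same.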

The transform-based quaternion tensor singular value decomposition (TQt-SVD) yields, for each frontal slice $\dot{\mathcal{T}}^{(i)}$, a set of singular values $\sigma_j(\dot{\mathcal{T}}^{(i)})$. The standard quaternion tensor nuclear norm (QTNN) sums all such singular values:

$$\left\| \dot{\mathcal{T}} \right\|_* = \sum_{i=1}^{I_3} \sum_{j=1}^{\min\{I_1, I_2\}} \sigma_j\bigl(\dot{\mathcal{T}}^{(i)}\bigr)$$

The truncated quaternion tensor nuclear norm is defined as:

$$\left\| \dot{\mathcal{T}} \right\|_r = \sum_{i=1}^{I_3} \sum_{j=r+1}^{\min\{I_1, I_2\}} \sigma_j\bigl(\dot{\mathcal{T}}^{(i)}\bigr)$$

where, for each frontal slice, only the singular values from rank $r+1$ to $\min\{I_1, I_2\}$ are summed, excluding the dominant $r$ terms. This suits data with a few strong modes (signal) and many weak modes (noise or missingness), whereas the classic nuclear norm over-regularizes the major components.
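The slice-wise truncated sum can be sketched in NumPy. Since NumPy has no quaternion SVD, a real-valued tensor stands in for the quaternion slices here; the function name `truncated_nuclear_norm` is illustrative:

```python
import numpy as np

def truncated_nuclear_norm(T, r):
    """Sum the singular values sigma_{r+1} .. sigma_{min(I1,I2)} of every
    frontal slice T[:, :, i], skipping the r largest per slice.
    Real-matrix stand-in for the quaternion TQt-SVD version."""
    total = 0.0
    for i in range(T.shape[2]):
        # np.linalg.svd returns singular values in descending order
        s = np.linalg.svd(T[:, :, i], compute_uv=False)
        total += s[r:].sum()
    return total
```

With $r = 0$ this reduces to the full (QT)NN; with $r = \min\{I_1, I_2\}$ it is identically zero.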

The quaternion tensor TQt-rank is related as:

$$\mathrm{rank}_{TQt}(\dot{\mathcal{T}}) = \#\{\, k \mid \|\dot{\mathcal{D}}(k,k,:)\|_F > 0 \,\}$$

with $\dot{\mathcal{D}}$ the f-diagonal tensor from TQt-SVD ($k$ indexes the tubes along the third mode).
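The tube-counting definition translates directly to code; a minimal sketch, assuming the f-diagonal tensor is given as a real array (the name `tqt_rank` is illustrative):

```python
import numpy as np

def tqt_rank(D, tol=1e-10):
    """Count the diagonal tubes D[k, k, :] whose Frobenius norm exceeds
    a small tolerance, i.e. the TQt-rank of the decomposed tensor."""
    n = min(D.shape[0], D.shape[1])
    return sum(int(np.linalg.norm(D[k, k, :]) > tol) for k in range(n))
```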

2. Integration in Tensor Completion Models

In color video completion, the goal is to estimate the true tensor $\dot{\mathcal{T}}^*$ given observations $\dot{\mathcal{O}}$ and an observation mask $\Omega$. Direct minimization of nuclear (or truncated nuclear) norms promotes low-rankness globally. The QT-RNN-based completion objective is:

$$\min_{\dot{\mathcal{T}}} \left\| \dot{\mathcal{T}} \right\|_r + \lambda \left\| \dot{\mathcal{S}} \right\|_1$$

subject to

$$P_\Omega(\dot{\mathcal{T}}) = P_\Omega(\dot{\mathcal{O}}), \quad \dot{\mathcal{S}} = \mathcal{C}(\dot{\mathcal{T}})_L$$

where $P_\Omega$ is the mask projection and $\mathcal{C}(\cdot)_L$ is the left-handed quaternion tensor discrete cosine transform (QTDCT). The $\ell_1$ norm on $\dot{\mathcal{S}}$ enforces local sparsity in the transformed domain, complementing the global low-rank prior.
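The two constraint-handling primitives have simple proximal forms. A minimal NumPy sketch, with real arrays standing in for the quaternion tensors and QTDCT coefficients (function names are illustrative):

```python
import numpy as np

def project_mask(T, O, mask):
    """Enforce P_Omega(T) = P_Omega(O) by overwriting the observed
    entries of T with the corresponding entries of O."""
    out = T.copy()
    out[mask] = O[mask]
    return out

def soft_threshold(S, tau):
    """Entrywise proximal operator of tau * ||.||_1, applied to the
    transform coefficients S (the sparse component update)."""
    return np.sign(S) * np.maximum(np.abs(S) - tau, 0.0)
```

In the full model, `soft_threshold` would be applied to QTDCT coefficients and the result mapped back through the inverse transform.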

The preservation of RGB structure arises from encoding all three channels in the imaginary components of the quaternions, a property lost in real- or complex-valued tensor decompositions, which typically unfold or separate channels and thereby destroy inter-channel correlation.

3. Optimization via ADMM and Truncated SVD

The model employs a two-stage alternating optimization based on the Alternating Direction Method of Multipliers:

  1. SVD/Truncation Update: At each iterate, perform TQt-SVD and truncate the largest $r$ singular values per frontal slice, updating auxiliary factors to enforce low-rankness according to the QT-RNN.
  2. ADMM Block Updates: Introduce auxiliary variables for the truncated term, the nuclear norm, and the QTDCT sparsity penalty; form the augmented Lagrangian. Updates proceed by:
    • Soft-thresholding in the transformed (QTDCT) domain for the sparse component
    • Quaternion singular value thresholding for the low-rank update
    • A closed-form update for the auxiliary variable $\dot{\mathcal{H}}$
    • Gradient-based or logarithmic nuclear norm variants for increased non-convexity handling
    • Lagrange multiplier and penalty parameter updates

This procedure leverages the structure of quaternion algebra for efficient computation and enforces the joint low-rank/sparsity model. The ADMM-based optimization is robust to missing data patterns and does not require explicit tuning for different video sizes or content types.
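The core low-rank update above is a singular value thresholding step that spares the leading $r$ singular values. A minimal sketch with real matrices standing in for quaternion frontal slices (names `truncated_svt` and `complete_step` are illustrative, not from the cited work):

```python
import numpy as np

def truncated_svt(M, r, tau):
    """Singular value thresholding that leaves the r leading singular
    values untouched (unpenalized by the QT-RNN) and soft-thresholds
    the remaining ones by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shr = s.copy()
    s_shr[r:] = np.maximum(s[r:] - tau, 0.0)
    return (U * s_shr) @ Vt

def complete_step(T, O, mask, r, tau):
    """One alternation: slice-wise truncated SVT, then data consistency
    on the observed entries."""
    X = np.stack([truncated_svt(T[:, :, i], r, tau)
                  for i in range(T.shape[2])], axis=2)
    X[mask] = O[mask]
    return X
```

Iterating `complete_step` mimics the alternation between the low-rank prior and the mask constraint; the full ADMM scheme additionally carries the QTDCT sparse block and the multiplier updates.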

4. Relationship to Weighted and Truncated Nuclear Norms

QT-RNN stands in direct analogy to truncated nuclear norms in the real and complex domains (Yang et al., 2021). It is also closely related to the quaternion weighted nuclear norm (QWNN) (Miao et al., 2022), used in tensor-train rank minimization; QWNN can emulate truncation by setting the weights $w_i = 0$ for the largest $r$ singular values. However, QT-RNN is specifically formulated for cases where the dominant singular values represent genuine signal, and its explicit truncation avoids suppressing those vital components.

A plausible implication is that, whereas QWNN provides continuous adaptive weighting, QT-RNN offers explicit control over the number of unpenalized components, which may give sharper control for practical video data with a few dominant modes.

5. Empirical Evaluation and Practical Impact

Experimental results (Yang et al., 2022) demonstrate that QT-RNN is effective for video completion, particularly when the observation mask retains only 10–20% of the data. Compared with tensor and quaternionic nuclear norm or matrix completion methods, QT-RNN yields higher PSNR and average structural similarity (ASSIM), and the reconstructed color frames better preserve local detail and cross-channel fidelity. This suggests robust performance in scenarios with high missing data rates and complex spatial-temporal dependencies.

Table: Summary of Key Operators and Their Roles

| Operator/Term | Mathematical Expression | Role |
|---|---|---|
| TQt-rank | $\#\{k \mid \lVert\dot{\mathcal{D}}(k,k,:)\rVert_F > 0\}$ | Rank for quaternion tensor tubes |
| QTNN | $\sum_{i,j} \sigma_j(\dot{\mathcal{T}}^{(i)})$ | Promotes low-rankness |
| QT-RNN | $\sum_{i=1}^{I_3}\sum_{j=r+1}^{\min\{I_1, I_2\}} \sigma_j(\cdot)$ | Low-rankness without penalizing dominant components |
| QTDCT $\ell_1$ sparsity | $\lVert\mathcal{C}(\dot{\mathcal{T}})_L\rVert_1$ | Preserves local detail, reduces artifacts |

6. Connections to Alternative Norms and Future Research

Recent work (Zahir et al., 16 Sep 2024) expands low-rank quaternion tensor approximation using non-convex quasi-norms (e.g., Geman, Laplace, logarithmic), which may offer even tighter relaxations for the rank function compared to QT-RNN or QTNN. These alternatives can be implemented with difference-of-convex programming in singular value thresholding stages.

A plausible implication is that hybrid models combining QT-RNN with non-convex surrogate penalties or adaptive weighting (as in QWNN) may further improve robustness and fidelity in color video or hyperspectral data recovery. Extensions to higher-order tensors (beyond video, e.g., multichannel spatiotemporal data) utilizing TT-rank or Tucker-rank formulations with truncated or weighted quaternion nuclear norms are under active investigation.

7. Limitations and Considerations

While QT-RNN enhances preservation of dominant signal components and color correlations, its non-convexity increases optimization complexity and may yield local minima in pathological cases. Careful initialization and parameter selection (the truncation parameter $r$ and the penalty weight $\lambda$) remain necessary. Nevertheless, strong empirical performance is reported for a range of color video datasets and sampling regimes, supporting its adoption in practical low-rank quaternion tensor completion pipelines.

In summary, QT-RNN is a specialized non-convex norm for exploiting structural and spectral properties of quaternion-valued tensors, delivering state-of-the-art recovery in color video completion and serving as a basis for future research in multiway quaternion data analysis.
