GTCTV DPC: Tensor Completion via Deep Denoising

Updated 16 October 2025
  • GTCTV DPC is a tensor completion method that fuses generalized tensor correlated total variation with deep pseudo-contractive denoisers in a unified monotone inclusion framework.
  • The algorithm employs a Davis–Yin splitting scheme to guarantee convergence without requiring the denoiser to be a true proximal operator.
  • Empirical evaluations demonstrate that GTCTV DPC outperforms state-of-the-art methods, especially at low observation rates on multispectral images, videos, and spatiotemporal data.

The GTCTV DPC algorithm is a tensor completion method that combines generalized low-rank priors, expressed as generalized tensor correlated total variation (GTCTV), with deep pseudo-contractive (DPC) denoisers in a unified monotone inclusion framework. This approach, leveraging the Davis–Yin splitting scheme, addresses the limitations of previous plug-and-play tensor completion methods by rigorously establishing global convergence without the restrictive assumption that deep denoisers act as true proximal operators. GTCTV DPC demonstrates superior performance to state-of-the-art alternatives, particularly at low observation rates, for high-dimensional data such as multispectral images, color videos, and spatiotemporal matrices.

1. Algorithmic Framework

GTCTV DPC formulates the tensor completion problem as finding a solution $\mathcal{X}$ to a monotone inclusion involving three principal operators:

  1. Data Consistency Operator: Enforced via the (sub)differential of the indicator function $\delta_{\mathcal{Y},\Omega}$ over the set of observed entries (i.e., $\partial\delta_{\mathcal{Y},\Omega}$), ensuring that the estimated tensor matches the known values at observed indices.
  2. Generalized Low-Rank Prior: Introduced through the GTCTV penalty, which applies a weakly convex function $f$ (e.g., $|x|$, MCP, SCAD) to the singular values of directional tensor gradients $\nabla_d\mathcal{A}$ under an invertible transformation $\mathcal{L}$, summed over a set of tensor directions $\Gamma$ (see the sketch at the end of this section):

$$\|\mathcal{A}\|_{\mathrm{GTCTV}} = \frac{1}{\gamma} \sum_{d \in \Gamma} \|\nabla_d \mathcal{A}\|_{f,\mathcal{L}}$$

  3. Deep Pseudo-Contractive Denoiser: Incorporated as a monotone operator $C = \alpha(\mathrm{Id} - D_s)$, where $D_s$ is a deep denoising network trained only to satisfy the pseudo-contractive property:

$$\|D_s(x) - D_s(y)\|^2 \leq \|x - y\|^2 + k\,\|(\mathrm{Id} - D_s)(x) - (\mathrm{Id} - D_s)(y)\|^2, \quad \text{for } k \in (0,1)$$

This relaxes the firm non-expansiveness required for standard plug-and-play denoisers.

The overall model imposes global structure through GTCTV while preserving local and high-frequency features via the deep denoiser.
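
To make the penalty concrete, the following is a minimal sketch (not the authors' code) of how a GTCTV-style penalty could be evaluated, assuming circular first-order differences for $\nabla_d$, the FFT along the third mode as the invertible transform $\mathcal{L}$, and $f = |\cdot|$ by default; the function names and these specific choices are illustrative assumptions.

```python
import numpy as np

def directional_diff(A, axis):
    """Circular first-order difference of a 3-way tensor along one mode."""
    return np.roll(A, -1, axis=axis) - A

def gtctv_penalty(A, directions=(0, 1), gamma=1.0, f=np.abs):
    """Evaluate a GTCTV-style penalty: transform each directional gradient with
    an invertible transform (here the FFT along the third mode), then sum a
    weakly convex function f of the singular values of every frontal slice."""
    total = 0.0
    for d in directions:
        G = directional_diff(A, axis=d)
        G_hat = np.fft.fft(G, axis=2)            # stand-in for the transform L
        for j in range(G_hat.shape[2]):          # frontal slices in the transform domain
            s = np.linalg.svd(G_hat[:, :, j], compute_uv=False)
            total += f(s).sum()
    return total / gamma

# Example: penalty of a random 32x32x8 tensor with f = |x| (transformed nuclear norm)
X = np.random.randn(32, 32, 8)
print(gtctv_penalty(X))
```

Swapping `f` for MCP or SCAD changes only the scalar function applied to the singular values; the gradient and transform structure stays the same.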

2. Monotone Inclusion Formulation

In contrast to penalized optimization approaches that require the denoiser to be a true proximal operator, the GTCTV DPC methodology recasts the problem as solving:

$$0 \in \partial\delta_{\mathcal{Y},\Omega}(\mathcal{X}) + \partial\Big(\|\mathcal{X}\|_{\mathrm{GTCTV}} + 2\mu\|\mathcal{X}\|_F^2\Big) + \alpha(\mathrm{Id} - D_s)(\mathcal{X})$$

where $\mu > 0$ stabilizes the weakly convex prior and $\alpha > 0$ is a coupling parameter. This operator-splitting formulation lets each term (data consistency, global low-rank structure, deep local prior) be handled as an individually monotone or cocoercive operator, combining global expressivity with local detail preservation.
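
A brief note on why the quadratic term stabilizes the prior (a standard weak-convexity argument; the modulus $\rho$ is introduced here for illustration and is not specified in the source): if $\|\cdot\|_{\mathrm{GTCTV}}$ is $\rho$-weakly convex, i.e., $\|\cdot\|_{\mathrm{GTCTV}} + \tfrac{\rho}{2}\|\cdot\|_F^2$ is convex, then

$$\|\mathcal{X}\|_{\mathrm{GTCTV}} + 2\mu\|\mathcal{X}\|_F^2 = \Big(\|\mathcal{X}\|_{\mathrm{GTCTV}} + \tfrac{\rho}{2}\|\mathcal{X}\|_F^2\Big) + \Big(2\mu - \tfrac{\rho}{2}\Big)\|\mathcal{X}\|_F^2,$$

so the augmented prior term is convex whenever $4\mu \geq \rho$, its subdifferential is maximally monotone, and the corresponding resolvent used in the splitting is well defined.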

This formulation avoids the need for $D_s$ to be a proximal map and enables the use of a wider class of denoisers within a convergence-guaranteed framework.
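
As a concrete reading of the three terms, here is a minimal sketch (illustrative, not the paper's implementation) of the data-consistency resolvent, which reduces to restoring the observed entries, and of the denoiser-induced operator $C$; the GTCTV resolvent is omitted, and all names are hypothetical.

```python
import numpy as np

def data_consistency_resolvent(Z, Y, mask):
    """Resolvent of the data-consistency term: the indicator of the set
    {X : X_Omega = Y_Omega} has as its resolvent the projection that simply
    restores the observed entries of Y."""
    X = Z.copy()
    X[mask] = Y[mask]
    return X

def denoiser_residual_operator(Z, denoiser, alpha):
    """The operator C = alpha * (Id - D_s) induced by a pseudo-contractive
    denoiser D_s, passed in as any tensor-to-tensor callable."""
    return alpha * (Z - denoiser(Z))

# Tiny usage example: 5% observed entries, identity function standing in for D_s
Y = np.random.randn(8, 8, 4)
mask = np.random.rand(8, 8, 4) < 0.05
X0 = data_consistency_resolvent(np.zeros_like(Y), Y, mask)
C_X0 = denoiser_residual_operator(X0, denoiser=lambda t: t, alpha=1.0)
```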

3. Davis–Yin Splitting and Convergence Guarantees

The monotone inclusion is addressed using the Davis–Yin splitting (DYS) scheme, which defines an iterative fixed-point mapping:

$$T = \mathrm{Id} - J_{\tau B} + J_{\tau A}\big(2J_{\tau B} - \mathrm{Id} - \tau C\big)$$

where $J_{\tau A}$ and $J_{\tau B}$ are the resolvent (proximal) operators of $A$ and $B$, and $C$ is the cocoercive operator associated with the denoiser.

Under the pseudo-contractive condition, the operator $T$ is shown to be strictly pseudo-contractive on an enlarged stepsize interval $\tau \in (0, 4\beta)$, where $C$ is $\beta$-cocoercive with $\beta = \frac{1-k}{2\alpha}$. This relaxation permits more aggressive stepsizes than earlier plug-and-play analyses.

Iterates $\{z_t\}$ are updated via:

$$z_{t+1} = (1-\lambda_t)\,z_t + \lambda_t\, T(z_t)$$

with a relaxation sequence $\{\lambda_t\}$ satisfying $\sum_t \lambda_t = \infty$ and $\sum_t \lambda_t^2 < \infty$. Convergence to a solution of the original problem is rigorously established.
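
A schematic of the resulting relaxed iteration follows, transcribing the operator $T$ displayed above literally; note that standard Davis–Yin presentations evaluate the cocoercive term at the intermediate point $J_{\tau B}(z)$, so the argument of `C` may need adjusting. The callables, relaxation sequence, and stopping rule are placeholders, not the authors' code.

```python
import numpy as np

def relaxed_splitting(z0, J_tauA, J_tauB, C, tau, lambdas, tol=1e-6):
    """Relaxed fixed-point iteration z_{t+1} = (1 - lam_t) z_t + lam_t T(z_t),
    with T(z) = z - J_tauB(z) + J_tauA(2 J_tauB(z) - z - tau C(z)) as displayed
    above.  The stepsize tau should respect the cocoercivity bound, e.g.
    tau < 4 * beta with beta = (1 - k) / (2 * alpha)."""
    z = z0
    for lam in lambdas:
        xB = J_tauB(z)                               # resolvent of B at z
        Tz = z - xB + J_tauA(2.0 * xB - z - tau * C(z))
        z_new = (1.0 - lam) * z + lam * Tz           # relaxed fixed-point update
        converged = np.linalg.norm(z_new - z) <= tol * max(1.0, np.linalg.norm(z))
        z = z_new
        if converged:
            break
    return z

# A diminishing relaxation sequence meeting the stated conditions, e.g.:
# lambdas = (1.0 / (t + 1) for t in range(5000))
```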

4. Empirical Performance and Quantitative Results

Experimental evaluation demonstrates that GTCTV DPC consistently outperforms contemporary tensor completion methods on multiple data types:

  • Multispectral Images and Color Videos: Using MPSNR and MSSIM, GTCTV DPC achieves substantial gains; at a sampling rate $\mathrm{SR} = 0.05$, improvements of up to 0.8 dB in MPSNR are reported compared to previous methods.
  • Traffic Data Completion: Metrics such as MAPE and RMSE validate superior reconstruction accuracy, with both GTCTV and its variant (using SCAD as $f$) yielding minimal prediction errors.
  • Visual Analysis: Reconstructed images exhibit both globally coherent geometry and preservation of fine-scale texture, even under extreme missing-data scenarios.

Residual curves for the monotone inclusion diminish steadily, evidencing effective convergence.
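
For reference, MPSNR denotes the mean PSNR taken over the spectral bands (or frames) of the reconstructed tensor; the sketch below uses this standard definition and is illustrative, not the authors' evaluation code.

```python
import numpy as np

def mpsnr(reference, estimate, peak=1.0):
    """Mean PSNR over the bands (third mode) of a 3-way tensor, assuming
    intensities scaled to [0, peak]."""
    psnrs = []
    for b in range(reference.shape[2]):
        mse = np.mean((reference[:, :, b] - estimate[:, :, b]) ** 2)
        psnrs.append(10.0 * np.log10(peak ** 2 / max(mse, 1e-12)))
    return float(np.mean(psnrs))
```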

5. Mathematical Structure and Operator Analysis

A summary table of principal mathematical components follows:

Component | Operator/Functional | Role
Data consistency | $\partial\delta_{\mathcal{Y},\Omega}$ | Enforces $\mathcal{X}_\Omega = \mathcal{Y}_\Omega$
Low-rank prior | $\partial\big(\|\cdot\|_{\mathrm{GTCTV}} + 2\mu\|\cdot\|_F^2\big)$ | Promotes low-rank structure in the gradient domain
DPC denoiser | $\alpha(\mathrm{Id} - D_s)$ | Preserves details, ensures pseudo-contractivity

This operator splitting mechanism provides architectural interpretability and extends applicability to a broader array of denoisers.

6. Distinctions from Prior Approaches

Earlier plug-and-play methods typically require the denoiser to be a proximal mapping, a condition that does not generally hold for deep neural denoisers. GTCTV DPC overcomes this limitation by:

  • Employing the monotone inclusion framework to decouple operator requirements.
  • Relying only on pseudo-contractivity for the denoiser, allowing for training with realistic neural network models.
  • Enabling a principled and globally convergent coupling of deep learning and generalized low-rank structures.

This suggests that a wider class of flexible, empirically powerful denoisers can be safely adopted without compromising convergence guarantees.
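
One plausible way, offered here as an assumption rather than the paper's prescribed training recipe, to encourage the pseudo-contractive property is to add a hinge penalty on violations of the inequality over pairs of training inputs:

```python
import torch

def pseudo_contractive_penalty(denoiser, x, y, k=0.9):
    """Hinge penalty on violations of the pseudo-contractive inequality
    ||D(x) - D(y)||^2 <= ||x - y||^2 + k ||(Id - D)(x) - (Id - D)(y)||^2
    for a pair of inputs; add it (with a weight) to the usual denoising loss
    during training.  Illustrative only, not the authors' training recipe."""
    dx, dy = denoiser(x), denoiser(y)
    lhs = (dx - dy).pow(2).sum()
    rhs = (x - y).pow(2).sum() + k * ((x - dx) - (y - dy)).pow(2).sum()
    return torch.relu(lhs - rhs)
```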

7. Practical Implications and Outlook

GTCTV DPC delivers robust tensor completion for high-dimensional, real-world datasets with extensive missing values, maintaining both structural and textural fidelity at very low observation rates. The algorithm’s convergence guarantee, architectural modularity (with independently tunable priors), and empirical effectiveness make it well-suited for applications in imaging, spatiotemporal analytics, and scientific data analysis.

A plausible implication is that, by broadening the class of admissible deep denoisers through monotone inclusion and strictly pseudo-contractive splitting, GTCTV DPC forms a general template for integrating nonconvex or learned regularizers into other structured inverse problems beyond completion, potentially facilitating new advances in computational imaging and signal recovery.
