GTCTV DPC: Tensor Completion via Deep Denoising
- GTCTV DPC is a tensor completion method that fuses generalized tensor correlated total variation with deep pseudo-contractive denoisers in a unified monotone inclusion framework.
- The algorithm employs a Davis–Yin splitting scheme to guarantee convergence without requiring the denoiser to be a true proximal operator.
- Empirical evaluations demonstrate that GTCTV DPC outperforms state-of-the-art methods, especially at low observation rates on multispectral images, videos, and spatiotemporal data.
The GTCTV DPC algorithm is a tensor completion method that combines generalized low-rank priors, expressed as generalized tensor correlated total variation (GTCTV), with deep pseudo-contractive (DPC) denoisers in a unified monotone inclusion framework. This approach, leveraging the Davis–Yin splitting scheme, addresses the limitations of previous plug-and-play tensor completion methods by rigorously establishing global convergence without the restrictive assumption that deep denoisers act as true proximal operators. GTCTV DPC demonstrates superior performance to state-of-the-art alternatives, particularly at low observation rates, for high-dimensional data such as multispectral images, color videos, and spatiotemporal matrices.
1. Algorithmic Framework
GTCTV DPC formulates the tensor completion problem as finding a solution to a monotone inclusion involving three principal operators:
- Data Consistency Operator: Enforced via the (sub)differential of the indicator function of the observation constraint set $\{\mathcal{X} : \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{M})\}$, where $\Omega$ is the index set of observed entries, ensuring that the estimated tensor matches the known values at observed indices.
- Generalized Low-Rank Prior: Introduced through the GTCTV penalty, which applies a weakly convex function $\varphi$ (e.g., MCP, SCAD) to the singular values of directional tensor gradients under an invertible transformation $\mathfrak{L}$, summed over a set of tensor directions $\Gamma$:
  $$\|\mathcal{X}\|_{\mathrm{GTCTV}} = \sum_{d \in \Gamma} \sum_{i} \varphi\big(\sigma_i(\mathfrak{L}(\nabla_d \mathcal{X}))\big),$$
  where $\nabla_d$ denotes the difference operator along direction $d$.
- Deep Pseudo-Contractive Denoiser: Incorporated through the monotone operator $I - D_\sigma$, where $D_\sigma$ is a deep denoising network trained only to satisfy the pseudo-contractive property
  $$\|D_\sigma(x) - D_\sigma(y)\|^2 \le \|x - y\|^2 + \|(I - D_\sigma)(x) - (I - D_\sigma)(y)\|^2 \quad \text{for all } x, y.$$
This relaxes the firm non-expansiveness required for standard plug-and-play denoisers.
The overall model imposes global structure through GTCTV while preserving local and high-frequency features via the deep denoiser.
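To make these components concrete, the following is a minimal NumPy sketch (not the authors' implementation): an observation projection, a GTCTV-style penalty that applies MCP to the singular values of transformed directional gradients, and a numerical check of the pseudo-contractive inequality. The FFT transform along the last mode, the circular differences, the MCP parameters, and all function names are assumptions made for illustration.

```python
# Illustrative sketch of the three GTCTV DPC building blocks (assumed conventions).
import numpy as np

def project_observed(X, M, mask):
    """Data consistency: overwrite observed entries of X with the known values of M."""
    return np.where(mask, M, X)

def mcp(s, lam=1.0, gamma=2.0):
    """Minimax concave penalty (MCP), one admissible weakly convex phi."""
    return np.where(s <= gamma * lam,
                    lam * s - s**2 / (2.0 * gamma),
                    0.5 * gamma * lam**2)

def gtctv_penalty(X, directions=(0, 1)):
    """Sum of phi(singular values) of directional gradients, slice-wise under an
    invertible transform L (here: FFT along the last mode, as in t-SVD methods)."""
    total = 0.0
    for d in directions:
        G = np.diff(X, axis=d, append=np.take(X, [0], axis=d))  # circular difference
        G_hat = np.fft.fft(G, axis=-1)                           # transform L
        for k in range(G_hat.shape[-1]):                         # frontal slices
            s = np.linalg.svd(G_hat[..., k], compute_uv=False)
            total += mcp(np.abs(s)).sum()
    return total

def pseudo_contractive_gap(denoiser, x, y):
    """Nonpositive when the pseudo-contractive inequality holds for the pair (x, y)."""
    dx, dy = denoiser(x), denoiser(y)
    lhs = np.linalg.norm(dx - dy) ** 2
    rhs = np.linalg.norm(x - y) ** 2 + np.linalg.norm((x - dx) - (y - dy)) ** 2
    return lhs - rhs
```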
2. Monotone Inclusion Formulation
In contrast to penalized optimization approaches that require the denoiser to be a true proximal operator, the GTCTV DPC methodology recasts the problem as the monotone inclusion
$$0 \in \partial \iota_{\Omega}(\mathcal{X}) + \partial\!\left(\|\mathcal{X}\|_{\mathrm{GTCTV}} + \tfrac{\rho}{2}\|\mathcal{X}\|_F^2\right) + \lambda\,(I - D_\sigma)(\mathcal{X}),$$
where $\iota_{\Omega}$ is the indicator of the observation constraint, the quadratic term $\tfrac{\rho}{2}\|\cdot\|_F^2$ stabilizes the weakly convex prior, and $\lambda > 0$ is a coupling parameter. This operator splitting paradigm allows each term (data consistency, global low-rank structure, deep local prior) to be individually monotone or cocoercive, consolidating global expressivity with local detail preservation.
This formulation avoids the need for $D_\sigma$ to be a proximal map and enables the use of a wider class of denoisers within a convergence-guaranteed framework.
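As a small illustration of how the denoiser enters the inclusion, the sketch below builds the coupling operator $\lambda(I - D_\sigma)$; the averaging `toy_denoiser` is only a stand-in for a trained DPC network, and the parameter values are arbitrary.

```python
# Illustrative only: the coupling operator lam * (I - D_sigma) from the inclusion above.
import numpy as np

def toy_denoiser(x):
    """Symmetric local averaging along the first axis; a simple pseudo-contractive map."""
    return 0.5 * x + 0.25 * (np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0))

def coupling_operator(x, denoiser=toy_denoiser, lam=0.5):
    """C(x) = lam * (x - D(x)); monotone whenever D is pseudo-contractive."""
    return lam * (x - denoiser(x))
```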
3. Davis–Yin Splitting and Convergence Guarantees
The monotone inclusion is addressed using the Davis–Yin splitting (DYS) scheme, which defines the fixed-point mapping
$$T = \mathrm{Id} - J_{\gamma A} + J_{\gamma B}\circ\big(2 J_{\gamma A} - \mathrm{Id} - \gamma\, C \circ J_{\gamma A}\big),$$
where $J_{\gamma A}$ and $J_{\gamma B}$ are the resolvent (proximal) operators of the data-consistency term $A$ and the stabilized GTCTV term $B$, respectively, and $C = \lambda(I - D_\sigma)$ is the cocoercive operator associated with the denoiser.
Under the pseudo-contractive condition, the operator $T$ is shown to be strictly pseudo-contractive for stepsizes $\gamma$ in an interval larger than the classical range $(0, 2\beta)$, where $C$ is $\beta$-cocoercive. This relaxation allows more aggressive stepsizes than in earlier plug-and-play analyses.
Iterates are updated via the relaxed (Krasnosel'skii–Mann) scheme
$$z^{k+1} = (1 - \theta_k)\, z^k + \theta_k\, T(z^k),$$
with a relaxation sequence $\{\theta_k\} \subset (0, 1 - \kappa)$ satisfying $\sum_k \theta_k (1 - \kappa - \theta_k) = \infty$, where $\kappa$ denotes the pseudo-contraction constant of $T$. Convergence to a solution of the original inclusion is rigorously established.
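A minimal sketch of the resulting iteration is given below; it is not the reference implementation. `prox_gtctv` is a placeholder for the proximal operator of the (stabilized) GTCTV term, and the stepsize `gamma`, coupling `lam`, and fixed relaxation `theta` are illustrative choices rather than the paper's settings.

```python
# Sketch of the Davis-Yin / Krasnosel'skii-Mann iteration described above (assumptions noted).
import numpy as np

def dys_completion(M, mask, denoiser, prox_gtctv,
                   gamma=1.0, lam=0.5, theta=0.5, n_iter=200):
    """Davis-Yin splitting for 0 in A(x) + B(x) + C(x):
       A: subdifferential of the observation-consistency indicator (resolvent = projection),
       B: subdifferential of the stabilized GTCTV prior          (resolvent = prox_gtctv),
       C: lam * (I - D_sigma), evaluated explicitly (forward step)."""
    z = np.where(mask, M, 0.0)                           # warm start from observed entries
    for _ in range(n_iter):
        x = np.where(mask, M, z)                         # x = J_{gamma A}(z): project onto data
        c = lam * (x - denoiser(x))                      # C(x): cocoercive denoiser coupling
        y = prox_gtctv(2.0 * x - z - gamma * c, gamma)   # y = J_{gamma B}(2x - z - gamma C x)
        Tz = z + y - x                                   # fixed-point map T(z) = z - x + y
        z = (1.0 - theta) * z + theta * Tz               # relaxed (Krasnosel'skii-Mann) update
    return np.where(mask, M, z)                          # final estimate honors observed entries
```

With the pieces from the earlier sketches, `dys_completion(M, mask, toy_denoiser, prox_gtctv)` would run the loop once a proximal operator for the GTCTV term is supplied.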
4. Empirical Performance and Quantitative Results
Experimental evaluation demonstrates that GTCTV DPC consistently outperforms contemporary tensor completion methods on multiple data types:
- Multispectral Images and Color Videos: Using MPSNR and MSSIM, GTCTV DPC achieves substantial gains; at low sampling rates, improvements of up to 0.8 dB in MPSNR are reported over previous methods.
- Traffic Data Completion: Metrics such as MAPE and RMSE validate superior reconstruction accuracy, with both GTCTV DPC and its variant (using SCAD as the weakly convex penalty $\varphi$) yielding the smallest prediction errors.
- Visual Analysis: Reconstructed images exhibit both globally coherent geometry and preservation of fine-scale texture, even under extreme missing-data scenarios.
Residual curves for the monotone inclusion diminish steadily, evidencing effective convergence.
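For reference, the quoted metrics can be computed as in the short sketch below (assumed conventions: MPSNR averages band-wise PSNR, MAPE is taken over nonzero reference entries; MSSIM additionally requires an SSIM implementation and is omitted here).

```python
# Evaluation metrics used above, under the stated (assumed) conventions.
import numpy as np

def mpsnr(ref, est, data_range=1.0):
    """Mean PSNR over the bands/frames stacked along the last axis."""
    psnrs = []
    for b in range(ref.shape[-1]):
        mse = np.mean((ref[..., b] - est[..., b]) ** 2)
        psnrs.append(10.0 * np.log10(data_range ** 2 / max(mse, 1e-12)))
    return float(np.mean(psnrs))

def rmse(ref, est):
    """Root mean squared error over all entries."""
    return float(np.sqrt(np.mean((ref - est) ** 2)))

def mape(ref, est, eps=1e-8):
    """Mean absolute percentage error over entries with nonzero reference value."""
    nz = np.abs(ref) > eps
    return float(np.mean(np.abs((ref[nz] - est[nz]) / ref[nz]))) * 100.0
```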
5. Mathematical Structure and Operator Analysis
A summary table of principal mathematical components follows:
| Component | Operator/Functional | Role |
|---|---|---|
| Data consistency | $\partial \iota_{\Omega}$ | Enforces $\mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{M})$ at observed entries |
| Low-rank prior | $\partial\big(\|\cdot\|_{\mathrm{GTCTV}} + \tfrac{\rho}{2}\|\cdot\|_F^2\big)$ | Promotes low-rank structure in the gradient domain |
| DPC denoiser | $\lambda (I - D_\sigma)$ | Preserves local details; pseudo-contractive by training |
This operator splitting mechanism provides architectural interpretability and extends applicability to a broader array of denoisers.
6. Distinctions from Prior Approaches
Earlier plug-and-play methods typically require the denoiser to be a proximal mapping, a condition that does not generally hold for deep neural denoisers. GTCTV DPC overcomes this limitation by:
- Employing the monotone inclusion framework to decouple operator requirements.
- Relying only on pseudo-contractivity for the denoiser, allowing for training with realistic neural network models.
- Enabling a principled and globally convergent coupling of deep learning and generalized low-rank structures.
This suggests that a wider class of flexible, empirically powerful denoisers can be safely adopted without compromising convergence guarantees.
7. Practical Implications and Outlook
GTCTV DPC delivers robust tensor completion for high-dimensional, real-world datasets with extensive missing values, maintaining both structural and textural fidelity at very low observation rates. The algorithm’s convergence guarantee, architectural modularity (with independently tunable priors), and empirical effectiveness make it well-suited for applications in imaging, spatiotemporal analytics, and scientific data analysis.
A plausible implication is that, by broadening the class of admissible deep denoisers through monotone inclusion and strictly pseudo-contractive splitting, GTCTV DPC forms a general template for integrating nonconvex or learned regularizers into other structured inverse problems beyond completion, potentially facilitating new advances in computational imaging and signal recovery.