
Residual Contrastive Learning for Image Reconstruction: Learning Transferable Representations from Noisy Images (2106.10070v2)

Published 18 Jun 2021 in cs.CV and cs.LG

Abstract: This paper is concerned with contrastive learning (CL) for low-level image restoration and enhancement tasks. We propose a new label-efficient learning paradigm based on residuals, residual contrastive learning (RCL), and derive an unsupervised visual representation learning framework, suitable for low-level vision tasks with noisy inputs. While supervised image reconstruction aims to minimize residual terms directly, RCL instead builds a connection between residuals and CL by defining a novel instance discrimination pretext task, using residuals as the discriminative feature. Our formulation mitigates the severe task misalignment between instance discrimination pretext tasks and downstream image reconstruction tasks, present in existing CL frameworks. Experimentally, we find that RCL can learn robust and transferable representations that improve the performance of various downstream tasks, such as denoising and super-resolution, in comparison with recent self-supervised methods designed specifically for noisy inputs. Additionally, our unsupervised pre-training can significantly reduce annotation costs whilst maintaining performance competitive with fully-supervised image reconstruction.
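To make the core idea concrete, below is a minimal, hypothetical sketch of residual-based contrastive pre-training: residuals (noisy input minus a reconstruction estimate) are encoded and trained with a standard InfoNCE objective, so that residuals, rather than image content, serve as the discriminative feature. All names here (`ResidualEncoder`, `rcl_info_nce`) and architectural choices are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of residual contrastive learning (RCL)-style pre-training.
# Assumption: two augmented noisy views of each image, each with a reconstruction
# estimate from some backbone; the residual is the contrastive feature.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualEncoder(nn.Module):
    """Small conv encoder mapping a residual map to a normalized embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def rcl_info_nce(z1, z2, temperature=0.1):
    """InfoNCE over two views: z1[i] and z2[i] form the positive pair,
    all other entries in the batch act as negatives."""
    logits = z1 @ z2.t() / temperature                 # (B, B) similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Usage with random stand-in tensors (shapes only; no real data or backbone):
encoder = ResidualEncoder()
noisy_a, noisy_b = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
est_a, est_b = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
z_a = encoder(noisy_a - est_a)   # residual of view A as the instance feature
z_b = encoder(noisy_b - est_b)   # residual of view B
loss = rcl_info_nce(z_a, z_b)
loss.backward()
```

Discriminating instances by their residuals, rather than by the images themselves, is what the abstract credits with reducing the misalignment between the pretext task and downstream reconstruction: the representation is pushed to capture exactly the error signal that denoising or super-resolution must remove.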

Authors (6)
  1. Nanqing Dong (34 papers)
  2. Matteo Maggioni (11 papers)
  3. Yongxin Yang (73 papers)
  4. Eduardo Pérez-Pellitero (25 papers)
  5. Ales Leonardis (84 papers)
  6. Steven McDonagh (43 papers)
Citations (5)
