
Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images (2101.02824v3)

Published 8 Jan 2021 in eess.IV and cs.CV

Abstract: In the last few years, image denoising has benefited a lot from the fast development of neural networks. However, the requirement of large amounts of noisy-clean image pairs for supervision limits the wide use of these models. Although there have been a few attempts in training an image denoising model with only single noisy images, existing self-supervised denoising approaches suffer from inefficient network training, loss of useful information, or dependence on noise modeling. In this paper, we present a very simple yet effective method named Neighbor2Neighbor to train an effective image denoising model with only noisy images. Firstly, a random neighbor sub-sampler is proposed for the generation of training image pairs. In detail, input and target used to train a network are images sub-sampled from the same noisy image, satisfying the requirement that paired pixels of paired images are neighbors and have very similar appearance with each other. Secondly, a denoising network is trained on sub-sampled training pairs generated in the first stage, with a proposed regularizer as additional loss for better performance. The proposed Neighbor2Neighbor framework is able to enjoy the progress of state-of-the-art supervised denoising networks in network architecture design. Moreover, it avoids heavy dependence on the assumption of the noise distribution. We explain our approach from a theoretical perspective and further validate it through extensive experiments, including synthetic experiments with different noise distributions in sRGB space and real-world experiments on a denoising benchmark dataset in raw-RGB space.

Overview of "Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images"

The paper presents a novel self-supervised framework called Neighbor2Neighbor, targeting the problem of image denoising where no clean reference images are available for traditional supervised training. Instead of relying on the conventional requirement of noisy-clean image pairs, this method leverages single noisy images to efficiently train denoising models.

Key Contributions

  1. Neighbor Sub-Sampling Strategy: The authors introduce a random neighbor sub-sampler that generates training pairs from single noisy images. This approach avoids the dependence on multiple noisy observations or explicit noise modeling that is common in prior self-supervised denoising methods.
  2. Denoising Network Training and Regularization: The paper proposes a training procedure that employs these sub-sampled pairs and integrates a regularization term into the loss function. This regularization term addresses potential discrepancies in pixel appearance, which are inherent when training with sub-sampled images. The method aims to reduce over-smoothing, a typical issue in many denoising networks.
  3. Extension of Noise2Noise: The Neighbor2Neighbor concept can be viewed as an extension of the Noise2Noise framework. While Noise2Noise requires pairs of independent noisy images of the same scene, Neighbor2Neighbor generates such pairs from a single noisy image by exploiting the similarity of neighboring pixels, thereby approximately satisfying Noise2Noise's independence assumption at the level of neighboring pixels.
  4. Theoretical and Empirical Validation: The authors provide a theoretical analysis supporting the feasibility of using neighbor-based sampling for effective network training. Furthermore, comprehensive experiments demonstrate the efficacy of their method, outperforming or aligning closely with various state-of-the-art self-supervised and traditional denoising approaches on both synthetic noise and real-world noise benchmark datasets.
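The random neighbor sub-sampler in item 1 can be sketched in a few lines. The snippet below is a minimal NumPy illustration under our own naming and layout assumptions, not the authors' code: it uses 2x2 cells and, per cell, picks one horizontally or vertically adjacent pixel pair to form the two sub-images.

```python
import numpy as np

def neighbor_subsample(noisy, rng=None):
    """Generate two sub-images from one noisy image such that paired
    pixels are adjacent inside each 2x2 cell (sketch of the random
    neighbor sub-sampler; names and layout are our own choices)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = noisy.shape[:2]
    h2, w2 = h // 2, w // 2
    # View the image as an (h2, w2) grid of flattened 2x2 cells.
    cells = noisy[: h2 * 2, : w2 * 2].reshape(h2, 2, w2, 2, -1)
    cells = cells.transpose(0, 2, 1, 3, 4).reshape(h2, w2, 4, -1)
    # Flattened cell positions: 0=(0,0) 1=(0,1) 2=(1,0) 3=(1,1).
    # Only horizontally or vertically adjacent pairs count as neighbors.
    pairs = np.array([(0, 1), (0, 2), (1, 3), (2, 3)])
    choice = rng.integers(0, len(pairs), size=(h2, w2))
    idx1, idx2 = pairs[choice][..., 0], pairs[choice][..., 1]
    ii, jj = np.meshgrid(np.arange(h2), np.arange(w2), indexing="ij")
    # Each output has half the spatial resolution of the input.
    return cells[ii, jj, idx1], cells[ii, jj, idx2]
```

Each call returns two half-resolution images whose co-located pixels are neighbors in the original (a grayscale `(h, w)` input yields `(h//2, w//2, 1)` outputs), which is exactly the property the training pairs need.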

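The training objective in item 2 can likewise be sketched. This is a simplified illustration of the loss structure, not the paper's implementation: for determinism it fixes the two sub-samplers to the top-left pixel and its right neighbor of every 2x2 cell (in the paper the neighbor pair is re-drawn randomly each iteration and the full-image branch is excluded from gradient computation), and the regularizer weight `gamma` is set to 2.0 purely for illustration.

```python
import numpy as np

def g1(x):
    # First sub-sampler: top-left pixel of every 2x2 cell.
    return x[0::2, 0::2]

def g2(x):
    # Second sub-sampler: the horizontally adjacent (top-right) pixel.
    return x[0::2, 1::2]

def neighbor2neighbor_loss(denoiser, noisy, gamma=2.0):
    """Reconstruction term on the sub-sampled pair plus the paper's
    consistency regularizer (simplified, fixed sub-samplers)."""
    out = denoiser(g1(noisy))
    # Reconstruction: denoising the first sub-image should predict the second.
    l_rec = np.mean((out - g2(noisy)) ** 2)
    # Regularizer: denoise the full image, then sub-sample, and penalise
    # the mismatch between the two orders of operations. This compensates
    # for the ground-truth gap between neighboring pixels.
    den = denoiser(noisy)
    l_reg = np.mean((out - g2(noisy) - (g1(den) - g2(den))) ** 2)
    return l_rec + gamma * l_reg
```

Note that for an identity "denoiser" the regularizer vanishes and the loss reduces to the mean squared gap between neighboring pixels, which is the baseline the network must beat.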
Experimental Evaluation

The experimental results substantiate the performance benefits of the proposed framework across multiple noise conditions:

  • Synthetic Noise Scenarios: Neighbor2Neighbor shows competitive performance against established baselines in handling Gaussian and Poisson noise. The method also provides robustness across varying noise levels, which highlights its potential in practical applications where noise characteristics may not be constant.
  • Real-World Scenarios: The approach achieves favorable results on real-world datasets without explicit noise modeling, a notable improvement over existing methods that heavily depend on accurate noise distribution estimation, which is often challenging in diverse real-world settings.

Theoretical Insights and Practical Implications

The primary theoretical contribution lies in adapting a strategy originally tailored to paired measurements (as in Noise2Noise) to operate effectively with sub-sampled data from a single image. This adaptation provides significant flexibility in training on datasets where obtaining clean or paired noisy data is impractical. Practically, the methodology allows for the adoption of existing sophisticated denoising architectures without modification, thereby ensuring that advances in denoising network designs can be seamlessly integrated into this framework.

Future Directions

Looking forward, the research could extend to more complex settings such as spatially correlated noise or extremely low-light conditions. Further work on the sampling strategy and the design of the regularizer could also yield performance gains and broader applicability to vision tasks beyond denoising.

By circumventing reliance on clean images or precise noise models, the Neighbor2Neighbor framework represents a significant stride in self-supervised learning, offering both theoretical innovation and practical utility in image restoration fields.

Authors (5)
  1. Tao Huang (203 papers)
  2. Songjiang Li (3 papers)
  3. Xu Jia (57 papers)
  4. Huchuan Lu (199 papers)
  5. Jianzhuang Liu (91 papers)
Citations (245)