
Noise Robust Generative Adversarial Networks (1911.11776v2)

Published 26 Nov 2019 in cs.CV, cs.LG, eess.IV, and stat.ML

Abstract: Generative adversarial networks (GANs) are neural networks that learn data distributions through adversarial training. In intensive studies, recent GANs have shown promising results for reproducing training images. However, when the training images are noisy, they faithfully reproduce the noise along with the images. As an alternative, we propose a novel family of GANs called noise robust GANs (NR-GANs), which can learn a clean image generator even when training images are noisy. In particular, NR-GANs can solve this problem without having complete noise information (e.g., the noise distribution type, noise amount, or signal-noise relationship). To achieve this, we introduce a noise generator and train it along with a clean image generator. However, without any constraints, there is no incentive to generate an image and noise separately. Therefore, we propose distribution and transformation constraints that encourage the noise generator to capture only the noise-specific components. In particular, considering such constraints under different assumptions, we devise two variants of NR-GANs for signal-independent noise and three variants of NR-GANs for signal-dependent noise. On three benchmark datasets, we demonstrate the effectiveness of NR-GANs in noise robust image generation. Furthermore, we show the applicability of NR-GANs in image denoising. Our code is available at https://github.com/takuhirok/NR-GAN/.

Noise Robust Generative Adversarial Networks

The paper "Noise Robust Generative Adversarial Networks" introduces an approach to making Generative Adversarial Networks (GANs) robust to noise in their training datasets. The problem matters because traditional GANs replicate training data faithfully, noise included, limiting their ability to generate clean outputs, especially in domains where acquiring noise-free data is costly or impractical.

Overview of Contributions

The core contribution of this work is the development of Noise Robust GANs (NR-GANs), a novel family of GANs that includes a noise generator alongside the standard image generator. Importantly, these models operate without requiring complete prior knowledge about the noise characteristics such as distribution type or noise level. By integrating both distribution and transformation constraints, NR-GANs adeptly isolate noise from the signal, ensuring that only clean images are generated.
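To make the distribution-constraint idea concrete, here is a minimal numpy sketch (function names, shapes, and weights are illustrative, not the paper's networks): instead of emitting noise values directly, the noise generator outputs per-pixel standard deviations, and the noise is then sampled by reparameterization, which forces it to be zero-mean Gaussian by construction.

```python
import numpy as np

def noise_scale_generator(z):
    """Toy stand-in for a noise generator that outputs per-pixel standard
    deviations instead of raw noise values (softplus keeps them positive).
    The weights are fixed here; in NR-GANs they would be learned."""
    W = np.full((z.size, 16), 0.1)
    return np.log1p(np.exp(z @ W)).reshape(4, 4)  # softplus(z @ W)

def sample_constrained_noise(z, rng):
    """Distribution constraint via reparameterization: n = sigma * eps with
    eps ~ N(0, I), so the sampled noise is zero-mean Gaussian whatever
    scales the generator produces."""
    sigma = noise_scale_generator(z)
    eps = rng.standard_normal(sigma.shape)
    return sigma * eps

rng = np.random.default_rng(0)
samples = np.stack([sample_constrained_noise(np.ones(8), rng) for _ in range(2000)])
max_abs_mean = np.abs(samples.mean(axis=0)).max()  # close to 0: the constraint holds
```

Because the zero-mean property is baked into the sampling path, the noise generator cannot absorb image content (which has a nonzero mean structure) without being penalized, giving the clean image generator the incentive to model it instead.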

The paper introduces five variants of NR-GANs:

  1. SI-NR-GAN-I and II: Tailored for signal-independent noise, these models utilize assumptions about noise invariance or distribution characteristics.
  2. SD-NR-GAN-I, II, and III: Designed for signal-dependent noise. SD-NR-GANs either explicitly use prior noise knowledge or learn signal-noise relationships implicitly, enabling them to handle various noise types adaptively.
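The transformation constraint for signal-independent noise can be sketched similarly (again a toy illustration with hypothetical names, not the paper's implementation): since signal-independent noise is statistically invariant to flips and rotations, a random geometric transform is applied to the generated noise before it is added to the clean image, so any image-like structure smuggled into the noise channel gets scrambled and penalized by the discriminator.

```python
import numpy as np

def transform_noise(n, rng):
    """Transformation constraint (in the spirit of SI-NR-GAN-II): apply a
    random rotation/flip to the generated noise. Signal-independent noise
    statistics are unchanged by this, but image-like structure would be
    visibly scrambled."""
    n = np.rot90(n, k=int(rng.integers(0, 4)))
    if rng.integers(0, 2):
        n = np.fliplr(n)
    return n

def compose_observed_image(clean, noise, rng):
    """The discriminator only ever sees the sum, matching noisy real images."""
    return clean + transform_noise(noise, rng)

rng = np.random.default_rng(0)
clean = np.zeros((4, 4))             # placeholder for a clean-generator output
noise = rng.standard_normal((4, 4))  # placeholder for a noise-generator output
observed = compose_observed_image(clean, noise, rng)
```

Note that the transform only permutes pixel values, so genuine signal-independent noise passes through the constraint unaffected.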

Numerical Results and Evaluation

The NR-GAN framework is evaluated on three benchmark datasets: CIFAR-10, LSUN Bedroom, and FFHQ. The results showcase that NR-GANs consistently outperform both traditional GAN models and denoiser-preprocessing GAN systems across 152 experimental conditions. Among the variants, SI-NR-GAN-II and SD-NR-GAN-II perform robustly across various noise settings, demonstrating their flexibility and effectiveness even when noise assumptions are partially violated.

Implications and Future Directions

In practice, NR-GANs open new avenues for applications that demand clean image generation from noisy datasets, which are common in medical imaging, astronomy, and other domains with inherent observational noise. Theoretically, this work contributes significantly to the understanding of noise modeling within generative frameworks, suggesting that noise and data generators can be decoupled using adversarial training and thoughtful constraints.

Looking forward, future research could address the training dynamics where weak constraints lead to ineffective noise isolation, particularly in complex datasets. Moreover, exploring NR-GAN integration with other generative models, such as autoregressive or flow-based models, could widen their applicability and enhance their robustness further.

In summary, this paper provides a substantive advance in noise robustness for GANs, offering practical solutions to the longstanding challenge that noisy training data poses for generative modeling. The methodological innovations and comprehensive evaluation mark a significant stride toward more resilient and reliable GAN systems in noise-prone environments.

Authors (2)
  1. Takuhiro Kaneko (40 papers)
  2. Tatsuya Harada (142 papers)