Augmentation-Aware Self-Supervision for Data-Efficient GAN Training (2205.15677v5)

Published 31 May 2022 in cs.LG, cs.AI, and cs.CV

Abstract: Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting. Previously proposed differentiable augmentation improves the data efficiency of GAN training. However, the augmentation implicitly introduces undesired invariance to augmentation in the discriminator, since it ignores the change of semantics in the label space caused by data transformation, which may limit the representation learning ability of the discriminator and ultimately affect the generative modeling performance of the generator. To mitigate the negative impact of invariance while inheriting the benefits of data augmentation, we propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data. In particular, the prediction targets of real data and generated data are required to be distinguished, since they differ during training. We further encourage the generator to learn adversarially from the self-supervised discriminator by generating data that is augmentation-predictable as real rather than as fake. This formulation connects the learning objective of the generator to the arithmetic-harmonic mean divergence under certain assumptions. We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures on data-limited CIFAR-10, CIFAR-100, FFHQ, LSUN-Cat, and five low-shot datasets. Experimental results demonstrate significant improvements of our method over SOTA methods in training data-efficient GANs.
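The abstract describes a two-part training signal: the usual adversarial loss on augmented data, plus a self-supervised loss in which the discriminator predicts the augmentation parameters, with separate prediction targets for real and generated samples. Below is a minimal PyTorch sketch of that idea. Everything here is an assumption for illustration: the `augment` object with its `sample_params` method, the two-head `D(x, head=...)` interface, and the `lambda_ss` weight are hypothetical, not the authors' code, and the paper's actual losses may differ in form.

```python
import torch.nn.functional as F

# Hypothetical interfaces (assumptions, not the paper's code):
#   augment(x, omega)          -> differentiably augmented batch
#   augment.sample_params(n)   -> tensor of n augmentation parameters omega
#   D(x, head="real"|"fake")   -> (adversarial logit, predicted omega),
#     i.e. a shared backbone with separate prediction heads, so the targets
#     for real and generated data stay distinguished, as the abstract requires.

def d_step(D, G, x_real, z, augment, lambda_ss=1.0):
    """Discriminator update: adversarial loss + augmentation prediction."""
    omega_r = augment.sample_params(x_real.size(0))
    omega_f = augment.sample_params(z.size(0))
    x_fake = G(z).detach()

    adv_r, pred_r = D(augment(x_real, omega_r), head="real")
    adv_f, pred_f = D(augment(x_fake, omega_f), head="fake")

    # Non-saturating GAN loss on augmented data (DiffAugment-style).
    loss_adv = F.softplus(-adv_r).mean() + F.softplus(adv_f).mean()
    # Self-supervision: recover the augmentation parameters that were applied.
    loss_ss = F.mse_loss(pred_r, omega_r) + F.mse_loss(pred_f, omega_f)
    return loss_adv + lambda_ss * loss_ss

def g_step(D, G, z, augment, lambda_ss=1.0):
    """Generator update: fool D, and make fakes whose augmentation is
    predictable by the 'real' head and not by the 'fake' head."""
    omega = augment.sample_params(z.size(0))
    x_aug = augment(G(z), omega)

    adv, pred_as_real = D(x_aug, head="real")
    _, pred_as_fake = D(x_aug, head="fake")

    loss_adv = F.softplus(-adv).mean()
    loss_ss = F.mse_loss(pred_as_real, omega) - F.mse_loss(pred_as_fake, omega)
    return loss_adv + lambda_ss * loss_ss
```

The sign structure in `g_step` is one plausible reading of "augmentation-predictable as real rather than as fake"; it is under this kind of coupled objective that the abstract relates the generator's learning objective to the arithmetic-harmonic mean divergence.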

Authors (10)
  1. Liang Hou (24 papers)
  2. Qi Cao (57 papers)
  3. Yige Yuan (17 papers)
  4. Songtao Zhao (9 papers)
  5. Chongyang Ma (52 papers)
  6. Siyuan Pan (7 papers)
  7. Pengfei Wan (86 papers)
  8. Zhongyuan Wang (105 papers)
  9. Huawei Shen (119 papers)
  10. Xueqi Cheng (274 papers)
Citations (6)