Turning Waste into Wealth: Leveraging Low-Quality Samples for Enhancing Continuous Conditional Generative Adversarial Networks (2308.10273v3)

Published 20 Aug 2023 in cs.CV and cs.LG

Abstract: Continuous Conditional Generative Adversarial Networks (CcGANs) enable generative modeling conditional on continuous scalar variables (termed regression labels). However, they can produce subpar fake images due to limited training data. Although Negative Data Augmentation (NDA) effectively enhances unconditional and class-conditional GANs by introducing anomalies into real training images, guiding the GANs away from low-quality outputs, its impact on CcGANs is limited, as it fails to replicate negative samples that may occur during CcGAN sampling. We present a novel NDA approach called Dual-NDA specifically tailored for CcGANs to address this problem. Dual-NDA employs two types of negative samples: visually unrealistic images generated from a pre-trained CcGAN and label-inconsistent images created by manipulating real images' labels. Leveraging these negative samples, we introduce a novel discriminator objective alongside a modified CcGAN training algorithm. Empirical analysis on UTKFace and Steering Angle reveals that Dual-NDA consistently enhances the visual fidelity and label consistency of fake images generated by CcGANs, exhibiting a substantial performance gain over the vanilla NDA. Moreover, by applying Dual-NDA, CcGANs demonstrate a remarkable advancement beyond the capabilities of state-of-the-art conditional GANs and diffusion models, establishing a new pinnacle of performance. Our code can be found at https://github.com/UBCDingXin/Dual-NDA.
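
The abstract's core idea, a discriminator objective augmented with two kinds of negative samples, can be sketched as a hinge-style loss with extra penalty terms. The hinge form, the weights `w_vis`/`w_label`, and the function name below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def dual_nda_d_loss(d_real, d_fake, d_neg_visual, d_neg_label,
                    w_vis=0.25, w_label=0.25):
    """Hedged sketch of a Dual-NDA-style discriminator objective.

    Each argument is an array of discriminator scores (higher means
    'judged more real'). Beyond the usual real/fake hinge terms, two
    extra terms push down the scores of (a) visually unrealistic
    images drawn from a pre-trained CcGAN and (b) label-inconsistent
    real images. The weights and the hinge loss are assumptions for
    illustration only.
    """
    loss_real = np.mean(np.maximum(0.0, 1.0 - d_real))        # reward high scores on real pairs
    loss_fake = np.mean(np.maximum(0.0, 1.0 + d_fake))        # penalize high scores on generator output
    loss_vis = w_vis * np.mean(np.maximum(0.0, 1.0 + d_neg_visual))    # type-I negatives
    loss_lab = w_label * np.mean(np.maximum(0.0, 1.0 + d_neg_label))   # type-II negatives
    return loss_real + loss_fake + loss_vis + loss_lab
```

With perfectly separated scores (real well above +1, all fakes and negatives well below -1) every hinge term vanishes, so the loss is zero; misclassified negatives of either type now contribute gradient even when the ordinary fake term is already satisfied, which is the mechanism the abstract describes.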

Authors (3)
  1. Xin Ding (23 papers)
  2. Yongwei Wang (24 papers)
  3. Zuheng Xu (12 papers)
Citations (1)
