
KD-DLGAN: Data Limited Image Generation via Knowledge Distillation (2303.17158v1)

Published 30 Mar 2023 in cs.CV and eess.IV

Abstract: Generative Adversarial Networks (GANs) rely heavily on large-scale training data for training high-quality image generation models. With limited training data, the GAN discriminator often suffers from severe overfitting, which directly leads to degraded generation, especially in generation diversity. Inspired by the recent advances in knowledge distillation (KD), we propose KD-DLGAN, a knowledge-distillation-based generation framework that introduces pre-trained vision-language models for training effective data-limited generation models. KD-DLGAN consists of two innovative designs. The first is aggregated generative KD that mitigates the discriminator overfitting by challenging the discriminator with harder learning tasks and distilling more generalizable knowledge from the pre-trained models. The second is correlated generative KD that improves the generation diversity by distilling and preserving the diverse image-text correlation within the pre-trained models. Extensive experiments over multiple benchmarks show that KD-DLGAN achieves superior image generation with limited training data. In addition, KD-DLGAN complements the state-of-the-art with consistent and substantial performance gains.
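The abstract describes the two distillation objectives only at a high level. The sketch below is a minimal, hypothetical PyTorch illustration of the general idea, not the authors' implementation: a small frozen encoder (`ToyTeacher`) stands in for the pre-trained vision-language image encoder (e.g., a CLIP-style model), an MSE regression of discriminator features onto teacher embeddings stands in for aggregated generative KD, and matching pairwise cosine-similarity matrices stands in for correlated generative KD. All module names, architectures, and loss weights here are illustrative assumptions; the paper's exact formulations differ.

```python
# Illustrative sketch only: plausible stand-ins for KD-DLGAN's aggregated
# and correlated generative KD losses, NOT the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTeacher(nn.Module):
    """Frozen stand-in for a pre-trained vision-language image encoder."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        for p in self.parameters():          # teacher is kept frozen
            p.requires_grad_(False)

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class ToyDiscriminator(nn.Module):
    """Minimal discriminator that also exposes an intermediate feature."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, embed_dim)  # maps into the teacher's space
        self.head = nn.Linear(64, 1)          # real/fake logit

    def forward(self, x):
        h = self.features(x)
        return self.head(h), F.normalize(self.proj(h), dim=-1)

def kd_losses(student_emb, teacher_emb):
    # "Aggregated" KD stand-in: regress discriminator features onto frozen
    # teacher embeddings, a harder auxiliary task than plain real/fake.
    l_agg = F.mse_loss(student_emb, teacher_emb)
    # "Correlated" KD stand-in: match pairwise cosine-similarity matrices,
    # preserving the teacher's sample-to-sample correlation structure.
    sim_s = student_emb @ student_emb.t()
    sim_t = teacher_emb @ teacher_emb.t()
    l_corr = F.mse_loss(sim_s, sim_t)
    return l_agg, l_corr

if __name__ == "__main__":
    teacher, disc = ToyTeacher(), ToyDiscriminator()
    real = torch.randn(8, 3, 32, 32)   # stand-in batch of real images
    fake = torch.randn(8, 3, 32, 32)   # stand-in generator output
    batch = torch.cat([real, fake])
    logits, student_emb = disc(batch)
    with torch.no_grad():
        teacher_emb = teacher(batch)
    l_agg, l_corr = kd_losses(student_emb, teacher_emb)
    labels = torch.cat([torch.ones(8, 1), torch.zeros(8, 1)])
    l_adv = F.binary_cross_entropy_with_logits(logits, labels)
    loss = l_adv + 0.1 * (l_agg + l_corr)  # loss weights are illustrative
    print(float(loss))
```

In an actual data-limited GAN training loop, such KD terms would be added to the discriminator's adversarial loss at every step, with the teacher kept frozen throughout, which is the mechanism the abstract credits for reducing discriminator overfitting.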

Authors (6)
  1. Kaiwen Cui (13 papers)
  2. Yingchen Yu (24 papers)
  3. Fangneng Zhan (53 papers)
  4. Shengcai Liao (46 papers)
  5. Shijian Lu (1 paper)
  6. Eric Xing (127 papers)
Citations (16)
