
Distilling portable Generative Adversarial Networks for Image Translation (2003.03519v1)

Published 7 Mar 2020 in cs.CV, cs.LG, eess.IV, and stat.ML

Abstract: Although Generative Adversarial Networks (GANs) have been widely used in various image-to-image translation tasks, they can hardly be deployed on mobile devices due to their heavy computation and storage costs. Traditional network compression methods focus on visual recognition tasks and rarely address generation tasks. Inspired by knowledge distillation, a student generator with fewer parameters is trained by inheriting the low-level and high-level information from the original heavy teacher generator. To strengthen the student generator, we introduce a student discriminator that measures the distances between real images and the images generated by the student and teacher generators. An adversarial learning process is thereby established to jointly optimize the student generator and student discriminator. Qualitative and quantitative experiments on benchmark datasets demonstrate that the proposed method learns portable generative models with strong performance.
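The distillation objective sketched in the abstract combines imitation of the teacher generator with an adversarial signal from a student discriminator. The snippet below is a minimal illustrative sketch, not the authors' implementation: images are flattened lists of floats, the function names are hypothetical, and the specific loss form (pixel-wise L1 imitation plus a hinge-style adversarial term) is an assumption chosen for clarity.

```python
# Illustrative sketch of the distillation objective described in the abstract.
# Images are flattened lists of floats; all names and the exact loss form
# are assumptions, not the paper's actual implementation.

def l1_distance(a, b):
    """Mean pixel-wise L1 distance (low-level knowledge transfer)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def student_loss(student_img, teacher_img, d_student, d_teacher, d_real,
                 w_pix=1.0, w_adv=0.1):
    """Combine pixel-level imitation of the teacher with an adversarial term.

    d_student, d_teacher, d_real are student-discriminator scores in (0, 1)
    for the student image, the teacher image, and a real image. The student
    generator is pushed both toward the teacher's output and toward the
    real-image distribution.
    """
    pixel_term = l1_distance(student_img, teacher_img)
    # Hinge-like adversarial term: penalize the student generator when the
    # discriminator rates its output below the teacher and real images.
    adv_term = max(0.0, d_teacher - d_student) + max(0.0, d_real - d_student)
    return w_pix * pixel_term + w_adv * adv_term
```

In the actual method, both networks would be trained adversarially: the student discriminator learns to separate the three image sources while the student generator minimizes a loss of this general shape.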

Authors (8)
  1. Hanting Chen (52 papers)
  2. Yunhe Wang (145 papers)
  3. Han Shu (14 papers)
  4. Changyuan Wen (2 papers)
  5. Chunjing Xu (66 papers)
  6. Boxin Shi (64 papers)
  7. Chao Xu (283 papers)
  8. Chang Xu (323 papers)
Citations (80)
