Data Upcycling Knowledge Distillation for Image Super-Resolution (2309.14162v4)

Published 25 Sep 2023 in cs.CV and cs.AI

Abstract: Knowledge distillation (KD) compresses deep neural networks by transferring task-related knowledge from cumbersome pre-trained teacher models to compact student models. However, current KD methods for super-resolution (SR) networks overlook the nature of the SR task: the teacher model's outputs are noisy approximations of the ground-truth high-quality images (GT), which obscures the teacher's knowledge and limits the effect of KD. To exploit the teacher model beyond the GT upper bound, we present Data Upcycling Knowledge Distillation (DUKD), which transfers the teacher model's knowledge to the student model through upcycled in-domain data derived from the training data. In addition, we impose label consistency regularization on KD for SR via paired invertible augmentations, improving the student model's performance and robustness. Comprehensive experiments demonstrate that DUKD significantly outperforms prior methods on several SR tasks.
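To make the abstract's two ingredients concrete, below is a minimal sketch of one training step that combines (a) distillation on "upcycled" in-domain inputs and (b) a label-consistency term built from a paired invertible augmentation. This is an illustration assuming PyTorch-style SR models; the function names, the choice of horizontal flip as the invertible augmentation, the bicubic re-degradation used to produce upcycled inputs, and the loss weights are all assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def hflip(x):
    # Horizontal flip is its own inverse: hflip(hflip(x)) == x.
    return torch.flip(x, dims=[-1])

def dukd_style_step(student, teacher, lr, hr, optimizer, alpha=0.5):
    """One sketched training step (hypothetical names) combining:
       - a supervised loss against the ground-truth HR image,
       - distillation on an upcycled in-domain input derived from the teacher,
       - label-consistency regularization via a paired invertible augmentation."""
    teacher.eval()
    with torch.no_grad():
        sr_teacher = teacher(lr)                        # teacher prediction on the original LR input
        # Upcycled input (assumption): re-degrade the teacher output back to LR size.
        lr_up = F.interpolate(sr_teacher, size=lr.shape[-2:],
                              mode='bicubic', align_corners=False)
        target_up = teacher(lr_up)                      # teacher label for the upcycled input

    sr_student = student(lr)
    loss_sup = F.l1_loss(sr_student, hr)                # ordinary supervised reconstruction loss

    loss_kd = F.l1_loss(student(lr_up), target_up)      # KD on the upcycled data

    # Label consistency: augment the input, predict, invert the augmentation,
    # and require agreement with the teacher's label on the un-augmented input.
    sr_aug = student(hflip(lr_up))
    loss_lc = F.l1_loss(hflip(sr_aug), target_up)

    loss = loss_sup + alpha * (loss_kd + loss_lc)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point the sketch tries to convey is that the distillation targets come from the teacher on inputs the teacher itself helped generate, so the student can learn from the teacher beyond what the GT pairs alone provide, while the invertible augmentation pair supplies an extra consistency signal at no labeling cost.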

Authors (9)
  1. Yun Zhang (103 papers)
  2. Wei Li (1122 papers)
  3. Simiao Li (6 papers)
  4. Jie Hu (187 papers)
  5. Hanting Chen (52 papers)
  6. Zhijun Tu (32 papers)
  7. Wenjia Wang (68 papers)
  8. Bingyi Jing (15 papers)
  9. Shaohui Lin (45 papers)
Citations (3)
