
A Selective Survey on Versatile Knowledge Distillation Paradigm for Neural Network Models (2011.14554v1)

Published 30 Nov 2020 in cs.LG and cs.AI

Abstract: This paper aims to provide a selective survey of the knowledge distillation (KD) framework so that researchers and practitioners can take advantage of it for developing new optimized models in the deep neural network field. To this end, we give a brief overview of knowledge distillation and some related works, including learning using privileged information (LUPI) and generalized distillation (GD). Even though knowledge distillation based on the teacher-student architecture was initially devised as a model compression technique, it has found versatile applications across various frameworks. In this paper, we review the characteristics of knowledge distillation from the hypothesis that its three important ingredients are the distilled knowledge and loss, the teacher-student paradigm, and the distillation process. In addition, we survey the versatility of knowledge distillation by studying its direct applications and its usage in combination with other deep learning paradigms. Finally, we present some future directions for knowledge distillation, including explainable knowledge distillation, where the performance gain is analyzed rather than only measured, and self-supervised learning, which is a hot research topic in the deep learning community.
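The "distilled knowledge and loss" ingredient mentioned in the abstract is commonly formulated, in the classic teacher-student setup, as a weighted sum of a hard-label cross-entropy term and a KL-divergence term between temperature-softened teacher and student outputs. A minimal sketch of that standard formulation (not code from this paper; the function and parameter names here are illustrative):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T produces a softer distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      T=4.0, alpha=0.5):
    """Hinton-style KD loss sketch: alpha weights the hard-label
    cross-entropy against the soft teacher-matching term."""
    p_student = softmax(student_logits)           # hard predictions (T=1)
    soft_student = softmax(student_logits, T)     # softened student
    soft_teacher = softmax(teacher_logits, T)     # softened teacher targets

    # Cross-entropy with the one-hot ground-truth label.
    ce = -math.log(p_student[true_label])

    # KL(teacher || student) on softened distributions, scaled by T^2
    # to keep gradient magnitudes comparable across temperatures.
    kl = sum(t * math.log(t / s)
             for t, s in zip(soft_teacher, soft_student))

    return alpha * ce + (1.0 - alpha) * (T ** 2) * kl
```

When the student's logits exactly match the teacher's, the KL term vanishes and only the supervised cross-entropy remains, which is why the soft term acts as a regularizer pulling the student toward the teacher's output distribution.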

Authors (5)
  1. Jeong-Hoe Ku (1 paper)
  2. JiHun Oh (4 papers)
  3. YoungYoon Lee (1 paper)
  4. Gaurav Pooniwala (1 paper)
  5. SangJeong Lee (3 papers)
Citations (2)
