Revisiting Knowledge Distillation: An Inheritance and Exploration Framework (2107.00181v1)

Published 1 Jul 2021 in cs.LG, cs.AI, and cs.CV

Abstract: Knowledge Distillation (KD) is a popular technique to transfer knowledge from a teacher model or ensemble to a student model. Its success is generally attributed to the privileged information on similarities/consistency between the class distributions or intermediate feature representations of the teacher model and the student model. However, directly pushing the student model to mimic the probabilities/features of the teacher model largely limits the student model's ability to learn undiscovered knowledge/features. In this paper, we propose a novel inheritance and exploration knowledge distillation framework (IE-KD), in which a student model is split into two parts: inheritance and exploration. The inheritance part is learned with a similarity loss to transfer the existing learned knowledge from the teacher model to the student model, while the exploration part is encouraged to learn representations different from the inherited ones with a dis-similarity loss. Our IE-KD framework is generic and can be easily combined with existing distillation or mutual learning methods for training deep neural networks. Extensive experiments demonstrate that these two parts can jointly push the student model to learn more diversified and effective representations, and that IE-KD serves as a general technique to improve the student network toward SOTA performance. Furthermore, by applying IE-KD to the training of two networks, the performance of both can be improved relative to deep mutual learning. The code and models of IE-KD will be made publicly available at https://github.com/yellowtownhz/IE-KD.

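The abstract only names the two losses, so the sketch below is a minimal, hypothetical rendering of the inheritance/exploration split: the student feature is divided into two halves, the inheritance half is pulled toward the teacher with an MSE similarity loss, and the exploration half is pushed away from the teacher with a negative-cosine dissimilarity term. The function name, the 50/50 split, the specific loss forms, and the lambda_explore weight are illustrative assumptions, not the paper's definitions.

```python
# Illustrative sketch of the inheritance/exploration losses described in the abstract.
# The exact loss forms and feature matching used by IE-KD are not specified here;
# everything below is an assumption for illustration.
import torch
import torch.nn.functional as F


def ie_kd_losses(student_feat, teacher_feat, lambda_explore=1.0):
    """Split a student feature into an inheritance half and an exploration half.

    student_feat: (N, C) student features, C assumed even.
    teacher_feat: (N, C // 2) teacher features matched to the inheritance half.
    """
    c = student_feat.size(1) // 2
    inherit, explore = student_feat[:, :c], student_feat[:, c:]

    # Inheritance: pull the inherited half toward the teacher (similarity loss).
    loss_inherit = F.mse_loss(inherit, teacher_feat.detach())

    # Exploration: push the exploring half away from the teacher
    # (dis-similarity loss; minimizing cosine similarity is an assumed choice).
    loss_explore = F.cosine_similarity(explore, teacher_feat.detach(), dim=1).mean()

    return loss_inherit + lambda_explore * loss_explore


if __name__ == "__main__":
    s = torch.randn(8, 128)  # student features (both halves concatenated)
    t = torch.randn(8, 64)   # teacher features matched to the inheritance half
    print(ie_kd_losses(s, t))
```
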
Authors (9)
  1. Zhen Huang (114 papers)
  2. Xu Shen (45 papers)
  3. Jun Xing (13 papers)
  4. Tongliang Liu (251 papers)
  5. Xinmei Tian (50 papers)
  6. Houqiang Li (236 papers)
  7. Bing Deng (14 papers)
  8. Jianqiang Huang (62 papers)
  9. Xian-Sheng Hua (85 papers)
Citations (27)
GitHub: https://github.com/yellowtownhz/IE-KD