
Education distillation: getting student models to learn in schools (2311.13811v2)

Published 23 Nov 2023 in cs.AI

Abstract: Knowledge distillation is one of the methods for model compression, and existing knowledge distillation techniques focus on improving the distillation algorithm to enhance distillation efficiency. This paper introduces dynamic incremental learning into knowledge distillation and proposes an education distillation strategy. Specifically, fragmented student models divided from the complete student model are treated as lower-grade models. As the grade level rises, the fragmented student models deepen in conjunction with designed teaching reference layers while learning and distilling from more teacher models. Moving from lower to higher grades, the fragmented student models are gradually integrated into the complete target student model, and their performance improves from grade to grade. Education distillation strategies combined with distillation algorithms outperform single distillation algorithms on the public datasets CIFAR-100, Caltech-256, and Food-101.
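
For orientation, the sketch below illustrates the general pattern the abstract describes: a student that starts as a shallow fragment, trains against a teacher with a standard distillation loss, and is deepened "grade by grade" until the full student is active. It is a minimal sketch only; the FragmentStudent class, its block widths, the promote() step, the stand-in teacher, and the loss weights are hypothetical and do not reproduce the authors' architecture or their teaching-reference-layer design.

```python
# Minimal sketch of staged ("grade-by-grade") knowledge distillation.
# Hypothetical illustration of the general idea, not the paper's exact
# education-distillation procedure.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Standard KD loss: softened teacher targets (KL) plus hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

class FragmentStudent(nn.Module):
    """A student that starts as a shallow fragment and deepens as the grade rises."""
    def __init__(self, num_classes=100):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        ])
        self.active_blocks = 1  # lowest grade: only the first fragment is active
        # One classification head per grade so shallow fragments can emit logits.
        self.heads = nn.ModuleList([
            nn.Linear(32, num_classes),
            nn.Linear(64, num_classes),
            nn.Linear(128, num_classes),
        ])

    def promote(self):
        """Move up a grade: activate one more fragment of the full student."""
        self.active_blocks = min(self.active_blocks + 1, len(self.blocks))

    def forward(self, x):
        for block in self.blocks[: self.active_blocks]:
            x = block(x)
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.heads[self.active_blocks - 1](x)

# Usage sketch: one optimizer step per grade against a stand-in teacher.
if __name__ == "__main__":
    student = FragmentStudent(num_classes=100)
    teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 100))  # placeholder teacher
    opt = torch.optim.SGD(student.parameters(), lr=0.01)
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 100, (8,))
    for grade in range(3):
        opt.zero_grad()
        loss = distillation_loss(student(x), teacher(x).detach(), y)
        loss.backward()
        opt.step()
        if grade < 2:
            student.promote()  # deepen the student for the next grade
        print(f"grade {grade}: loss {loss.item():.3f}")
```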

Authors (4)
  1. Ling Feng (29 papers)
  2. Danyang Li (13 papers)
  3. Tianhao Wu (68 papers)
  4. Xuliang Duan (2 papers)
