Better Teacher Better Student: Dynamic Prior Knowledge for Knowledge Distillation (2206.06067v4)

Published 13 Jun 2022 in cs.CV

Abstract: Knowledge distillation (KD) has shown very promising capabilities in transferring learned representations from large models (teachers) to small models (students). However, as the capacity gap between students and teachers grows larger, existing KD methods fail to achieve better results. Our work shows that 'prior knowledge' is vital to KD, especially when applying large teachers. In particular, we propose dynamic prior knowledge (DPK), which integrates part of the teacher's features as prior knowledge before the feature distillation. This means that our method also takes the teacher's features as 'input', not just 'target'. Moreover, we dynamically adjust the ratio of the prior knowledge during the training phase according to the feature gap, thus guiding the student at an appropriate level of difficulty. To evaluate the proposed method, we conduct extensive experiments on two image classification benchmarks (i.e., CIFAR-100 and ImageNet) and an object detection benchmark (i.e., MS COCO). The results demonstrate the superiority of our method in performance under varying settings. Besides, our DPK makes the performance of the student model positively correlated with that of the teacher model, which means that we can further boost the accuracy of students by applying larger teachers. More importantly, DPK provides a fast solution to teacher model selection for any given model.
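
The abstract describes DPK only at a high level. As a rough illustration, the PyTorch-style sketch below shows one plausible way to mix teacher features into the student's feature map with a gap-dependent ratio before computing a feature-distillation loss. The random spatial masking, the ratio schedule, the 1x1 connector, and the assumption that both feature maps share the same shape are illustrative simplifications, not the authors' implementation; see the paper for the actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DPKSketch(nn.Module):
    """Sketch of the dynamic-prior-knowledge idea from the abstract:
    part of the teacher's feature map is injected into the student's
    feature map as a prior before the feature-distillation loss, and
    the injection ratio follows the current student-teacher feature gap.
    Masking scheme, ratio schedule, and connector are assumptions."""

    def __init__(self, channels: int):
        super().__init__()
        # Hypothetical 1x1 connector that refines the mixed feature
        # before it is compared with the teacher's feature.
        self.connector = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
        teacher_feat = teacher_feat.detach()

        # Feature gap between student and teacher; a larger gap means
        # the student receives more teacher prior (an easier target).
        gap = F.mse_loss(student_feat, teacher_feat).detach()
        ratio = torch.clamp(gap / (gap + 1.0), 0.0, 0.9)  # assumed schedule

        # Randomly pick spatial positions that take the teacher prior.
        b, _, h, w = student_feat.shape
        mask = (torch.rand(b, 1, h, w, device=student_feat.device) < ratio).float()

        # Teacher features act as 'input' (prior), not only as 'target'.
        mixed = mask * teacher_feat + (1.0 - mask) * student_feat

        # Feature distillation on the student-produced positions.
        pred = self.connector(mixed)
        return F.mse_loss((1.0 - mask) * pred, (1.0 - mask) * teacher_feat)
```

In practice the student's feature map usually needs a projection to the teacher's channel width before such mixing; the sketch sidesteps this by assuming matching shapes.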

Authors (7)
  1. Zengyu Qiu (1 paper)
  2. Xinzhu Ma (30 papers)
  3. Kunlin Yang (8 papers)
  4. Chunya Liu (3 papers)
  5. Jun Hou (43 papers)
  6. Shuai Yi (45 papers)
  7. Wanli Ouyang (358 papers)
Citations (33)
