Dynamic Knowledge Distillation for Pre-trained Language Models (2109.11295v1)

Published 23 Sep 2021 in cs.CL and cs.LG

Abstract: Knowledge distillation (KD) has proven effective for compressing large-scale pre-trained language models. However, existing methods conduct KD statically, e.g., the student model aligns its output distribution to that of a selected teacher model on a pre-defined training dataset. In this paper, we explore whether a dynamic knowledge distillation that empowers the student to adjust the learning procedure according to its competency is feasible, in terms of student performance and learning efficiency. We explore dynamic adjustments along three aspects: teacher model adoption, data selection, and KD objective adaptation. Experimental results show that (1) proper selection of the teacher model can boost the performance of the student model; (2) conducting KD with only 10% of the most informative instances achieves comparable performance while greatly accelerating training; (3) student performance can be further boosted by adjusting the supervision contribution of the different alignment objectives. We find dynamic knowledge distillation promising and discuss potential future directions towards more efficient KD methods. Our code is available at https://github.com/lancopku/DynamicKD.
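For context, the static KD setup that the abstract contrasts against typically aligns the student's output distribution with the teacher's through a temperature-scaled KL term combined with the usual cross-entropy loss on hard labels. The sketch below is a minimal PyTorch illustration of that baseline objective; the hyperparameters (`temperature`, `alpha`) and the fixed weighting are illustrative assumptions, not the dynamic adjustment strategies studied in the paper.

```python
import torch
import torch.nn.functional as F

def static_kd_loss(student_logits, teacher_logits, labels,
                   temperature=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with a soft-label distillation term.

    `temperature` and `alpha` are illustrative values, not the paper's
    reported settings.
    """
    # Soften both output distributions before comparing them.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)

    # KL divergence between student and teacher distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    distill = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Standard supervised loss on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)

    return alpha * distill + (1 - alpha) * ce
```

The dynamic variants explored in the paper would instead vary pieces of this recipe during training, e.g., which teacher supplies `teacher_logits`, which training instances are kept, or how the two loss terms are weighted.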

Authors (6)
  1. Lei Li (1293 papers)
  2. Yankai Lin (125 papers)
  3. Shuhuai Ren (30 papers)
  4. Peng Li (390 papers)
  5. Jie Zhou (687 papers)
  6. Xu Sun (194 papers)
Citations (44)