MiniDisc: Minimal Distillation Schedule for Language Model Compression (2205.14570v3)

Published 29 May 2022 in cs.CL and cs.LG

Abstract: Recent studies have uncovered that LLM distillation is less effective when facing a large capacity gap between the teacher and the student, and introduced teacher assistant-based distillation to bridge the gap. As the connection, the scale and the performance of the teacher assistant are of vital importance in bringing the knowledge from the teacher to the student. However, existing teacher assistant-based methods require maximally many trials before scheduling an optimal teacher assistant. To this end, we propose a minimal distillation schedule (MiniDisc) for scheduling the optimal teacher assistant in minimally one trial. In particular, motivated by the finding that the performance of the student is positively correlated to the scale-performance tradeoff of the teacher assistant, MiniDisc is designed with a $\lambda$-tradeoff to measure the optimality of the teacher assistant without trial distillation to the student. MiniDisc can then schedule the optimal teacher assistant with the best $\lambda$-tradeoff in a sandwich framework. MiniDisc is evaluated with an extensive set of experiments on GLUE. Experimental results demonstrate the improved efficiency of MiniDisc compared to several state-of-the-art baselines. We further apply MiniDisc to an LLM with billions of parameters and show its scalability.
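The abstract describes selecting a teacher assistant by its scale-performance tradeoff rather than by trial-distilling each candidate into the student. Below is a minimal sketch of that selection idea, assuming the $\lambda$-tradeoff combines a candidate's task performance with a penalty on its parameter count; the scoring formula, the function names, and the example numbers are illustrative assumptions, not the paper's exact method.

```python
import math

# Hypothetical sketch of a lambda-tradeoff scorer for candidate teacher
# assistants (TAs). The exact tradeoff used by MiniDisc is not specified
# in the abstract; here we assume performance minus a lambda-weighted
# log-scale penalty.

def lambda_tradeoff(performance: float, num_params: float, lam: float = 0.5) -> float:
    """Score a candidate TA (higher is better).

    performance: dev-set metric of the TA after distillation from the teacher.
    num_params:  TA parameter count, used as the scale term.
    lam:         weight balancing performance against scale (assumed form).
    """
    return performance - lam * math.log(num_params)

def schedule_teacher_assistant(candidates):
    """Pick the TA with the best lambda-tradeoff, avoiding a trial
    distillation from every candidate down to the student."""
    return max(candidates, key=lambda c: lambda_tradeoff(c["perf"], c["params"]))

# Example: three candidate TAs between the teacher and student scales
# (names and numbers are made up for illustration).
candidates = [
    {"name": "TA-6L", "perf": 0.86, "params": 67e6},
    {"name": "TA-4L", "perf": 0.84, "params": 52e6},
    {"name": "TA-2L", "perf": 0.79, "params": 39e6},
]
best = schedule_teacher_assistant(candidates)
print(best["name"])
```

In the paper's sandwich framework the candidates would be sub-networks trained jointly rather than independently distilled models; the sketch only illustrates how a single tradeoff score can replace per-candidate trial distillation to the student.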

Authors (7)
  1. Chen Zhang (403 papers)
  2. Yang Yang (884 papers)
  3. Qifan Wang (129 papers)
  4. Jiahao Liu (72 papers)
  5. Jingang Wang (71 papers)
  6. Wei Wu (482 papers)
  7. Dawei Song (62 papers)
Citations (4)