Knowledge Distillation of Transformer-based Language Models Revisited (2206.14366v3)

Published 29 Jun 2022 in cs.CL and cs.AI

Abstract: In the past few years, transformer-based pre-trained language models have achieved astounding success in both industry and academia. However, their large model size and high run-time latency are serious impediments to applying them in practice, especially on mobile phones and Internet of Things (IoT) devices. To compress these models, a considerable literature has recently grown up around the theme of knowledge distillation (KD). Nevertheless, how KD works in transformer-based models is still unclear. We tease apart the components of KD and propose a unified KD framework. Through this framework, systematic and extensive experiments consuming over 23,000 GPU hours provide a comprehensive analysis from the perspectives of knowledge types, matching strategies, width-depth trade-offs, initialization, model size, and more. Our empirical results shed light on distillation for pre-trained language models and yield a significant improvement over the previous state of the art (SOTA). Finally, we provide a best-practice guideline for KD in transformer-based models.
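
As a rough sketch of the logit-matching (soft-label) component of KD that the abstract refers to, the snippet below combines a temperature-scaled KL term against the teacher's predictions with the usual hard-label cross-entropy. This is an illustration under common assumptions, not the paper's unified framework; the function name `kd_loss` and the `temperature`/`alpha` hyperparameters are placeholders chosen here.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Soft-label distillation plus hard-label cross-entropy (illustrative sketch).

    student_logits, teacher_logits: tensors of shape (batch, num_classes)
    labels: integer class targets of shape (batch,)
    """
    # Soften both distributions with the temperature, then match them with KL divergence.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    distill = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2

    # Standard supervised loss on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)

    # alpha balances imitation of the teacher against fitting the hard labels.
    return alpha * distill + (1.0 - alpha) * hard
```

In practice, frameworks of this kind also match intermediate representations (hidden states, attention maps) between teacher and student; the paper's analysis of knowledge types and matching strategies concerns exactly which of these signals to use and how to align them.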

Authors (8)
  1. Chengqiang Lu (14 papers)
  2. Jianwei Zhang (114 papers)
  3. Yunfei Chu (15 papers)
  4. Zhengyu Chen (45 papers)
  5. Jingren Zhou (198 papers)
  6. Fei Wu (317 papers)
  7. Haiqing Chen (29 papers)
  8. Hongxia Yang (130 papers)
Citations (9)
