DDK: Distilling Domain Knowledge for Efficient Large Language Models (2407.16154v1)

Published 23 Jul 2024 in cs.CL

Abstract: Despite the advanced intelligence abilities of LLMs in various applications, they still face significant computational and storage demands. Knowledge Distillation (KD) has emerged as an effective strategy to improve the performance of a smaller LLM (i.e., the student model) by transferring knowledge from a high-performing LLM (i.e., the teacher model). Prevailing techniques in LLM distillation typically use a black-box model API to generate high-quality pretrained and aligned datasets, or utilize white-box distillation by altering the loss function to better transfer knowledge from the teacher LLM. However, these methods ignore the knowledge differences between the student and teacher LLMs across domains. This results in excessive focus on domains with minimal performance gaps and insufficient attention to domains with large gaps, reducing overall performance. In this paper, we introduce a new LLM distillation framework called DDK, which dynamically adjusts the composition of the distillation dataset in a smooth manner according to the domain performance differences between the teacher and student models, making the distillation process more stable and effective. Extensive evaluations show that DDK significantly improves the performance of student models, outperforming both continuously pretrained baselines and existing knowledge distillation methods by a large margin.
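The abstract describes DDK as re-weighting the distillation data mixture toward domains where the student lags the teacher, with a smooth adjustment to keep training stable. The exact update rule is not given in the abstract; the sketch below is a minimal illustration of that idea, where the softmax over per-domain gaps, the temperature, and the momentum-based smoothing are all assumptions rather than the authors' method.

```python
# Hedged sketch of the DDK idea from the abstract: shift the distillation data
# mixture toward domains where the student trails the teacher, smoothly.
# The softmax/temperature/momentum choices below are illustrative assumptions.
import numpy as np

def update_domain_weights(teacher_loss, student_loss, prev_weights,
                          temperature=1.0, momentum=0.9):
    """Return sampling probabilities over domains for the next distillation step.

    teacher_loss, student_loss: per-domain validation losses (same-shape arrays)
    prev_weights: previous sampling distribution over domains
    temperature, momentum: hypothetical smoothing hyperparameters
    """
    gap = np.maximum(student_loss - teacher_loss, 0.0)  # larger gap -> more attention
    target = np.exp(gap / temperature)
    target /= target.sum()                               # normalize to a distribution
    # Smooth the change so the data mixture does not jump abruptly between steps.
    new_weights = momentum * prev_weights + (1.0 - momentum) * target
    return new_weights / new_weights.sum()

# Example with three domains (e.g., code, math, general text):
teacher = np.array([1.2, 1.5, 1.1])
student = np.array([1.3, 2.4, 1.2])
weights = np.full(3, 1.0 / 3.0)
weights = update_domain_weights(teacher, student, weights)
print(weights)  # the domain with the largest teacher-student gap gets the most weight
```

In this reading, domains where the student already matches the teacher receive less distillation data, while domains with large gaps are sampled more often, which is the imbalance the paper says prior black-box and white-box distillation methods ignore.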

Authors (16)
  1. Jiaheng Liu (100 papers)
  2. Chenchen Zhang (19 papers)
  3. Jinyang Guo (28 papers)
  4. Yuanxing Zhang (30 papers)
  5. Haoran Que (10 papers)
  6. Ken Deng (13 papers)
  7. Zhiqi Bai (5 papers)
  8. Jie Liu (492 papers)
  9. Ge Zhang (170 papers)
  10. Jiakai Wang (33 papers)
  11. Yanan Wu (40 papers)
  12. Congnan Liu (3 papers)
  13. Wenbo Su (36 papers)
  14. Jiamang Wang (12 papers)
  15. Lin Qu (10 papers)
  16. Bo Zheng (205 papers)
Citations (1)