Feature Alignment and Representation Transfer in Knowledge Distillation for Large Language Models (2504.13825v1)
Abstract: Knowledge distillation (KD) is a technique for transferring knowledge from complex teacher models to simpler student models, significantly enhancing model efficiency while preserving accuracy. It has driven substantial advances in applications including image classification, object detection, language modeling, text classification, and sentiment analysis. Recent innovations in KD methods, such as attention-based approaches, block-wise logit distillation, and decoupling distillation, have notably improved student model performance. These techniques focus on stimulus complexity, attention mechanisms, and global information capture to optimize knowledge transfer. In addition, KD has proven effective in compressing large language models while preserving accuracy, reducing computational overhead, and improving inference speed. This survey synthesizes the latest literature, highlighting key findings, contributions, and future directions in knowledge distillation to provide insights for researchers and practitioners on its evolving role in artificial intelligence and machine learning.
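For readers unfamiliar with the core mechanism the abstract refers to, the sketch below illustrates the classic logit-based distillation objective (temperature-softened teacher/student distributions combined with hard-label cross-entropy). It is a minimal, generic example rather than any specific method surveyed in the paper; the function name, `temperature`, and `alpha` weighting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Generic soft-target KD loss (illustrative, not a method from the survey):
    KL divergence between temperature-softened teacher and student
    distributions, mixed with standard cross-entropy on the hard labels."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale the KL term by T^2 so gradient magnitudes stay comparable
    # across temperature settings.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1 - alpha) * ce_term
```

The attention-based, block-wise logit, and decoupling variants mentioned in the abstract extend this basic recipe by aligning intermediate representations or splitting the logit term into target and non-target components.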
- Junjie Yang
- Junhao Song
- Xudong Han
- Ziqian Bi
- Tianyang Wang
- Chia Xin Liang
- Xinyuan Song
- Yichao Zhang
- Qian Niu
- Benji Peng
- Keyu Chen
- Ming Liu