Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective (2302.01530v1)

Published 3 Feb 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Knowledge distillation (KD) is a highly promising method for mitigating the computational problems of pre-trained language models (PLMs). Among various KD approaches, Intermediate Layer Distillation (ILD) has been a de facto standard KD method with its performance efficacy in the NLP field. In this paper, we find that existing ILD methods are prone to overfitting to training datasets, although these methods transfer more information than the original KD. Next, we present two simple observations to mitigate the overfitting of ILD: distilling only the last Transformer layer and conducting ILD on supplementary tasks. Based on our two findings, we propose a simple yet effective consistency-regularized ILD (CR-ILD), which prevents the student model from overfitting the training dataset. Substantial experiments on distilling BERT on the GLUE benchmark and several synthetic datasets demonstrate that our proposed ILD method outperforms other KD techniques. Our code is available at https://github.com/jongwooko/CR-ILD.
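
To make the two ingredients named in the abstract more concrete (distilling only the last Transformer layer, plus a consistency-style regularizer), below is a minimal PyTorch-style sketch. The function names, the projection used to match hidden sizes, and the symmetric-KL form of the consistency term are illustrative assumptions; the exact CR-ILD objective is the one defined in the paper and the linked repository.

```python
import torch.nn.functional as F

def last_layer_ild_loss(student_hiddens, teacher_hiddens, proj):
    # Distill only the final Transformer layer's hidden states
    # (one of the paper's two observations); `proj` is a learned
    # linear map from the student hidden size to the teacher's.
    return F.mse_loss(proj(student_hiddens[-1]), teacher_hiddens[-1])

def consistency_loss(logits_a, logits_b, tau=1.0):
    # Illustrative consistency regularizer: symmetric KL between the
    # student's predictions on two views of the same input (e.g.
    # original vs. perturbed). CR-ILD's exact regularizer may differ.
    log_p = F.log_softmax(logits_a / tau, dim=-1)
    log_q = F.log_softmax(logits_b / tau, dim=-1)
    return 0.5 * (F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")
                  + F.kl_div(log_q, log_p, log_target=True, reduction="batchmean"))

def total_loss(task_loss, ild, cons, alpha=1.0, beta=1.0):
    # Hypothetical weighting of the task loss, last-layer ILD term,
    # and consistency term; the weights here are placeholders.
    return task_loss + alpha * ild + beta * cons
```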

Authors (7)
  1. Jongwoo Ko (20 papers)
  2. Seungjoon Park (3 papers)
  3. Minchan Jeong (11 papers)
  4. Sukjin Hong (5 papers)
  5. Euijai Ahn (3 papers)
  6. Du-Seong Chang (17 papers)
  7. Se-Young Yun (114 papers)
Citations (5)
