Contrastive Distillation on Intermediate Representations for Language Model Compression (2009.14167v1)

Published 29 Sep 2020 in cs.CL and cs.LG

Abstract: Existing language model compression methods mostly use a simple L2 loss to distill knowledge in the intermediate representations of a large BERT model to a smaller one. Although widely used, this objective by design assumes that all the dimensions of hidden representations are independent, failing to capture important structural knowledge in the intermediate layers of the teacher network. To achieve better distillation efficacy, we propose Contrastive Distillation on Intermediate Representations (CoDIR), a principled knowledge distillation framework where the student is trained to distill knowledge through intermediate layers of the teacher via a contrastive objective. By learning to distinguish a positive sample from a large set of negative samples, CoDIR facilitates the student's exploitation of rich information in the teacher's hidden layers. CoDIR can be readily applied to compress large-scale language models in both the pre-training and fine-tuning stages, and achieves superb performance on the GLUE benchmark, outperforming state-of-the-art compression methods.
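The contrastive objective described in the abstract can be pictured as an InfoNCE-style loss over intermediate representations: the student's representation of an input is scored against the teacher's representation of the same input (the positive) and against teacher representations of other inputs (the negatives). The sketch below is illustrative only, not the authors' implementation; the function name, the use of pooled per-example vectors, the cosine-similarity scoring, the temperature value, and the shape in which negatives are supplied are all assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(student_hidden,       # (batch, dim) pooled student intermediate representation
                                  teacher_hidden_pos,   # (batch, dim) teacher representation of the same input
                                  teacher_hidden_negs,  # (batch, num_neg, dim) teacher representations of other inputs
                                  temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss on intermediate representations (assumed form, not CoDIR's exact loss)."""
    # Normalize so that dot products become cosine similarities.
    s = F.normalize(student_hidden, dim=-1)
    t_pos = F.normalize(teacher_hidden_pos, dim=-1)
    t_neg = F.normalize(teacher_hidden_negs, dim=-1)

    # Similarity with the positive teacher representation: (batch, 1)
    pos_logits = (s * t_pos).sum(dim=-1, keepdim=True) / temperature
    # Similarities with the negative teacher representations: (batch, num_neg)
    neg_logits = torch.einsum("bd,bnd->bn", s, t_neg) / temperature

    # The positive pair occupies index 0 of each row; cross-entropy pushes its
    # score above every negative, which is the contrastive "distinguish the
    # positive from many negatives" behavior described in the abstract.
    logits = torch.cat([pos_logits, neg_logits], dim=1)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

In a full training setup this term would be combined with the usual task and distillation losses, and the "large set of negative samples" would typically come from other examples in the batch or a cached pool of teacher representations; those integration details are not specified here and are left as assumptions.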

Authors (6)
  1. Siqi Sun (46 papers)
  2. Zhe Gan (135 papers)
  3. Yu Cheng (354 papers)
  4. Yuwei Fang (31 papers)
  5. Shuohang Wang (69 papers)
  6. Jingjing Liu (139 papers)
Citations (66)