MLKD-BERT: Multi-level Knowledge Distillation for Pre-trained Language Models (2407.02775v1)

Published 3 Jul 2024 in cs.CL and cs.LG

Abstract: Knowledge distillation is an effective technique for compressing pre-trained language models. Although existing knowledge distillation methods perform well for BERT, the most widely used such model, they could be improved in two aspects: relation-level knowledge could be further explored to improve model performance, and the student attention head number could be set more flexibly to decrease inference time. We are therefore motivated to propose MLKD-BERT, a novel knowledge distillation method that distills multi-level knowledge in a teacher-student framework. Extensive experiments on the GLUE benchmark and extractive question answering tasks demonstrate that our method outperforms state-of-the-art knowledge distillation methods on BERT. In addition, MLKD-BERT can flexibly set the student attention head number, allowing for a substantial decrease in inference time with little performance drop.

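The abstract describes distilling multi-level knowledge, including relation-level knowledge, from a BERT teacher into a smaller student. The paper's exact objectives are not given here, so the following is only a minimal sketch of what a multi-level distillation loss can look like: a standard response-level term on softened logits plus an assumed relation-level term that matches token-to-token similarity matrices between teacher and student hidden states. The function names, loss weights, and the similarity-matching formulation are illustrative assumptions, not the authors' method.

```python
# Hedged sketch of a generic multi-level distillation loss for BERT-style models.
# The relation-level term (pairwise token-similarity matching) is an illustrative
# assumption; MLKD-BERT's actual losses are defined in the paper, not here.
import torch
import torch.nn.functional as F


def soft_label_loss(student_logits, teacher_logits, temperature=2.0):
    """Response-level KD: KL divergence between softened output distributions."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)


def relation_level_loss(student_hidden, teacher_hidden):
    """Assumed relation-level KD: match token-to-token similarity matrices.

    student_hidden: (batch, seq_len, d_s); teacher_hidden: (batch, seq_len, d_t).
    Comparing normalized pairwise similarities sidesteps the (possibly different)
    student and teacher hidden sizes, since both matrices are seq_len x seq_len.
    """
    def pairwise_sim(h):
        h = F.normalize(h, dim=-1)
        return h @ h.transpose(-1, -2)

    return F.mse_loss(pairwise_sim(student_hidden), pairwise_sim(teacher_hidden))


def multi_level_kd_loss(student_out, teacher_out, alpha=0.5, beta=0.5):
    """Weighted sum of response-level and relation-level terms (weights assumed)."""
    return (
        alpha * soft_label_loss(student_out["logits"], teacher_out["logits"])
        + beta * relation_level_loss(student_out["hidden"], teacher_out["hidden"])
    )


if __name__ == "__main__":
    # Toy shapes: batch of 2, sequence length 8, student width 256, teacher width 768.
    student = {"logits": torch.randn(2, 3), "hidden": torch.randn(2, 8, 256)}
    teacher = {"logits": torch.randn(2, 3), "hidden": torch.randn(2, 8, 768)}
    print(multi_level_kd_loss(student, teacher).item())
```

One design note: working with similarity matrices rather than raw hidden states is one common way to transfer relation-style knowledge without a learned projection between teacher and student dimensions; whether MLKD-BERT does this is not stated in the abstract.
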
Authors (3)
  1. Ying Zhang (389 papers)
  2. Ziheng Yang (6 papers)
  3. Shufan Ji (2 papers)
