SKDBERT: Compressing BERT via Stochastic Knowledge Distillation (2211.14466v2)

Published 26 Nov 2022 in cs.CL

Abstract: In this paper, we propose Stochastic Knowledge Distillation (SKD) to obtain a compact BERT-style language model dubbed SKDBERT. In each iteration, SKD samples a teacher model from a pre-defined teacher ensemble, which consists of multiple teacher models with multi-level capacities, and transfers knowledge into the student model in a one-to-one manner. The sampling distribution plays an important role in SKD, and we heuristically present three types of sampling distributions to assign appropriate probabilities to the multi-level teacher models. SKD has two advantages: 1) it preserves the diversity of the multi-level teacher models by stochastically sampling a single teacher model in each iteration, and 2) it improves the efficacy of knowledge distillation via multi-level teacher models when a large capacity gap exists between the teacher model and the student model. Experimental results on the GLUE benchmark show that SKDBERT reduces the size of a BERT$_{\rm BASE}$ model by 40% while retaining 99.5% of its language-understanding performance and being 100% faster.
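The core training loop the abstract describes, sampling one teacher per iteration from a multi-level ensemble and distilling it into the student, can be illustrated with a minimal PyTorch-style sketch. The teacher architectures, the sampling probabilities, and the loss weighting below are placeholders for illustration; the paper's actual teacher ensemble and its three heuristic sampling distributions are not reproduced here.

```python
# Minimal sketch of one Stochastic Knowledge Distillation (SKD) step:
# sample a single teacher from the ensemble, then distill its soft
# predictions into the student (one-to-one per iteration).
import torch
import torch.nn as nn
import torch.nn.functional as F


def skd_step(student, teachers, probs, x, y, optimizer,
             temperature=1.0, alpha=0.5):
    """One training iteration of SKD (illustrative, not the paper's exact setup)."""
    # Stochastically pick one teacher according to the sampling distribution.
    idx = torch.multinomial(probs, num_samples=1).item()
    teacher = teachers[idx]

    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)

    # Soft-label distillation loss (KL divergence at temperature T)
    # plus the usual hard-label cross-entropy on the ground truth.
    kd = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    ce = F.cross_entropy(s_logits, y)
    loss = alpha * kd + (1 - alpha) * ce

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), idx


if __name__ == "__main__":
    # Toy stand-ins for a multi-level teacher ensemble and a compact student.
    teachers = [nn.Linear(16, 4) for _ in range(3)]  # e.g. small/medium/large teachers
    student = nn.Linear(16, 4)
    probs = torch.tensor([0.2, 0.3, 0.5])            # hypothetical sampling distribution
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
    loss, picked = skd_step(student, teachers, probs, x, y, opt)
    print(f"sampled teacher {picked}, loss {loss:.4f}")
```

In this sketch the only difference from standard single-teacher distillation is the per-iteration `torch.multinomial` draw, which is what lets the student see teachers of different capacities over the course of training.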

Authors (5)
  1. Zixiang Ding (15 papers)
  2. Guoqing Jiang (2 papers)
  3. Shuai Zhang (319 papers)
  4. Lin Guo (15 papers)
  5. Wei Lin (207 papers)