Hard Gate Knowledge Distillation -- Leverage Calibration for Robust and Reliable Language Model (2210.12427v1)

Published 22 Oct 2022 in cs.CL and cs.AI

Abstract: In knowledge distillation, a student model is trained with supervision from both the knowledge of a teacher and observations drawn from a training data distribution. A teacher's knowledge is regarded as holding inter-class relations that provide meaningful supervision to a student; hence, much effort has been devoted to finding which knowledge to distill. In this paper, we explore a question that has received little attention: "when to distill such knowledge." We answer it with the concept of model calibration: we view a teacher model not only as a source of knowledge but also as a gauge for detecting miscalibration in the student. This simple yet novel view leads to a hard-gate knowledge distillation scheme that switches between learning from the teacher model and learning from the training data. We verify the gating mechanism in the context of natural language generation at both the token level and the sentence level. Empirical comparisons with strong baselines show that hard-gate knowledge distillation not only improves model generalization but also significantly lowers model calibration error.
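
The abstract describes the mechanism only at a high level. Below is a minimal PyTorch sketch of how a token-level hard gate could be wired up, assuming a gating rule in which the student counts as miscalibrated (overconfident) wherever it assigns the gold token a higher probability than the teacher does. The function name, signature, and this particular gating criterion are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hard_gate_kd_loss(student_logits, teacher_logits, targets, temperature=1.0):
    """Token-level hard-gate KD sketch (hypothetical gating rule).

    student_logits, teacher_logits: (batch, seq_len, vocab)
    targets: (batch, seq_len) gold token ids
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    with torch.no_grad():
        p_teacher = F.softmax(teacher_logits / temperature, dim=-1)

    # Student and teacher probability of the gold token at each position.
    gold = targets.unsqueeze(-1)
    p_s_gold = log_p_student.gather(-1, gold).squeeze(-1).exp()
    p_t_gold = p_teacher.gather(-1, gold).squeeze(-1)

    # Assumed gate: the student is treated as overconfident (miscalibrated)
    # where it outranks the teacher on the gold token; distill from the
    # teacher there, otherwise learn from the training data.
    gate = (p_s_gold > p_t_gold).float()  # 1 -> learn from teacher

    # Cross-entropy to the teacher's soft distribution (distillation term).
    kd_loss = -(p_teacher * log_p_student).sum(-1)
    # Standard cross-entropy against the gold tokens (data term).
    ce_loss = F.nll_loss(
        log_p_student.flatten(0, 1), targets.flatten(), reduction="none"
    ).view_as(targets)

    return (gate * kd_loss + (1.0 - gate) * ce_loss).mean()
```

The salient design point from the abstract is that the gate is hard, a 0/1 switch per token rather than a soft interpolation weight, so each position receives supervision from exactly one source.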

Authors (5)
  1. Dongkyu Lee (32 papers)
  2. Zhiliang Tian (32 papers)
  3. Yingxiu Zhao (13 papers)
  4. Ka Chun Cheung (32 papers)
  5. Nevin L. Zhang (44 papers)
Citations (3)