What Knowledge Gets Distilled in Knowledge Distillation? (2205.16004v3)

Published 31 May 2022 in cs.CV and cs.LG

Abstract: Knowledge distillation aims to transfer useful information from a teacher network to a student network, with the primary goal of improving the student's performance for the task at hand. Over the years, there has been a deluge of novel techniques and use cases of knowledge distillation. Yet, despite the various improvements, there seems to be a glaring gap in the community's fundamental understanding of the process. Specifically, what is the knowledge that gets distilled in knowledge distillation? In other words, in what ways does the student become similar to the teacher? Does it start to localize objects in the same way? Does it get fooled by the same adversarial samples? Do its data invariance properties become similar? Our work presents a comprehensive study to try to answer these questions. We show that existing methods can indeed indirectly distill these properties beyond improving task performance. We further study why knowledge distillation might work this way, and show that our findings have practical implications as well.
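For context, the sketch below shows the standard soft-target distillation objective (Hinton et al., 2015) that the paper's questions build on, i.e., the student is trained to match the teacher's softened output distribution in addition to the ground-truth labels. This is a generic illustration of the setup being studied, not the authors' specific method; the temperature `T` and weight `alpha` are illustrative hyperparameters.

```python
# Minimal sketch of soft-target knowledge distillation (Hinton et al., 2015).
# Illustrative only; T and alpha are assumed hyperparameters, not values from the paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Weighted sum of a softened teacher-student KL term and a hard-label CE term."""
    # KL between temperature-softened distributions; scaled by T^2 so the
    # gradient magnitude stays comparable across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Standard cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

The paper asks what else, beyond task accuracy, the student inherits from the teacher when trained with objectives of this kind (object localization behavior, adversarial vulnerability, data invariances).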

Authors (5)
  1. Utkarsh Ojha (14 papers)
  2. Yuheng Li (37 papers)
  3. Anirudh Sundara Rajan (4 papers)
  4. Yingyu Liang (107 papers)
  5. Yong Jae Lee (88 papers)
Citations (14)
