Interactive Knowledge Distillation (2007.01476v3)

Published 3 Jul 2020 in cs.CV and cs.LG

Abstract: Knowledge distillation is a standard teacher-student learning framework to train a lightweight student network under the guidance of a well-trained large teacher network. As an effective teaching strategy, interactive teaching has been widely employed at school to motivate students, in which teachers not only provide knowledge but also give constructive feedback to students upon their responses, to improve their learning performance. In this work, we propose an InterActive Knowledge Distillation (IAKD) scheme to leverage the interactive teaching strategy for efficient knowledge distillation. In the distillation process, the interaction between teacher and student networks is implemented by a swapping-in operation: randomly replacing the blocks in the student network with the corresponding blocks in the teacher network. In this way, we directly involve the teacher's powerful feature transformation ability to substantially boost the student's performance. Experiments with typical settings of teacher-student networks demonstrate that the student networks trained by our IAKD achieve better performance than those trained by conventional knowledge distillation methods on diverse image classification datasets.
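The abstract's swapping-in operation can be illustrated with a minimal PyTorch sketch: during a training forward pass, each student block is randomly replaced by the corresponding frozen teacher block. The class name, the per-block swap probability, and the assumption that teacher and student blocks share compatible feature shapes are illustrative choices for this sketch, not details taken from the paper.

```python
# Sketch of the swapping-in idea, assuming block-wise aligned teacher/student
# architectures (e.g. ResNet stages) with matching feature shapes.
import random
import torch.nn as nn


class SwapInDistiller(nn.Module):
    def __init__(self, teacher_blocks, student_blocks, swap_prob=0.5):
        super().__init__()
        assert len(teacher_blocks) == len(student_blocks)
        self.teacher_blocks = nn.ModuleList(teacher_blocks)
        self.student_blocks = nn.ModuleList(student_blocks)
        self.swap_prob = swap_prob  # hypothetical hyperparameter
        # The teacher stays frozen; only the student's parameters are trained.
        for p in self.teacher_blocks.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        for t_block, s_block in zip(self.teacher_blocks, self.student_blocks):
            if self.training and random.random() < self.swap_prob:
                # Swap in the teacher's feature transformation for this block.
                x = t_block(x)
            else:
                x = s_block(x)
        return x
```

At evaluation time (`model.eval()`), no swapping occurs, so the student network runs on its own blocks end to end.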

Authors (6)
  1. Shipeng Fu (1 paper)
  2. Zhen Li (334 papers)
  3. Jun Xu (398 papers)
  4. Ming-Ming Cheng (185 papers)
  5. Zitao Liu (76 papers)
  6. Xiaomin Yang (12 papers)
Citations (6)
