Multi-head Knowledge Distillation for Model Compression (2012.02911v1)

Published 5 Dec 2020 in cs.CV, cs.AI, cs.LG, and cs.NE

Abstract: Several methods of knowledge distillation have been developed for neural network compression. While they all use the KL divergence loss to align the soft outputs of the student model more closely with those of the teacher, the various methods differ in how the intermediate features of the student are encouraged to match those of the teacher. In this paper, we propose a simple-to-implement method using auxiliary classifiers at intermediate layers for matching features, which we refer to as multi-head knowledge distillation (MHKD). We add loss terms for training the student that measure the dissimilarity between student and teacher outputs of the auxiliary classifiers. At the same time, the proposed method also provides a natural way to measure differences at the intermediate layers even though the dimensions of the internal teacher and student features may be different. Through several experiments in image classification on multiple datasets, we show that the proposed method outperforms prior relevant approaches presented in the literature.
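
The abstract describes the core loss construction: auxiliary classifiers attached to intermediate layers of both networks, with KL-divergence terms comparing the student's and teacher's auxiliary outputs. Below is a minimal sketch in PyTorch of how such a multi-head distillation objective could be assembled. All names (AuxHead, mhkd_loss), the temperature, the loss weights, and the assumption that teacher heads are pre-trained and frozen are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxHead(nn.Module):
    """Small classifier attached to an intermediate feature map.

    Global-average-pools the feature map and maps it to class logits, so
    teacher and student heads can be compared in a shared output space even
    when their feature dimensions differ (illustrative assumption).
    """
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(in_channels, num_classes)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        pooled = F.adaptive_avg_pool2d(feat, 1).flatten(1)
        return self.fc(pooled)

def kd_kl(student_logits, teacher_logits, T: float = 4.0):
    """Temperature-scaled KL divergence used in standard knowledge distillation."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

def mhkd_loss(s_logits, t_logits, s_feats, t_feats, s_heads, t_heads,
              labels, T: float = 4.0, alpha: float = 0.5, beta: float = 0.5):
    """Sketch of a multi-head distillation objective: cross-entropy plus KD on
    the final logits, plus a KL term between auxiliary-head outputs at each
    chosen intermediate layer. Weights alpha/beta are hypothetical."""
    loss = F.cross_entropy(s_logits, labels) + alpha * kd_kl(s_logits, t_logits, T)
    for s_f, t_f, s_h, t_h in zip(s_feats, t_feats, s_heads, t_heads):
        with torch.no_grad():
            t_out = t_h(t_f)  # teacher heads assumed pre-trained and frozen
        loss = loss + beta * kd_kl(s_h(s_f), t_out, T)
    return loss
```

In use, one would collect intermediate feature maps from both networks (e.g., via forward hooks), pass each through its corresponding head, and backpropagate the combined loss through the student and its heads only.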

Authors (4)
  1. Huan Wang (211 papers)
  2. Suhas Lohit (29 papers)
  3. Michael Jones (92 papers)
  4. Yun Fu (131 papers)
Citations (5)
