
Representation Consolidation for Training Expert Students (2107.08039v1)

Published 16 Jul 2021 in cs.CV and cs.LG

Abstract: Traditionally, distillation has been used to train a student model to emulate the input/output functionality of a teacher. A more useful goal than emulation, yet under-explored, is for the student to learn feature representations that transfer well to future tasks. However, we observe that standard distillation of task-specific teachers actually reduces the transferability of student representations to downstream tasks. We show that a multi-head, multi-task distillation method using an unlabeled proxy dataset and a generalist teacher is sufficient to consolidate representations from task-specific teacher(s) and improve downstream performance, outperforming the teacher(s) and the strong baseline of ImageNet pretrained features. Our method can also combine the representational knowledge of multiple teachers trained on one or multiple domains into a single model, whose representation is improved on all teachers' domain(s).
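The core objective sketched in the abstract is a multi-head, multi-task distillation: a shared student backbone feeds one head per teacher, and each head is trained to match its teacher's output on an unlabeled proxy dataset. A minimal sketch of that loss, assuming illustrative function names and a simple MSE matching term (the paper's actual loss and architecture may differ):

```python
# Hedged sketch of multi-head, multi-task distillation as described in the
# abstract. `backbone`, `heads`, and `teachers` are illustrative assumptions,
# not the authors' implementation.

def mse(a, b):
    """Mean squared error between two equal-length output vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def multi_teacher_distill_loss(backbone, heads, teachers, proxy_batch):
    """Average, over an unlabeled proxy batch, of per-teacher distillation
    losses: each head of the shared student representation is pushed to
    reproduce its corresponding teacher's output."""
    total = 0.0
    for x in proxy_batch:
        z = backbone(x)  # shared student representation (consolidated)
        for head, teacher in zip(heads, teachers):
            total += mse(head(z), teacher(x))
    return total / len(proxy_batch)
```

As a toy usage: with `backbone = lambda x: [x, 2 * x]`, a single head `lambda z: [z[0] + z[1]]`, and a teacher `lambda x: [3 * x]`, the student head exactly reproduces the teacher, so the loss is zero. Note the labels of the proxy data are never used, matching the abstract's claim that an unlabeled proxy dataset suffices.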

Authors (6)
  1. Zhizhong Li (22 papers)
  2. Avinash Ravichandran (35 papers)
  3. Charless Fowlkes (35 papers)
  4. Marzia Polito (5 papers)
  5. Rahul Bhotika (13 papers)
  6. Stefano Soatto (179 papers)
Citations (6)
