RAIL-KD: RAndom Intermediate Layer Mapping for Knowledge Distillation (2109.10164v2)

Published 21 Sep 2021 in cs.CL

Abstract: Intermediate layer knowledge distillation (KD) can improve the standard KD technique (which only targets the output of teacher and student models), especially for large pre-trained LLMs. However, intermediate layer distillation suffers from excessive computational burdens and the engineering effort required to set up a proper layer mapping. To address these problems, we propose a RAndom Intermediate Layer Knowledge Distillation (RAIL-KD) approach in which intermediate layers from the teacher model are selected randomly to be distilled into the intermediate layers of the student model. This randomized selection ensures that all teacher layers are taken into account during training, while reducing the computational cost of intermediate layer distillation. We also show that it acts as a regularizer, improving the generalizability of the student model. We perform extensive experiments on GLUE tasks as well as on out-of-domain test sets. We show that our proposed RAIL-KD approach outperforms other state-of-the-art intermediate layer KD methods considerably, in both performance and training time.

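To make the idea concrete, here is a minimal sketch of random intermediate-layer matching as described in the abstract. It assumes `teacher_hiddens` and `student_hiddens` are lists of per-layer hidden states (each of shape `[batch, seq_len, dim]`) already obtained from a forward pass, and that an optional linear projection handles any dimension mismatch; all names and the normalization/MSE choice are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of RAIL-KD-style random layer mapping (assumptions noted above).
import random
import torch.nn.functional as F

def random_intermediate_layer_loss(teacher_hiddens, student_hiddens, proj=None):
    """Randomly map teacher layers onto student layers and match hidden states."""
    n_student = len(student_hiddens)
    # Sample one teacher layer per student layer, without replacement,
    # and sort so the mapping preserves the bottom-to-top layer ordering.
    layer_ids = sorted(random.sample(range(len(teacher_hiddens)), n_student))
    loss = 0.0
    for s_idx, t_idx in enumerate(layer_ids):
        t_h = teacher_hiddens[t_idx].detach()   # no gradients through the teacher
        s_h = student_hiddens[s_idx]
        if proj is not None:                    # hypothetical projection if dims differ
            s_h = proj(s_h)
        # Normalize before the MSE so per-layer scale differences matter less.
        loss = loss + F.mse_loss(F.normalize(s_h, dim=-1),
                                 F.normalize(t_h, dim=-1))
    return loss / n_student
```

In practice such a term would be added to the standard output-level KD loss; resampling the teacher layers at every step is what spreads the supervision over all teacher layers without computing a full layer-to-layer mapping.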
Authors (6)
  1. Nithin Anchuri (2 papers)
  2. Mehdi Rezagholizadeh (78 papers)
  3. Abbas Ghaddar (18 papers)
  4. Philippe Langlais (23 papers)
  5. Pascal Poupart (80 papers)
  6. Md Akmal Haidar (6 papers)
Citations (21)