
Why Skip If You Can Combine: A Simple Knowledge Distillation Technique for Intermediate Layers (2010.03034v1)

Published 6 Oct 2020 in cs.CL

Abstract: With the growth of computing power, neural machine translation (NMT) models also grow accordingly and become better. However, they also become harder to deploy on edge devices due to memory constraints. To cope with this problem, a common practice is to distill knowledge from a large and accurately trained teacher network (T) into a compact student network (S). Although knowledge distillation (KD) is useful in most cases, our study shows that existing KD techniques might not be suitable enough for deep NMT engines, so we propose a novel alternative. In our model, besides matching T and S predictions we have a combinatorial mechanism to inject layer-level supervision from T to S. In this paper, we target low-resource settings and evaluate our translation engines for Portuguese--English, Turkish--English, and English--German directions. Students trained using our technique have 50% fewer parameters and can still deliver comparable results to those of 12-layer teachers.
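The sketch below illustrates the general idea described in the abstract, not the paper's exact formulation: an output-level KD loss on the teacher/student predictions plus a layer-level loss in which each student layer is supervised by a learned combination of all teacher layers (rather than skipping to a single matched teacher layer). The class and function names, the softmax mixing weights, and the MSE objective are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def prediction_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Standard output-level KD: KL divergence between softened distributions."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)


class LayerCombinationKD(nn.Module):
    """Hypothetical layer-level supervision: each student layer is matched to a
    learned weighted combination of the teacher's intermediate layers."""

    def __init__(self, num_teacher_layers: int, num_student_layers: int):
        super().__init__()
        # One row of mixing logits per student layer, over all teacher layers.
        self.mix_logits = nn.Parameter(
            torch.zeros(num_student_layers, num_teacher_layers)
        )

    def forward(self, teacher_states, student_states):
        # teacher_states: (T_layers, batch, seq_len, hidden), detached from the teacher.
        # student_states: (S_layers, batch, seq_len, hidden)
        mix = F.softmax(self.mix_logits, dim=-1)                      # (S_layers, T_layers)
        targets = torch.einsum("st,tbnh->sbnh", mix, teacher_states)  # combined teacher target per student layer
        return F.mse_loss(student_states, targets)


# Usage sketch: total loss = translation cross-entropy + weighted KD terms
# (alpha and beta are assumed hyperparameters, not values from the paper).
# loss = ce_loss + alpha * prediction_kd_loss(s_logits, t_logits) \
#        + beta * layer_kd(t_hidden_states, s_hidden_states)
```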

Authors (4)
  1. Yimeng Wu (8 papers)
  2. Peyman Passban (13 papers)
  3. Mehdi Rezagholizadeh (1 paper)
  4. Qun Liu (231 papers)
Citations (34)
