AMTSS: An Adaptive Multi-Teacher Single-Student Knowledge Distillation Framework For Multilingual Language Inference (2305.07928v1)

Published 13 May 2023 in cs.CL and cs.AI

Abstract: Knowledge distillation is of key importance for deploying multilingual pre-trained language models in real applications. To support cost-effective language inference in multilingual settings, we propose AMTSS, an adaptive multi-teacher single-student distillation framework that distills knowledge from multiple teachers into a single student. We first introduce an adaptive learning strategy and a teacher importance weight, which enable the student to learn effectively from max-margin teachers and adapt easily to new languages. We also present a shared student encoder with language-specific projection layers, which substantially reduces development and machine costs. Experimental results show that AMTSS achieves competitive results on the public XNLI dataset and on AliExpress (AE), a realistic industrial dataset from the e-commerce scenario.
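The abstract describes two structural ideas: a weighted multi-teacher distillation objective and a shared student encoder with per-language projection heads. The paper page gives no code, so below is a minimal PyTorch sketch of those two ideas under standard assumptions (a softened-softmax KL distillation term a la Hinton et al., and per-teacher importance weights normalized to sum to 1). The names `Student`, `distill_loss`, the pooled-encoder interface, and the weighting scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Student(nn.Module):
    """Shared encoder with one projection (classification) head per language,
    mirroring the paper's shared-encoder / per-language-projection design."""
    def __init__(self, encoder: nn.Module, hidden_dim: int,
                 num_labels: int, languages: list[str]):
        super().__init__()
        self.encoder = encoder  # assumed: returns a pooled [batch, hidden_dim] tensor
        self.heads = nn.ModuleDict(
            {lang: nn.Linear(hidden_dim, num_labels) for lang in languages}
        )

    def forward(self, inputs, lang: str) -> torch.Tensor:
        h = self.encoder(inputs)        # shared representation across languages
        return self.heads[lang](h)      # language-specific projection layer

def distill_loss(student_logits: torch.Tensor,
                 teacher_logits_list: list[torch.Tensor],
                 teacher_weights: list[float],
                 T: float = 2.0) -> torch.Tensor:
    """Weighted KL divergence between the student and each teacher's
    temperature-softened distribution. `teacher_weights` stands in for the
    paper's teacher importance weights (assumed here to be precomputed,
    e.g. from each teacher's dev-set margin, and normalized to sum to 1)."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    loss = student_logits.new_zeros(())
    for w, t_logits in zip(teacher_weights, teacher_logits_list):
        p_teacher = F.softmax(t_logits / T, dim=-1)
        # T^2 rescaling keeps gradient magnitudes comparable to the CE term
        loss = loss + w * F.kl_div(log_p_student, p_teacher,
                                   reduction="batchmean") * (T * T)
    return loss

# Assumed overall objective: hard-label cross-entropy mixed with the
# weighted distillation term; alpha is a tunable coefficient.
# logits = student(batch_inputs, lang="en")
# loss = alpha * F.cross_entropy(logits, gold_labels) + \
#        (1 - alpha) * distill_loss(logits, [t1_logits, t2_logits], [0.6, 0.4])
```

In this sketch, adding support for a new language only requires registering one more linear head and distilling from a teacher for that language, which is consistent with the adaptability and cost claims in the abstract.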

Authors (7)
  1. Qianglong Chen
  2. Feng Ji
  3. Feng-Lin Li
  4. Guohai Xu
  5. Ming Yan
  6. Ji Zhang
  7. Yin Zhang