
Ensembling of Distilled Models from Multi-task Teachers for Constrained Resource Language Pairs (2111.13284v1)

Published 26 Nov 2021 in cs.CL

Abstract: This paper describes our submission to the constrained track of the WMT21 shared news translation task. We focus on three relatively low-resource language pairs: Bengali to and from Hindi, English to and from Hausa, and Xhosa to and from Zulu. To overcome the limited amount of parallel data, we train a multilingual model with a multitask objective that uses both parallel and monolingual data. In addition, we augment the data using back translation. We also train a bilingual model incorporating back translation and knowledge distillation, then combine the two models using sequence-to-sequence mapping. We see around a 70% relative gain in BLEU for English to and from Hausa, and around 25% relative improvements for both Bengali to and from Hindi and Xhosa to and from Zulu, compared to bilingual baselines.
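To make the back-translation step concrete, below is a minimal, hedged sketch of how target-side monolingual text can be turned into synthetic parallel pairs with a reverse-direction translation model. This is not the authors' pipeline: the `Helsinki-NLP/opus-mt-hi-en` checkpoint, the language direction, and the `back_translate` helper are illustrative placeholders standing in for whatever reverse-direction model is available for a given language pair.

```python
# Hedged sketch of back-translation augmentation: a reverse-direction model
# translates target-side monolingual sentences back into the source language,
# yielding synthetic (source, target) pairs for training the forward system.
# The checkpoint below is a placeholder, not the model used in the paper.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

REVERSE_MODEL = "Helsinki-NLP/opus-mt-hi-en"  # placeholder reverse-direction model

def back_translate(target_sentences, batch_size=8):
    """Return synthetic (source, target) pairs built by back translation."""
    tokenizer = AutoTokenizer.from_pretrained(REVERSE_MODEL)
    model = AutoModelForSeq2SeqLM.from_pretrained(REVERSE_MODEL)
    pairs = []
    for i in range(0, len(target_sentences), batch_size):
        batch = target_sentences[i:i + batch_size]
        inputs = tokenizer(batch, return_tensors="pt", padding=True, truncation=True)
        outputs = model.generate(**inputs, num_beams=4, max_length=256)
        synthetic_sources = tokenizer.batch_decode(outputs, skip_special_tokens=True)
        # Pair each generated source sentence with its original target sentence.
        pairs.extend(zip(synthetic_sources, batch))
    return pairs

if __name__ == "__main__":
    mono = ["यह एक उदाहरण वाक्य है।"]  # target-side monolingual text (Hindi here)
    for src, tgt in back_translate(mono):
        print(src, "->", tgt)
```

The synthetic pairs produced this way would then be mixed with the genuine parallel data when training the forward-direction model, which is the general role back translation plays in the approach the abstract describes.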

Authors (7)
  1. Amr Hendy (8 papers)
  2. Esraa A. Gad (1 paper)
  3. Mohamed Abdelghaffar (4 papers)
  4. Jailan S. ElMosalami (1 paper)
  5. Mohamed Afify (10 papers)
  6. Ahmed Y. Tawfik (4 papers)
  7. Hany Hassan Awadalla (24 papers)
Citations (3)