A Universal Parent Model for Low-Resource Neural Machine Translation Transfer (1909.06516v2)

Published 14 Sep 2019 in cs.CL

Abstract: Transfer learning from a high-resource language pair 'parent' has been proven to be an effective way to improve neural machine translation quality for low-resource language pair 'children.' However, previous approaches build a custom parent model or at least update an existing parent model's vocabulary for each child language pair they wish to train, in an effort to align parent and child vocabularies. This is not a practical solution. It is wasteful to devote the majority of training time for new language pairs to optimizing parameters on an unrelated data set. Further, this overhead reduces the utility of neural machine translation for deployment in humanitarian assistance scenarios, where extra time to deploy a new language pair can mean the difference between life and death. In this work, we present a 'universal' pre-trained neural parent model with constant vocabulary that can be used as a starting point for training practically any new low-resource language to a fixed target language. We demonstrate that our approach, which leverages orthography unification and a broad-coverage approach to subword identification, generalizes well to several languages from a variety of families, and that translation systems built with our approach can be built more quickly than competing methods and with better quality as well.
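The recipe the abstract describes can be pictured in a few steps: map the new source language into a shared orthography, segment it with the parent's fixed subword vocabulary (so no vocabulary rebuild is needed), and fine-tune the pre-trained parent rather than training from scratch. Below is a minimal sketch of that pipeline, not the authors' code; the file names (parent.spm.model, child.src, parent_checkpoint.pt) are hypothetical, and unidecode stands in for the paper's orthography-unification step.

```python
import sentencepiece as spm
from unidecode import unidecode  # crude transliteration into Latin script

def unify_orthography(line: str) -> str:
    """Project text in an arbitrary script onto a single (Latin) orthography."""
    return unidecode(line).lower()

def segment(sp: spm.SentencePieceProcessor, line: str) -> str:
    """Segment with the parent's constant subword vocabulary; no vocab update."""
    return " ".join(sp.encode(line, out_type=str))

# Parent model's fixed subword vocabulary (hypothetical path).
sp = spm.SentencePieceProcessor(model_file="parent.spm.model")

# Preprocess the child-language source side so it matches the parent's inputs.
with open("child.src", encoding="utf-8") as fin, \
     open("child.bpe.src", "w", encoding="utf-8") as fout:
    for raw in fin:
        fout.write(segment(sp, unify_orthography(raw.strip())) + "\n")

# Fine-tuning would then start from the universal parent checkpoint instead of
# a random initialization, e.g. with a toolkit option such as fairseq-train's
# --restore-file parent_checkpoint.pt (checkpoint path hypothetical).
```

The key point the sketch illustrates is that the child pipeline never touches the parent's vocabulary or embedding table; only the training data changes, which is what makes deployment of a new language pair fast.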

Authors (2)
  1. Mozhdeh Gheini (8 papers)
  2. Jonathan May (76 papers)
Citations (21)
