Adapting High-resource NMT Models to Translate Low-resource Related Languages without Parallel Data (2105.15071v2)

Published 31 May 2021 in cs.CL

Abstract: The scarcity of parallel data is a major obstacle to training high-quality machine translation systems for low-resource languages. Fortunately, some low-resource languages are linguistically related or similar to high-resource languages; these related languages may share many lexical or syntactic structures. In this work, we exploit this linguistic overlap to facilitate translating to and from a low-resource language using only monolingual data, in addition to any parallel data in the related high-resource language. Our method, NMT-Adapt, combines denoising autoencoding, back-translation, and adversarial objectives to utilize monolingual data for low-resource adaptation. We experiment on 7 languages from three different language families and show that our technique significantly improves translation into the low-resource language compared to other translation baselines.
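
The abstract names three objectives that NMT-Adapt combines over monolingual data: denoising autoencoding, back-translation, and an adversarial objective. The sketch below shows one way such a combined loss could be assembled in PyTorch. The toy encoder-decoder, the word-dropout noise function, the greedy one-pass stand-in for back-translation decoding, and all hyperparameters are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the three training signals NMT-Adapt combines
# (denoising autoencoding, back-translation, adversarial language
# discrimination), assuming a generic PyTorch encoder-decoder.
# Model sizes, noise, and optimizer settings are illustrative only.
import random
import torch
import torch.nn as nn

VOCAB, HID, PAD = 1000, 64, 0

class TinySeq2Seq(nn.Module):
    """Toy encoder-decoder standing in for the shared NMT model."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HID, padding_idx=PAD)
        self.enc = nn.GRU(HID, HID, batch_first=True)
        self.dec = nn.GRU(HID, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def encode(self, src):
        _, h = self.enc(self.emb(src))
        return h  # (1, batch, HID)

    def forward(self, src, tgt_in):
        dec_out, _ = self.dec(self.emb(tgt_in), self.encode(src))
        return self.out(dec_out)  # (batch, tgt_len, VOCAB)

def add_noise(tokens, drop_p=0.1):
    """Word-dropout noise for the denoising autoencoding objective."""
    kept = [t for t in tokens if random.random() > drop_p]
    return kept or tokens[:1]

model = TinySeq2Seq()
lang_disc = nn.Sequential(nn.Linear(HID, HID), nn.ReLU(), nn.Linear(HID, 2))
ce = nn.CrossEntropyLoss(ignore_index=PAD)
opt = torch.optim.Adam(
    list(model.parameters()) + list(lang_disc.parameters()), lr=1e-3
)

# Toy monolingual batches: low-resource (LRL) and high-resource (HRL) text.
lrl_mono = torch.randint(1, VOCAB, (4, 12))
hrl_mono = torch.randint(1, VOCAB, (4, 12))

# 1) Denoising autoencoding on LRL monolingual text: reconstruct the clean
#    sentence from a noised copy of itself.
noised = torch.stack([
    nn.functional.pad(torch.tensor(add_noise(s.tolist())), (0, 12))[:12]
    for s in lrl_mono
])
logits = model(noised, lrl_mono[:, :-1])
dae_loss = ce(logits.reshape(-1, VOCAB), lrl_mono[:, 1:].reshape(-1))

# 2) Back-translation: produce a synthetic HRL-side source with the current
#    model (no gradients; greedy one-pass stand-in for real decoding), then
#    train on the synthetic HRL -> LRL pair.
with torch.no_grad():
    synth_hrl = model(lrl_mono, lrl_mono[:, :-1]).argmax(-1)
logits = model(synth_hrl, lrl_mono[:, :-1])
bt_loss = ce(logits.reshape(-1, VOCAB), lrl_mono[:, 1:].reshape(-1))

# 3) Adversarial objective: a discriminator tries to tell LRL from HRL
#    encoder states; a full setup would also push the encoder (e.g. via
#    gradient reversal) to make the two indistinguishable.
states = torch.cat([model.encode(lrl_mono)[0], model.encode(hrl_mono)[0]])
labels = torch.cat([torch.zeros(4, dtype=torch.long),
                    torch.ones(4, dtype=torch.long)])
adv_loss = nn.CrossEntropyLoss()(lang_disc(states), labels)

loss = dae_loss + bt_loss + adv_loss
opt.zero_grad()
loss.backward()
opt.step()
```

In the paper's setting the interesting design choice is that all three signals share one model, so monolingual low-resource text shapes the same encoder and decoder that the high-resource parallel data trains; the weighting and scheduling of the objectives are details left to the full method.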

Authors (9)
  1. Wei-Jen Ko (11 papers)
  2. Ahmed El-Kishky (25 papers)
  3. Adithya Renduchintala (17 papers)
  4. Vishrav Chaudhary (45 papers)
  5. Naman Goyal (37 papers)
  6. Francisco Guzmán (39 papers)
  7. Pascale Fung (150 papers)
  8. Philipp Koehn (60 papers)
  9. Mona Diab (71 papers)
Citations (37)