Exploiting Domain-Specific Parallel Data on Multilingual Language Models for Low-resource Language Translation (2412.19522v1)

Published 27 Dec 2024 in cs.CL

Abstract: Neural Machine Translation (NMT) systems built on multilingual sequence-to-sequence language models (msLMs) fail to deliver expected results when the amount of parallel data for a language, as well as the language's representation in the model, is limited. This restricts the capabilities of domain-specific NMT systems for low-resource languages (LRLs). As a solution, parallel data from auxiliary domains can be used either to fine-tune or to further pre-train the msLM. We present an evaluation of the effectiveness of these two techniques in the context of domain-specific LRL-NMT. We also explore the impact of domain divergence on NMT model performance. We recommend several strategies for utilizing auxiliary parallel data in building domain-specific NMT models for LRLs.
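The abstract contrasts two ways of using auxiliary-domain parallel data with an msLM: fine-tuning and continued pre-training. The sketch below illustrates only the first of these, mixing in-domain and auxiliary-domain sentence pairs and fine-tuning a multilingual seq2seq model with Hugging Face Transformers. It is not the authors' exact recipe; the model name, file paths, language codes, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: fine-tune a multilingual seq2seq model (msLM) on a mix of
# in-domain and auxiliary-domain parallel data for low-resource NMT.
# Model name, file paths, language codes, and hyperparameters are assumptions,
# not the paper's exact configuration.

from datasets import Dataset, concatenate_datasets
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

MODEL_NAME = "facebook/mbart-large-50-many-to-many-mmt"  # assumed msLM

# Example language pair: English -> Sinhala (mBART-50 codes).
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, src_lang="en_XX", tgt_lang="si_LK")
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def load_parallel(src_path, tgt_path):
    """Read one sentence pair per line from plain-text files (hypothetical paths)."""
    with open(src_path, encoding="utf-8") as f_src, open(tgt_path, encoding="utf-8") as f_tgt:
        pairs = [{"src": s.strip(), "tgt": t.strip()} for s, t in zip(f_src, f_tgt)]
    return Dataset.from_list(pairs)

# Target-domain corpus plus parallel data from an auxiliary domain.
in_domain = load_parallel("domain.en", "domain.si")
auxiliary = load_parallel("aux.en", "aux.si")
train_data = concatenate_datasets([in_domain, auxiliary]).shuffle(seed=42)

def preprocess(batch):
    # Tokenize source and target sides; labels are produced via text_target.
    return tokenizer(batch["src"], text_target=batch["tgt"], truncation=True, max_length=128)

train_data = train_data.map(preprocess, batched=True, remove_columns=["src", "tgt"])

args = Seq2SeqTrainingArguments(
    output_dir="mslm-domain-nmt",
    per_device_train_batch_size=16,
    learning_rate=3e-5,
    num_train_epochs=3,
    save_strategy="epoch",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_data,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

The alternative strategy the abstract mentions, further pre-training the msLM on the auxiliary data before domain-specific fine-tuning, would instead use the auxiliary corpus in a separate preliminary training stage with the model's original pre-training objective.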

Authors (9)
  1. Surangika Ranathunga (1 paper)
  2. Shravan Nayak (11 papers)
  3. Shih-Ting Cindy Huang (1 paper)
  4. Yanke Mao (2 papers)
  5. Tong Su (20 papers)
  6. Yun-Hsiang Ray Chan (1 paper)
  7. Songchen Yuan (2 papers)
  8. Anthony Rinaldi (4 papers)
  9. Annie En-Shiun Lee (2 papers)