Pre-Trained Multilingual Sequence-to-Sequence Models: A Hope for Low-Resource Language Translation? (2203.08850v3)

Published 16 Mar 2022 in cs.CL

Abstract: What can pre-trained multilingual sequence-to-sequence models like mBART contribute to translating low-resource languages? We conduct a thorough empirical experiment in 10 languages to ascertain this, considering five factors: (1) the amount of fine-tuning data, (2) the noise in the fine-tuning data, (3) the amount of pre-training data in the model, (4) the impact of domain mismatch, and (5) language typology. In addition to yielding several heuristics, the experiments form a framework for evaluating the data sensitivities of machine translation systems. While mBART is robust to domain differences, its translations for unseen and typologically distant languages remain below 3.0 BLEU. In answer to our title's question, mBART is not a low-resource panacea; we therefore encourage shifting the emphasis from new models to new data.
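
The abstract describes fine-tuning mBART on small parallel corpora and measuring how translation quality responds to data amount, noise, domain, and language typology. Below is a minimal sketch of what such a fine-tuning run can look like with the Hugging Face transformers library; the checkpoint name, the Sinhala-to-English language codes, the hyperparameters, and the toy data are illustrative assumptions, not the paper's exact configuration.

```python
from datasets import Dataset
from transformers import (
    DataCollatorForSeq2Seq,
    MBart50TokenizerFast,
    MBartForConditionalGeneration,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Pre-trained multilingual seq2seq checkpoint; the language pair below
# (Sinhala -> English) is illustrative, not the paper's exact setup.
model_name = "facebook/mbart-large-50"
tokenizer = MBart50TokenizerFast.from_pretrained(
    model_name, src_lang="si_LK", tgt_lang="en_XX"
)
model = MBartForConditionalGeneration.from_pretrained(model_name)
# Force the decoder to start with the target-language token at generation time.
model.config.forced_bos_token_id = tokenizer.convert_tokens_to_ids("en_XX")

# Toy stand-in for a small parallel corpus; a real run would load the
# low-resource bitext whose size and noise level the paper varies.
raw = Dataset.from_dict({
    "src": ["placeholder source sentence", "another source sentence"],
    "tgt": ["placeholder target sentence", "another target sentence"],
})

def preprocess(batch):
    # Tokenize the source side; the target side becomes the labels
    # that drive the sequence-to-sequence cross-entropy loss.
    model_inputs = tokenizer(batch["src"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["tgt"], truncation=True, max_length=128)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=["src", "tgt"])

args = Seq2SeqTrainingArguments(
    output_dir="mbart-lowres-ft",   # hypothetical output directory
    per_device_train_batch_size=8,
    num_train_epochs=3,
    learning_rate=3e-5,
    predict_with_generate=True,
    logging_steps=10,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

The paper's framework would repeat runs like this while varying the size and noisiness of the fine-tuning set and the test domain, scoring each condition with BLEU.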

Authors (7)
  1. En-Shiun Annie Lee (17 papers)
  2. Sarubi Thillainathan (5 papers)
  3. Shravan Nayak (11 papers)
  4. Surangika Ranathunga (34 papers)
  5. David Ifeoluwa Adelani (59 papers)
  6. Ruisi Su (5 papers)
  7. Arya D. McCarthy (23 papers)
Citations (37)