Multilingual Translation with Extensible Multilingual Pretraining and Finetuning

Published 2 Aug 2020 in cs.CL (arXiv:2008.00401v1)

Abstract: Recent work demonstrates the potential of multilingual pretraining of creating one model that can be used for various tasks in different languages. Previous work in multilingual pretraining has demonstrated that machine translation systems can be created by finetuning on bitext. In this work, we show that multilingual translation models can be created through multilingual finetuning. Instead of finetuning on one direction, a pretrained model is finetuned on many directions at the same time. Compared to multilingual models trained from scratch, starting from pretrained models incorporates the benefits of large quantities of unlabeled monolingual data, which is particularly important for low resource languages where bitext is not available. We demonstrate that pretrained models can be extended to incorporate additional languages without loss of performance. We double the number of languages in mBART to support multilingual machine translation models of 50 languages. Finally, we create the ML50 benchmark, covering low, mid, and high resource languages, to facilitate reproducible research by standardizing training and evaluation data. On ML50, we demonstrate that multilingual finetuning improves on average 1 BLEU over the strongest baselines (being either multilingual from scratch or bilingual finetuning) while improving 9.3 BLEU on average over bilingual baselines from scratch.

Citations (421)

Summary

  • The paper extends mBART pretraining from 25 to 50 languages and shows that multilingual finetuning of the extended model yields significant performance gains.
  • It introduces the ML50 benchmark and enriched monolingual pretraining data, reporting an average gain of 1 BLEU over the strongest baselines and up to 18 BLEU for low-resource languages.
  • The approach consolidates multiple translation directions in a single model, reducing computational costs while scaling neural machine translation.


This paper studies multilingual translation through extensible multilingual pretraining and finetuning, focusing on the effectiveness of multilingual finetuning as opposed to traditional bilingual finetuning. The authors extend the mBART model to cover 50 languages, double the original 25, and show that this extension does not degrade performance on the languages mBART originally supported.

Methodology

The core methodology builds on multilingual pretraining with mBART, which is pretrained as a sequence-to-sequence model on monolingual data in 25 languages. The key change comes at the finetuning stage: rather than adapting the model to a single language pair, multilingual finetuning adapts it to many language pairs simultaneously. This is particularly advantageous for low-resource languages that lack substantial bitext, since the extended pretraining draws on large monolingual corpora such as those derived from Common Crawl and Wikipedia.
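
As a concrete illustration, the sketch below shows one way multilingual finetuning can be set up with the Hugging Face transformers implementation of mBART-50. The two translation directions, toy sentence pairs, and hyperparameters are illustrative assumptions, not the paper's exact ML50 configuration.

```python
# Minimal sketch of multilingual finetuning: a single pretrained mBART-50
# model is updated on batches drawn from several translation directions.
# The directions, sentences, and learning rate are illustrative only.
import torch
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50")

# Each example carries its own source/target language codes, so many
# directions can be mixed within the same finetuning run.
examples = [
    ("en_XX", "ro_RO", "The weather is nice today.", "Vremea este frumoasă astăzi."),
    ("hi_IN", "en_XX", "मौसम आज अच्छा है।", "The weather is nice today."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()

for src_lang, tgt_lang, src_text, tgt_text in examples:
    tokenizer.src_lang = src_lang
    tokenizer.tgt_lang = tgt_lang
    batch = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
    loss = model(**batch).loss  # batch holds input_ids, attention_mask, labels
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"{src_lang} -> {tgt_lang}: loss {loss.item():.3f}")
```

In a full training run, examples from all directions would be shuffled and batched together (with temperature-based sampling across languages), but the mechanism is the same: one model, many direction-tagged examples.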

Benchmarks and Evaluation

To facilitate reproducible research and rigorous testing, the authors introduce the ML50 benchmark, covering a broad spectrum of languages categorized by resource availability. The benchmark standardizes training and evaluation data across languages, enabling consistent assessment of machine translation performance. On ML50, the authors report an average improvement of 1 BLEU point over the strongest baselines, whether multilingual models trained from scratch or bilingual finetuning. Notably, low-resource languages improve by up to 18 BLEU points in the multilingual finetuning setting.
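
To make the standardized-evaluation aspect concrete, the snippet below shows the kind of corpus-level BLEU computation (here via the sacreBLEU library) that a benchmark like ML50 is meant to make comparable across systems. The hypothesis and reference strings are placeholders, not ML50 data.

```python
# Sketch of a standardized corpus-level BLEU evaluation using sacreBLEU;
# the hypotheses and references below are placeholders for real system
# output and held-out test references.
import sacrebleu

hypotheses = [
    "The weather is nice today.",
    "She went to the market yesterday.",
]
references = [
    "The weather is nice today.",
    "She went to the market on Thursday.",
]

# sacreBLEU expects a list of reference streams (one list per reference set).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")
```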

Results and Implications

Multilingual finetuning outperforms traditional bilingual finetuning, particularly in many-to-one language pair settings, with the most significant gains evident for low-resource languages. The finetuning approach not only improves translation quality but also offers substantial efficiency benefits by consolidating multiple translation directions into a single model, thereby reducing the computational footprint. This methodology provides a potent strategy for scaling neural machine translation (NMT) systems to support an increasing number of languages without the computational cost of training from scratch.
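
The publicly released many-to-many mBART-50 checkpoint on the Hugging Face Hub illustrates this consolidation: a single set of weights covers many translation directions. The following is a minimal inference sketch under that assumption; the sentences and language pairs are illustrative.

```python
# One finetuned model serving multiple translation directions. The checkpoint
# name refers to the released mBART-50 many-to-many model on the Hugging Face
# Hub; the example sentences and language pairs are illustrative.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "facebook/mbart-large-50-many-to-many-mmt"
model = MBartForConditionalGeneration.from_pretrained(model_name)
tokenizer = MBart50TokenizerFast.from_pretrained(model_name)

def translate(text: str, src_lang: str, tgt_lang: str) -> str:
    """Translate text between supported language pairs with the same weights."""
    tokenizer.src_lang = src_lang
    inputs = tokenizer(text, return_tensors="pt")
    # Force the decoder to start with the target-language token.
    generated = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
        max_length=64,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

print(translate("The weather is nice today.", "en_XX", "ro_RO"))
print(translate("Vremea este frumoasă astăzi.", "ro_RO", "de_DE"))
```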

Future Directions

While the study reports substantial gains, translation into many languages (one-to-many) remains challenging, suggesting that further investigation into model capacity and multilingual model design will be beneficial. Future work could integrate even more languages and explore domain-specific adaptations of the model to enhance quality and applicability. The release of mBART50 as a community resource fosters continued research in multilingual NMT, supporting approaches that bridge language barriers in global communication.

In summary, the extensibility of pretrained models through multilingual finetuning presents an efficient and effective methodology for advancing multilingual translation capabilities. The paper underlines the significance of flexible model architectures that leverage extensive monolingual corpora, paving the way for future innovations across a broader range of languages.
