
Multilingual Denoising Pre-training for Neural Machine Translation (2001.08210v2)

Published 22 Jan 2020 in cs.CL

Abstract: This paper demonstrates that multilingual denoising pre-training produces significant performance gains across a wide variety of machine translation (MT) tasks. We present mBART -- a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages using the BART objective. mBART is one of the first methods for pre-training a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only on the encoder, decoder, or reconstructing parts of the text. Pre-training a complete model allows it to be directly fine-tuned for supervised (both sentence-level and document-level) and unsupervised machine translation, with no task-specific modifications. We demonstrate that adding mBART initialization produces performance gains in all but the highest-resource settings, including up to 12 BLEU points for low resource MT and over 5 BLEU points for many document-level and unsupervised models. We also show that it enables new types of transfer to language pairs with no bi-text or that were not in the pre-training corpus, and present extensive analysis of which factors contribute the most to effective pre-training.

Multilingual Denoising Pre-training for Neural Machine Translation

The paper "Multilingual Denoising Pre-training for Neural Machine Translation" presents an innovative approach to enhancing machine translation (MT) performance through multilingual denoising pre-training. This technique delivers notable improvements across various MT tasks. The focal point of this paper is mBART, an autoregressive sequence-to-sequence denoising auto-encoder pre-trained on extensive monolingual corpora using the BART objective function. mBART is distinct as it pre-trains a complete model by denoising full texts across multiple languages, unlike prior methods that have typically restricted their attention to the encoder, decoder, or partial text reconstructions.

Main Contributions

  1. mBART Architecture and Training:
    • The paper introduces mBART, a Transformer with 12 encoder layers and 12 decoder layers, a model dimension of 1024, and 16 attention heads.
    • mBART is pre-trained on a 25-language subset of the Common Crawl corpus (CC25). Tokenization uses a sentence-piece model with a vocabulary of 250,000 subword tokens shared across all languages.
    • Training applies two types of noise, span masking and sentence permutation, to learn representations that generalize across tasks (a simplified sketch of this noise function follows the list below).
  2. Effectiveness Across MT Tasks:
    • Extensive experimentation shows that mBART leads to performance gains in low- and medium-resource settings, achieving up to 12 BLEU points improvement for low-resource pairs.
    • For document-level MT, mBART initialization improves performance by up to 5.5 BLEU points.
    • In unsupervised MT, mBART reduces the need for task-specific modifications and yields the first non-degenerate results for certain language pairs, including a 9.5 BLEU gain on Nepali-English.
  3. Transfer Learning Capabilities:
    • mBART demonstrates remarkable transfer learning capabilities, performing well on language pairs not explicitly included during the pre-training phase.
    • The paper shows that fine-tuning on bi-text for a single language pair yields a model that can translate from other languages in the pre-training set into the same target language, without further training.
  4. Analysis and Comparison:
    • Detailed analysis identifies the factors that contribute most to effective pre-training, including the number of pre-training languages and how the corpus is balanced across them.
    • Comparisons with existing methods (e.g., MASS, XLM) position mBART favorably, showcasing superior results in various benchmark tests.
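
To make the noising scheme in point 1 concrete, here is a minimal sketch of span masking combined with sentence permutation. It assumes whitespace-tokenized sentences and uses NumPy's Poisson sampler; the function name `add_noise`, the masking loop, and the example language token are illustrative assumptions rather than the authors' released code.

```python
import random
import numpy as np

MASK = "<mask>"        # a single mask token per span (BART-style text infilling)
MASK_RATIO = 0.35      # the paper masks roughly 35% of the words in each instance
POISSON_LAMBDA = 3.5   # span lengths are drawn from Poisson(3.5)

def add_noise(sentences: list[list[str]], lang_token: str) -> list[str]:
    """Apply sentence permutation and span masking to one training instance."""
    # 1) Permute the order of sentences within the instance.
    sentences = random.sample(sentences, k=len(sentences))
    tokens = [tok for sent in sentences for tok in sent]

    # 2) Replace Poisson-length spans with a single <mask> token until roughly
    #    35% of the original tokens have been covered.
    out = list(tokens)
    budget = int(MASK_RATIO * len(tokens))
    masked = 0
    while masked < budget:
        span = max(1, int(np.random.poisson(POISSON_LAMBDA)))
        start = random.randrange(len(out))
        masked += len(out[start:start + span])
        out[start:start + span] = [MASK]

    # 3) Append the language id token that mBART uses to mark the instance's language.
    return out + [lang_token]

example = [["Hello", "world", "."], ["How", "are", "you", "today", "?"]]
print(add_noise(example, "[en_XX]"))
```

Each sampled span is collapsed into a single mask token, following BART's text-infilling objective, so the model must also infer how many tokens were removed when reconstructing the original text.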

Practical and Theoretical Implications

The implications of this research are manifold. Practically, mBART facilitates robust performance in diverse translation scenarios, including low-resource and unsupervised settings. This aligns particularly well with real-world applications where language data can be sparse or non-parallel. Theoretically, the findings extend the understanding of how comprehensive sequence-to-sequence pre-training can serve as a universal foundation for downstream MT tasks.
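
As a concrete illustration of this practical use, the snippet below translates a sentence with the mBART model fine-tuned on English-Romanian bi-text, using the Hugging Face `transformers` port of mBART. The checkpoint name `facebook/mbart-large-en-ro` and the tokenizer interface reflect one version of that library and may differ in others; this is a usage sketch, not the authors' original training setup.

```python
# Usage sketch: English-to-Romanian translation with a fine-tuned mBART checkpoint.
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_name = "facebook/mbart-large-en-ro"
tokenizer = MBartTokenizer.from_pretrained(model_name, src_lang="en_XX", tgt_lang="ro_RO")
model = MBartForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer("Multilingual denoising pre-training improves translation quality.",
                   return_tensors="pt")
generated = model.generate(
    **inputs,
    # mBART starts decoding from the target-language id token.
    decoder_start_token_id=tokenizer.convert_tokens_to_ids("ro_RO"),
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

The same pattern applies to the pre-trained `facebook/mbart-large-cc25` checkpoint, which would first be fine-tuned on bi-text for the language pair of interest before generation.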

Future Directions

Future developments may focus on scaling mBART to include more languages, potentially creating an mBART100 model. Further research could explore the optimization of pre-training strategies, such as adjusting the balance between seen and unseen language pairs. Additionally, addressing the deployment efficiency of these models in production environments remains a critical challenge, warranting innovative solutions in model compression and resource allocation.

In summary, the paper underscores the significant advantages of multilingual denoising pre-training for neural machine translation, presenting a versatile and powerful model in mBART. The contributions and findings propel the field forward, offering a clear pathway for future advancement in both practical applications and theoretical explorations of AI-driven translation systems.

Authors (8)
  1. Yinhan Liu (8 papers)
  2. Jiatao Gu (83 papers)
  3. Naman Goyal (37 papers)
  4. Xian Li (115 papers)
  5. Sergey Edunov (26 papers)
  6. Marjan Ghazvininejad (33 papers)
  7. Mike Lewis (78 papers)
  8. Luke Zettlemoyer (225 papers)
Citations (1,673)