nmT5 -- Is parallel data still relevant for pre-training massively multilingual language models? (2106.02171v1)

Published 3 Jun 2021 in cs.CL

Abstract: Recently, mT5 - a massively multilingual version of T5 - leveraged a unified text-to-text format to attain state-of-the-art results on a wide variety of multilingual NLP tasks. In this paper, we investigate the impact of incorporating parallel data into mT5 pre-training. We find that multi-tasking language modeling with objectives such as machine translation during pre-training is a straightforward way to improve performance on downstream multilingual and cross-lingual tasks. However, the gains start to diminish as the model capacity increases, suggesting that parallel data might not be as essential for larger models. At the same time, even at larger model sizes, we find that pre-training with parallel data still provides benefits in the limited labelled data regime.
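
To make the multi-task setup concrete, below is a minimal sketch (not the authors' code) of how parallel data can be cast into the same text-to-text format as mT5's span-corruption objective so that both can be mixed during pre-training. The function names, the toy single-span corruption, and the mixing ratio are illustrative assumptions, not details taken from the paper.

```python
# Sketch: mixing a monolingual span-corruption objective with a machine
# translation objective in a single text-to-text pre-training stream.
import random

def span_corruption_example(text, mask_token="<extra_id_0>"):
    # Toy span corruption: mask one short contiguous span and ask the model
    # to reconstruct it (real mT5 masks multiple spans covering ~15% of tokens).
    tokens = text.split()
    if len(tokens) < 3:
        return {"inputs": text, "targets": ""}
    start = random.randrange(len(tokens) - 1)
    end = min(start + 2, len(tokens))
    masked = tokens[:start] + [mask_token] + tokens[end:]
    return {"inputs": " ".join(masked),
            "targets": mask_token + " " + " ".join(tokens[start:end])}

def translation_example(src, tgt, src_lang, tgt_lang):
    # Machine translation objective: a parallel sentence pair becomes an
    # ordinary text-to-text example with a task prefix.
    return {"inputs": f"translate {src_lang} to {tgt_lang}: {src}",
            "targets": tgt}

def mixed_batch(mono_texts, parallel_pairs, mt_ratio=0.2):
    # Multi-task mixture: sample each pre-training example from either the
    # monolingual or the parallel objective with a fixed ratio (assumed here).
    batch = []
    for _ in range(len(mono_texts)):
        if parallel_pairs and random.random() < mt_ratio:
            src, tgt, sl, tl = random.choice(parallel_pairs)
            batch.append(translation_example(src, tgt, sl, tl))
        else:
            batch.append(span_corruption_example(random.choice(mono_texts)))
    return batch
```

Because both objectives share the same input/target text format, adding parallel data requires no architectural change, only an extra task in the pre-training mixture.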

Authors (6)
  1. Mihir Kale (18 papers)
  2. Aditya Siddhant (22 papers)
  3. Noah Constant (32 papers)
  4. Melvin Johnson (35 papers)
  5. Rami Al-Rfou (34 papers)
  6. Linting Xue (9 papers)
Citations (23)