
IKUN for WMT24 General MT Task: LLMs Are here for Multilingual Machine Translation (2408.11512v2)

Published 21 Aug 2024 in cs.CL

Abstract: This paper introduces two multilingual systems, IKUN and IKUN-C, developed for the general machine translation task in WMT24. IKUN and IKUN-C represent an open system and a constrained system, respectively, built on Llama-3-8b and Mistral-7B-v0.3. Both systems are designed to handle all 11 language directions using a single model. According to automatic evaluation metrics, IKUN-C achieved 6 first-place and 3 second-place finishes among all constrained systems, while IKUN secured 1 first-place and 2 second-place finishes across both open and constrained systems. These encouraging results suggest that LLMs are nearing the level of proficiency required for effective multilingual machine translation. The systems are based on a two-stage approach: first, continuous pre-training on monolingual data in 10 languages, followed by fine-tuning on high-quality parallel data for 11 language directions. The primary difference between IKUN and IKUN-C lies in their monolingual pre-training strategy. IKUN-C is pre-trained using constrained monolingual data, whereas IKUN leverages monolingual data from the OSCAR dataset. In the second phase, both systems are fine-tuned on parallel data sourced from NTREX, Flores, and WMT16-23 for all 11 language pairs.


Summary

  • The paper demonstrates that continuous pre-training on monolingual data followed by fine-tuning on parallel corpora significantly improves multilingual MT performance.
  • The study highlights enhanced tokenizer efficiency for languages like Chinese, Japanese, Hindi, and Icelandic, reducing token lengths and training costs.
  • Experimental results show IKUN-C achieved multiple first-place finishes, validating the potential of LLMs for robust translation across diverse language pairs.

Overview of "IKUN for WMT24 General MT Task: LLMs Are here for Multilingual Machine Translation"

The paper "IKUN for WMT24 General MT Task: LLMs Are here for Multilingual Machine Translation" presents two multilingual systems, IKUN (open) and IKUN-C (constrained), developed for the WMT24 general machine translation (MT) task. Utilizing LLMs—specifically Llama-3-8b and Mistral-7B-v0.3—both systems address the challenge of translating between 11 language directions using a single model. The research highlights the effectiveness of LLMs in multilingual MT, emphasizing their suitability for handling a broad range of languages, including those underrepresented in pre-training data.

Methodology

The systems follow a two-stage approach:

  1. Continuous Pre-training on Monolingual Data: Initially, the LLMs undergo pre-training using monolingual data in 10 languages. This step enriches the models with knowledge across these languages, thereby facilitating effective transfer learning, particularly for low-resource languages.
  2. Fine-tuning on High-Quality Parallel Data: Following pre-training, the models are fine-tuned on high-quality parallel data covering all 11 language directions.

The key differentiation between IKUN and IKUN-C lies in their monolingual pre-training strategies. IKUN-C employs constrained monolingual data, whereas IKUN utilizes the OSCAR dataset.
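
As a concrete illustration, the following is a minimal sketch of this two-stage recipe, assuming the Hugging Face Transformers and Datasets libraries. The file names, hyperparameters, and prompt layout are placeholders for illustration, not the authors' exact configuration; the backbone shown (Mistral-7B-v0.3) corresponds to the constrained IKUN-C setup described in the abstract.

```python
# A minimal sketch of the two-stage recipe (continued pre-training, then
# fine-tuning), assuming Hugging Face Transformers and Datasets. File names,
# hyperparameters, and the prompt layout are illustrative placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "mistralai/Mistral-7B-v0.3"  # IKUN-C backbone per the abstract
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

def run_stage(output_dir, text_file):
    """Causal-LM training on one plain-text file (one example per line)."""
    ds = load_dataset("text", data_files={"train": text_file})
    ds = ds.map(tokenize, batched=True, remove_columns=["text"])
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=output_dir, num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=ds["train"],
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

# Stage 1: continued pre-training on monolingual data in 10 languages.
run_stage("stage1_cpt", "monolingual_10_langs.txt")
# Stage 2: fine-tuning on parallel data covering the 11 translation directions,
# e.g. lines of the form "Translate from English to German: <src> => <tgt>".
run_stage("stage2_sft", "parallel_pairs.txt")
```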

Tokenizer Efficiency

A critical aspect addressed in the paper is tokenizer efficiency across languages. Because most LLMs are pre-trained predominantly on high-resource languages, their tokenizers often segment under-represented languages poorly, producing excessively long token sequences that inflate GPU memory consumption during training. To mitigate this, the IKUN-C system adds new sub-words for Chinese, Japanese, Hindi, and Icelandic; the evaluation shows that this shortens tokenized sentences and reduces training cost.
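
The snippet below sketches the general mechanics of such a vocabulary extension with Hugging Face Transformers: new sub-word tokens are registered with the tokenizer and the model's embedding matrix is resized accordingly. The listed tokens are placeholders; the paper's actual additions would be derived from its monolingual corpora.

```python
# Sketch of extending a tokenizer's vocabulary and resizing the embeddings,
# assuming Hugging Face Transformers. The added tokens are placeholders; in
# practice they would be mined (e.g. with SentencePiece) from monolingual
# Chinese, Japanese, Hindi, and Icelandic data.
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "mistralai/Mistral-7B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

sample = "þetta er íslensk setning"  # an Icelandic example sentence
print("tokens before:", len(tokenizer.tokenize(sample)))

new_subwords = ["þetta", "setning", "翻訳", "अनुवाद"]  # illustrative only
num_added = tokenizer.add_tokens(new_subwords)

# Grow the input/output embedding matrices so the new ids have trainable rows.
model.resize_token_embeddings(len(tokenizer))

print(f"added {num_added} tokens; tokens after:", len(tokenizer.tokenize(sample)))
```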

Experimental Setup

For continuous pre-training, IKUN uses approximately 8 billion tokens from the OSCAR dataset, while IKUN-C draws on constrained-track sources such as News Crawl and the Leipzig Corpora. Fine-tuning uses high-quality parallel data from FLoRes-200, NTREX-128, and previous WMT16-23 datasets. Each language pair is fine-tuned in both directions, so a single model covers all 11 translation directions.
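
A minimal sketch of the bidirectional data preparation is shown below: every sentence pair from a parallel corpus yields one training example per direction. The prompt template is an assumption for illustration, not the paper's published instruction format.

```python
# Illustrative sketch of building bidirectional fine-tuning examples: every
# sentence pair yields one example per translation direction. The prompt
# template is an assumption, not the paper's published instruction format.
from typing import Iterable

def make_bidirectional_examples(
    pairs: Iterable[tuple[str, str]], src_lang: str, tgt_lang: str
) -> list[dict]:
    examples = []
    for src, tgt in pairs:
        examples.append({
            "prompt": f"Translate from {src_lang} to {tgt_lang}:\n{src}\n",
            "completion": tgt,
        })
        examples.append({
            "prompt": f"Translate from {tgt_lang} to {src_lang}:\n{tgt}\n",
            "completion": src,
        })
    return examples

# Example usage with a single English-German pair.
pairs = [("The cat sleeps.", "Die Katze schläft.")]
for ex in make_bidirectional_examples(pairs, "English", "German"):
    print(ex["prompt"] + ex["completion"])
```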

Results

The experimental results reported in the paper show strong performance:

  • IKUN-C: Secured 6 first-place and 3 second-place finishes in the constrained track based on automatic evaluation metrics, demonstrating its competitiveness and efficacy.
  • IKUN: In both open and constrained tracks, IKUN achieved 1 first-place and 2 second-place finishes, emphasizing the robustness of the open model even when compared to specialized systems.

These rankings are based on automatic metrics such as MetricX-23-XL and CometKiwi-DA-XL, combined into an AutoRank score, which correlate well with human judgments. The results support the viability of adapting a single LLM to multilingual MT, showing substantial promise compared to traditional bilingual systems.
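
For readers who want to reproduce this kind of reference-free scoring, the sketch below uses the unbabel-comet package with a smaller public CometKiwi checkpoint. The checkpoint name and the example segments are assumptions for illustration; the shared task's own evaluation used the larger CometKiwi-DA-XL variant.

```python
# Sketch of reference-free quality scoring with the unbabel-comet package.
# The checkpoint name is an assumption for illustration; CometKiwi checkpoints
# may require accepting a license on the Hugging Face Hub before download.
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt22-cometkiwi-da")
model = load_from_checkpoint(model_path)

data = [
    {"src": "Der Hund bellt laut.", "mt": "The dog is barking loudly."},
    {"src": "Kötturinn sefur.", "mt": "The cat is sleeping."},
]
output = model.predict(data, batch_size=8, gpus=0)  # gpus=0 runs on CPU
print(output.system_score)  # corpus-level quality estimate
print(output.scores)        # per-segment scores
```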

Implications and Future Work

The research underscores the practical implications of using LLMs for multilingual MT. The demonstrated ability to handle multiple languages with notable efficiency suggests broader accessibility and applicability of MT systems for diverse language speakers. On a theoretical level, it further consolidates the role of continuous pre-training and fine-tuning as vital steps for leveraging LLMs in low-resource language scenarios.

For future developments, the potential exploration of document-level fine-tuning and context-aware translation strategies could further enhance the performance of these models. As LLMs continue to evolve, these approaches could standardize the development pipeline for robust, multilingual MT systems, supporting a wider array of global languages.

Conclusion

In summary, this paper illustrates a methodical and effective approach for adapting pre-trained LLMs to multilingual MT tasks. By addressing tokenizer efficiency, leveraging monolingual and parallel data for training, and demonstrating competitive performance metrics, the IKUN and IKUN-C systems showcase the promising potential of LLMs in achieving comprehensive multilingual translation capabilities. The encouraging results signify a step forward in the domain of machine translation, paving the way for further innovations and applications in multilingual contexts.
