Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities (2404.17790v1)

Published 27 Apr 2024 in cs.CL and cs.AI

Abstract: Cross-lingual continual pre-training of LLMs initially trained on an English corpus allows us to leverage the vast amount of English language resources and reduce the pre-training cost. In this study, we constructed Swallow, an LLM with enhanced Japanese capability, by extending the vocabulary of Llama 2 to include Japanese characters and conducting continual pre-training on a large Japanese web corpus. Experimental results confirmed that the performance on Japanese tasks drastically improved through continual pre-training, and the performance monotonically increased with the amount of training data up to 100B tokens. Consequently, Swallow achieved superior performance compared to other LLMs that were trained from scratch on English and Japanese. An analysis of the effects of continual pre-training revealed that it was particularly effective for Japanese question answering tasks. Furthermore, to elucidate effective methodologies for cross-lingual continual pre-training from English to Japanese, we investigated the impact of vocabulary expansion and the effectiveness of incorporating parallel corpora. The results showed that the efficiency gained through vocabulary expansion had no negative impact on performance, except for the summarization task, and that the combined use of parallel corpora enhanced translation ability.
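
The abstract describes extending Llama 2's vocabulary with Japanese tokens before continual pre-training on a Japanese web corpus. Below is a minimal sketch of what that vocabulary-expansion step might look like using Hugging Face Transformers; the checkpoint name, the token list, and the surrounding training setup are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the paper's code): add Japanese tokens to an existing
# tokenizer and resize the embedding matrix before continual pre-training.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Placeholder Japanese tokens; in practice these would come from a subword
# vocabulary trained on a large Japanese corpus.
japanese_tokens = ["日本", "言語", "学習"]
num_added = tokenizer.add_tokens(japanese_tokens)

# Give the new tokens trainable embeddings; continual pre-training on the
# Japanese corpus then updates all model weights, including these rows.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; new vocabulary size: {len(tokenizer)}")
```

The expanded vocabulary is what yields the tokenization-efficiency gains discussed in the abstract, since Japanese text segments into far fewer tokens than under the original Llama 2 tokenizer.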

Authors (10)
  1. Kazuki Fujii (14 papers)
  2. Taishi Nakamura (11 papers)
  3. Mengsay Loem (8 papers)
  4. Hiroki Iida (3 papers)
  5. Masanari Ohi (9 papers)
  6. Kakeru Hattori (5 papers)
  7. Hirai Shota (2 papers)
  8. Sakae Mizuki (7 papers)
  9. Rio Yokota (64 papers)
  10. Naoaki Okazaki (70 papers)
Citations (34)
