
PreAlign: Boosting Cross-Lingual Transfer by Early Establishment of Multilingual Alignment (2407.16222v3)

Published 23 Jul 2024 in cs.CL

Abstract: LLMs demonstrate reasonable multilingual abilities despite predominantly English-centric pretraining. However, the spontaneous multilingual alignment in these models is shown to be weak, leading to unsatisfactory cross-lingual transfer and knowledge sharing. Previous works attempt to address this issue by explicitly injecting multilingual alignment information during or after pretraining, so in the early stage of pretraining the alignment remains too weak to share information or knowledge across languages. In this paper, we propose PreAlign, a framework that establishes multilingual alignment prior to language model pretraining. PreAlign injects multilingual alignment by initializing the model to generate similar representations of aligned words, and preserves this alignment using a code-switching strategy during pretraining. Extensive experiments in a synthetic English to English-Clone setting demonstrate that PreAlign significantly outperforms standard multilingual joint training in language modeling, zero-shot cross-lingual transfer, and cross-lingual knowledge application. Experiments in real-world scenarios further validate PreAlign's effectiveness across various model sizes.
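
The abstract describes two mechanisms: before pretraining, aligned (translation-equivalent) words are initialized to have similar representations, and during pretraining a code-switching strategy keeps that alignment from decaying. The sketch below is a minimal illustration of both steps, assuming a hypothetical token-id alignment dictionary; the copy-based embedding initialization is a simplification used here for illustration, not necessarily the paper's exact initialization procedure.

```python
import random

import torch
import torch.nn as nn

# Hypothetical lexicon of aligned token ids ({source_id: target_id}); in practice
# such pairs would come from a word-translation dictionary.
ALIGNED_PAIRS = {17: 2041, 58: 3310, 99: 4512}


def init_aligned_embeddings(embedding: nn.Embedding, pairs: dict) -> None:
    """Give each aligned target token the same starting vector as its source token,
    so translation-equivalent words begin pretraining with similar representations
    (a simplified stand-in for PreAlign's alignment-based initialization)."""
    with torch.no_grad():
        for src_id, tgt_id in pairs.items():
            embedding.weight[tgt_id].copy_(embedding.weight[src_id])


def code_switch(token_ids, pairs, ratio=0.15):
    """Randomly replace a fraction of source-language tokens with their aligned
    target-language counterparts, preserving the alignment during pretraining."""
    return [pairs[t] if t in pairs and random.random() < ratio else t
            for t in token_ids]


# Usage sketch: align a toy embedding table, then code-switch a toy token sequence.
emb = nn.Embedding(num_embeddings=50_000, embedding_dim=512)
init_aligned_embeddings(emb, ALIGNED_PAIRS)
switched = code_switch([17, 5, 58, 7, 99, 11], ALIGNED_PAIRS, ratio=0.5)
```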

Authors (5)
  1. Jiahuan Li (10 papers)
  2. Shujian Huang (106 papers)
  3. Xinyu Dai (116 papers)
  4. Jiajun Chen (125 papers)
  5. Aarron Ching (1 paper)
Citations (2)