Unlocking the Potential of Model Merging for Low-Resource Languages (2407.03994v3)

Published 4 Jul 2024 in cs.CL and cs.AI

Abstract: Adapting LLMs to new languages typically involves continual pre-training (CT) followed by supervised fine-tuning (SFT). However, this CT-then-SFT approach struggles with limited data in the context of low-resource languages, failing to balance language modeling and task-solving capabilities. We thus propose model merging as an alternative for low-resource languages, combining models with distinct capabilities into a single model without additional training. We use model merging to develop task-solving LLMs for low-resource languages without SFT data in the target languages. Our experiments based on Llama-2-7B demonstrate that model merging effectively endows LLMs for low-resource languages with task-solving abilities, outperforming CT-then-SFT in scenarios with extremely scarce data. Observing performance saturation in model merging with more training tokens, we further analyze the merging process and introduce a slack variable to the model merging algorithm to mitigate the loss of important parameters, thereby enhancing performance. We hope that model merging can benefit more human languages suffering from data scarcity with its higher data efficiency.
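
The core idea of the abstract, combining a continually pre-trained target-language model with an English task-solving (SFT) model by operating directly on their weights, can be illustrated with a simple task-vector merge. The sketch below is an illustrative assumption, not the paper's exact algorithm (the paper further introduces a slack variable to protect important parameters); the two non-base checkpoint paths are hypothetical placeholders, and the scaling factors are tunable hyperparameters.

```python
# Minimal task-vector merging sketch (illustrative; not the paper's exact method).
# Requires: torch, transformers. Non-base checkpoint paths are hypothetical.
import torch
from transformers import AutoModelForCausalLM

def load_state(name):
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float32)
    return model.state_dict()

base = load_state("meta-llama/Llama-2-7b-hf")               # shared base model
lang = load_state("path/to/target-language-ct-checkpoint")  # hypothetical: CT on the target language
task = load_state("path/to/english-sft-checkpoint")         # hypothetical: SFT on English task data

merged = {}
for key, base_param in base.items():
    # Task vectors: each specialized model's parameter delta from the base.
    delta_lang = lang[key] - base_param
    delta_task = task[key] - base_param
    # Add both deltas back onto the base; the 1.0 scaling factors are placeholders.
    merged[key] = base_param + 1.0 * delta_lang + 1.0 * delta_task

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model.load_state_dict(merged)
model.save_pretrained("llama2-7b-merged-target-language")
```

Because merging only requires arithmetic on existing checkpoints, no gradient updates or target-language SFT data are needed, which is the source of the data efficiency the abstract highlights.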

Authors (7)
  1. Mingxu Tao (12 papers)
  2. Chen Zhang (403 papers)
  3. Quzhe Huang (22 papers)
  4. Tianyao Ma (3 papers)
  5. Songfang Huang (51 papers)
  6. Dongyan Zhao (144 papers)
  7. Yansong Feng (81 papers)
Citations (3)