Improving Multilingual Instruction Finetuning via Linguistically Natural and Diverse Datasets (2407.01853v1)

Published 1 Jul 2024 in cs.CL, cs.AI, and cs.LG

Abstract: Advancements in LLMs have significantly enhanced instruction-following capabilities. However, most Instruction Fine-Tuning (IFT) datasets are predominantly in English, limiting model performance in other languages. Traditional methods for creating multilingual IFT datasets, such as translating existing English IFT datasets or converting existing NLP datasets into IFT datasets via templating, struggle to capture linguistic nuances and ensure prompt (instruction) diversity. To address this issue, we propose a novel method for collecting multilingual IFT datasets that preserves linguistic naturalness and ensures prompt diversity. This approach leverages English-focused LLMs, monolingual corpora, and a scoring function to create high-quality, diversified IFT datasets in multiple languages. Experiments demonstrate that LLMs fine-tuned using these IFT datasets show notable improvements in both generative and discriminative tasks, indicating enhanced language comprehension by LLMs in non-English contexts. Specifically, on the multilingual summarization task, LLMs using our IFT dataset achieved 17.57% and 15.23% improvements over LLMs fine-tuned with translation-based and template-based datasets, respectively.
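
For intuition, here is a minimal sketch of the collection loop the abstract describes: select natural, diverse prompts from a monolingual corpus with a scoring function, then pair each selected prompt with a response generated by an English-focused LLM. The function names, the scoring heuristic, the threshold, and the `llm` interface below are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the multilingual IFT data-collection pipeline.
# All names and heuristics are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class IFTExample:
    instruction: str   # prompt drawn from a monolingual corpus
    response: str      # response produced by an English-focused LLM


def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard overlap, used here as a cheap diversity proxy."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)


def score_prompt(prompt: str, selected: list[str]) -> float:
    """Hypothetical scoring function combining linguistic naturalness with
    diversity relative to prompts already selected."""
    naturalness = min(len(prompt.split()) / 20.0, 1.0)                 # placeholder proxy
    overlap = max((jaccard(prompt, s) for s in selected), default=0.0)
    return naturalness * (1.0 - overlap)                               # reward novel, natural prompts


def build_ift_dataset(corpus: list[str], llm, budget: int,
                      threshold: float = 0.5) -> list[IFTExample]:
    """Select diverse, natural prompts from a monolingual corpus and pair them
    with LLM-generated responses to form a multilingual IFT dataset."""
    selected: list[str] = []
    dataset: list[IFTExample] = []
    for candidate in corpus:
        if len(dataset) >= budget:
            break
        if score_prompt(candidate, selected) >= threshold:
            selected.append(candidate)
            # `llm` stands in for an English-focused LLM prompted to respond
            # in the candidate's language; its interface is assumed.
            dataset.append(IFTExample(candidate, llm(candidate)))
    return dataset
```

In this sketch the scoring function filters the monolingual corpus before any generation happens, so the expensive LLM call is only made for prompts that pass the naturalness-and-diversity check.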

Authors (7)
  1. Sathish Reddy Indurthi (4 papers)
  2. Wenxuan Zhou (61 papers)
  3. Shamil Chollampatt (6 papers)
  4. Ravi Agrawal (4 papers)
  5. Kaiqiang Song (32 papers)
  6. Lingxiao Zhao (48 papers)
  7. Chenguang Zhu (100 papers)
Citations (1)