M2Lingual: Enhancing Multilingual, Multi-Turn Instruction Alignment in Large Language Models (2406.16783v2)

Published 24 Jun 2024 in cs.CL, cs.AI, and cs.LG

Abstract: Instruction finetuning (IFT) is critical for aligning LLMs to follow instructions. While many effective IFT datasets have been introduced recently, they predominantly focus on high-resource languages like English. To better align LLMs across a broad spectrum of languages and tasks, we propose a fully synthetic, novel taxonomy (Evol) guided Multilingual, Multi-turn instruction finetuning dataset, called M2Lingual. It is constructed by first selecting a diverse set of seed examples and then utilizing the proposed Evol taxonomy to convert these seeds into complex and challenging multi-turn instructions. We demonstrate the effectiveness of M2Lingual by training LLMs of varying sizes and showcasing the enhanced performance across a diverse set of languages. We contribute the 2-step Evol taxonomy with the guided generation code: https://github.com/ServiceNow/M2Lingual, as well as the first fully synthetic, general and task-oriented, multi-turn, multilingual dataset built with Evol - M2Lingual: https://huggingface.co/datasets/ServiceNow-AI/M2Lingual - containing 182K total IFT pairs, covering 70 languages and 17+ NLP tasks.
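
The pipeline described in the abstract (pick diverse seeds, then apply a 2-step Evol pass that first complicates an instruction and then extends it into a multi-turn exchange) can be illustrated with a minimal sketch. Everything below is illustrative only: the `EVOL_OPS` names, prompt wording, and `call_llm` helper are placeholders, not the authors' taxonomy or code, which are available at the linked GitHub repository.

```python
# Minimal sketch of a taxonomy-guided, 2-step Evol generation pass.
# The Evol operations and prompts here are hypothetical stand-ins; see
# https://github.com/ServiceNow/M2Lingual for the actual implementation.
import random

# Hypothetical Evol transformations; the paper's real taxonomy differs.
EVOL_OPS = ["add_constraints", "deepen_reasoning", "increase_specificity"]


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM completion call (wire in your own endpoint)."""
    raise NotImplementedError("connect this to a model API of your choice")


def evolve_seed(seed_instruction: str, language: str) -> list[dict]:
    """Step 1: evolve a seed into a harder instruction.
    Step 2: extend the exchange into multiple turns.
    Returns the conversation as a list of chat turns."""
    op = random.choice(EVOL_OPS)

    # Step 1: complicate the seed instruction, preserving its language.
    evolved = call_llm(
        f"Rewrite this {language} instruction using the '{op}' strategy, "
        f"keeping it in {language}:\n{seed_instruction}"
    )
    answer = call_llm(evolved)

    # Step 2: generate a challenging follow-up to make the pair multi-turn.
    follow_up = call_llm(
        f"Given this exchange, write a challenging follow-up question in "
        f"{language}:\nQ: {evolved}\nA: {answer}"
    )
    return [
        {"role": "user", "content": evolved},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": follow_up},
        {"role": "assistant", "content": call_llm(follow_up)},
    ]
```

Running `evolve_seed` over a pool of seeds spanning many languages and NLP tasks would, under these assumptions, yield multi-turn IFT pairs of the kind the dataset contains; the released M2Lingual corpus covers 70 languages and 17+ tasks.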

Authors (5)
  1. Rishabh Maheshwary (14 papers)
  2. Vikas Yadav (38 papers)
  3. Hoang Nguyen (24 papers)
  4. Khyati Mahajan (4 papers)
  5. Sathwik Tejaswi Madhusudhan (10 papers)
Citations (1)