
Demystifying Instruction Mixing for Fine-tuning Large Language Models (2312.10793v3)

Published 17 Dec 2023 in cs.CL and cs.AI

Abstract: Instruction tuning significantly enhances the performance of LLMs across various tasks. However, the procedure for optimizing the mixing of instruction datasets for LLM fine-tuning is still poorly understood. This study categorizes instructions into three primary types: NLP downstream tasks, coding, and general chat. We explore the effects of instruction tuning with different combinations of datasets on LLM performance, and find that certain instruction types are more advantageous for specific applications but can negatively impact other areas. This work provides insights into instruction mixtures, laying the foundations for future research.
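The abstract describes fine-tuning on mixtures drawn from three instruction categories. A minimal sketch of such a mixing step is below; the pool contents, function name, and mixing weights are illustrative assumptions, not the paper's actual datasets or ratios:

```python
import random

# Hypothetical instruction pools for the three categories named in the
# abstract (NLP downstream tasks, coding, general chat); examples invented.
pools = {
    "nlp": [{"instruction": "Classify the sentiment of: 'Great movie!'", "type": "nlp"}],
    "code": [{"instruction": "Write a Python function that reverses a string.", "type": "code"}],
    "chat": [{"instruction": "Recommend a book for a rainy afternoon.", "type": "chat"}],
}

def mix_instructions(pools, weights, n, seed=0):
    """Sample n training examples, choosing a category per example
    according to the given mixing weights, then an instruction from
    that category's pool uniformly at random."""
    rng = random.Random(seed)
    names = list(pools)
    categories = rng.choices(names, weights=[weights[k] for k in names], k=n)
    return [rng.choice(pools[name]) for name in categories]

# Assumed mixing ratio for illustration only (not from the paper).
mixture = mix_instructions(pools, {"nlp": 0.4, "code": 0.3, "chat": 0.3}, n=100)
```

Varying the weights dictionary is the knob the study investigates: shifting mass toward one category can help its target application while degrading others.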

Authors (7)
  1. Renxi Wang (8 papers)
  2. Minghao Wu (31 papers)
  3. Yuxia Wang (41 papers)
  4. Xudong Han (40 papers)
  5. Chiyu Zhang (35 papers)
  6. Haonan Li (43 papers)
  7. Timothy Baldwin (125 papers)