One-Shot Learning as Instruction Data Prospector for Large Language Models (2312.10302v4)
Abstract: Contemporary practices in instruction tuning often hinge on scaling up data volume without a clear strategy for ensuring data quality, inadvertently introducing noise that may compromise model performance. To address this challenge, we introduce \textsc{Nuggets}, a novel and efficient methodology that leverages one-shot learning to identify and select high-quality instruction data from extensive datasets. \textsc{Nuggets} assesses the potential of individual instruction examples to serve as effective one-shot demonstrations, thereby identifying those that can significantly improve performance across diverse tasks. It employs a scoring system based on the impact of candidate examples on the perplexity of a diverse anchor set, facilitating the selection of the most advantageous data for instruction tuning. Through comprehensive evaluations on two benchmarks, MT-Bench and Alpaca-Eval, we show that instruction tuning with the top 1\% of examples curated by \textsc{Nuggets} substantially outperforms conventional methods that employ the entire dataset.
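The scoring idea described in the abstract, checking whether prepending a candidate example as a one-shot demonstration makes the model more confident (lower perplexity, higher log-likelihood) on each anchor task, can be sketched in a few lines. The following is a minimal illustration under assumptions, not the paper's released implementation: the model choice, prompt format, and the `golden_score` / `answer_log_likelihood` helpers are all hypothetical, and the exact scoring rule in the paper may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical setup: a small model stands in for the larger
# instruction-tunable LLMs evaluated in the paper.
MODEL_NAME = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

@torch.no_grad()
def answer_log_likelihood(prompt: str, answer: str) -> float:
    """Mean log-likelihood of the answer tokens conditioned on the prompt
    (higher is better; this is the negative per-token cross-entropy,
    i.e. the negative log of the answer's perplexity)."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full = tokenizer(prompt + answer, return_tensors="pt").input_ids
    labels = full.clone()
    labels[:, :prompt_len] = -100  # mask the prompt so only the answer is scored
    # Note: tokenizing `prompt` and `prompt + answer` separately can differ
    # by one token at the boundary; acceptable for a sketch.
    loss = model(full, labels=labels).loss  # mean NLL over unmasked tokens
    return -loss.item()

def golden_score(candidate: dict, anchors: list[dict]) -> float:
    """Fraction of anchor tasks whose answer becomes more likely when the
    candidate is prepended as a one-shot demonstration (assumed scoring rule)."""
    improved = 0
    for a in anchors:
        zero_shot = answer_log_likelihood(a["instruction"] + "\n", a["output"])
        demo = f"{candidate['instruction']}\n{candidate['output']}\n\n"
        one_shot = answer_log_likelihood(demo + a["instruction"] + "\n", a["output"])
        improved += int(one_shot > zero_shot)
    return improved / len(anchors)

# Usage: score every candidate against the anchor set, then keep only the
# top-scoring slice (e.g. the top 1%) for instruction tuning.
# ranked = sorted(candidates, key=lambda c: golden_score(c, anchors), reverse=True)
```

Under this reading, the abstract's top-1% result corresponds to fine-tuning only on the highest-ranked slice of candidates rather than the full dataset.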
- Yunshui Li
- Binyuan Hui
- Xiaobo Xia
- Min Yang
- Lei Zhang
- Shuzheng Si
- Junhao Liu
- Tongliang Liu
- Fei Huang
- Yongbin Li
- Jiaxi Yang
- Ling-Hao Chen