
Large-Scale Data Selection for Instruction Tuning

Published 3 Mar 2025 in cs.CL (arXiv:2503.01807v2)

Abstract: Selecting high-quality training data from a larger pool is a crucial step when instruction-tuning LLMs, as carefully curated datasets often produce models that outperform those trained on much larger, noisier datasets. Automated data selection approaches for instruction-tuning are typically tested by selecting small datasets (roughly 10k samples) from small pools (100-200k samples). However, popular deployed instruction-tuned models often train on hundreds of thousands to millions of samples, subsampled from even larger data pools. We present a systematic study of how well data selection methods scale to these settings, selecting up to 2.5M samples from pools of up to 5.8M samples and evaluating across 7 diverse tasks. We show that many recently proposed methods fall short of random selection in this setting (while using more compute), and even decline in performance when given access to larger pools of data to select over. However, we find that a variant of representation-based data selection (RDS+), which uses weighted mean pooling of pretrained LM hidden states, consistently outperforms more complex methods across all settings tested -- all whilst being more compute-efficient. Our findings highlight that the scaling properties of proposed automated selection methods should be more closely examined. We release our code, data, and models at https://github.com/hamishivi/automated-instruction-selection.
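The core mechanism behind RDS+ is embedding each candidate training example via a weighted mean over the hidden states of a pretrained LM. Below is a minimal sketch of one such pooling. The stand-in model ("gpt2") and the linearly increasing position weighting are illustrative assumptions, not necessarily the paper's exact configuration:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# "gpt2" is a small stand-in; the paper uses a larger pretrained LM.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 defines no pad token
model = AutoModel.from_pretrained("gpt2")
model.eval()

@torch.no_grad()
def embed(texts: list[str]) -> torch.Tensor:
    """Embed texts as position-weighted means of final-layer hidden states."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state    # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)  # (B, T, 1), 0 on padding
    # Linearly increasing position weights (an assumption): later tokens
    # contribute more, and padded positions get zero weight.
    positions = torch.arange(1, hidden.size(1) + 1).view(1, -1, 1) * mask
    weights = positions / positions.sum(dim=1, keepdim=True)
    pooled = (hidden * weights).sum(dim=1)        # (B, H)
    return torch.nn.functional.normalize(pooled, dim=-1)
```

RDS+ then scores candidate examples by the similarity between these embeddings and embeddings of task examples; a toy sketch of that step appears after the summary below.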

Summary


This paper investigates data selection methods for instruction tuning when scaling to millions of samples.

Key findings:

  • Representation-based data selection (RDS+), which embeds examples via weighted mean pooling of pretrained LM hidden states (sketched in the code block above), outperforms the other methods tested while being more compute-efficient.
  • Many selection methods degrade in performance as the data pool grows, whereas RDS+ keeps improving with pool size, demonstrating its scalability.
  • RDS+ consistently beats balanced random selection across data selection sizes and can achieve better performance with less compute when selecting more data points; a toy selection sketch follows this list.
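To make the selection step concrete, here is a toy sketch of similarity-based top-k selection over precomputed, unit-normalized embeddings like those above. The max-over-queries scoring rule and the function name select_top_k are illustrative assumptions, a simplified stand-in for the paper's actual selection procedure:

```python
import torch

def select_top_k(pool_emb: torch.Tensor, query_emb: torch.Tensor, k: int) -> torch.Tensor:
    """Pick the k pool examples most similar to any query example.

    Both inputs are unit-normalized: pool_emb is (N, H), query_emb is (Q, H).
    On normalized vectors, cosine similarity is a plain dot product.
    """
    scores = (pool_emb @ query_emb.T).max(dim=1).values  # (N,) best match per example
    return scores.topk(k).indices

# Toy usage with random vectors standing in for real LM embeddings:
pool = torch.nn.functional.normalize(torch.randn(1000, 64), dim=-1)
queries = torch.nn.functional.normalize(torch.randn(8, 64), dim=-1)
chosen = select_top_k(pool, queries, k=100)
print(chosen.shape)  # torch.Size([100])
```

Because the pool only needs to be embedded once, the scoring step reduces to a single matrix multiply, which is consistent with the paper's observation that RDS+ remains compute-efficient as selection sizes grow.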
