
Take the essence and discard the dross: A Rethinking on Data Selection for Fine-Tuning Large Language Models (2406.14115v1)

Published 20 Jun 2024 in cs.CL

Abstract: Data selection for fine-tuning LLMs aims to choose a high-quality subset of a given candidate dataset for training a Pending Fine-tune Model (PFM) into a Selective-Enhanced Model (SEM). It can improve model performance and accelerate training. Although a few surveys have investigated related work on data selection, a comprehensive comparison of existing methods is still lacking because of their varied experimental settings. To address this issue, we first propose a three-stage scheme for data selection and comprehensively review existing works according to this scheme. Then, we design a unified comparison method with ratio-based efficiency indicators and ranking-based feasibility indicators to overcome the difficulty of comparing methods across diverse experimental settings. After an in-depth comparative analysis, we find that more targeted methods with data-specific and model-specific quality labels achieve higher efficiency, but that introducing additional noise information should be avoided when designing selection algorithms. Finally, we summarize the trends in data selection and highlight the short-term and long-term challenges to guide future research.
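The abstract names two families of indicators but does not define them here. As a rough illustration only, the sketch below assumes an efficiency indicator computed as the SEM's performance gain over the PFM divided by the fraction of candidate data selected, and a feasibility indicator that ranks methods by mean rank across benchmarks so results from different experimental settings stay comparable. All function names, signatures, and toy numbers are hypothetical, not the paper's actual formulas.

```python
from typing import Dict, List, Sequence


def efficiency_indicator(sem_score: float, pfm_score: float,
                         selected_size: int, candidate_size: int) -> float:
    """Hypothetical ratio-based efficiency: performance gain of the
    Selective-Enhanced Model (SEM) over the Pending Fine-tune Model (PFM),
    normalized by the fraction of the candidate dataset that was selected."""
    performance_gain = sem_score - pfm_score
    selection_ratio = selected_size / candidate_size
    return performance_gain / selection_ratio


def feasibility_ranking(method_scores: Dict[str, Sequence[float]]) -> List[str]:
    """Hypothetical ranking-based feasibility: rank each method on every
    benchmark (rank 1 is best), then order methods by their mean rank."""
    methods = list(method_scores)
    n_benchmarks = len(next(iter(method_scores.values())))
    mean_ranks = {m: 0.0 for m in methods}
    for b in range(n_benchmarks):
        # Higher score is better on this benchmark.
        ordered = sorted(methods, key=lambda m: method_scores[m][b], reverse=True)
        for rank, m in enumerate(ordered, start=1):
            mean_ranks[m] += rank / n_benchmarks
    return sorted(methods, key=lambda m: mean_ranks[m])


# Toy usage: a 10% subset lifts accuracy from 0.65 to 0.72, and two
# selection methods are compared across three benchmarks.
print(efficiency_indicator(sem_score=0.72, pfm_score=0.65,
                           selected_size=5_000, candidate_size=50_000))
print(feasibility_ranking({"method_A": [0.72, 0.61, 0.68],
                           "random_baseline": [0.66, 0.58, 0.64]}))
```

The ratio form rewards methods that achieve large gains from small subsets, while the rank aggregation sidesteps absolute-score differences between experimental setups, which is consistent with the abstract's motivation for a unified comparison.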

Authors (4)
  1. Ziche Liu (3 papers)
  2. Rui Ke (3 papers)
  3. Feng Jiang (97 papers)
  4. Haizhou Li (285 papers)