When Less is More: Investigating Data Pruning for Pretraining LLMs at Scale (2309.04564v1)

Published 8 Sep 2023 in cs.CL and cs.LG

Abstract: Large volumes of text data have contributed significantly to the development of LLMs in recent years. This data is typically acquired by scraping the internet, leading to pretraining datasets comprised of noisy web text. To date, efforts to prune these datasets down to a higher quality subset have relied on hand-crafted heuristics encoded as rule-based filters. In this work, we take a wider view and explore scalable estimates of data quality that can be used to systematically measure the quality of pretraining data. We perform a rigorous comparison at scale of the simple data quality estimator of perplexity, as well as more sophisticated and computationally intensive estimates of the Error L2-Norm and memorization. These metrics are used to rank and prune pretraining corpora, and we subsequently compare LLMs trained on these pruned datasets. Surprisingly, we find that the simple technique of perplexity outperforms our more computationally expensive scoring methods. We improve over our no-pruning baseline while training on as little as 30% of the original training dataset. Our work sets the foundation for unexplored strategies in automatically curating high quality corpora and suggests the majority of pretraining data can be removed while retaining performance.

Investigating Data Pruning for Efficient Pretraining of LLMs

This paper explores a critical aspect of large-scale LLM pretraining: the potential benefits of data pruning. The authors challenge the conventional assumption that larger data volumes yield better models by investigating whether strategic pruning can maintain or even improve model quality.

Context and Motivation

LLMs, such as GPT and BERT, are traditionally trained on vast datasets, often compiled from noisy web sources. While the prevalent assumption has been that more data leads to better models, this paper examines whether intelligently reducing the training dataset can improve training efficiency without sacrificing performance.

Methodology

The authors employ perplexity, Error L2-Norm (EL2N), and memorization factors as data quality estimators for pruning pretraining data. These metrics allow ranking dataset examples on a perceived quality scale. By retaining various portions of these ranked datasets (e.g., top, middle, bottom subsets), the paper assesses the impact on the trained LLM's performance.
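
As a rough illustration of this setup (not the authors' released code; the function and parameter names below are hypothetical), the pruning step amounts to scoring each example with a quality estimator, sorting by that score, and keeping a chosen slice of the ranking:

```python
# Illustrative sketch of score-based data pruning. The scoring function is
# pluggable: reference-model perplexity, EL2N, or a memorization score.
from typing import Callable, Sequence


def prune_by_score(
    examples: Sequence[str],
    score_fn: Callable[[str], float],
    keep_fraction: float = 0.3,
    subset: str = "middle",  # which slice of the ranking to retain
) -> list[str]:
    """Rank examples by score_fn (ascending) and keep the requested slice."""
    ranked = sorted(examples, key=score_fn)  # lowest score first
    n_keep = max(1, int(len(ranked) * keep_fraction))
    if subset == "top":
        return ranked[:n_keep]          # lowest-scoring examples
    if subset == "bottom":
        return ranked[-n_keep:]         # highest-scoring examples
    start = len(ranked) // 2 - n_keep // 2
    return ranked[start : start + n_keep]  # middle of the distribution
```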

Key Findings

  1. Perplexity as a Dominant Metric: Surprisingly, the authors find that perplexity, a relatively simple metric, outperforms more computationally intensive measures like EL2N and memorization. Models trained on perplexity-pruned datasets achieve superior performance, with results showing up to a 2.1% improvement in certain scenarios compared to the other methods (a minimal perplexity-scoring sketch follows this list).
  2. Retention Rates and Optimal Subsets: Notably, retaining only 30-50% of the original dataset, when ranked by perplexity, yields better LLM performance than retaining larger volumes of data. This suggests that a significant portion of the pretraining data has limited utility for effective language modeling.
  3. Impact of Reference Model Scale: Increasing the complexity and size of reference models used to calculate perplexity scores leads to better pruning outcomes. Specifically, a 52B parameter model enables more effective pruning than smaller reference models.
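
For concreteness, a minimal sketch of how such per-example perplexity scores could be computed with an off-the-shelf reference model is shown below. It uses Hugging Face transformers with gpt2 purely as a stand-in; the paper's reference models are much larger (up to 52B parameters), and the batching and truncation choices here are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: per-example perplexity under a small reference model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for a much larger reference model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()


@torch.no_grad()
def perplexity(text: str) -> float:
    """Exponentiated mean token-level cross-entropy of `text`."""
    ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids
    loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return torch.exp(loss).item()


# Scores like these would then feed the ranking/pruning step sketched earlier.
scores = [perplexity(doc) for doc in ["an example document", "another document"]]
```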

Implications and Future Directions

This paper suggests possible shifts in pretraining strategies for LLMs. By focusing on quality over quantity, researchers and practitioners might curtail the computational cost and environmental impact of training runs. The findings position perplexity as a practical, computationally efficient criterion for data selection, potentially reshaping future approaches to building robust and efficient LLMs.

Further exploration might include refining combinations of metrics or developing novel ones for even more precise pruning. Evaluating the effects across diverse LLM architectures and extending the experiments to other natural language processing domains could also provide deeper insights.

Overall, this paper underscores the need to reconsider how data is used in LLM pretraining, advocating strategic, quality-focused data curation to drive the next wave of advances in LLMs.

Authors (6)
  1. Max Marion (2 papers)
  2. Ahmet Üstün (38 papers)
  3. Luiza Pozzobon (5 papers)
  4. Alex Wang (32 papers)
  5. Marzieh Fadaee (40 papers)
  6. Sara Hooker (71 papers)