How to Synthesize Text Data without Model Collapse? (2412.14689v1)

Published 19 Dec 2024 in cs.CL, cs.AI, and cs.LG

Abstract: Model collapse in synthetic data indicates that iterative training on self-generated data leads to a gradual decline in performance. With the proliferation of AI models, synthetic data will fundamentally reshape the web data ecosystem. Future GPT-${n}$ models will inevitably be trained on a blend of synthetic and human-produced data. In this paper, we focus on two questions: what is the impact of synthetic data on LLM training, and how to synthesize data without model collapse? We first pre-train LLMs across different proportions of synthetic data, revealing a negative correlation between the proportion of synthetic data and model performance. We further conduct statistical analysis on synthetic data to uncover distributional shift phenomenon and over-concentration of n-gram features. Inspired by the above findings, we propose token editing on human-produced data to obtain semi-synthetic data. As a proof of concept, we theoretically demonstrate that token-level editing can prevent model collapse, as the test error is constrained by a finite upper bound. We conduct extensive experiments on pre-training from scratch, continual pre-training, and supervised fine-tuning. The results validate our theoretical proof that token-level editing improves data quality and enhances model performance.

An Analysis of Synthetic Text Data and Model Collapse Prevention

The paper "How to Synthesize Text Data without Model Collapse?" addresses the challenges associated with training generative LLMs using synthetic data. The authors delve into the phenomenon of "model collapse," where iterative training on self-generated synthetic data results in degraded model performance. With the anticipated reliance on mixed datasets of human-produced and synthetic data in future AI model training, understanding the repercussions of synthetic data on model effectiveness and strategies to avert model collapse is crucial.

Key Findings and Methodology

The research is structured to answer two primary inquiries:

  1. The influence of synthetic data on LLM training.
  2. Methods to generate synthetic data that do not lead to model collapse.

The authors' initial experiments reveal a negative correlation between the proportion of synthetic data in the training mixture and LLM performance. By pre-training on varying mixtures of human and synthetic data, they identify a "non-iterative model collapse": performance degrades even when training is not recursively iterative. They attribute this to distributional discrepancies between synthetic and authentic data, in particular the loss of long-tail coverage and the over-concentration of certain n-gram features in synthetic corpora.
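The reported n-gram over-concentration can be probed with a simple diagnostic. The sketch below is illustrative rather than the authors' analysis code; the function name, whitespace tokenization, and toy corpora are assumptions. It measures how much of a corpus's n-gram mass falls on its most frequent n-grams, a quantity that the paper's statistics indicate is markedly higher for synthetic text than for human text.

    from collections import Counter

    def ngram_concentration(texts, n=2, top_k=1000):
        """Fraction of all n-gram occurrences accounted for by the top_k most
        frequent n-grams; higher values mean a more concentrated, less
        long-tailed distribution."""
        counts = Counter()
        for text in texts:
            tokens = text.split()  # whitespace tokenization, for illustration only
            counts.update(zip(*(tokens[i:] for i in range(n))))
        total = sum(counts.values())
        if total == 0:
            return 0.0
        return sum(c for _, c in counts.most_common(top_k)) / total

    # Toy usage; in practice these would be large human and synthetic corpora.
    human_texts = ["the quick brown fox jumps over the lazy dog"]
    synthetic_texts = ["the model said the model said the model said"]
    print(ngram_concentration(human_texts), ngram_concentration(synthetic_texts))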

In response to these findings, the authors propose a novel strategy termed "token-level editing."

Token-Level Editing Strategy

Token-level editing generates what the authors call "semi-synthetic" data. Rather than replacing human-produced data wholesale, the method resamples only individual tokens to which a prior language model assigns high probability, leaving the rest of the human-written text unchanged. This preserves the critical distributional characteristics of human-authored data, particularly its long-tail coverage, and, as the authors' theoretical analysis suggests, keeps the test error under a finite upper bound. The approach therefore averts collapse by maintaining distribution coverage.
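As a rough illustration of the editing step, here is a minimal sketch, not the authors' released implementation: the threshold value, the resampling choice, and the Hugging Face-style forward pass returning .logits are assumptions.

    import torch

    def token_level_edit(token_ids, prior_model, threshold=0.99):
        """Build semi-synthetic data by token-level editing: tokens the prior
        model already predicts with probability >= threshold are resampled
        from that model; all other human-written tokens are kept as-is."""
        input_ids = torch.tensor([token_ids])
        with torch.no_grad():
            logits = prior_model(input_ids).logits        # shape (1, T, vocab)
        probs = torch.softmax(logits, dim=-1)
        edited = list(token_ids)
        for i in range(1, len(token_ids)):
            p_next = probs[0, i - 1]                      # distribution over token i given its prefix
            if p_next[token_ids[i]] >= threshold:         # "easy", over-predictable token
                edited[i] = int(torch.multinomial(p_next, 1))
        return edited

Because only high-confidence tokens are touched, the long tail of the human-written distribution is largely left in place, which is the property the theoretical argument below relies on.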

Theoretical Implications and Experimental Validation

The paper's theoretical framework demonstrates that, unlike recursive training on self-generated outputs, token-level editing avoids the continual buildup of error that typically leads to model collapse. The analysis shows that controlled editing of tokens keeps the test error under a fixed bound, so model performance does not degrade over successive generations.
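Schematically, and under the linear-regression style of analysis commonly used to study collapse (the symbols here, d for feature dimension, T for samples per generation, and sigma^2 for noise variance, follow that related literature rather than reproducing the paper's exact theorem):

    % Illustrative contrast only; precise constants and conditions are in the paper.
    \text{recursive, fully synthetic: } E_{\text{test}}^{(n)} \;\approx\; n \cdot \frac{\sigma^{2} d}{T}
    \qquad
    \text{token-edited, semi-synthetic: } E_{\text{test}}^{(n)} \;\le\; C \;<\; \infty \quad \forall\, n

That is, error accumulates generation after generation in the fully synthetic loop, while the edited data keep it below a fixed constant.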

The supporting experiments cover pre-training from scratch, continual pre-training, and supervised fine-tuning. Across all three settings, the results consistently show that token-level editing improves model performance without enlarging the data corpus.

Broader Implications and Future Directions

This work highlights the importance of balancing the informative signal in synthetic data against the long-tail distributional coverage inherent in genuine datasets. As AI systems increasingly incorporate synthetic data, maintaining model generalization and performance becomes ever more pertinent. Methods like token-level editing point toward more robust training corpora that resist the performance degradation characteristic of model collapse.

For future work, the research suggests further exploration of the trade-off between efficiency and effectiveness in synthetic data generation, and of how best to mix human- and machine-generated content. Such a balance matters not only for large-scale LLMs but also for tasks where nuanced understanding and generative diversity are crucial.

In conclusion, the authors of this paper provide a rigorous analysis and a novel method to avert performance loss in LLMs trained on synthetic data, setting a precedent for future exploration and application of synthetic data in AI training.

Authors (10)
  1. Xuekai Zhu (12 papers)
  2. Daixuan Cheng (8 papers)
  3. Hengli Li (7 papers)
  4. Kaiyan Zhang (33 papers)
  5. Ermo Hua (16 papers)
  6. Xingtai Lv (13 papers)
  7. Ning Ding (122 papers)
  8. Zhouhan Lin (57 papers)
  9. Zilong Zheng (63 papers)
  10. Bowen Zhou (141 papers)