What Matters in LLM-generated Data: Diversity and Its Effect on Model Fine-Tuning (2506.19262v1)
Abstract: With the remarkable generative capabilities of LLMs, using LLM-generated data to train downstream models has emerged as a promising approach to mitigating data scarcity in specific domains and reducing time-consuming annotation. However, recent studies have highlighted a critical issue: iterative training on self-generated data leads to model collapse, where model performance degrades over time. Despite extensive research on the implications of LLM-generated data, prior work has often neglected the importance of data diversity, a key factor in data quality. In this work, we aim to understand how the diversity of LLM-generated data affects downstream model performance. Specifically, we explore how varying levels of diversity in LLM-generated data influence downstream models. Additionally, we investigate the performance of models trained on mixtures containing different proportions of LLM-generated data, which we refer to as synthetic data. Our experimental results show that, with minimal distribution shift, moderately diverse LLM-generated data can enhance model performance in scenarios with insufficient labeled data, whereas highly diverse generated data has a negative impact. We hope our empirical findings will offer valuable guidance for future studies on LLMs as data generators.
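To make the experimental setup concrete, below is a minimal, hypothetical sketch (not the authors' actual protocol or metric) of the two knobs the abstract describes: scoring a generated pool with a simple lexical diversity proxy (distinct-n) and building training sets that mix real and synthetic examples at a chosen proportion. Function names, the distinct-n choice, and the toy data are illustrative assumptions only.

```python
# Hypothetical sketch: a simple diversity proxy and a synthetic/real mixing routine.
# The paper's actual diversity measure and mixing scheme may differ.
import random


def distinct_n(texts, n=2):
    """Fraction of unique n-grams over all n-grams; a rough lexical diversity proxy."""
    total, unique = 0, set()
    for t in texts:
        tokens = t.split()
        grams = list(zip(*(tokens[i:] for i in range(n))))
        total += len(grams)
        unique.update(grams)
    return len(unique) / total if total else 0.0


def mix_datasets(real, synthetic, synthetic_ratio, size, seed=0):
    """Sample a training set of `size` examples containing the given synthetic proportion."""
    rng = random.Random(seed)
    n_syn = int(round(size * synthetic_ratio))
    return rng.sample(synthetic, n_syn) + rng.sample(real, size - n_syn)


# Toy usage: sweep the synthetic proportion while tracking the pool's diversity score.
real = [f"real labeled example {i}" for i in range(100)]
synthetic = [f"llm generated example {i} about topic {i % 7}" for i in range(100)]
print("distinct-2 of synthetic pool:", round(distinct_n(synthetic), 3))
for ratio in (0.0, 0.25, 0.5, 1.0):
    train = mix_datasets(real, synthetic, ratio, size=40)
    n_syn = sum(t.startswith("llm") for t in train)
    print(f"synthetic ratio {ratio}: {n_syn} of {len(train)} examples are generated")
```

In a study like this, one would fine-tune the downstream model on each mixture and compare performance across diversity levels and synthetic ratios; the sketch only shows how such mixtures could be constructed.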
- Yuchang Zhu (12 papers)
- Zhonghua Zhen (1 paper)
- Qunshu Lin (11 papers)
- Haotong Wei (3 papers)
- Xiaolong Sun (5 papers)
- Zixuan Yu (2 papers)
- Minghao Liu (44 papers)
- Zibin Zheng (194 papers)
- Liang Chen (360 papers)