Unveiling the Flaws: Exploring Imperfections in Synthetic Data and Mitigation Strategies for Large Language Models (2406.12397v1)

Published 18 Jun 2024 in cs.CL

Abstract: Synthetic data has been proposed as a solution to address the issue of high-quality data scarcity in the training of LLMs. Studies have shown that synthetic data can effectively improve the performance of LLMs on downstream benchmarks. However, despite its potential benefits, our analysis suggests that there may be inherent flaws in synthetic data. The uniform format of synthetic data can lead to pattern overfitting and cause significant shifts in the output distribution, thereby reducing the model's instruction-following capabilities. Our work delves into these specific flaws associated with question-answer (Q-A) pairs, a prevalent type of synthetic data, and presents a method based on unlearning techniques to mitigate these flaws. The empirical results demonstrate the effectiveness of our approach, which can reverse the instruction-following issues caused by pattern overfitting without compromising performance on benchmarks at relatively low cost. Our work has yielded key insights into the effective use of synthetic data, aiming to promote more robust and efficient LLM training.
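The abstract names an unlearning-based fix for pattern overfitting but does not spell out its formulation. The sketch below illustrates one common unlearning recipe that matches the description at a high level: gradient ascent on the uniform synthetic Q-A data to be forgotten, combined with a standard retention loss so benchmark ability is preserved. It assumes a Hugging Face-style causal LM interface (`model(..., labels=...)` returning an object with `.loss`); it is not necessarily the authors' exact method.

```python
# Minimal sketch (assumed recipe, not the paper's exact one): pair gradient
# *ascent* on overfit synthetic Q-A examples with ordinary language-modeling
# loss on a retained set, so the uniform Q-A pattern is unlearned without
# erasing general instruction-following ability.
import torch


def unlearning_step(model, optimizer, forget_batch, retain_batch, forget_weight=0.1):
    """One combined update: minimize LM loss on `retain_batch`,
    maximize it (i.e., unlearn) on `forget_batch` of synthetic Q-A text."""
    model.train()
    optimizer.zero_grad()

    # Standard next-token cross-entropy on data we want to keep performing well on.
    retain_out = model(**retain_batch, labels=retain_batch["input_ids"])
    retain_loss = retain_out.loss

    # Negated cross-entropy on the synthetic Q-A pattern we want to forget.
    forget_out = model(**forget_batch, labels=forget_batch["input_ids"])
    forget_loss = -forget_out.loss

    # Small forget_weight keeps the ascent term from destabilizing training.
    loss = retain_loss + forget_weight * forget_loss
    loss.backward()
    optimizer.step()
    return retain_loss.item(), forget_out.loss.item()
```

In practice the forget set would be the Q-A-formatted synthetic corpus that caused the output-distribution shift, and the retain set a small slice of general instruction data; the weighting between the two terms is a tunable assumption here.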

Authors (6)
  1. Jie Chen (602 papers)
  2. Yupeng Zhang (25 papers)
  3. Bingning Wang (29 papers)
  4. Wayne Xin Zhao (196 papers)
  5. Ji-Rong Wen (299 papers)
  6. Weipeng Chen (56 papers)
Citations (3)