Self-Guided Noise-Free Data Generation for Efficient Zero-Shot Learning (2205.12679v2)

Published 25 May 2022 in cs.CL

Abstract: There is rising interest in further exploring the zero-shot learning potential of large pre-trained language models (PLMs). A new paradigm, data-generation-based zero-shot learning, has achieved impressive success. In this paradigm, data synthesized by the PLM acts as the carrier of its knowledge and is used to train a task-specific model with orders of magnitude fewer parameters than the PLM, achieving both higher performance and greater efficiency than prompt-based zero-shot learning on PLMs. The main hurdle of this approach is that the data synthesized by the PLM usually contains a significant portion of low-quality samples; fitting a model to such data greatly hampers the task-specific model's performance, making it unreliable for deployment. Previous methods remedy this issue mainly by filtering synthetic data with heuristic metrics (e.g., output confidence) or by refining the data with the help of human experts, which entails excessive manual tuning or high cost. In this paper, we propose SunGen, a novel noise-robust re-weighting framework that automatically constructs high-quality data for zero-shot classification problems. Our framework learns sample weights that indicate data quality, without requiring any human annotation. We verify, both theoretically and empirically, that our method helps construct good-quality synthetic datasets. Notably, SunGen-LSTM yields a 9.8% relative improvement over the baseline in average accuracy across eight established text classification tasks.
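The abstract only sketches the method at a high level. To make the core idea concrete (learning per-sample weights under a noise-robust loss, with no human labels), here is a minimal PyTorch sketch on toy data. The alternating weight/model updates, the generalized cross-entropy loss as the robust objective, and every name below are illustrative assumptions, not the authors' actual SunGen algorithm, which is formulated as a bilevel optimization in the paper.

```python
# Illustrative sketch of noise-aware sample re-weighting on toy synthetic data.
# NOT the authors' implementation: SunGen's exact bilevel procedure and
# objective should be taken from the paper / official code.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for PLM-synthesized data: 2-class points with 30% flipped labels.
n, d = 512, 16
x = torch.randn(n, d)
y = (x @ torch.randn(d, 2)).argmax(dim=1)
flip = torch.rand(n) < 0.3
y[flip] = 1 - y[flip]

model = nn.Linear(d, 2)                      # stand-in for the small task model
log_w = torch.zeros(n, requires_grad=True)   # learnable per-sample weight logits
opt_model = torch.optim.SGD(model.parameters(), lr=0.5)
opt_w = torch.optim.Adam([log_w], lr=0.1)

def gce_loss(logits, targets, q=0.7):
    """Generalized cross-entropy: noise-robust for 0 < q <= 1, per sample."""
    p = F.softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    return (1.0 - p.clamp_min(1e-6) ** q) / q

for step in range(300):
    w = n * torch.softmax(log_w, dim=0)      # weights sum to n (mean weight 1)

    # (1) Fit the small model on the *weighted* standard cross-entropy.
    ce = F.cross_entropy(model(x), y, reduction="none")
    opt_model.zero_grad()
    (w.detach() * ce).mean().backward()
    opt_model.step()

    # (2) Update the weights to minimize the weighted noise-robust loss; the
    #     softmax normalization redistributes mass toward samples the robust
    #     loss deems clean, and the entropy term keeps weights from collapsing.
    robust = gce_loss(model(x).detach(), y)
    entropy = -(torch.softmax(log_w, 0) * torch.log_softmax(log_w, 0)).sum()
    opt_w.zero_grad()
    ((w * robust).mean() - 0.01 * entropy).backward()
    opt_w.step()

# Noisy samples should end up with lower learned weights on average.
w = n * torch.softmax(log_w, dim=0)
print(f"mean weight, clean samples: {w[~flip].mean().item():.3f}")
print(f"mean weight, noisy samples: {w[flip].mean().item():.3f}")
```

On this toy problem the learned weights typically settle lower on the flipped-label samples, mirroring the paper's goal of automatically down-weighting low-quality synthetic examples before they hurt the task-specific model.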

Authors (10)
  1. Jiahui Gao
  2. Renjie Pi
  3. Yong Lin
  4. Hang Xu
  5. Jiacheng Ye
  6. Zhiyong Wu
  7. Weizhong Zhang
  8. Xiaodan Liang
  9. Zhenguo Li
  10. Lingpeng Kong
Citations (34)