
Curriculum-Based Self-Training Makes Better Few-Shot Learners for Data-to-Text Generation (2206.02712v1)

Published 6 Jun 2022 in cs.CL

Abstract: Despite the success of text-to-text pre-trained models in various natural language generation (NLG) tasks, the generation performance is largely restricted by the number of labeled data in downstream tasks, particularly in data-to-text generation tasks. Existing works mostly utilize abundant unlabeled structured data to conduct unsupervised pre-training for task adaption, which fail to model the complex relationship between source structured data and target texts. Thus, we introduce self-training as a better few-shot learner than task-adaptive pre-training, which explicitly captures this relationship via pseudo-labeled data generated by the pre-trained model. To alleviate the side-effect of low-quality pseudo-labeled data during self-training, we propose a novel method called Curriculum-Based Self-Training (CBST) to effectively leverage unlabeled data in a rearranged order determined by the difficulty of text generation. Experimental results show that our method can outperform fine-tuning and task-adaptive pre-training methods, and achieve state-of-the-art performance in the few-shot setting of data-to-text generation.
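
The abstract describes a self-training loop in which a pre-trained text-to-text model pseudo-labels unlabeled structured data and then consumes those pseudo-labeled pairs in an order determined by generation difficulty. The sketch below is a minimal illustration of that idea, not the authors' implementation; the helpers `train` and `score_difficulty`, the confidence-based difficulty measure, and the round schedule are all assumptions introduced here for clarity.

```python
def curriculum_self_training(model, labeled, unlabeled, num_rounds=3):
    """Illustrative curriculum-based self-training loop (names are hypothetical)."""
    # 1. Fine-tune the pre-trained model on the small labeled set first.
    model = train(model, labeled)

    for round_idx in range(num_rounds):
        # 2. Pseudo-label the unlabeled structured data with the current model.
        pseudo = [(x, model.generate(x)) for x in unlabeled]

        # 3. Rank pseudo-labeled pairs by an estimated generation difficulty
        #    (e.g. a length-normalized model log-likelihood; assumed scorer).
        pseudo.sort(key=lambda pair: score_difficulty(model, *pair))

        # 4. Curriculum schedule: early rounds keep only the easier fraction,
        #    later rounds gradually admit harder pseudo-labeled examples.
        cutoff = int(len(pseudo) * (round_idx + 1) / num_rounds)
        curriculum_subset = pseudo[:cutoff]

        # 5. Retrain on the labeled data plus the selected pseudo-labeled subset.
        model = train(model, labeled + curriculum_subset)

    return model
```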

Authors (7)
  1. Pei Ke (37 papers)
  2. Haozhe Ji (11 papers)
  3. Zhenyu Yang (56 papers)
  4. Yi Huang (161 papers)
  5. Junlan Feng (63 papers)
  6. Xiaoyan Zhu (54 papers)
  7. Minlie Huang (225 papers)
Citations (6)