Learning towards Selective Data Augmentation for Dialogue Generation (2303.09719v1)

Published 17 Mar 2023 in cs.CL

Abstract: As it is cumbersome and expensive to acquire a huge amount of data for training neural dialog models, data augmentation is proposed to effectively utilize existing training samples. However, current data augmentation techniques for the dialog generation task mostly augment all cases in the training dataset without considering the intrinsic attributes of different cases. We argue that not all cases are beneficial for the augmentation task, and that the cases suitable for augmentation should satisfy the following two attributes: (1) low-quality (the dialog model cannot generate a high-quality response for the case), and (2) representative (the case should represent the property of the whole dataset). Herein, we explore this idea by proposing a Selective Data Augmentation framework (SDA) for the response generation task. SDA employs a dual adversarial network to select the lowest-quality and most representative data points for augmentation in one stage. Extensive experiments conducted on two publicly available datasets, i.e., DailyDialog and OpenSubtitles, show that our framework can improve response generation performance with respect to various metrics.
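The abstract's core idea, selecting only low-quality yet representative cases for augmentation, can be illustrated with a minimal sketch. Note this is a hypothetical toy implementation, not the paper's actual dual adversarial network: the `quality` and `representativeness` scores are assumed to be given, whereas SDA learns them adversarially in one stage.

```python
# Hypothetical sketch of the selection idea behind SDA (not the paper's
# dual-adversarial implementation): pick cases that score LOW on response
# quality and HIGH on representativeness, then augment only those.

def select_for_augmentation(cases, quality, representativeness, k):
    """Rank cases by (low quality, high representativeness); return the top k.

    quality[i]            -- higher means the dialog model already handles case i well
    representativeness[i] -- higher means case i is typical of the whole dataset
    """
    # Combine the two attributes: low quality and high representativeness
    # both push a case's selection score up.
    scores = [r - q for q, r in zip(quality, representativeness)]
    ranked = sorted(range(len(cases)), key=lambda i: scores[i], reverse=True)
    return [cases[i] for i in ranked[:k]]

# Toy data: four dialog cases with assumed scores.
cases = ["hi how are you", "what's the weather", "tell me a joke", "bye"]
quality = [0.9, 0.2, 0.4, 0.8]             # model response quality per case
representativeness = [0.5, 0.7, 0.9, 0.3]  # typicality per case
selected = select_for_augmentation(cases, quality, representativeness, k=2)
# Selects the two cases with the best (low-quality, representative) trade-off.
```

In the actual framework, both attributes are estimated jointly by a dual adversarial network rather than combined with a fixed linear score as above.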

Authors (9)
  1. Xiuying Chen (80 papers)
  2. Mingzhe Li (85 papers)
  3. Jiayi Zhang (159 papers)
  4. Chen Wei (72 papers)
  5. Jianwei Cui (18 papers)
  6. Xin Gao (208 papers)
  7. Xiangliang Zhang (131 papers)
  8. Rui Yan (250 papers)
  9. Xiaoqiang Xia (3 papers)
Citations (6)