
DG2: Data Augmentation Through Document Grounded Dialogue Generation (2112.08342v1)

Published 15 Dec 2021 in cs.CL

Abstract: Collecting data for training dialog systems can be extremely expensive due to the involvement of human participants and the need for extensive annotation. In document-grounded dialog systems especially, human experts must carefully read unstructured documents to answer users' questions. As a result, existing document-grounded dialog datasets are relatively small-scale, which hinders the effective training of dialogue systems. In this paper, we propose an automatic data augmentation technique grounded on documents through a generative dialogue model. The dialogue model consists of a user bot and an agent bot that synthesize diverse dialogues given an input document; these dialogues are then used to train a downstream model. When supplementing the original dataset, our method achieves significant improvement over traditional data augmentation methods. We also achieve strong performance in the low-resource setting.
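
The abstract's two-bot synthesis loop can be sketched roughly as follows. This is only an illustrative outline under stated assumptions, not the paper's implementation: the `user_bot` and `agent_bot` functions below are hypothetical stand-ins for the generative models described in the paper.

```python
# Sketch of a DG2-style augmentation loop: a user bot and an agent bot
# alternate turns conditioned on a grounding document, and the synthesized
# dialogues supplement the original training set. The bots here are trivial
# placeholders for the paper's generative dialogue models.

def user_bot(document: str, history: list) -> str:
    # Stand-in: a real user bot would generate a question about the document.
    return f"User question {len(history) // 2 + 1} about: {document[:20]}..."

def agent_bot(document: str, history: list) -> str:
    # Stand-in: a real agent bot would ground its answer in the document text.
    return f"Agent answer grounded in: {document[:20]}..."

def synthesize_dialogue(document: str, num_exchanges: int = 3) -> list:
    """Alternate user and agent turns to build one synthetic dialogue."""
    history = []
    for _ in range(num_exchanges):
        history.append(user_bot(document, history))
        history.append(agent_bot(document, history))
    return history

def augment(documents: list, original_data: list) -> list:
    # Synthetic dialogues are appended to the original dataset, which is
    # then used to train the downstream document-grounded dialog model.
    return original_data + [synthesize_dialogue(doc) for doc in documents]

dialogues = augment(
    ["Refund policy: items may be returned within 30 days of purchase."],
    [],
)
```

The key design point conveyed by the abstract is that generation is document-conditioned on both sides: the user bot's questions and the agent bot's answers are each produced with the grounding document in context, so the resulting dialogues remain faithful to it.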

Authors (6)
  1. Qingyang Wu (29 papers)
  2. Song Feng (43 papers)
  3. Derek Chen (15 papers)
  4. Sachindra Joshi (32 papers)
  5. Luis A. Lastras (9 papers)
  6. Zhou Yu (206 papers)
Citations (11)