
Does Collaborative Human-LM Dialogue Generation Help Information Extraction from Human Dialogues? (2307.07047v2)

Published 13 Jul 2023 in cs.CL

Abstract: The capabilities of pretrained large language models (LLMs) have opened opportunities to explore new application areas, but applications involving human-human interaction are limited by the fact that most data is protected from public release for privacy reasons. Problem-solving human dialogues in real applications can be much more complex than existing Wizard-of-Oz collections, preventing successful domain transfer. To support information extraction (IE) for a private call center dataset, we introduce a human-in-the-loop dialogue generation framework capable of synthesizing realistic dialogues. In IE experiments with auto insurance call center dialogues, we observe 25% relative improvement in F1 after augmenting a small set of real human conversations with synthetic data. We release code and our synthetic dataset to illustrate the complexity of real-world call center conversations and encourage development of complex dialogue datasets that are more representative of natural data.
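For context, the reported gain is relative, not absolute: a 25% relative improvement means the augmented model's F1 is 1.25 times the baseline's. A minimal sketch of how such a figure is computed (the precision/recall numbers below are hypothetical, not taken from the paper):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Hypothetical scores: baseline trained on real dialogues only,
# augmented model trained on real + synthetic dialogues.
f1_base = f1(precision=0.60, recall=0.50)
f1_aug = f1(precision=0.72, recall=0.65)

# Relative (not absolute) improvement of the augmented model.
relative_gain = (f1_aug - f1_base) / f1_base
print(f"relative F1 improvement: {relative_gain:.1%}")
```

With these illustrative inputs the relative gain works out to roughly 25%, matching the scale of improvement the abstract reports.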

Authors (10)
  1. Bo-Ru Lu (8 papers)
  2. Nikita Haduong (6 papers)
  3. Chia-Hsuan Lee (12 papers)
  4. Zeqiu Wu (15 papers)
  5. Hao Cheng (190 papers)
  6. Paul Koester (1 paper)
  7. Jean Utke (17 papers)
  8. Tao Yu (282 papers)
  9. Noah A. Smith (224 papers)
  10. Mari Ostendorf (57 papers)
Citations (2)