PLACES: Prompting Language Models for Social Conversation Synthesis (2302.03269v3)
Abstract: Collecting high-quality conversational data can be very expensive for most applications and infeasible for others due to privacy, ethical, or similar concerns. A promising direction for tackling this problem is to generate synthetic dialogues by prompting large language models (LLMs). In this work, we use a small set of expert-written conversations as in-context examples to synthesize a social conversation dataset via prompting. We perform several thorough evaluations of our synthetic conversations against human-collected conversations: human evaluation of the synthesized conversations themselves along various dimensions of conversation quality, and interactive human evaluation of chatbots fine-tuned on the synthetically generated dataset. We additionally demonstrate that this prompting approach generalizes to multi-party conversations, offering the potential to create new synthetic data for multi-party tasks. Our synthetic multi-party conversations were rated more favorably across all measured dimensions than conversation excerpts sampled from a human-collected multi-party dataset.
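To make the core idea concrete, below is a minimal sketch of few-shot conversation synthesis by prompting. It is not the paper's actual prompt, examples, or model: the in-context conversations, topic list, model name, and the OpenAI-style chat API are all illustrative placeholders standing in for whatever setup the authors used.

```python
# Sketch: synthesize a social conversation by prompting an LLM with a few
# expert-written example conversations. All specifics here are assumptions,
# not the PLACES paper's actual prompt or model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A small set of expert-written conversations used as in-context examples
# (hypothetical content for illustration).
EXAMPLE_CONVERSATIONS = [
    "The following is a conversation between two friends about hobbies.\n"
    "Alice: I started learning pottery last month.\n"
    "Bob: That sounds fun! Is it hard to center the clay?\n"
    "Alice: Very. My first three bowls collapsed.\n",
    "The following is a conversation between two friends about travel.\n"
    "Alice: I'm planning a trip to Portugal this spring.\n"
    "Bob: Nice! Lisbon or Porto?\n"
    "Alice: Both, if I can manage the train schedule.\n",
]

def synthesize_conversation(topic: str) -> str:
    """Prepend the expert-written examples, then ask the model to continue
    the pattern with a brand-new conversation on `topic`."""
    prompt = "\n".join(EXAMPLE_CONVERSATIONS)
    prompt += (
        f"\nThe following is a conversation between two friends about {topic}.\n"
        "Alice:"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper prompted other LLMs
        messages=[{"role": "user", "content": prompt}],
        temperature=0.9,  # some sampling randomness keeps synthetic dialogues diverse
    )
    return "Alice:" + response.choices[0].message.content

if __name__ == "__main__":
    print(synthesize_conversation("cooking"))
```

Varying the topic string and shuffling which in-context examples appear lets the same prompt template be called many times to build up a diverse synthetic dataset, which can then be filtered and used for fine-tuning.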
- Maximillian Chen
- Alexandros Papangelis
- Chenyang Tao
- Seokhwan Kim
- Andy Rosenbaum
- Yang Liu
- Zhou Yu
- Dilek Hakkani-Tur