ComperDial: Commonsense Persona-grounded Dialogue Dataset and Benchmark (2406.11228v1)

Published 17 Jun 2024 in cs.CL

Abstract: We propose a new benchmark, ComperDial, which facilitates the training and evaluation of evaluation metrics for open-domain dialogue systems. ComperDial consists of human-scored responses for 10,395 dialogue turns in 1,485 conversations collected from 99 dialogue agents submitted to the Commonsense Persona-grounded Dialogue (CPD) challenge. As a result, for any dialogue, our benchmark includes multiple diverse responses with a variety of characteristics to ensure more robust evaluation of learned dialogue metrics. In addition to single-turn response scores, ComperDial also contains dialogue-level human-annotated scores, enabling joint assessment of multi-turn model responses throughout a dialogue. Finally, building on ComperDial, we devise a new automatic evaluation metric to measure the general similarity of model-generated dialogues to human conversations. Our experimental results demonstrate that our novel metric, CPDScore, correlates more strongly with human judgments than existing metrics. We release both ComperDial and CPDScore to the community to accelerate development of automatic evaluation metrics for open-domain dialogue systems.
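The abstract validates CPDScore by its correlation with human judgments, which is the standard way learned dialogue metrics are compared. A minimal sketch of that comparison is below; the scores are entirely hypothetical and the Pearson coefficient is hand-rolled for self-containment (in practice one would typically use `scipy.stats.pearsonr` or `spearmanr`).

```python
# Hypothetical illustration of metric meta-evaluation: measure how well an
# automatic metric's scores track human ratings via Pearson correlation.
# All numbers below are invented for demonstration, not from ComperDial.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical human ratings (1-5 scale) and automatic-metric scores
# for five dialogue responses.
human_scores  = [4.5, 2.0, 3.5, 1.0, 5.0]
metric_scores = [0.88, 0.35, 0.70, 0.20, 0.95]

print(f"Pearson r = {pearson(human_scores, metric_scores):.3f}")
```

A metric whose scores rise and fall with the human ratings yields r close to 1; a higher correlation than competing metrics is the evidence the paper reports for CPDScore.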

Authors (8)
  1. Hiromi Wakaki (16 papers)
  2. Yuki Mitsufuji (127 papers)
  3. Yoshinori Maeda (3 papers)
  4. Yukiko Nishimura (2 papers)
  5. Silin Gao (17 papers)
  6. Mengjie Zhao (35 papers)
  7. Keiichi Yamada (3 papers)
  8. Antoine Bosselut (85 papers)