RADE: Reference-Assisted Dialogue Evaluation for Open-Domain Dialogue (2309.08156v2)

Published 15 Sep 2023 in cs.CL

Abstract: Evaluating open-domain dialogue systems is challenging for reasons such as the one-to-many problem, i.e., many responses besides the single golden response may be appropriate. Current automatic evaluation methods lack consistency with human judgements, while reliable human evaluation is time- and cost-intensive. To this end, we propose the Reference-Assisted Dialogue Evaluation (RADE) approach under a multi-task learning framework, which leverages a pre-created utterance as a reference, rather than only the gold response, to relieve the one-to-many problem. Specifically, RADE explicitly compares the reference and the candidate response to predict their overall scores. Moreover, an auxiliary response generation task enhances prediction via a shared encoder. To support RADE, we extend three datasets with additional human-annotated rated responses beyond the single golden response. Experiments on our three datasets and two existing benchmarks demonstrate the effectiveness of our method, whose Pearson, Spearman, and Kendall correlations with human evaluation outperform state-of-the-art baselines.
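
The abstract describes a multi-task setup: a shared encoder compares a candidate response against a reference utterance to predict an overall score, with an auxiliary response generation task. Below is a minimal sketch of such a reference-assisted scorer, assuming a PyTorch/Hugging Face stack; the class name, pooling, and input-packing choices are illustrative assumptions, not the authors' released implementation, and the auxiliary generation branch is omitted for brevity.

```python
# Minimal sketch of a reference-assisted scorer in the spirit of RADE.
# Assumptions (not from the paper's released code): RoBERTa backbone,
# first-token pooling, context + reference packed with the candidate.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class ReferenceAssistedScorer(nn.Module):
    """Shared encoder plus a regression head that predicts an overall
    quality score for a candidate response given a reference utterance.
    The auxiliary response-generation head is omitted here."""

    def __init__(self, backbone: str = "roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        hidden = self.encoder.config.hidden_size
        self.score_head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]        # first-token pooling
        return self.score_head(pooled).squeeze(-1)  # one score per example


tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = ReferenceAssistedScorer().eval()

# Context and reference go in the first segment, candidate in the second,
# so the encoder can compare reference and candidate directly.
batch = tokenizer(
    ["context: how was your day? reference: pretty good, I went hiking."],
    ["candidate: it was great, I spent the afternoon outdoors."],
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    print(model(batch["input_ids"], batch["attention_mask"]))
```

Agreement with human ratings can then be measured with the correlations the paper reports (Pearson, Spearman, Kendall), e.g. via scipy; the scores below are toy numbers, not data from the paper.

```python
from scipy.stats import kendalltau, pearsonr, spearmanr

human_scores = [4.0, 2.5, 3.0, 4.5]  # toy example values
model_scores = [3.8, 2.1, 3.3, 4.2]
print(pearsonr(human_scores, model_scores)[0],
      spearmanr(human_scores, model_scores)[0],
      kendalltau(human_scores, model_scores)[0])
```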

Authors (6)
  1. Zhengliang Shi (15 papers)
  2. Weiwei Sun (93 papers)
  3. Shuo Zhang (256 papers)
  4. Zhen Zhang (384 papers)
  5. Pengjie Ren (95 papers)
  6. Zhaochun Ren (117 papers)
Citations (6)