
Leveraging LLMs for Dialogue Quality Measurement (2406.17304v1)

Published 25 Jun 2024 in cs.CL

Abstract: In task-oriented conversational AI evaluation, unsupervised methods correlate poorly with human judgments, and supervised approaches lack generalization. Recent advances in LLMs show robust zero-shot and few-shot capabilities across NLP tasks. This paper explores using LLMs for automated dialogue quality evaluation, experimenting with various configurations on public and proprietary datasets. Manipulating factors such as model size, in-context examples, and selection techniques, we examine "chain-of-thought" (CoT) reasoning and label extraction procedures. Our results show that (1) larger models yield more accurate dialogue labels; (2) algorithmic selection of in-context examples outperforms random selection; (3) CoT reasoning, where an LLM is asked to provide justifications before outputting final labels, improves performance; and (4) fine-tuned LLMs outperform out-of-the-box ones. Our results indicate that LLMs that are suitably fine-tuned and have sufficient reasoning capabilities can be leveraged for automated dialogue evaluation.
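The abstract's CoT setup, where the model justifies its judgment before emitting a final label that is then parsed out, can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual prompt or extraction code: the prompt wording, label set, and `Label:` convention are all assumptions for the sake of the example.

```python
import re

# Illustrative label set; the paper's actual dialogue-quality labels may differ.
LABELS = ["good", "bad"]

def build_prompt(dialogue, examples):
    """Assemble a few-shot CoT prompt; `examples` are (dialogue, label) pairs
    (e.g. chosen by an algorithmic in-context example selector)."""
    parts = [
        "Rate the quality of each dialogue as 'good' or 'bad'.",
        "First explain your reasoning, then end with 'Label: <answer>'.",
    ]
    for ex_dialogue, ex_label in examples:
        parts.append(f"Dialogue: {ex_dialogue}\nLabel: {ex_label}")
    parts.append(f"Dialogue: {dialogue}\nReasoning:")
    return "\n\n".join(parts)

def extract_label(response):
    """Parse the final label out of a free-form CoT response."""
    match = re.search(r"Label:\s*(\w+)", response, flags=re.IGNORECASE)
    if match and match.group(1).lower() in LABELS:
        return match.group(1).lower()
    return None  # extraction failed; caller can fall back or retry

# A model response contains free-form reasoning followed by the label line.
response = "The agent resolved the user's request politely. Label: good"
print(extract_label(response))  # good
```

The extraction step matters because CoT responses are free-form text; a simple anchored pattern like `Label:` keeps parsing robust to variation in the reasoning that precedes it.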

Authors (8)
  1. Jinghan Jia (30 papers)
  2. Abi Komma (2 papers)
  3. Timothy Leffel (3 papers)
  4. Xujun Peng (5 papers)
  5. Ajay Nagesh (7 papers)
  6. Tamer Soliman (1 paper)
  7. Aram Galstyan (142 papers)
  8. Anoop Kumar (15 papers)
Citations (1)