
MT-Eval: A Multi-Turn Capabilities Evaluation Benchmark for Large Language Models (2401.16745v1)

Published 30 Jan 2024 in cs.CL

Abstract: LLMs are increasingly relied upon for complex multi-turn conversations across diverse real-world applications. However, existing benchmarks predominantly focus on single-turn evaluations, overlooking the models' capabilities in multi-turn interactions. To address this gap, we introduce MT-Eval, a comprehensive benchmark designed to evaluate multi-turn conversational abilities. By analyzing human-LLM conversations, we categorize interaction patterns into four types: recollection, expansion, refinement, and follow-up. We construct multi-turn queries for each category either by augmenting existing datasets or by creating new examples with GPT-4 to avoid data leakage. To study the factors impacting multi-turn abilities, we create single-turn versions of the 1170 multi-turn queries and compare performance. Our evaluation of 11 well-known LLMs shows that while closed-source models generally surpass open-source ones, certain open-source models exceed GPT-3.5-Turbo in specific tasks. We observe significant performance degradation in multi-turn settings compared to single-turn settings in most models, which is not correlated with the models' fundamental capabilities. Moreover, we identify the distance to relevant content and susceptibility to error propagation as the key factors influencing multi-turn performance. MT-Eval is released publicly to encourage future research towards more robust conversational models.
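The abstract contrasts each multi-turn query with a single-turn counterpart built from the same content. The sketch below is an illustrative assumption only, not the official MT-Eval schema or code: it shows one plausible way to represent a multi-turn example from one of the four interaction categories and to derive single-turn versions by folding the preceding turns into a single prompt, which is the kind of comparison the paper describes.

```python
# Hypothetical sketch (not the MT-Eval release): representing a multi-turn
# example and deriving single-turn counterparts by folding prior turns
# into the prompt, so the same content is asked in one turn.

from dataclasses import dataclass
from typing import List

@dataclass
class MultiTurnExample:
    category: str     # "recollection" | "expansion" | "refinement" | "follow-up"
    turns: List[str]  # user queries issued turn by turn

def to_single_turn(example: MultiTurnExample) -> List[str]:
    """For each turn, prepend the earlier queries it depends on,
    producing a standalone single-turn prompt."""
    prompts = []
    for i, query in enumerate(example.turns):
        context = "\n".join(example.turns[:i])
        prompts.append(f"{context}\n{query}".strip() if context else query)
    return prompts

# Illustrative "refinement" dialogue (content invented for the example).
example = MultiTurnExample(
    category="refinement",
    turns=[
        "Summarize the following article in three sentences: ...",
        "Now shorten the summary to one sentence.",
        "Rewrite that sentence for a non-expert audience.",
    ],
)
for prompt in to_single_turn(example):
    print("---")
    print(prompt)
```

Under this assumed setup, the multi-turn condition feeds the turns one at a time with the conversation history in context, while the single-turn condition asks each folded prompt in isolation; the gap between the two scores is the kind of degradation the paper reports.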

Authors (9)
  1. Wai-Chung Kwan (8 papers)
  2. Xingshan Zeng (38 papers)
  3. Yuxin Jiang (26 papers)
  4. Yufei Wang (141 papers)
  5. Liangyou Li (36 papers)
  6. Lifeng Shang (90 papers)
  7. Xin Jiang (242 papers)
  8. Qun Liu (230 papers)
  9. Kam-Fai Wong (92 papers)
Citations (6)