
GameEval: Evaluating LLMs on Conversational Games (2308.10032v1)

Published 19 Aug 2023 in cs.CL

Abstract: The rapid advancements in LLMs have presented challenges in evaluating those models. Existing evaluation methods are either reference-based or preference-based, which inevitably need human intervention or introduce test bias caused by evaluator models. In this paper, we propose GameEval, a novel approach to evaluating LLMs through goal-driven conversational games, overcoming the limitations of previous methods. GameEval treats LLMs as game players and assigns them distinct roles with specific goals achieved by launching conversations of various forms, including discussion, question answering, and voting. We design three unique games with cooperative or adversarial objectives, accompanied by corresponding evaluation metrics, to show how this new paradigm comprehensively evaluates model performance. Through extensive experiments, we show that GameEval can effectively differentiate the capabilities of various LLMs, providing a comprehensive assessment of their integrated abilities to solve complex problems. Our public anonymous code is available at https://github.com/GameEval/GameEval.
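
The abstract's core idea, assigning LLMs roles in a goal-driven game with discussion and voting phases and scoring them on goal completion, can be sketched as a minimal loop. This is a hypothetical illustration, not the paper's actual implementation: the `Player` structure, phase functions, and toy deterministic "models" are all invented for demonstration, with an LLM call standing in as a plain callable.

```python
# Hypothetical sketch of a GameEval-style goal-driven conversational game.
# All names (Player, play_round, vote) are illustrative, not from the paper's code.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Player:
    name: str
    role: str                              # e.g. "spy" vs. "civilian" in an adversarial game
    respond: Callable[[List[str]], str]    # stands in for an LLM API call


def play_round(players: List[Player], history: List[str]) -> List[str]:
    # Discussion phase: each player speaks once, conditioned on the shared history.
    for p in players:
        history.append(f"{p.name}: {p.respond(history)}")
    return history


def vote(players: List[Player], history: List[str]) -> str:
    # Voting phase: each player names a suspect; the majority choice is accused.
    ballots = [p.respond(history + ["Vote for the suspected spy."]) for p in players]
    return max(set(ballots), key=ballots.count)


# Toy deterministic "models" so the sketch runs without any real LLM.
players = [
    Player("A", "civilian", lambda h: "B"),
    Player("B", "spy",      lambda h: "A"),
    Player("C", "civilian", lambda h: "B"),
]

history = play_round(players, [])
accused = vote(players, history)
spy_caught = accused == "B"  # game-level metric: did the civilians identify the spy?
print(accused, spy_caught)
```

Because success is defined by the game's goal (here, catching the spy) rather than by a reference answer or a judge model, the score needs no human annotation and no evaluator LLM, which is the bias the paper aims to avoid.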

Authors (5)
  1. Dan Qiao (26 papers)
  2. Chenfei Wu (32 papers)
  3. Yaobo Liang (29 papers)
  4. Juntao Li (89 papers)
  5. Nan Duan (172 papers)
Citations (16)