AvalonBench: Evaluating LLMs Playing the Game of Avalon (2310.05036v3)

Published 8 Oct 2023 in cs.AI and cs.CL

Abstract: In this paper, we explore the potential of LLM agents in playing the strategic social deduction game Resistance Avalon. Players in Avalon are challenged not only to make informed decisions based on dynamically evolving game phases, but also to engage in discussions where they must deceive, deduce, and negotiate with other players. These characteristics make Avalon a compelling test-bed to study the decision-making and language-processing capabilities of LLM agents. To facilitate research in this line, we introduce AvalonBench - a comprehensive game environment tailored for evaluating multi-agent LLM agents. This benchmark incorporates: (1) a game environment for Avalon, (2) rule-based bots as baseline opponents, and (3) ReAct-style LLM agents with tailored prompts for each role. Notably, our evaluations based on AvalonBench highlight a clear capability gap. For instance, models like ChatGPT playing a good role achieved a win rate of 22.2% against rule-based bots playing evil, while the good-role baseline bot achieved a 38.2% win rate in the same setting. We envision that AvalonBench could be a good test-bed for developing more advanced LLMs (with self-play) and agent frameworks that can effectively model the layered complexities of such game environments.

Evaluation of LLM Agents in Social Deduction Games: The Case of AvalonBench

The paper presents AvalonBench, a novel environment aimed at evaluating the decision-making and language-processing capabilities of LLMs in the context of the social deduction game Resistance Avalon. The work is significant for the field of artificial intelligence because it introduces an intricate test-bed for probing and improving the language understanding and reasoning capabilities of LLM agents.

Resistance Avalon, a game where players assume hidden identities of either "good" or "evil," serves as an apt venue for this exploration, primarily due to the complexity introduced by its reliance on strategic deception, inference, and negotiation. The paper outlines the development of AvalonBench, which integrates a game environment with rule-based bots to act as baseline opponents. Additionally, it introduces ReAct-style LLM agents with role-specific prompts to simulate the game's social dynamics.
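To make the agent setup more concrete, below is a minimal sketch of how a ReAct-style agent might interact with a turn-based Avalon-like environment, under stated assumptions: the names `llm_complete`, `env.reset`, `env.step`, and the prompt layout are illustrative placeholders for this summary, not the actual AvalonBench API.

```python
# Illustrative ReAct-style turn loop for a social deduction game.
# NOTE: llm_complete, env.reset, and env.step are placeholder assumptions,
# not the real AvalonBench interfaces.

def llm_complete(prompt: str) -> str:
    """Call whatever LLM backend is in use (e.g., a chat-completion API)."""
    raise NotImplementedError  # stubbed for the sketch

def react_turn(role_prompt: str, observation: str, history: list[str]) -> str:
    """One ReAct step: reason about the observation, then emit a game action."""
    prompt = (
        f"{role_prompt}\n"
        "Game history:\n" + "\n".join(history) + "\n"
        f"Current observation: {observation}\n"
        "Thought: reason step by step about the best move.\n"
        "Action: reply with one legal game action on the final line."
    )
    reply = llm_complete(prompt)
    # Keep only the final 'Action:' line; the intermediate reasoning is discarded.
    return reply.splitlines()[-1].removeprefix("Action:").strip()

def play_episode(env, role_prompt: str) -> bool:
    """Play one game; returns True if the agent's side wins (placeholder env API)."""
    observation, history = env.reset(), []
    done = won = False
    while not done:
        action = react_turn(role_prompt, observation, history)
        history.append(f"obs: {observation} | act: {action}")
        observation, done, won = env.step(action)
    return won
```

Since the paper describes prompts tailored to each role, the sketch passes `role_prompt` in separately, so a Merlin-style prompt and a Servant-style prompt would drive otherwise identical loops.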

A notable aspect of the research is the set of benchmark results from AvalonBench, which highlight a discernible capability gap in current LLMs' performance. For instance, ChatGPT, when playing a "good" role, achieved a win rate of 22.2% against rule-based bots playing "evil," whereas the "good" baseline bot secured a 38.2% win rate in the same setting. These statistics underscore the limitations of current LLMs in strategic and adaptive reasoning within dynamic, multi-agent contexts.

The implications of these findings extend to both theoretical and practical dimensions of AI. Theoretically, the results challenge conventional metrics and approaches for assessing LLM competencies, pushing for more nuanced and robust methodologies. Practically, they pave the way for enhancements to LLM architectures and training techniques that better handle the complexities of real-world problem-solving scenarios where social interaction is key.

Furthermore, the paper suggests that future work in this arena might focus on developing more autonomous LLM agents capable of learning and adapting through self-play. AvalonBench could well catalyze advances in LLMs that possess the requisite skills to effectively model the layered complexities inherent in such interactive environments.
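As a rough illustration of the self-play direction, the sketch below runs repeated games in which LLM agents fill every seat and tracks the good-side win rate. All names here (`make_game`, `make_agent`, `role_of`, `observation_for`, `apply`, `winner`) are hypothetical assumptions for illustration, not part of AvalonBench.

```python
# Hypothetical self-play evaluation loop: LLM agents occupy all seats and the
# good-side win rate is measured over many episodes. The game/agent interfaces
# are assumed placeholders, not AvalonBench classes.

def self_play_win_rate(make_game, make_agent, num_players: int = 5,
                       episodes: int = 100) -> float:
    good_wins = 0
    for _ in range(episodes):
        game = make_game(num_players)                 # fresh hidden-role assignment
        agents = [make_agent(game.role_of(i)) for i in range(num_players)]
        while not game.is_over():
            seat = game.current_player()
            action = agents[seat].act(game.observation_for(seat))
            game.apply(seat, action)                  # advance the game state
        good_wins += int(game.winner() == "good")
    return good_wins / episodes
```

Win rates computed this way could then be compared against the rule-based baselines reported in the paper (e.g., the 22.2% vs. 38.2% good-side figures).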

Overall, AvalonBench promises to be an invaluable resource in the broader endeavor to refine AI agents, making them more adept at mimicking human strategic thought processes and conversational nuances. With continual improvements and insights gleaned through this benchmark, researchers can anticipate significant strides in the fidelity and functionality of LLM-driven agents in multi-agent settings.

Authors (4)
  1. Jonathan Light (9 papers)
  2. Min Cai (14 papers)
  3. Sheng Shen (68 papers)
  4. Ziniu Hu (51 papers)