
Werewolf Arena: A Case Study in LLM Evaluation via Social Deduction (2407.13943v1)

Published 18 Jul 2024 in cs.CL and cs.AI

Abstract: This paper introduces Werewolf Arena, a novel framework for evaluating LLMs through the lens of the classic social deduction game, Werewolf. In Werewolf Arena, LLMs compete against each other, navigating the game's complex dynamics of deception, deduction, and persuasion. The framework introduces a dynamic turn-taking system based on bidding, mirroring real-world discussions where individuals strategically choose when to speak. We demonstrate the framework's utility through an arena-style tournament featuring Gemini and GPT models. Our results reveal distinct strengths and weaknesses in the models' strategic reasoning and communication. These findings highlight Werewolf Arena's potential as a challenging and scalable LLM benchmark.

Evaluation of LLMs through Social Deduction in Werewolf Arena

Introduction

This essay provides an in-depth analysis of the paper "Werewolf Arena: A Case Study in LLM Evaluation via Social Deduction". The paper introduces a novel evaluation framework for LLMs built on the social deduction game Werewolf. Because human-like reasoning, deception, and strategic communication are central to the game, Werewolf Arena serves as a distinctive and challenging benchmark for LLMs. The framework incorporates a dynamic turn-taking system based on bidding, which mimics real-world discussions in which individuals strategically choose when to speak. This essay examines the experimental setup, results, and implications for future AI research.

The Experimental Setup

Game Implementation

The authors designed an environment of eight players with predefined roles: one Seer, one Doctor, two Werewolves, and four Villagers. The game alternates between two phases, night and day. During the night, players perform role-specific actions, such as the Seer investigating a player and the Doctor protecting one. During the day, play centers on a debate and a vote to exile a player suspected of being a Werewolf.

A rules-based Game Master (GM) oversees the game's progression, orchestrating the actions and ensuring that agents follow the game's sequence correctly. This setup departs from traditional AI evaluation benchmarks by emphasizing strategic reasoning over purely technical skill.
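To make this structure concrete, below is a minimal, self-contained sketch of the role setup and the GM's night/day loop. All names and signatures are illustrative assumptions, not the paper's actual code; the LLM-facing logic is abstracted behind two callables.

```python
# Illustrative sketch, not the paper's implementation.
import random
from dataclasses import dataclass

# Fixed eight-role distribution described in the paper.
ROLE_POOL = ["Seer", "Doctor", "Werewolf", "Werewolf",
             "Villager", "Villager", "Villager", "Villager"]

@dataclass
class Player:
    name: str
    role: str
    alive: bool = True

def deal_roles(names, seed=None):
    """Randomly assign the fixed eight-role pool to the players."""
    rng = random.Random(seed)
    roles = ROLE_POOL[:]
    rng.shuffle(roles)
    return [Player(n, r) for n, r in zip(names, roles)]

def winner(players):
    """Villagers win once all Werewolves are exiled; Werewolves win at parity."""
    wolves = sum(p.alive and p.role == "Werewolf" for p in players)
    others = sum(p.alive and p.role != "Werewolf" for p in players)
    if wolves == 0:
        return "Villagers"
    if wolves >= others:
        return "Werewolves"
    return None

def run_game(players, night_phase, day_phase):
    """Rules-based GM loop: alternate night and day until one side wins.

    `night_phase` and `day_phase` are callables that query the LLM agents
    for role actions, debate turns, and votes, and mutate `players`.
    """
    while True:
        night_phase(players)  # Seer investigates, Doctor protects, Werewolves pick a victim
        if (w := winner(players)):
            return w
        day_phase(players)    # bidding-driven debate, then a vote to exile a suspect
        if (w := winner(players)):
            return w
```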

Agent Architecture

Agents in Werewolf Arena perform the actions essential for gameplay: voting, debating, bidding to speak, and executing special role actions. Each agent maintains a memory stream, inspired by the generative-agents work of Park et al. (2023), comprising observational and reflective memories. The observational memories capture all game-level events, while reflective summaries distill insights from the debate rounds.
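A hypothetical sketch of such a memory stream follows; the class and field names are assumptions made for illustration, not the paper's code.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStream:
    observations: list = field(default_factory=list)  # raw game-level events
    reflections: list = field(default_factory=list)   # distilled debate summaries

    def observe(self, event):
        """Record a game-level event (a vote, an elimination, a statement)."""
        self.observations.append(event)

    def reflect(self, summary):
        """Store an agent-generated summary distilled from a debate round."""
        self.reflections.append(summary)

    def context(self, recent=20):
        """Assemble prompt context: reflections first, then recent raw events."""
        return "\n".join(self.reflections + self.observations[-recent:])
```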

Dynamic Turn-Taking

The framework introduces a bidding mechanism for dynamic turn-taking: agents signal their desire to speak on a five-level urgency scale. This mechanism lets agents influence the debate strategically, reflecting real-world social dynamics. The highest bidder takes the speaking turn, with ties broken based on prior mentions in the debate. Analysis of these bidding dynamics reveals how agents seize or cede control of the conversation.
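The sketch below shows one way the selection step might work, assuming integer bids in [0, 4] and a tie-break by how often each player has been mentioned in the debate; the function and its inputs are illustrative, not the paper's implementation.

```python
URGENCY = range(5)  # 0 = content to listen, ..., 4 = urgent need to respond

def next_speaker(bids, mention_counts):
    """Pick the highest bidder; break ties in favor of the player
    mentioned more often in the current debate."""
    assert all(b in URGENCY for b in bids.values())
    top = max(bids.values())
    tied = [p for p, b in bids.items() if b == top]
    return max(tied, key=lambda p: mention_counts.get(p, 0))

# Example: Bob and Cara both bid 3, but Cara was named twice in the debate.
assert next_speaker({"Ana": 1, "Bob": 3, "Cara": 3},
                    {"Cara": 2, "Bob": 1}) == "Cara"
```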

Evaluation and Results

Intra-Family and Head-to-Head Tournaments

The evaluation uses a two-phase tournament structure involving LLMs from Google's Gemini and OpenAI's GPT families. In the first phase, models from each family played intra-family round-robin tournaments to establish baseline performance. Win rates were relatively balanced within each family, with some exceptions, such as the weaker performance of GPT-3.5.

Following the intra-family tournaments, the top-performing models from each family, namely Gemini 1.5 Pro and GPT-4, competed in a head-to-head tournament. Gemini 1.5 Pro demonstrated stronger performance overall, particularly excelling as Villagers, where strategic communication and deception detection are critical.
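As a rough illustration, the intra-family phase can be pictured as a round-robin scheduler that tallies wins per model. The helper below is a hypothetical sketch, with `play` standing in for running one full game between two models.

```python
from collections import Counter
from itertools import combinations

def round_robin(models, games_per_pair, play):
    """Phase one: every pair of models in a family plays `games_per_pair`
    games; `play(a, b)` runs one game and returns the winning model."""
    wins = Counter()
    for a, b in combinations(models, 2):
        for _ in range(games_per_pair):
            wins[play(a, b)] += 1
    return wins
```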

Seer Performance Analysis

The role of the Seer adds another dimension to the evaluation. A Seer holds critical information but faces a dilemma: revealing it exposes the Seer as a target for the Werewolves. The paper analyzes Seer performance across models, including how often Seers reveal themselves, the timing of the first reveal, and the accuracy and impact of their accusations.

Gemini 1.5 Pro Seers tend to reveal their identity early, leveraging their information advantage, while GPT-4 Seers behave more cautiously, often delaying their reveals to gather more information and avoid becoming targets. Furthermore, GPT-4's Seers are more successful in convincing Villagers of their legitimacy, highlighting effective persuasive communication.
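Metrics of this kind could be computed from per-game records along the following lines; the record format here is an assumption for illustration, not the paper's data schema.

```python
def seer_metrics(games):
    """Summarize Seer behavior from per-game records, where each record has
    `reveal_round` (None if the Seer never revealed) and `accusations`, a
    list of (accused_was_werewolf, accused_was_exiled) boolean pairs."""
    revealed = [g for g in games if g["reveal_round"] is not None]
    accusations = [a for g in games for a in g["accusations"]]
    return {
        "reveal_rate": len(revealed) / len(games),
        "mean_first_reveal": sum(g["reveal_round"] for g in revealed)
                             / max(len(revealed), 1),
        "accusation_accuracy": sum(hit for hit, _ in accusations)
                               / max(len(accusations), 1),
        "accusation_impact": sum(exiled for _, exiled in accusations)
                             / max(len(accusations), 1),
    }
```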

Implications and Future Directions

Practical Implications

The paper underscores the significance of evaluating LLMs in dynamic, interactive, and multi-party settings such as Werewolf. This approach diverges from traditional benchmarks, which often rely on static metrics. The dynamic evaluation framework of Werewolf Arena enables more comprehensive insights into models’ abilities to engage in strategic communication, a key skill for deploying conversational agents in real-world applications, such as customer service and collaborative tools.

Theoretical Implications

From a theoretical perspective, the introduction of a bidding mechanism for turn-taking and the focus on strategic reasoning contribute to the understanding of social dynamics in AI systems. The framework provides evidence of different communication styles impacting game outcomes, hinting at the nuanced interplay between language, strategy, and persuasion.

Future Developments

Future research directions could include expanding the complexity of the Werewolf game environment to capture more nuanced social interactions. Additionally, exploring more sophisticated reasoning and memory mechanisms could further enhance agent performance. Incorporating a wider range of models and comparing their performance on other social deduction games could validate and generalize the findings.

Conclusion

The Werewolf Arena framework represents a significant advancement in the evaluation of LLMs, providing robust and scalable benchmarks for strategic reasoning, deception, and communication. The distinction between the performance of Gemini and GPT models demonstrates how communication styles and strategic decisions translate into success in social deduction games. By fostering dynamic evaluation, this framework encourages ongoing development and enrichment of LLMs, pushing the boundaries of AI research.

Ethical Considerations

While focusing on positive uses such as strategic reasoning and improved communication skills, the paper acknowledges the potential for misuse of LLMs in deceptive practices. It underscores the importance of establishing robust ethical guidelines and safeguards in AI development to prevent the exploitation of these capabilities in malicious contexts.

Authors
  1. Suma Bailis
  2. Jane Friedhoff
  3. Feiyang Chen