
How Well Can LLMs Negotiate? NegotiationArena Platform and Analysis (2402.05863v1)

Published 8 Feb 2024 in cs.AI, cs.CL, and cs.GT

Abstract: Negotiation is the basis of social interactions; humans negotiate everything from the price of cars to how to share common resources. With rapidly growing interest in using LLMs to act as agents on behalf of human users, such LLM agents would also need to be able to negotiate. In this paper, we study how well LLMs can negotiate with each other. We develop NegotiationArena: a flexible framework for evaluating and probing the negotiation abilities of LLM agents. We implemented three types of scenarios in NegotiationArena to assess LLM's behaviors in allocating shared resources (ultimatum games), aggregate resources (trading games) and buy/sell goods (price negotiations). Each scenario allows for multiple turns of flexible dialogues between LLM agents to allow for more complex negotiations. Interestingly, LLM agents can significantly boost their negotiation outcomes by employing certain behavioral tactics. For example, by pretending to be desolate and desperate, LLMs can improve their payoffs by 20% when negotiating against the standard GPT-4. We also quantify irrational negotiation behaviors exhibited by the LLM agents, many of which also appear in humans. Together, NegotiationArena offers a new environment to investigate LLM interactions, enabling new insights into LLM's theory of mind, irrationality, and reasoning abilities.

Evaluating LLMs in Complex Negotiation Scenarios Using NEGOTIATION ARENA

Introduction to NEGOTIATION ARENA

Recent advancements in LLMs, such as GPT-4 and Claude-2, have ushered in a new era where these models are increasingly deployed as agents acting on behalf of human users. To effectively serve this role, LLMs must demonstrate competence in a wide range of social dynamics, notably negotiation. This research introduces NEGOTIATION ARENA, a flexible framework designed for evaluating LLM agents' negotiation abilities across various settings, including resource exchange, multi-turn ultimatum games, and buyer-seller negotiations.

Designing NEGOTIATION ARENA

NEGOTIATION ARENA is structured around discrete negotiation scenarios where LLM agents engage in dialogues to trade resources, divide assets, or determine prices for goods. The platform allows researchers to assess LLMs’ negotiation strategies, utility maximization, and the impact of social behaviors, such as desperation or aggression, on negotiation outcomes.

  • Resource Exchange Scenario: Agents negotiate to maximize their total resources, leading to the development of complex strategies for resource diversification.
  • Multi-Turn Ultimatum Game: Expands the classical ultimatum game to multiple turns, enabling agents to make and respond to counteroffers.
  • Seller and Buyer Scenario: A complex negotiation involving incomplete information where agents negotiate over the price of goods.
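The multi-turn structure shared by these scenarios can be illustrated with a minimal sketch. The loop below is not the NegotiationArena API (which defines its own agent and scenario classes); `propose` and `respond` are hypothetical stand-ins for LLM calls, and the pot size and turn limit are illustrative.

```python
# Minimal sketch of a multi-turn ultimatum game between two agents.
# In NegotiationArena the agents would be LLMs exchanging dialogue;
# here `propose` and `respond` are plain callables for illustration.

POT = 100          # total resource to split
MAX_TURNS = 6      # alternating offers before the game ends with zero payoff

def play_ultimatum(propose, respond):
    """Alternate offers until one side accepts or turns run out."""
    proposer = 0   # index of the agent currently making an offer
    history = []
    for turn in range(MAX_TURNS):
        offer = propose(proposer, history)         # share kept by the proposer
        history.append((proposer, offer))
        if respond(1 - proposer, offer, history):  # does the other side accept?
            split = (offer, POT - offer)
            return split if proposer == 0 else split[::-1]
        proposer = 1 - proposer                    # roles swap for the counter-offer
    return (0, 0)  # no agreement: both agents get nothing

# Scripted strategies for illustration: the current proposer concedes
# over successive offers, and a responder accepts anything worth >= 40.
offers = iter([80, 70, 60])
payoffs = play_ultimatum(
    propose=lambda who, hist: next(offers),
    respond=lambda who, offer, hist: POT - offer >= 40,
)
# payoffs == (60, 40): agreement reached on the third offer
```

Replacing the scripted lambdas with LLM calls that read the dialogue history is what allows the counter-offer dynamics the paper studies.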

The practicality of NEGOTIATION ARENA lies in its capacity for detailed scenario customization and comprehensive analysis of negotiation behaviors exhibited by LLM agents.

Key Findings and Insights

The benchmarking of LLM agents revealed several notable insights:

  • Behavioral Tactics Increase Win Rates: Employing specific behavioral tactics, such as feigning desperation or acting aggressively, significantly boosted negotiation outcomes for LLM agents.
  • GPT-4 Demonstrates Superior Negotiation Skills: Among the evaluated models, GPT-4 consistently outperformed others, demonstrating advanced strategy formulation and utility maximization abilities.
  • Exhibition of Rational and Irrational Behaviors: LLM agents displayed a mix of rational decision-making skills and human-like irrational behaviors, such as anchoring bias and suboptimal counter-offers when "over-valuing" objects in seller-buyer scenarios.
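The reported payoff boost can be quantified as a relative lift between matched runs with and without a tactic. The helper below is a generic sketch of that measurement; the per-game payoff numbers are illustrative, not taken from the paper.

```python
def payoff_lift(baseline, with_tactic):
    """Relative payoff improvement from a behavioral tactic,
    averaged over repeated games against the same opponent."""
    base = sum(baseline) / len(baseline)
    tact = sum(with_tactic) / len(with_tactic)
    return (tact - base) / base

# Hypothetical per-game payoffs for the same agent with and without
# a "desperate" persona prompt (illustrative values only).
lift = payoff_lift(baseline=[50, 55, 45], with_tactic=[60, 66, 54])
# lift == 0.2, i.e. the kind of 20% improvement the paper reports
# when such tactics are used against the standard GPT-4
```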

Theoretical and Practical Implications

The research underscores the importance of incorporating sophisticated social dynamics simulation in LLM training and evaluation frameworks. By mimicking complex human negotiation strategies and biases, LLM agents can become more adept at representing human users in various negotiation contexts. Furthermore, understanding the mechanisms behind LLMs’ negotiation behaviors opens avenues for enhancing their decision-making processes.

Future Directions in AI and Negotiation

Looking ahead, this work paves the way for future explorations into LLMs' theory of mind, adaptability to novel negotiation scenarios, and their capacity to transcend human-like irrationalities for optimized decision-making outcomes. Further iterations of the NEGOTIATION ARENA could explore deeper aspects of emotional intelligence, ethical negotiation strategies, and multi-party negotiation dynamics, contributing to the development of more nuanced and human-compatible LLM agents.

Conclusion

NEGOTIATION ARENA offers a pioneering approach to scrutinizing and enhancing LLM agents' negotiation capabilities. The analysis not only benchmarks current state-of-the-art LLMs but also highlights the critical need for embedding rich social dynamics within AI systems, marking a significant step toward realizing AI agents capable of navigating the complex landscape of human negotiations.

Authors (6)
  1. Federico Bianchi (47 papers)
  2. Patrick John Chia (9 papers)
  3. Mert Yuksekgonul (23 papers)
  4. Jacopo Tagliabue (34 papers)
  5. Dan Jurafsky (118 papers)
  6. James Zou (232 papers)