
Exploring Big Five Personality and AI Capability Effects in LLM-Simulated Negotiation Dialogues (2506.15928v1)

Published 19 Jun 2025 in cs.AI, cs.CL, and cs.HC

Abstract: This paper presents an evaluation framework for agentic AI systems in mission-critical negotiation contexts, addressing the need for AI agents that can adapt to diverse human operators and stakeholders. Using Sotopia as a simulation testbed, we present two experiments that systematically evaluated how personality traits and AI agent characteristics influence LLM-simulated social negotiation outcomes--a capability essential for a variety of applications involving cross-team coordination and civil-military interactions. Experiment 1 employs causal discovery methods to measure how personality traits impact price bargaining negotiations, through which we found that Agreeableness and Extraversion significantly affect believability, goal achievement, and knowledge acquisition outcomes. Sociocognitive lexical measures extracted from team communications detected fine-grained differences in agents' empathic communication, moral foundations, and opinion patterns, providing actionable insights for agentic AI systems that must operate reliably in high-stakes operational scenarios. Experiment 2 evaluates human-AI job negotiations by manipulating both simulated human personality and AI system characteristics--specifically transparency, competence, and adaptability--demonstrating how AI agent trustworthiness impacts mission effectiveness. These findings establish a repeatable evaluation methodology for experimenting with AI agent reliability across diverse operator personalities and human-agent team dynamics, directly supporting operational requirements for reliable AI systems. Our work advances the evaluation of agentic AI workflows by moving beyond standard performance metrics to incorporate social dynamics essential for mission success in complex operations.

Integrating Big Five Personality and AI Traits in LLM-Based Negotiation: Empirical Evaluation and Implications

"Exploring Big Five Personality and AI Capability Effects in LLM-Simulated Negotiation Dialogues" (Cohen et al., 19 Jun 2025) presents a rigorous empirical investigation into how Big Five personality traits and manipulated AI agent qualities jointly affect negotiation dialogues produced by LLMs. Utilizing the Sotopia multi-agent simulation framework, the paper advances both methodology and practical understanding of how social and technical characteristics shape agentic AI performance, with direct implications for human-AI teaming in high-stakes applications.

Experimental Design and Evaluation Framework

The primary methodological contribution is a robust, multi-dimensional evaluation pipeline for LLM-based agents operationalized in negotiation contexts. This pipeline combines:

  • Scenario-based metrics via Sotopia-Eval: goal completion, believability, knowledge acquisition, material benefit, and related social dimensions.
  • Lexical analytics: automated and lexicon-based measures capturing empathy, moral values, sentiment, toxicity, and connotation.
  • Post-interaction subjective questionnaires: mirroring human evaluative measures in psychological experiments.
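The lexical-analytics component can be sketched in miniature. The word lists below are illustrative placeholders, not the validated empathy and affect lexicons the authors actually used; the sketch only shows the general shape of lexicon-based dialogue scoring.

```python
# Minimal sketch of lexicon-based dialogue scoring. The tiny word lists are
# illustrative stand-ins for validated lexicons (empathy, moral foundations,
# sentiment), which the real pipeline would load instead.

EMPATHY_WORDS = {"understand", "appreciate", "feel", "sorry", "glad"}
NEGATIVE_AFFECT_WORDS = {"angry", "frustrated", "worried", "upset"}

def lexicon_score(utterance: str, lexicon: set[str]) -> float:
    """Fraction of tokens in the utterance that match the lexicon."""
    tokens = [t.strip(".,!?").lower() for t in utterance.split()]
    if not tokens:
        return 0.0
    return sum(t in lexicon for t in tokens) / len(tokens)

dialogue = [
    "I understand your position and appreciate the offer.",
    "I'm worried this price is too high.",
]
empathy = [lexicon_score(u, EMPATHY_WORDS) for u in dialogue]
neg_affect = [lexicon_score(u, NEGATIVE_AFFECT_WORDS) for u in dialogue]
```

Per-utterance scores like these can then be aggregated per agent and correlated with the manipulated trait conditions.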

Crucially, the authors leverage both correlational and causal discovery (CausalNex, Causal Forests in EconML) approaches to disentangle the effects of personality and AI agent traits on these outcome metrics. This is a methodological advance beyond prior evaluations focusing only on correlations or end-task performance in social AI simulations.

Two experiments systematically manipulate (a) personality traits (via Big Five Inventory-driven personas) in both agents (Experiment 1: interpersonal price bargaining), and (b) both simulated human and AI agent characteristics (transparency, competence, adaptability) in a human-AI job negotiation scenario (Experiment 2).
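Persona manipulation of this kind is typically done by rendering trait levels into a system-prompt fragment. The trait descriptions and template below are hypothetical, not the paper's actual prompt wording:

```python
# Hypothetical sketch of assembling a Big Five persona prompt for an LLM
# agent. Descriptions and template text are illustrative assumptions.

TRAIT_DESCRIPTIONS = {
    ("agreeableness", "high"): "cooperative, warm, and considerate",
    ("agreeableness", "low"): "skeptical, blunt, and competitive",
    ("extraversion", "high"): "outgoing, talkative, and assertive",
    ("extraversion", "low"): "reserved, quiet, and deliberate",
}

def build_persona_prompt(traits: dict[str, str]) -> str:
    """Render a system-prompt fragment from {trait: level} settings."""
    parts = [
        f"You are {TRAIT_DESCRIPTIONS[(trait, level)]}."
        for trait, level in sorted(traits.items())
    ]
    return " ".join(parts) + " Stay in character throughout the negotiation."

prompt = build_persona_prompt({"agreeableness": "high", "extraversion": "low"})
```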

Key Empirical Results

Experiment 1 (Interpersonal Negotiation):

  • Personality manipulations produce significant, predictable effects on negotiation outcomes and interaction dynamics.
    • Agreeableness and Extraversion substantially increase believability, goal achievement, and knowledge acquisition.
    • Neuroticism exerts a broadly negative influence, while Conscientiousness yields muted effects, primarily on believability.
  • Lexical analysis uncovers robust trait-linked differences in empathic expressions (strongest for Extraversion and Agreeableness), sentiment, emotional vocabulary, and connotative framing.
  • Causal modeling establishes significant average treatment effects: for instance, manipulation of Agreeableness yields measurable increases in positive empathy, moral authority framing, and collaborative negotiation language.
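To make the "average treatment effect" quantity concrete: the paper estimates it with CausalNex and EconML Causal Forests, but on synthetic data with randomized assignment it reduces to a difference in group means. The data-generating process below is invented for illustration only:

```python
# Illustrative average-treatment-effect (ATE) computation on synthetic
# outcomes. The +0.8 "believability boost" is an assumed effect size, not a
# number from the paper; randomized assignment makes difference-in-means an
# unbiased ATE estimator here.
import random

random.seed(0)

def simulate_outcome(high_agreeableness: bool) -> float:
    base = random.gauss(5.0, 1.0)  # assumed baseline believability score
    return base + (0.8 if high_agreeableness else 0.0)

treated = [simulate_outcome(True) for _ in range(2000)]
control = [simulate_outcome(False) for _ in range(2000)]
ate = sum(treated) / len(treated) - sum(control) / len(control)
```

Causal Forests generalize this by additionally estimating how the effect varies with covariates, which matters when assignment is not fully randomized.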

Experiment 2 (Human-AI Negotiation):

  • Manipulated AI traits (transparency, competence, adaptability) impact scenario-level metrics, improving transactivity and dialogue equity. However, these effects are moderate in comparison to personality-driven differences.
  • Personality traits of the human digital twin (HDT) dominate subjective experience: High Agreeableness and Extraversion are strongly associated with higher self-reported trust, satisfaction, and lower frustration, consistent with human negotiation literature.
  • Lexical markers of empathy, morality, affect, and connotation in AI-human dialogues are primarily shaped by human personality, with minor but detectable influences of AI adaptability.
  • The negative association between personality traits and negative affect (anger, apprehension) is robust across both scenarios; moreover, high Agreeableness increases the use of subtle connotative language, facilitating rapport building.

Theoretical and Practical Implications

The work has several important implications:

  • Validity of LLM-driven Social Simulation. The strong alignment between simulated negotiation outcomes and established psychological theory lends substantial support to the use of LLM agents for controlled, high-throughput studies of human-like social behavior. The finding that personality effects are robust across metrics and detectable via both conversation-level and textual markers is critical for both sociotechnical AI evaluation and behavioral model development.
  • Causal Attribution, Not Just Correlation. Employing causal inference methods allows for treatment effect quantification and supports design claims about which agent traits causally impact negotiation success. This strengthens methodological rigor and addresses a persistent gap in AI behavior evaluation.
  • Practical Agent Design:
    • The dominance of operator (HDT) personality over AI agent traits in dictating interaction success suggests that future agentic AI should prioritize real-time personality recognition and adaptive communication strategies over purely technical improvements in transparency or competence.
    • This result is particularly salient for military and mission-critical deployments, where poorly calibrated adaptation to individual differences can have operational consequences.
    • The developed evaluation pipeline may serve as a pre-deployment testbed for stress-testing agent reliability across varied operator profiles, informing both agent design and downstream team composition strategies.
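Such a pre-deployment stress test amounts to sweeping the agent over a grid of operator profiles and flagging the weakest cells. The sketch below uses a placeholder `run_negotiation` function (a stand-in for a full Sotopia episode plus Sotopia-Eval scoring) with scores whose direction merely mirrors the reported effects:

```python
# Sketch of a stress-test loop over operator personality profiles.
# run_negotiation is a hypothetical stand-in for a simulated negotiation
# episode; its scores are assumed values, not the paper's results.
from itertools import product

def run_negotiation(agreeableness: str, extraversion: str) -> dict:
    score = 5.0
    score += 1.0 if agreeableness == "high" else -1.0
    score += 0.5 if extraversion == "high" else -0.5
    return {"goal_achievement": score}

results = {
    (a, e): run_negotiation(a, e)["goal_achievement"]
    for a, e in product(["high", "low"], ["high", "low"])
}
worst_profile = min(results, key=results.get)  # weakest operator pairing
```

The weakest profile identifies the operator population for which the agent most needs adaptation before deployment.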

Limitations and Directions for Future Work

Notable limitations include the reliance on prompt-based personality simulation (potentially oversimplifying real-world psychological complexity) and the focus on only two negotiation archetypes. The paper also recognizes that non-verbal and paraverbal cues—central to human negotiation—are not yet captured in the textual LLM simulation.

Future extensions envisioned in the paper include:

  • Expanding negotiation contexts to differentiate collaborative from competitive team interactions, a distinction of critical importance in defense and security.
  • Systematic manipulation of additional AI agent traits such as warmth and theory of mind, to probe effects on trust calibration and rapport-building.
  • Human-in-the-loop validation, leveraging the simulation-to-reality pipeline to ensure that synthetic findings generalize to operational field deployments.
  • More granular analysis of trait interactions, potentially using multivariate causal effect estimation, to inform compound agentic architectures capable of multi-dimensional adaptation.
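In the simplest 2x2 case, a trait interaction of the kind proposed above is a contrast of cell means: the effect of one trait compared across levels of the other. The cell means below are assumed, purely to show the arithmetic:

```python
# Toy 2x2 factorial illustration of a trait-interaction contrast, a minimal
# stand-in for the multivariate causal estimation the authors propose.
# Cell means are hypothetical goal-achievement scores.
means = {
    ("high", "high"): 7.0,  # (agreeableness, extraversion)
    ("high", "low"): 6.0,
    ("low", "high"): 4.5,
    ("low", "low"): 4.0,
}
# Interaction = difference of Extraversion effects across Agreeableness levels.
interaction = (means[("high", "high")] - means[("high", "low")]) - (
    means[("low", "high")] - means[("low", "low")]
)
```

A nonzero contrast here would indicate that Extraversion's benefit depends on the Agreeableness level, which is exactly what multivariate causal effect estimation would quantify at scale.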

Speculation on Future Developments

As multi-agent LLM-based frameworks mature, empirical findings such as those in this paper will fundamentally inform the next generation of adaptive, personality-aware AI teammates—whether in defense, crisis management, or high-stakes civilian applications. Advances in real-time trait estimation, in tandem with multi-modal simulation (e.g., integrating vocal and kinesic behaviors), will further close the gap between simulated agentic behavior and human expectations for social interaction.

The approach detailed in this work provides a scalable, reproducible template not only for negotiation simulation but for the broader investigation of social cognition, trust, and cooperation dynamics in hybrid human-AI systems. As AI agents continue to assume more central roles in mission-critical and socially-sensitive domains, such evidence-based, causally robust design and evaluation workflows will become increasingly essential.

Authors (7)
  1. Myke C. Cohen (5 papers)
  2. Zhe Su (33 papers)
  3. Hsien-Te Kao (5 papers)
  4. Daniel Nguyen (4 papers)
  5. Spencer Lynch (3 papers)
  6. Maarten Sap (86 papers)
  7. Svitlana Volkova (24 papers)