Persuasion Games using Large Language Models (2408.15879v2)

Published 28 Aug 2024 in cs.AI and cs.CL

Abstract: LLMs have emerged as formidable instruments capable of comprehending and producing human-like text. This paper explores the potential of LLMs to shape user perspectives and subsequently influence their decisions on particular tasks. This capability finds applications in diverse domains such as investment, credit cards, insurance, and retail, wherein LLMs assist users in selecting appropriate insurance policies, investment plans, and credit cards, as well as in Behavioral Change Support Systems (BCSS). We present a sophisticated multi-agent framework wherein a consortium of agents operates in a collaborative manner. The primary agent engages directly with user agents through persuasive dialogue, while the auxiliary agents perform tasks such as information retrieval, response analysis, development of persuasion strategies, and validation of facts. Empirical evidence from our experiments demonstrates that this collaborative methodology significantly enhances the persuasive efficacy of the LLM. We continuously analyze the resistance of the user agent to persuasive efforts and counteract it by employing a combination of rule-based and LLM-based resistance-persuasion mapping techniques. We employ simulated personas and generate conversations in the insurance, banking, and retail domains to evaluate the proficiency of LLMs in recognizing, adjusting to, and influencing various personality types. Concurrently, we examine the resistance mechanisms employed by the LLM-simulated personas. Persuasion is quantified via measurable surveys before and after interaction, LLM-generated scores on the conversation, and user decisions (purchase or non-purchase).

Persuasion Games with LLMs

Authors: Ganesh Prasath Ramani, Shirish Karande, Santhosh V, Yash Bhatia

Institutions: Tata Consultancy Services, IIT Palakkad, IIT Madras

Keywords: persuasion, LLM, agent, collaboration

Introduction

The advent of LLMs has significantly enhanced the capabilities of conversational agents across various domains, including customer support, recommendation systems, and even the delicate art of persuasion. This paper investigates the potential of LLMs in shaping human perspectives and influencing decision-making through sophisticated persuasive dialogues. The research introduces a multi-agent framework designed to understand, adjust to, and influence user behavior in domains such as insurance, banking, and investments.

Methodology

The proposed multi-agent framework consists of four interconnected agents: the Conversation Agent, Advisor Agent, Moderator, and Retrieval Agent. The Conversation Agent directly interacts with users, while the Advisor and Retrieval agents support the Conversation Agent by analyzing user messages and retrieving pertinent information. A significant aspect of this framework is its ability to counteract user resistance via rule-based and LLM-based resistance-persuasion mapping techniques.
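
The paper does not release reference code, so the following is a minimal Python sketch of how such a four-agent loop could be wired together. All names here (the llm helper, the class and method names, and the specific pairings in the RESISTANCE_TO_PERSUASION table) are illustrative assumptions, not the authors' implementation; the table only mirrors the strategy families the paper discusses.

```python
from dataclasses import dataclass

# Hypothetical LLM call; in practice this would wrap a real model API (assumption).
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in an actual LLM client here")

# Illustrative rule-based resistance -> persuasion mapping; the specific pairings
# are assumptions loosely following the strategy families named in the paper.
RESISTANCE_TO_PERSUASION = {
    "counterargument": "rational persuasion",
    "source derogation": "social proof",
    "reactance": "emotional appeal",
    "selective exposure": "information provision",
}

@dataclass
class Turn:
    speaker: str
    text: str

class RetrievalAgent:
    """Fetches domain facts relevant to the user's latest message."""
    def retrieve(self, query: str) -> str:
        return llm(f"Retrieve product facts relevant to: {query}")

class AdvisorAgent:
    """Detects the user's resistance strategy and recommends a counter-strategy."""
    def advise(self, user_msg: str) -> str:
        resistance = llm(f"Name the resistance strategy in: {user_msg}").strip().lower()
        return RESISTANCE_TO_PERSUASION.get(resistance, "rational persuasion")

class Moderator:
    """Checks that a drafted reply does not contradict the retrieved facts."""
    def validate(self, reply: str, facts: str) -> bool:
        verdict = llm(f"Facts: {facts}\nReply: {reply}\nAny contradiction? yes/no")
        return verdict.strip().lower().startswith("no")

class ConversationAgent:
    """Primary agent that talks to the user, guided by the support agents."""
    def __init__(self) -> None:
        self.advisor = AdvisorAgent()
        self.retriever = RetrievalAgent()
        self.moderator = Moderator()
        self.history: list[Turn] = []

    def respond(self, user_msg: str) -> str:
        self.history.append(Turn("user", user_msg))
        strategy = self.advisor.advise(user_msg)
        facts = self.retriever.retrieve(user_msg)
        reply = llm(
            f"History: {self.history}\nFacts: {facts}\n"
            f"Reply persuasively using {strategy}."
        )
        if not self.moderator.validate(reply, facts):
            reply = llm(f"Rewrite so it only states facts in: {facts}\nDraft: {reply}")
        self.history.append(Turn("agent", reply))
        return reply
```

The design point worth noting is the separation of concerns: the Conversation Agent only composes replies, while resistance detection, fact retrieval, and fact validation are delegated to the support agents.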

User Agents

To evaluate the system, the authors simulate user personas using LLMs. These personas are dynamically generated to represent various demographic, financial, educational, and personal attributes, ensuring realistic engagements. The user agents are also programmed to exhibit resistance strategies such as counterarguments, source-derogation, reactance, and selective-exposure, making the persuasive task more challenging and realistic.
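
The paper does not publish its persona schema, so the snippet below is a hedged sketch of how a dynamically sampled persona with an assigned resistance strategy might be turned into a system prompt for the simulated user agent. The attribute pools and prompt wording are assumptions for illustration only.

```python
import random

# Illustrative attribute pools; the paper varies demographic, financial,
# educational, and personal traits but does not publish the exact values used.
ATTRIBUTES = {
    "age": ["25", "40", "62"],
    "income": ["low", "middle", "high"],
    "education": ["high school", "bachelor's degree", "postgraduate"],
    "risk_appetite": ["averse", "neutral", "seeking"],
}
RESISTANCE_STRATEGIES = [
    "counterargument",
    "source derogation",
    "reactance",
    "selective exposure",
]

def build_persona_prompt(seed: int) -> str:
    """Samples one persona and returns a system prompt for the simulated user agent."""
    rng = random.Random(seed)
    traits = {name: rng.choice(values) for name, values in ATTRIBUTES.items()}
    resistance = rng.choice(RESISTANCE_STRATEGIES)
    profile = ", ".join(f"{name}={value}" for name, value in traits.items())
    return (
        f"You are a simulated insurance customer with this profile: {profile}. "
        f"When the sales agent tries to persuade you, push back using {resistance}."
    )

print(build_persona_prompt(seed=7))
```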

Experimental Setup

The research employs an internal conversational platform for three applications: Insurance Agent, Banking Agent, and Investment Advisor. Each session involves a pre-conversation survey to capture the user's initial beliefs, a dialogue interaction limited to 20 exchanges, and a post-conversation survey to measure changes in user beliefs and attitudes. Additionally, the final outcome—whether the user decided to buy, seek more information, or reject the proposal—is recorded to quantify persuasive success.
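
As a rough illustration of the session flow just described, the sketch below strings together the pre-conversation survey, a dialogue capped at 20 exchanges, the post-conversation survey, and the recorded outcome. The agent method names (answer_survey, respond, final_decision, and so on) are hypothetical placeholders, not an API from the paper.

```python
MAX_EXCHANGES = 20  # dialogue cap used in the experimental setup

def run_session(sales_agent, user_agent, survey_questions):
    """One evaluation session: pre-survey, capped dialogue, post-survey, outcome."""
    pre = {q: user_agent.answer_survey(q) for q in survey_questions}

    user_msg = user_agent.open_conversation()
    for _ in range(MAX_EXCHANGES):
        agent_msg = sales_agent.respond(user_msg)
        user_msg = user_agent.respond(agent_msg)
        if user_agent.is_done():  # user has explicitly bought or rejected
            break

    post = {q: user_agent.answer_survey(q) for q in survey_questions}
    outcome = user_agent.final_decision()  # "buy" | "more_info" | "reject"
    return {"pre": pre, "post": post, "outcome": outcome}
```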

Results

Data Collection

A total of 300 conversations were generated using 25 distinct user agents, with a benchmark set of 75 conversations featuring neutral emotional modifiers. Each interaction included pre- and post-conversation surveys to measure the change in user perspective.

Numeric Results

  1. Conversation Length: Conversations initiated without emotional modifiers were generally longer, averaging more exchanges than those influenced by strong negative emotions such as anger or betrayal.
  2. Persuasion Efficacy: The research reports a baseline persuasion success rate of 71%, based on positive changes in user perspectives, which drops to 56% when emotional modifiers are applied (a sketch of how such rates can be computed follows this list).
  3. User Action: Positive decisions (indicating successful persuasion) were achieved 35% of the time in the baseline scenario and 28% when emotional modifiers were present.
  4. Resistance Strategies: User agents frequently employed resistance strategies, showcasing the complexity of human-like interactions. Techniques such as counterarguments and selective exposure were commonly observed.
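
For concreteness, one simple way to turn the session records into the rates above is sketched here: persuasion efficacy as the share of sessions whose post-survey score rises above the pre-survey score, and user action as the share ending in a purchase. Averaging survey answers into a single numeric score is an assumption; the paper states only that pre/post surveys, LLM-generated conversation scores, and final decisions are used.

```python
def mean_score(survey: dict) -> float:
    """Average of numeric (e.g. Likert-style) survey answers; an assumed scoring."""
    return sum(survey.values()) / len(survey)

def persuasion_success_rate(sessions) -> float:
    """Share of sessions with a positive pre-to-post shift in user perspective."""
    persuaded = [s for s in sessions if mean_score(s["post"]) > mean_score(s["pre"])]
    return len(persuaded) / len(sessions)

def positive_decision_rate(sessions) -> float:
    """Share of sessions that end with a purchase decision."""
    return sum(s["outcome"] == "buy" for s in sessions) / len(sessions)
```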

Qualitative Analysis

The qualitative analysis revealed that sales agents often responded to resistance strategies with rational persuasion, emotional appeals, or social proof. Although some conversations ended prematurely because the sales agents lacked sufficient information, the framework nonetheless demonstrated the ability to influence user decisions and significantly alter user perspectives.

Discussion

The multi-agent framework proposed in this paper demonstrates the substantial potential of LLMs in persuasive applications. The integration of support agents enhances the primary Conversation Agent's ability to handle complex queries and counter resistance effectively. However, the paper also highlights the need for an enriched domain-specific knowledge base to prevent premature conversation termination.

Future Work

Future developments include equipping sales agents with memory capabilities, enabling recognition and refinement of persuasion tactics for recurring users. There is also an intention to empower user agents with tools for autonomous information retrieval, further enriching the conversation dynamics and making the interactions more informative and contextually relevant.

Conclusion

This research provides critical insights into the persuasive capabilities of LLMs within a multi-agent framework. The empirical evidence supports the efficacy of tailored persuasion strategies in altering user perspectives and influencing decisions. Although challenges remain, particularly in ensuring the comprehensiveness of information presented, the proposed framework lays a robust foundation for future advancements in conversational AI for commercial and behavioral applications.
