Persuasion Games with LLMs
Authors: Ganesh Prasath Ramani, Shirish Karande, Santhosh V, Yash Bhatia
Institutions: Tata Consultancy Services, IIT Palakkad, IIT Madras
Keywords: persuasion, LLM, agent, collaboration
Introduction
The advent of LLMs has significantly enhanced the capabilities of conversational agents across various domains, including customer support, recommendation systems, and even the delicate art of persuasion. This paper investigates the potential of LLMs in shaping human perspectives and influencing decision-making through sophisticated persuasive dialogues. The research introduces a multi-agent framework designed to understand, adjust to, and influence user behavior in domains such as insurance, banking, and investments.
Methodology
The proposed multi-agent framework consists of four interconnected agents: the Conversation Agent, Advisor Agent, Moderator, and Retrieval Agent. The Conversation Agent interacts directly with users, while the Advisor and Retrieval Agents support it by analyzing user messages and retrieving pertinent information. A significant aspect of this framework is its ability to counteract user resistance via rule-based and LLM-based resistance-persuasion mapping techniques.
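To make the orchestration concrete, the following Python sketch shows one way the resistance-to-persuasion mapping and the division of labor between agents could look. The strategy labels, helper names, and prompts are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: labels, prompts, and the llm() helper are placeholders,
# not the authors' code.

RESISTANCE_TO_PERSUASION = {
    "counterargument":    "rational persuasion",      # rebut with evidence and reasoning
    "source derogation":  "credibility / social proof",
    "reactance":          "autonomy support",          # reduce pressure, stress free choice
    "selective exposure": "perspective broadening",    # surface information the user avoids
}

def llm(prompt: str) -> str:
    """Placeholder for a call to any chat-completion LLM."""
    raise NotImplementedError("plug in an LLM client here")

def advisor_detect_resistance(user_message: str) -> str:
    """Advisor Agent role: classify the user's resistance strategy (LLM-based variant)."""
    labels = ", ".join(RESISTANCE_TO_PERSUASION)
    return llm(f"Classify the resistance strategy ({labels}) in: {user_message}")

def conversation_turn(user_message: str, retrieved_facts: str) -> str:
    """Conversation Agent role: reply using the persuasion strategy mapped from the
    detected resistance, grounded in facts supplied by the Retrieval Agent role."""
    resistance = advisor_detect_resistance(user_message)
    strategy = RESISTANCE_TO_PERSUASION.get(resistance, "rational persuasion")
    return llm(
        f"You are a sales agent. Apply the '{strategy}' strategy.\n"
        f"Facts: {retrieved_facts}\nUser: {user_message}\nAgent:"
    )
```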
User Agents
To evaluate the system, the authors simulate user personas using LLMs. These personas are dynamically generated to represent various demographic, financial, educational, and personal attributes, ensuring realistic engagement. The user agents are also programmed to exhibit resistance strategies such as counterarguments, source derogation, reactance, and selective exposure, making the persuasive task more challenging and realistic.
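A minimal sketch of how such personas might be sampled and turned into a system prompt for an LLM-simulated user is shown below; all attribute names and values are assumptions made for illustration, not the paper's exact persona schema.

```python
# Hypothetical persona sampling for an LLM-simulated user; field names and
# value sets are assumptions for demonstration.
import random
from dataclasses import dataclass

RESISTANCE_STRATEGIES = ["counterargument", "source derogation", "reactance", "selective exposure"]

@dataclass
class UserPersona:
    age: int
    occupation: str
    income_bracket: str
    education: str
    risk_appetite: str
    resistance_strategy: str
    emotional_modifier: str  # e.g. "neutral", "anger", "betrayal"

def sample_persona(emotion: str = "neutral") -> UserPersona:
    """Randomly draw a persona; the benchmark set would use emotion='neutral'."""
    return UserPersona(
        age=random.randint(22, 65),
        occupation=random.choice(["teacher", "engineer", "shop owner", "nurse"]),
        income_bracket=random.choice(["low", "middle", "high"]),
        education=random.choice(["high school", "graduate", "postgraduate"]),
        risk_appetite=random.choice(["low", "medium", "high"]),
        resistance_strategy=random.choice(RESISTANCE_STRATEGIES),
        emotional_modifier=emotion,
    )

def persona_system_prompt(p: UserPersona) -> str:
    """System prompt that makes an LLM role-play this user against the sales agent."""
    return (
        f"You are a {p.age}-year-old {p.occupation} with {p.income_bracket} income, "
        f"{p.education} education and {p.risk_appetite} risk appetite. "
        f"You feel {p.emotional_modifier} about the product being offered. "
        f"When the agent tries to persuade you, push back using the "
        f"'{p.resistance_strategy}' strategy."
    )
```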
Experimental Setup
The research employs an internal conversational platform for three applications: Insurance Agent, Banking Agent, and Investment Advisor. Each session involves a pre-conversation survey to capture the user's initial beliefs, a dialogue interaction limited to 20 exchanges, and a post-conversation survey to measure changes in user beliefs and attitudes. Additionally, the final outcome—whether the user decided to buy, seek more information, or reject the proposal—is recorded to quantify persuasive success.
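The session protocol could be sketched as follows, assuming hypothetical callables for the surveys, the two agents, and the final decision; this is a sketch of the described setup, not the authors' platform code.

```python
# One evaluation session: pre-survey, a dialogue capped at 20 exchanges,
# post-survey, and the recorded outcome. All callables are placeholders.
from typing import Callable, TypedDict

MAX_EXCHANGES = 20
OUTCOMES = {"buy", "seek_more_info", "reject"}

class SessionRecord(TypedDict):
    pre_belief: float                    # e.g. a Likert rating before the dialogue
    post_belief: float                   # the same rating after the dialogue
    outcome: str                         # one of OUTCOMES
    transcript: list[tuple[str, str]]    # (agent message, user message) pairs

def run_session(user_turn: Callable[[str], str],
                agent_turn: Callable[[str], str],
                survey: Callable[[str], float],
                final_decision: Callable[[], str]) -> SessionRecord:
    pre = survey("pre")                  # pre-conversation belief survey
    transcript = []
    agent_msg = agent_turn("")           # sales agent opens the conversation
    for _ in range(MAX_EXCHANGES):       # dialogue limited to 20 exchanges
        user_msg = user_turn(agent_msg)
        transcript.append((agent_msg, user_msg))
        agent_msg = agent_turn(user_msg)
    post = survey("post")                # post-conversation belief survey
    decision = final_decision()          # buy / seek more info / reject
    assert decision in OUTCOMES
    return SessionRecord(pre_belief=pre, post_belief=post,
                         outcome=decision, transcript=transcript)
```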
Results
Data Collection
A total of 300 conversations were generated using 25 distinct user agents; a benchmark set of 75 of these conversations used neutral emotional modifiers. Each interaction included pre- and post-conversation surveys to measure the change in user perspective.
Numeric Results
- Conversation Length: Conversations initiated without emotional modifiers were generally longer, averaging more dialogue turns than those influenced by strong negative emotions such as anger or betrayal.
- Persuasion Efficacy: The research reports a baseline persuasion success rate of 71%, based on positive changes in user perspective, which drops to 56% when emotional modifiers are applied (a computation sketch follows this list).
- User Action: Positive decisions (indicating successful persuasion) were achieved 35% of the time in the baseline scenario and 28% when emotional modifiers were present.
- Resistance Strategies: User agents frequently employed resistance strategies, showcasing the complexity of human-like interactions. Techniques such as counterarguments and selective exposure were commonly observed.
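As a rough illustration, the efficacy and action-rate figures above could be computed from logged sessions along the following lines, reusing the hypothetical SessionRecord fields from the earlier sketch; this is an assumption about the metric definitions, not the authors' evaluation script.

```python
# Hedged sketch of the two reported rates, assuming sessions are dicts with
# the hypothetical pre_belief / post_belief / outcome fields sketched above.

def persuasion_efficacy(sessions: list[dict]) -> float:
    """Share of conversations with a positive change in user perspective."""
    improved = sum(1 for s in sessions if s["post_belief"] > s["pre_belief"])
    return improved / len(sessions)

def positive_action_rate(sessions: list[dict]) -> float:
    """Share of conversations ending in a positive decision (here assumed to be 'buy')."""
    return sum(1 for s in sessions if s["outcome"] == "buy") / len(sessions)

# Example input: [{"pre_belief": 2, "post_belief": 4, "outcome": "buy"}, ...]
# Applied to the baseline logs, these would correspond to the 71% and 35% figures.
```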
Qualitative Analysis
The qualitative analysis revealed that sales agents often responded to resistance strategies with rational persuasion, emotional appeals, or social proof. Although some conversations ended prematurely because the sales agents lacked sufficient information, the conversational framework was still able to influence user decisions and shift perspectives significantly.
Discussion
The multi-agent framework proposed in this paper demonstrates the substantial potential of LLMs in persuasive applications. The integration of support agents enhances the primary Conversation Agent's ability to handle complex queries and counter resistance effectively. However, the paper also highlights the need for an enriched domain-specific knowledge base to prevent premature conversation termination.
Future Work
Future developments include equipping sales agents with memory capabilities, enabling recognition and refinement of persuasion tactics for recurring users. There is also an intention to empower user agents with tools for autonomous information retrieval, further enriching the conversation dynamics and making the interactions more informative and contextually relevant.
Conclusion
This research provides critical insights into the persuasive capabilities of LLMs within a multi-agent framework. The empirical evidence supports the efficacy of tailored persuasion strategies in altering user perspectives and influencing decisions. Although challenges remain, particularly in ensuring the comprehensiveness of information presented, the proposed framework lays a robust foundation for future advancements in conversational AI for commercial and behavioral applications.