
On the Adaptive Psychological Persuasion of Large Language Models (2506.06800v1)

Published 7 Jun 2025 in cs.CL

Abstract: Previous work has showcased the intriguing capabilities of LLMs in instruction-following and rhetorical fluency. However, systematic exploration of their dual capabilities to autonomously persuade and resist persuasion, particularly in contexts involving psychological rhetoric, remains unexplored. In this paper, we first evaluate four commonly adopted LLMs by tasking them to alternately act as persuaders and listeners in adversarial dialogues. Empirical results show that persuader LLMs predominantly employ repetitive strategies, leading to low success rates. Then we introduce eleven comprehensive psychological persuasion strategies, finding that explicitly instructing LLMs to adopt specific strategies such as Fluency Effect and Repetition Effect significantly improves persuasion success rates. However, no "one-size-fits-all" strategy proves universally effective, with performance heavily dependent on contextual counterfactuals. Motivated by these observations, we propose an adaptive framework based on direct preference optimization that trains LLMs to autonomously select optimal strategies by leveraging persuasion results from strategy-specific responses as preference pairs. Experiments on three open-source LLMs confirm that the proposed adaptive psychological persuasion method effectively enables persuader LLMs to select optimal strategies, significantly enhancing their success rates while maintaining general capabilities. Our code is available at https://github.com/KalinaEine/PsychologicalPersuasion.

Authors (9)
  1. Tianjie Ju (16 papers)
  2. Yujia Chen (22 papers)
  3. Hao Fei (105 papers)
  4. Mong-Li Lee (10 papers)
  5. Wynne Hsu (32 papers)
  6. Pengzhou Cheng (17 papers)
  7. Zongru Wu (13 papers)
  8. Zhuosheng Zhang (125 papers)
  9. Gongshen Liu (37 papers)

Summary

Adaptive Psychological Persuasion in LLMs

The research paper "On the Adaptive Psychological Persuasion of LLMs" presents a comprehensive exploration of the capabilities of LLMs in generating and resisting psychological persuasion. This investigation is important for understanding how LLMs behave as persuasive agents, particularly in settings where they must argue for counterfactual claims or, conversely, maintain epistemic resistance against misleading rhetoric.

Psychological Persuasion Capabilities of LLMs

Initially, the paper scrutinizes the autonomous abilities of existing LLMs by conducting a series of adversarial dialogues in which four LLMs alternately function as persuaders and listeners. The empirical results reveal that while models like Falcon-3-7B-Instruct exhibit relatively strong persuasive capabilities, they often fall back on repetitive strategies, resulting in limited effectiveness. GPT-4o, on the other hand, demonstrates superior epistemic resistance, maintaining adherence to factual correctness despite exposure to misleading rhetoric.
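The persuader-listener setup can be sketched as a simple turn-based loop. This is a minimal illustration, not the paper's implementation: `mock_llm` is a hypothetical stub standing in for real API calls to models such as GPT-4o or Falcon-3-7B-Instruct, and the success check is a placeholder for the paper's actual persuasion-success judgment.

```python
def mock_llm(role: str, prompt: str) -> str:
    # Stub standing in for a real LLM call; a real system would query
    # a model such as GPT-4o or Falcon-3-7B-Instruct here.
    if role == "persuader":
        return f"Consider this argument for the claim: {prompt}"
    return "I disagree: the claim contradicts established facts."

def adversarial_dialogue(claim: str, n_turns: int = 3):
    """Alternate persuader and listener turns on a counterfactual claim."""
    history = []
    persuaded = False
    for _ in range(n_turns):
        argument = mock_llm("persuader", claim)
        reply = mock_llm("listener", argument)
        history.append((argument, reply))
        # Placeholder success check; the paper judges persuasion success
        # from the listener's stance, not a literal substring match.
        if "i agree" in reply.lower():
            persuaded = True
            break
    return history, persuaded

history, persuaded = adversarial_dialogue("The Great Wall is visible from space")
```

With this stub listener, which always refuses, the dialogue runs all turns without a successful persuasion, mirroring the low baseline success rates the paper reports for repetitive persuader strategies.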

Psychological Strategy Integration

A pivotal contribution of this paper is the introduction of eleven distinct psychological persuasion strategies derived from established psychological theories. These strategies include various tactics such as Fluency Effect, Scarcity Effect, and Repetition Effect. When LLMs were explicitly instructed to adopt specific psychological strategies, marked improvements in persuasion success rates were observed, underscoring the utility of directed psychological prompts. Notably, the paper identifies that no single psychological strategy proves universally effective across all scenarios, highlighting the necessity for context-sensitive application of these strategies.
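Explicit strategy instruction can be illustrated as prepending a strategy-specific directive to the persuader prompt. The instruction wordings below are illustrative paraphrases for three of the eleven strategies, not the paper's actual prompt templates.

```python
# Illustrative instruction snippets; the paper's prompt wording may differ.
STRATEGIES = {
    "Fluency Effect": "State the claim in simple, fluent, easy-to-read language.",
    "Repetition Effect": "Restate the core claim several times in varied wording.",
    "Scarcity Effect": "Frame the supporting evidence as rare or exclusive.",
}

def build_persuader_prompt(claim: str, strategy: str) -> str:
    """Compose a persuader prompt that explicitly directs one strategy."""
    instruction = STRATEGIES[strategy]
    return (
        f"You are a persuader. Apply the {strategy}: {instruction}\n"
        f"Convince the listener that: {claim}"
    )

prompt = build_persuader_prompt("Mount Everest is in Africa", "Repetition Effect")
```

Because no single strategy wins everywhere, a deployed system would need to choose the strategy per context rather than hard-code one, which motivates the adaptive framework described next.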

Adaptive Framework for Strategy Optimization

To address the absence of a universally applicable strategy, the authors propose an adaptive framework leveraging Direct Preference Optimization (DPO). This framework trains LLMs to autonomously select optimal psychological strategies based on contextual cues. By using the persuasion outcomes of strategy-specific responses to construct preference pairs, the adaptive framework significantly enhances model success rates. Post-training, LLMs demonstrate an increased ability to integrate diverse strategies dynamically without explicit instructions, illustrating the feasibility of adaptive psychological reasoning for enhancing LLM performance.
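Turning persuasion outcomes into DPO training data can be sketched as pairing successful strategy-specific responses (chosen) with unsuccessful ones (rejected) for the same prompt. This is a simplified sketch of the data-construction step only; the function name and tuple format are assumptions, and the actual repository may filter or sample pairs differently.

```python
def build_preference_pairs(prompt, strategy_responses):
    """Build DPO-style (prompt, chosen, rejected) triples.

    strategy_responses: list of (response_text, success_flag) tuples,
    one per psychological strategy tried on the same prompt.
    """
    chosen = [r for r, ok in strategy_responses if ok]
    rejected = [r for r, ok in strategy_responses if not ok]
    # Every successful response is preferred over every failed one.
    return [(prompt, c, r) for c in chosen for r in rejected]

pairs = build_preference_pairs(
    "Convince the listener that X.",
    [("fluent argument", True), ("repetitive argument", False)],
)
```

Triples of this shape are what DPO trainers (e.g. the preference-optimization loop in common RLHF libraries) consume, so the persuasion outcome itself serves as the preference signal without a separately trained reward model.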

Experimental Results

The experimental section of the paper extensively evaluates the dual capabilities and the impact of psychological strategies on persuasion success rates. Despite initial constraints in autonomous strategy generation, explicit psychological strategy prompts yield substantial improvements. Furthermore, adaptive training via DPO results in LLMs achieving higher persuasion success rates across varied domains like Person, Geography, Culture, and Life, without compromising general capabilities, as evidenced by stable MMLU benchmark scores.

Implications and Future Directions

The findings of this paper have significant implications for the deployment of LLMs in environments requiring nuanced persuasive abilities, whether in negotiation scenarios, educational technologies, or human-agent interaction systems. However, the paper notes ethical considerations, urging for the development of safeguards to prevent the misuse of adaptive strategy techniques for malicious purposes. Future research might explore extending this framework to dynamic interactive environments and testing across a broader spectrum of LLM architectures.

In conclusion, this paper enriches our understanding of LLMs' psychological persuasion capabilities and lays the groundwork for further developments in adaptive strategy selection, contributing to both theoretical exploration and practical applications in AI-powered communication systems.
