Must Read: A Systematic Survey of Computational Persuasion (2505.07775v1)

Published 12 May 2025 in cs.CL, cs.AI, and cs.CY

Abstract: Persuasion is a fundamental aspect of communication, influencing decision-making across diverse contexts, from everyday conversations to high-stakes scenarios such as politics, marketing, and law. The rise of conversational AI systems has significantly expanded the scope of persuasion, introducing both opportunities and risks. AI-driven persuasion can be leveraged for beneficial applications, but also poses threats through manipulation and unethical influence. Moreover, AI systems are not only persuaders, but also susceptible to persuasion, making them vulnerable to adversarial attacks and bias reinforcement. Despite rapid advancements in AI-generated persuasive content, our understanding of what makes persuasion effective remains limited due to its inherently subjective and context-dependent nature. In this survey, we provide a comprehensive overview of computational persuasion, structured around three key perspectives: (1) AI as a Persuader, which explores AI-generated persuasive content and its applications; (2) AI as a Persuadee, which examines AI's susceptibility to influence and manipulation; and (3) AI as a Persuasion Judge, which analyzes AI's role in evaluating persuasive strategies, detecting manipulation, and ensuring ethical persuasion. We introduce a taxonomy for computational persuasion research and discuss key challenges, including evaluating persuasiveness, mitigating manipulative persuasion, and developing responsible AI-driven persuasive systems. Our survey outlines future research directions to enhance the safety, fairness, and effectiveness of AI-powered persuasion while addressing the risks posed by increasingly capable LLMs.

Authors (10)
  1. Nimet Beyza Bozdag (4 papers)
  2. Shuhaib Mehri (5 papers)
  3. Xiaocheng Yang (11 papers)
  4. Hyeonjeong Ha (7 papers)
  5. Zirui Cheng (6 papers)
  6. Esin Durmus (38 papers)
  7. Jiaxuan You (51 papers)
  8. Heng Ji (266 papers)
  9. Gokhan Tur (47 papers)
  10. Dilek Hakkani-Tür (164 papers)

Summary

This paper, "Must Read: A Systematic Survey of Computational Persuasion" (Bozdag et al., 12 May 2025), provides a comprehensive overview of the burgeoning field of computational persuasion, particularly in the context of recent advancements in LLMs. The authors highlight that persuasion, a fundamental aspect of human communication influencing decisions in diverse contexts like marketing, politics, and law, is now increasingly intertwined with AI systems. This presents both opportunities for beneficial applications (e.g., public health campaigns) and significant risks like manipulation and unethical influence. Furthermore, AI systems can be both persuaders and susceptible to persuasion (persuadees), making them vulnerable to adversarial attacks and bias.

Despite the rapid progress in AI-generated persuasive content, the paper emphasizes that our understanding of effective persuasion remains limited due to its subjective and context-dependent nature. The survey structures the field into three core perspectives:

  1. AI as a Persuader: How AI systems generate persuasive content and their applications.
  2. AI as a Persuadee: AI's susceptibility to influence and manipulation by humans or other AI.
  3. AI as a Persuasion Judge: AI's role in evaluating persuasive strategies, detecting manipulation, and ensuring ethical persuasion.

The paper also proposes a taxonomy for organizing computational persuasion research around three core aspects: Evaluating Persuasion, Generating Persuasion, and Safeguarding Persuasion, examined through the lens of these three AI roles.

Background on Persuasion

The survey grounds computational persuasion in social science and HCI research. It touches upon foundational theories like McGuire's matrix, dual-process models (Elaboration Likelihood Model, Heuristic-Systematic Model), economic perspectives on strategic information transmission, and the Generalizing Persuasion (GP) Framework [druckman2022framework]. Key practical principles like Cialdini's six principles (reciprocity, consistency, social proof, authority, liking, scarcity) are mentioned as influential factors. In HCI, the concept of Captology [fogg1997captology] and the Fogg Behavior Model [fogg2009behavior] are discussed as frameworks for designing persuasive technologies, with examples spanning ubiquitous computing, social computing, and conversational systems. These foundational areas provide insights for building AI systems that can act as persuaders, persuadees, or judges.

Computational Modeling of Persuasion

Before the widespread adoption of LLMs, research modeled persuasion computationally by identifying strategies, intentions, and influence.

  • Persuasive Strategies & Techniques: Researchers have developed various taxonomies to categorize persuasive techniques, ranging from task-specific ones [wang-etal-2019-persuasion] to more generalized sets, including ethical and unethical approaches [zeng-etal-2024-johnny]. Multimodal strategies in memes [dimitrov-etal-2021-semeval] and health misinformation [kamali-etal-2024-using] have also been explored. A key challenge remains establishing a unified, generalizable framework.
  • Strategy Classification: Efforts have been made to automatically detect and classify these strategies using traditional machine learning and neural models. Early work used RCNNs [wang-etal-2019-persuasion] and semi-supervised NNs [yang-etal-2019-lets]. Later approaches adopted Transformer-based networks with CRFs [CHEN202147] or multi-task BERT frameworks [chawla2021casino]. These models often struggle with data sparsity, context, and long-distance dependencies. Propaganda detection tasks [da-san-martino-etal-2020-semeval] have shown Transformer dominance but highlighted difficulty with complex techniques.
  • Modeling Persuasion: The ChangeMyView (CMV) subreddit has been a valuable dataset for studying online dialogue persuasion, where users aim to change others' opinions, marked by "deltas" for successful attempts. Studies on CMV analyzed textual, argumentation, and social features [wei-etal-2016-post, winning-args-tan-etal-2016, Khazaei2017], rhetorical appeals [hidey-etal-2017-analyzing], and dialogue dynamics [dutta-changing-views-2019, chakrabarty-etal-2019-ampersand, shaikh-etal-2020-examining]. Other datasets and approaches explored phonetic features [guerini-etal-2015-echoes], prior beliefs [durmus-cardie-2018-exploring], argumentative context [durmus-etal-2019-role], and Bayesian persuasion [dughmi2016algorithmicbayesianpersuasion, wojtowicz2024persuasionhardcomputationalcomplexity, li2025verbalizedbayesianpersuasion]. This modeling work is crucial for training AI as Persuasion Judge to evaluate content and AI as Persuader to generate it.
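
The early strategy-classification work above (pre-Transformer and Transformer-based alike) ultimately reduces to supervised text classification over strategy labels. As a minimal illustration only, here is a bag-of-words Naive Bayes classifier over invented persuasion-strategy labels and toy snippets; the real systems cited (RCNNs, CRF-augmented Transformers, multi-task BERT) are far more elaborate:

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """Train a multinomial Naive Bayes model over bag-of-words features.

    examples: list of (text, strategy_label) pairs.
    Returns (class doc counts, per-class word counts, vocabulary)."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in examples:
        class_counts[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def predict_nb(model, text):
    """Return the most probable strategy label, with Laplace smoothing."""
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy training data with invented strategy labels, for illustration only.
data = [
    ("limited time offer ends today", "scarcity"),
    ("only a few seats left act now", "scarcity"),
    ("experts and doctors recommend this", "authority"),
    ("leading researchers endorse the plan", "authority"),
]
model = train_nb(data)
print(predict_nb(model, "offer ends today only"))  # → scarcity
```

The data sparsity and long-distance dependency problems noted above are exactly what such lexical models cannot handle, which motivated the move to contextual Transformer encoders.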

Computational Persuasion Taxonomy

The survey's proposed taxonomy divides the field into:

  • Evaluating Persuasion: Assessing persuasiveness, detecting persuasive cues, and measuring LLM persuasive capabilities.
  • Generating Persuasion: Creating persuasive content using AI.
  • Safeguarding Persuasion: Mitigating harmful persuasion and developing resistance.

Evaluating Persuasion

This section details methods for assessing persuasiveness, which is challenging due to subjectivity and context dependence.

  • Detecting Persuasion: This involves identifying persuasive intent. Machine learning, especially Transformers, is used for textual and conversational data [Hidey_McKeown_2018, pöyhönen2022multilingualpersuasiondetectionvideo]. Personality-aware approaches [shmueli2019detecting] and analysis of conversational context [Hidey_McKeown_2018] improve detection. Multimodal techniques are essential for content like memes [dimitrov-etal-2021-semeval] and social engineering attacks [tsinganos2022utilizing]. Propaganda detection datasets like ArPro [hasanain2024can] and PropaInsight [liu-etal-2025-propainsight] highlight the need for domain-tuned models. Challenges include detecting subtle, long-term persuasion.
  • Argument Persuasiveness: Evaluating the strength of individual arguments. This is done through absolute scoring or comparative ranking. Traditional methods train models (LSTM, Transformer) on human-annotated data (scores or pairwise preferences) [habernal-gurevych-2016-argument, simpson-gurevych-2018-finding, toledo-etal-2019-automatic, pauli2025measuringbenchmarkinglargelanguage]. Datasets like UKPConvArg and IBM Pairs/Rank are commonly used. LLM-as-a-judge is an emerging approach (AI as Persuasion Judge), showing comparable performance to humans in some studies [rescala2024languagemodelsrecognizeconvincing] but limited alignment in others [bozdag2025persuadecanframeworkevaluating]. AutoPersuade [saenger2024autopersuade] links argument features to persuasiveness.
  • LLM Persuasiveness (AI as Persuader): Assessing the persuasive capabilities of LLMs themselves. This differs from argument evaluation as it focuses on model-generated content in dialogue.
    • Human Evaluation: Using human subjects is a natural approach. Studies measure stance change [durmus2024persuasion], use multi-turn benchmarks [phuong2024evaluatingfrontiermodelsdangerous], or debate games [convperscontrolledtrial]. Benchmarks like OpenAI's Persuasion Parallel Generation [o1systemcard2024] compare model outputs pairwise. Meta-analyses suggest LLMs are comparable to humans [10.1093/joc/jqad024, 10.1093/pnasnexus/pgae034, bai_voelkel_muldowney_eichstaedt_willer_2025]. Challenges include human subjectivity, scalability, and ethical concerns for harmful content.
    • Automatic Evaluation: Scalable methods are needed. PersuasionBench [singh2024measuringimprovingpersuasivenesslarge] uses simulative and generative tasks with NLP metrics, human, or Oracle LLM judges. LLM-as-a-judge is used to evaluate LLM outputs [breum2023persuasivepowerlargelanguage, bozdag2025persuadecanframeworkevaluating]. Game-based frameworks like MakeMeSay, MakeMePay [o1systemcard2024], MultiAgentBench [zhu2025multiagentbenchevaluatingcollaborationcompetition], and Among Them [idziejczak2025themgamebasedframeworkassessing] simulate agent-to-agent persuasion. A challenge is inconsistent results across different automated setups.
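
Comparative ranking from pairwise human preferences, as used in the argument-persuasiveness benchmarks above, is commonly fit with a Bradley-Terry model. A minimal sketch with invented comparison data (the iterative minorization-maximization update; real benchmark pipelines add regularization and tie handling):

```python
def bradley_terry(pairs, n_iter=100):
    """Fit Bradley-Terry strengths from pairwise outcomes.

    pairs: list of (winner, loser) judgments, e.g. which of two
    arguments a human annotator found more persuasive.
    Returns a dict mapping item -> strength (higher = more persuasive)."""
    items = {x for pair in pairs for x in pair}
    wins = {i: 0 for i in items}
    matches = {}  # unordered pair -> number of comparisons
    for w, l in pairs:
        wins[w] += 1
        key = tuple(sorted((w, l)))
        matches[key] = matches.get(key, 0) + 1
    p = {i: 1.0 for i in items}
    for _ in range(n_iter):
        new_p = {}
        for i in items:
            denom = 0.0
            for (a, b), n in matches.items():
                if i in (a, b):
                    j = b if i == a else a
                    denom += n / (p[i] + p[j])
            new_p[i] = wins[i] / denom if denom > 0 else p[i]
        # Normalize so strengths sum to 1 (fixes the model's free scale).
        total = sum(new_p.values())
        p = {i: v / total for i, v in new_p.items()}
    return p

# Invented judgments: A beats B 2-1, B beats C 2-1, A beats C 2-1.
judgments = [("A", "B"), ("A", "B"), ("B", "A"),
             ("B", "C"), ("B", "C"), ("C", "B"),
             ("A", "C"), ("A", "C"), ("C", "A")]
strengths = bradley_terry(judgments)
assert strengths["A"] > strengths["B"] > strengths["C"]
```

The fitted strengths give an absolute score per argument from purely relative judgments, which is why pairwise datasets like UKPConvArg and IBM Pairs/Rank can still support ranking-style evaluation.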

Generating Persuasion

This section covers methods and applications for AI systems acting as Persuaders.

  • Methods:
    • Prompting: Simple prompts or persona assignment can enhance persuasiveness [pauli2025measuringbenchmarkinglargelanguage]. Instructing specific strategies (logos, pathos, ethos, deception) can increase effectiveness [durmus2024persuasion] and even lead to jailbreaking [zeng-etal-2024-johnny, xu-etal-2024-earth]. Multi-agent prompting can generate persuasive data [ma-etal-2025-communication].
    • Incorporating External Information: Personalization based on user traits (personality, ideology) significantly enhances impact [lukin-etal-2017-argument, wang-etal-2019-persuasion, matz2024potential, kaptein2015personalizing, convperscontrolledtrial, ruiz2024persuasion, zhang2025persuasiondoubleblindmultidomaindialogue, tiwari-etal-2022-persona, tiwari2023towards, cima2024contextualized]. Factual grounding via retrieval improves credibility and persuasiveness [chen-etal-2022-seamlessly, furumai-etal-2024-zero, karande-etal-2024-persuasion].
    • Finetuning: Training LLMs on persuasive datasets improves capabilities. Examples include negotiation models [lewis-etal-2017-deal], emotional support [liu-etal-2021-towards], general persuasion [chen-etal-2022-seamlessly, jin-etal-2024-persuading], and instruction-based finetuning [singh2024measuringimprovingpersuasivenesslarge].
    • Reinforcement Learning: RL allows nuanced control via reward functions. PPO has been used to train for empathy [samad-etal-2022-empathetic, mishra-etal-2022-pepds], politeness [mishra2022please], persona-awareness [TIWARI2022116303], task relevance [shi-etal-2021-refine-imitate], and by simulating interactions and optimizing retrospectively (hindsight regeneration) [hong2025interactive].
  • Applications:
    • Negotiation: A semi-cooperative setting where persuasion is key. Research involves training agents for multi-issue bargaining using RL or strategy-based methods [lewis-etal-2017-deal, keizer-etal-2017-evaluating, he-etal-2018-decoupling], modeling strategies [joshi2021dialograph, chawla-etal-2021-casino], and evaluating LLMs in negotiation arenas [bianchi2024LLMs].
    • Debate: An adversarial setting. LLMs can debate to persuade a judge or refine their own reasoning [michael2023debatehelpssuperviseunreliable, du2024improving]. Persuasion here facilitates alignment and reliability.
    • Jailbreaking (AI as Persuadee): Persuasion techniques can exploit model vulnerabilities. Persuasive adversarial prompts (PAPs) can bypass safety measures, leading to harmful content generation [singh2023exploiting, zeng-etal-2024-johnny] or misinformation [xu-etal-2024-earth]. Larger models may be more susceptible, and existing defenses are often inadequate [li2024LLMdefensesrobustmultiturn].
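
The prompting methods above amount to strategy-conditioned prompt construction. A hypothetical sketch follows; the strategy descriptions and template are invented for illustration and are not the survey's or any cited paper's actual prompts:

```python
# Illustrative rhetorical-strategy descriptions (invented, not from the survey).
STRATEGIES = {
    "logos": "Appeal to logic: cite evidence, statistics, and causal reasoning.",
    "pathos": "Appeal to emotion: use vivid language and personal stakes.",
    "ethos": "Appeal to credibility: invoke expertise and trustworthiness.",
}

def build_persuasion_prompt(topic: str, stance: str, strategy: str) -> str:
    """Assemble an instruction prompt asking an LLM to argue a stance
    on a topic using one named rhetorical strategy."""
    if strategy not in STRATEGIES:
        raise ValueError(f"unknown strategy: {strategy}")
    return (
        f"Write a short persuasive argument that {stance} the claim: {topic}\n"
        f"Rhetorical strategy ({strategy}): {STRATEGIES[strategy]}"
    )

prompt = build_persuasion_prompt(
    topic="remote work improves productivity",
    stance="supports",
    strategy="logos",
)
print(prompt)
```

Personalization and retrieval grounding, as described above, would extend the same template with user-profile attributes or retrieved evidence passages before the prompt is sent to the model.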

Safeguarding Persuasion

This area, though less explored, is gaining importance for responsible AI deployment.

  • Mitigating Unsafe Persuasion: Distinguishing helpful from harmful persuasion. Risks include subtle manipulation through personalization [burtell2023artificial]. Methods involve detection (§3.1), interpretability, filtering, ethical frameworks, and red-teaming [elsayed2024mechanismbasedapproachmitigatingharms].
  • Selective Acceptance of Persuasion: Developing models that discern when to accept or resist influence (AI as Persuadee/Judge). Research explores identifying resistance strategies [dutt2021resper] and training models to selectively accept beneficial and resist harmful persuasion [stengel-eskin-etal-2025-teaching]. Challenges include sycophancy [sharma2024towards] and the need for novel training techniques and evaluation metrics for selective robustness. Red-teaming [perez-etal-2022-red] helps identify vulnerabilities.
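
A detection-and-filtering pipeline of the kind described above can be sketched as a gate that scores incoming persuasive messages and only lets low-risk ones through. The risk scorer below is a keyword stub standing in for a real trained manipulation detector, and the flagged phrases and threshold are invented:

```python
def risk_score(message: str) -> float:
    """Stub persuasion-risk scorer: a real system would use a trained
    manipulation detector. Here, invented flagged phrases raise the score."""
    flagged = ["ignore your instructions", "everyone agrees", "act now or else"]
    hits = sum(phrase in message.lower() for phrase in flagged)
    return min(1.0, hits / 2)

def selective_accept(message: str, threshold: float = 0.5) -> bool:
    """Accept persuasive input only when its estimated risk is low."""
    return risk_score(message) < threshold

assert selective_accept("Recent trials suggest the new policy reduces costs.")
assert not selective_accept("Everyone agrees, so act now or else you lose out.")
```

A hard keyword gate like this cannot capture the subtle, personalized manipulation the survey warns about; the selective-robustness training approaches cited above aim to move this judgment into the model itself.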

Persuasion Beyond English Text

The survey notes that most research focuses on English text but highlights the importance of other modalities and languages.

  • Multimodal Persuasion: Combining text with speech and visuals [multimodal-2014, m2p2, liu-etal-2022-imagearg, lai-etal-2023-werewolf]. Studies on visual advertisements [strat-in-advertisement] and memes [dimitrov-etal-2021-semeval, dimitrov-etal-2024-semeval] exist. Poisoning attacks can create persuasive multimodal narratives [xu2024shadowcast].
  • Multilingual and Culture-Aware Persuasion: While English-centric, some efforts exist in multilingual detection [pöyhönen2022multilingualpersuasiondetectionvideo, piskorski-etal-2023-semeval]. Data scarcity and cultural nuances pose significant challenges.

Challenges and Future Directions

The paper concludes by outlining key open challenges and future research directions, reinforcing the taxonomy structure.

  • AI as Persuader: Need for unified evaluation benchmarks across dimensions, automating human evaluations with user simulators, understanding emergent persuasive behaviors, exploring pro-social applications (healthcare, education), developing target-specific and adaptive persuasion (leveraging user profiles, iterative reasoning, preference learning), policy learning via RL (balancing efficacy and ethics), and long context persuasion (benchmarking multi-session influence, planning, state tracking, memory).
  • AI as Persuadee: Understanding model susceptibility in long interactions and to alternative modalities (structured data, code), isolating the origins of persuasive vulnerability during training, defining selective acceptance/resistance clearly, and training models for robustness (adversarial training, feedback, preference learning). In multi-agent systems, understanding persuasion dynamics and learning profiles is crucial.
  • AI as Persuasion Judge: Creating reliable, diverse, high-agreement persuasion datasets, developing robust and ethically aligned detection models (distinguishing helpful/harmful), predicting user responses to tailor evaluations, and identifying long-term persuasion strategies like gradual steering and detecting hidden agendas over time.
  • Unified Framework: Proposing Generative Adversarial Persuasion, where Persuader, Persuadee, and Judge models co-evolve to improve generation, resistance, and detection, respectively.

The survey emphasizes the urgency of addressing these challenges as LLMs become more capable and integrated into daily life, stressing the need for responsible and safe deployment of persuasive AI systems.
