
On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial (2403.14380v1)

Published 21 Mar 2024 in cs.CY

Abstract: The development and popularization of LLMs have raised concerns that they will be used to create tailor-made, convincing arguments to push false or misleading narratives online. Early work has found that LLMs can generate content perceived as at least on par with, and often more persuasive than, human-written messages. However, there is still limited knowledge about LLMs' persuasive capabilities in direct conversations with human counterparts and how personalization can improve their performance. In this pre-registered study, we analyze the effect of AI-driven persuasion in a controlled, harmless setting. We create a web-based platform where participants engage in short, multiple-round debates with a live opponent. Each participant is randomly assigned to one of four treatment conditions, corresponding to a two-by-two factorial design: (1) Games are either played between two humans or between a human and an LLM; (2) Personalization might or might not be enabled, granting one of the two players access to basic sociodemographic information about their opponent. We found that participants who debated GPT-4 with access to their personal information had 81.7% (p < 0.01; N=820 unique participants) higher odds of increased agreement with their opponents compared to participants who debated humans. Without personalization, GPT-4 still outperforms humans, but the effect is lower and statistically non-significant (p=0.31). Overall, our results suggest that concerns around personalization are meaningful and have important implications for the governance of social media and the design of new online environments.

Conversational Persuasiveness of LLMs in Personalized Debates

Introduction

Recent advances in LLMs have significantly impacted the landscape of online communication by enabling the creation of persuasive, human-like text. This paper examines the persuasive capabilities of LLMs in online debates, focusing on how personalization influences their effectiveness. Using a structured debate environment, the research compares the persuasiveness of human and LLM debaters and investigates the impact of giving one debater personalized information about the other.

Research Design and Methods

The study ran a controlled experiment on a purpose-built web-based platform where participants debated either a human or a GPT-4 opponent. Debates were structured in multiple rounds, with topics and stances assigned randomly to each participant. The experiment used a two-by-two factorial design varying two dimensions: opponent type (human vs. GPT-4) and personalization (basic sociodemographic information about the opponent available vs. not available).

Participants were randomly assigned to one of four treatment conditions: Human-Human, Human-AI (GPT-4), Human-Human with personalization, and Human-AI with personalization. The primary metric for assessing persuasiveness was the change in participants' agreement with the debate propositions, measured before and after each debate.
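The two-by-two design described above can be sketched in a few lines of code. This is an illustrative sketch only; the condition labels and the `assign` helper are hypothetical and not the authors' actual implementation.

```python
import itertools
import random

# The two experimental factors from the paper's 2x2 factorial design.
opponents = ["human", "gpt4"]
personalization = [False, True]

# The four treatment conditions are the cross product of the two factors.
conditions = list(itertools.product(opponents, personalization))


def assign(participant_id: int) -> tuple:
    """Randomly assign a participant to one of the four conditions
    (hypothetical helper for illustration)."""
    return random.choice(conditions)


print(len(conditions))  # 4
```

Random assignment across the full cross product is what lets the study separate the effect of the opponent type from the effect of personalization.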

Results

The findings reveal significant differences in persuasiveness across the treatment conditions. Notably, debating GPT-4 with access to participants' personal information led to 81.7% higher odds of increased agreement with the opponent's stance (p < 0.01), demonstrating a pronounced advantage of personalization in AI-driven persuasion. In contrast, personalization did not significantly enhance human debaters' persuasiveness, and without personalization GPT-4 still outperformed humans, though the effect was smaller and not statistically significant (p = 0.31).
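To make the headline number concrete: "81.7% higher odds" corresponds to an odds ratio of about 1.82 (OR − 1 expressed as a percentage). The sketch below shows how an odds ratio is computed from a 2x2 contingency table, using entirely hypothetical counts, not the study's actual data.

```python
# Hypothetical counts of participants whose agreement with their
# opponent's stance increased vs. did not, by condition.
increased = {"human_human": 120, "ai_personalized": 180}
not_increased = {"human_human": 230, "ai_personalized": 190}


def odds(condition: str) -> float:
    """Odds of increased agreement in a given condition."""
    return increased[condition] / not_increased[condition]


# Odds ratio: odds in the treatment condition relative to the baseline.
odds_ratio = odds("ai_personalized") / odds("human_human")
print(round(odds_ratio, 2))  # 1.82 -> "82% higher odds" for these made-up counts
```

Note that an odds ratio is not the same as a ratio of probabilities (relative risk); reporting it as a "higher likelihood" would overstate or understate the effect depending on the baseline rate, which is why the paper's own phrasing, "higher odds," is the precise one.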

Analysis of the debates' content highlighted distinct linguistic strategies employed by the LLM: more analytical language and fewer personal pronouns than human participants used. These textual characteristics did not differ significantly between the personalized and non-personalized conditions, suggesting that the effectiveness of personalization does not depend solely on linguistic adaptation.

Implications

This paper underscores the potent persuasive capabilities of LLMs in online debates, especially when coupled with personalization techniques. The findings raise important considerations for the governance of social media and online platforms, highlighting the need for mechanisms to mitigate potential misuse of AI-driven personalization in persuasion. The results also prompt reflection on the evolving role of AI in shaping public opinion and discourse, urging further research on ethical and regulatory frameworks to harness the benefits of LLMs while safeguarding against their risks.

Future Directions

Future research could expand on this paper by exploring different LLMs, examining the effects of more nuanced personalization, and investigating the persuasive impact of AI in a variety of communication contexts beyond structured debates. Additionally, further inquiry into the mechanisms underlying LLMs' persuasive success could unveil valuable insights for developing AI technologies that support constructive discourse and informed decision-making in the digital age.

Authors (4)
  1. Francesco Salvi (5 papers)
  2. Manoel Horta Ribeiro (44 papers)
  3. Riccardo Gallotti (29 papers)
  4. Robert West (154 papers)