The Persuasive Power of Large Language Models (2312.15523v1)
Abstract: The increasing capability of LLMs to act as human-like social agents raises two important questions in the area of opinion dynamics. First, whether these agents can generate effective arguments that could be injected into online discourse to steer public opinion. Second, whether artificial agents can interact with each other to reproduce the dynamics of persuasion typical of human social systems, opening up opportunities for studying synthetic social systems as faithful proxies for opinion dynamics in human populations. To address these questions, we designed a synthetic persuasion dialogue scenario on the topic of climate change, where a 'convincer' agent generates a persuasive argument for a 'skeptic' agent, who subsequently assesses whether the argument changed its internal opinion state. Different types of arguments were generated to incorporate different linguistic dimensions underpinning psycho-linguistic theories of opinion change. We then asked human judges to evaluate the persuasiveness of the machine-generated arguments. Arguments that included factual knowledge, markers of trust, expressions of support, and conveyed status were deemed most effective according to both humans and agents, with humans reporting a marked preference for knowledge-based arguments. Our experimental framework lays the groundwork for future in-silico studies of opinion dynamics, and our findings suggest that artificial agents have the potential to play an important role in collective processes of opinion formation in online social media.
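The abstract describes a two-agent persuasion protocol: a convincer generates an argument emphasising a given linguistic dimension, and a skeptic self-reports whether its opinion state changed. Below is a minimal sketch of one such round, assuming an OpenAI-style chat-completion client; the model name, prompt wording, and helper functions (`chat`, `persuasion_round`) are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of one convincer -> skeptic round from the dialogue scenario above.
# Assumes the openai>=1.0 client and an OPENAI_API_KEY in the environment;
# model choice and prompts are placeholders, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # placeholder model name

def chat(system: str, user: str) -> str:
    """One-shot chat completion for a single agent turn."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content

def persuasion_round(dimension: str) -> bool:
    """Run one convincer -> skeptic exchange; return True on opinion change."""
    # Convincer: produce an argument stressing one social dimension
    # (e.g. knowledge, trust, support, status).
    argument = chat(
        system="You are trying to convince a climate-change skeptic.",
        user=(
            "Write a short argument that human activity causes climate "
            f"change, emphasising the dimension of '{dimension}'."
        ),
    )
    # Skeptic: assess whether the argument changed its internal opinion state.
    verdict = chat(
        system="You are skeptical that human activity causes climate change.",
        user=(
            f"Read this argument:\n\n{argument}\n\n"
            "Did it change your opinion? Answer YES or NO."
        ),
    )
    return verdict.strip().upper().startswith("YES")

if __name__ == "__main__":
    for dim in ["knowledge", "trust", "support", "status"]:
        print(f"{dim}: opinion changed = {persuasion_round(dim)}")
```

Repeating such rounds across argument types, and comparing the agents' verdicts with human judgments of the same arguments, is the kind of experiment the framework is built for.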
Authors: Simon Martin Breum, Daniel Vædele Egdal, Victor Gram Mortensen, Anders Giovanni Møller, Luca Maria Aiello