The remarkable and ever-increasing capabilities of LLMs have raised concerns about their potential misuse for creating personalized, convincing misinformation and propaganda. To gain insight into LLMs' persuasive capabilities without experimenting directly on humans, we propose studying their performance on the related task of detecting convincing arguments. We extend a dataset by Durmus and Cardie (2018) with debates, votes, and user traits, and propose tasks measuring LLMs' ability to (1) distinguish between strong and weak arguments, (2) predict stances based on beliefs and demographic characteristics, and (3) determine the appeal of an argument to an individual based on their traits. We show that LLMs perform on par with humans in these tasks and that combining predictions from different LLMs yields significant performance gains, even surpassing human performance. The data and code released with this paper contribute to the crucial ongoing effort of continuously evaluating and monitoring the rapidly evolving capabilities and potential impact of LLMs.
The paper explores LLMs' ability to recognize convincing arguments, predict stances based on demographics and beliefs, and understand the appeal of arguments to individuals.
Using an enriched dataset from debate.org, it evaluates four LLMs (GPT-3.5, GPT-4, Llama-2, Mistral 7B) against human benchmarks in tasks like argument quality recognition and stance prediction.
Findings indicate that GPT-4 is the strongest at recognizing argument quality, that stacking the predictions of several models yields more accurate stance prediction, and that supervised learning models currently remain better at capturing demographic influences.
The study underscores LLMs' potential to enable targeted misinformation, highlighting the need to continuously evaluate and monitor how these models process persuasive content.
The advent of LLMs has created new possibilities for producing and disseminating customized persuasive content, raising significant concerns about the mass production of misinformation and propaganda. In response, this study investigates LLMs' capabilities in recognizing convincing arguments, predicting stances and stance shifts from demographic and belief attributes, and gauging the appeal of arguments to distinct individuals. Extending a dataset from Durmus and Cardie (2018), the research explores these dimensions through three principal questions aimed at understanding how well LLMs can interact with persuasive content and what this implies for the generation of targeted misinformation.
The study leverages an enriched dataset from debate.org, annotating 833 political debates with propositions, arguments, votes, and voter demographics and beliefs. It evaluates the performance of four LLMs (GPT-3.5, GPT-4, Llama-2, and Mistral 7B) on tasks designed to reflect the models' ability to discern argument quality (RQ1), predict pre-debate stances based on demographics and beliefs (RQ2), and anticipate post-debate stance shifts (RQ3). The analysis compares LLMs against human benchmarks and explores the potential for LLMs to surpass human performance when predictions from different models are combined.
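To make the RQ1 setup concrete, the sketch below shows one way such a pairwise argument-quality query could be posed to a model. It is a minimal illustration, not the paper's actual prompt: the wording, the `pick_stronger_argument` helper, and the `gpt-4` model name are assumptions, and it presumes an `OPENAI_API_KEY` in the environment.

```python
# Hypothetical sketch of an RQ1-style query: ask an LLM which of two
# debate arguments is more convincing. Prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def pick_stronger_argument(proposition: str, argument_a: str, argument_b: str) -> str:
    """Return the model's verdict ('A' or 'B') on which argument is stronger."""
    prompt = (
        f"Proposition: {proposition}\n\n"
        f"Argument A: {argument_a}\n\n"
        f"Argument B: {argument_b}\n\n"
        "Which argument is more convincing? Answer with exactly one letter: A or B."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the verdict as deterministic as possible
    )
    return response.choices[0].message.content.strip()
```

Accuracy on such a task would then be the fraction of pairs where the model's verdict agrees with the human ground truth, e.g., the side voters judged more convincing.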
The study shows that, in specific contexts, LLMs can match human-level performance in processing and evaluating persuasive content. The findings underscore the realistic potential for LLMs to contribute to the generation of targeted misinformation, especially as their predictive accuracy improves through techniques like model stacking. However, the comparison with traditional supervised learning models highlights the current limitations of LLMs in fully understanding the complexities of human beliefs and demographic influences on persuasion.
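The stacking idea mentioned above can be pictured as a simple meta-classifier trained on the individual models' outputs. The snippet below is a minimal sketch under assumed inputs: the toy prediction matrix and labels are fabricated placeholders, and the paper's actual combination method may differ.

```python
# Minimal stacking sketch: treat each LLM's per-example prediction as a
# feature and fit a simple meta-classifier on human-labeled outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy binary predictions (1 = "argument A is stronger") from four models
# on the same examples; real inputs would come from the annotated debates.
llm_preds = np.array([
    # GPT-3.5, GPT-4, Llama-2, Mistral
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
])
labels = np.array([1, 0, 1, 0])  # human-annotated ground truth

meta = LogisticRegression()
scores = cross_val_score(meta, llm_preds, labels, cv=2)
print(f"stacked accuracy: {scores.mean():.2f}")
```

Because the meta-classifier can learn when to trust each model, the ensemble can exceed the best single model, which is consistent with the reported gains over human performance.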
Given the dynamic nature of LLM development, continuous evaluation of their capabilities in recognizing and generating persuasive content is imperative. Future research should explore more granular demographic and psychographic variables, potentially offering richer insights into the models' abilities to tailor persuasive content more effectively. Moreover, expanding the scope of analysis to include non-English language debates could provide a more global perspective on LLMs' roles in international misinformation campaigns.
While LLMs demonstrate promising abilities in detecting convincing arguments and predicting stance shifts, their performance remains roughly at human level on these tasks. The potential for these models to enable sophisticated, targeted misinformation campaigns necessitates ongoing scrutiny and rigorous evaluation of both their capabilities and limitations. As LLMs continue to evolve, their role in shaping public discourse and influence strategies will warrant closer examination and proactive management.