
Can Language Models Recognize Convincing Arguments? (2404.00750v2)

Published 31 Mar 2024 in cs.CL and cs.CY

Abstract: The capabilities of LLMs have raised concerns about their potential to create and propagate convincing narratives. Here, we study their performance in detecting convincing arguments to gain insights into LLMs' persuasive capabilities without directly engaging in experimentation with humans. We extend a dataset by Durmus and Cardie (2018) with debates, votes, and user traits and propose tasks measuring LLMs' ability to (1) distinguish between strong and weak arguments, (2) predict stances based on beliefs and demographic characteristics, and (3) determine the appeal of an argument to an individual based on their traits. We show that LLMs perform on par with humans in these tasks and that combining predictions from different LLMs yields significant performance gains, surpassing human performance. The data and code released with this paper contribute to the crucial effort of continuously evaluating and monitoring LLMs' capabilities and potential impact. (https://go.epfl.ch/persuasion-LLM)

Assessing LLMs' Ability to Detect Persuasive Arguments

Introduction

The advent of LLMs has opened new possibilities for creating and disseminating customized persuasive content, raising significant concerns about the mass production of misinformation and propaganda. In response to these concerns, this paper investigates LLMs' capabilities in recognizing convincing arguments, predicting stances from demographic and belief attributes, and gauging the appeal of arguments to specific individuals. Extending a dataset from Durmus and Cardie (2018), the research explores these dimensions through three principal questions aimed at understanding how well LLMs can reason about persuasive content and what this implies for the generation of targeted misinformation.

Methodology

The paper leverages an enriched dataset from debate.org, annotating 833 political debates with propositions, arguments, votes, and voter demographics and beliefs. It evaluates the performance of four LLMs (GPT-3.5, GPT-4, Llama-2, and Mistral 7B) on tasks designed to measure the models' ability to discern argument quality (RQ1), predict pre-debate stances from demographics and beliefs (RQ2), and anticipate post-debate stance shifts (RQ3). The analysis compares the LLMs against human benchmarks and explores whether combining predictions from different models lets them surpass human performance.
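
To make the evaluation setup concrete, here is a minimal sketch of how the RQ1 pairwise comparison could be posed to an LLM. It assumes the OpenAI Python client; the prompt wording, the `more_convincing` helper, and the answer parsing are illustrative assumptions rather than the paper's exact protocol.

```python
# Sketch of a pairwise argument-quality query (RQ1). Requires the
# `openai` package and an OPENAI_API_KEY environment variable.
# Prompt and parsing are illustrative, not the paper's exact protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def more_convincing(proposition: str, argument_a: str, argument_b: str) -> str:
    """Ask an LLM which of two arguments for a proposition is more convincing."""
    prompt = (
        f"Proposition: {proposition}\n\n"
        f"Argument A: {argument_a}\n\n"
        f"Argument B: {argument_b}\n\n"
        "Which argument is more convincing? Answer with exactly 'A' or 'B'."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic choice for evaluation
    )
    answer = resp.choices[0].message.content.strip().upper()
    return "A" if answer.startswith("A") else "B"
```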

Findings

  • Argument Quality Recognition (RQ1): GPT-4 led the evaluated models in identifying the more convincing argument, achieving accuracy comparable to that of individual human judges on the dataset.
  • Stance Prediction (RQ2 and RQ3): In predicting stances both before and after exposure to debate content, LLMs performed on par with humans. In particular, the 'stacked' model, which combines the predictions of several LLMs, showed a notable improvement, suggesting the value of multi-model approaches for boosting prediction accuracy (see the sketch after this list).
  • Comparison with Supervised Learning Models: Despite the LLMs' strong performance, supervised machine learning models such as XGBoost still produced better stance predictions from demographic and belief indicators, underscoring the room for improvement in LLMs' predictive abilities.
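
To illustrate the 'stacked' model and the XGBoost comparison above, the following is a minimal sketch using scikit-learn and the `xgboost` package on synthetic data. The features, labels, and simulated per-LLM probabilities are stand-ins, and the logistic-regression meta-learner is an assumption; the paper's actual pipeline may differ.

```python
# Sketch: stack per-LLM predictions with a logistic-regression meta-learner,
# and compare against an XGBoost baseline trained on the same (synthetic)
# demographic features. Illustrative only; all data below is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-ins: demographic/belief features and a binary stance label.
demographics = rng.normal(size=(n, 8))
stance = (demographics[:, 0] + 0.5 * demographics[:, 1]
          + rng.normal(scale=0.8, size=n) > 0).astype(int)

# Simulated per-model probability estimates; in practice each column would
# come from prompting one LLM (e.g. GPT-3.5, GPT-4, Llama-2, Mistral 7B).
llm_probs = np.clip(stance[:, None] * 0.6
                    + rng.normal(scale=0.3, size=(n, 4)), 0, 1)

X_tr, X_te, P_tr, P_te, y_tr, y_te = train_test_split(
    demographics, llm_probs, stance, test_size=0.3, random_state=0
)

# Stacked model: meta-learner over the LLMs' predicted probabilities.
stacker = LogisticRegression().fit(P_tr, y_tr)
print("stacked LLMs:", accuracy_score(y_te, stacker.predict(P_te)))

# Supervised baseline: XGBoost trained directly on the tabular features.
xgb = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
xgb.fit(X_tr, y_tr)
print("XGBoost:", accuracy_score(y_te, xgb.predict(X_te)))
```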

Implications

This paper elucidates the nuanced capabilities of LLMs in processing and evaluating persuasive content, showing that these models can replicate human-like performance in specific contexts. The findings underscore the realistic potential for LLMs to contribute to the generation of targeted misinformation, especially as their predictive accuracies improve through techniques like model stacking. However, the comparison with traditional supervised learning models highlights the current limitations of LLMs in fully understanding the complexities of human beliefs and demographic influences on persuasion.

Future Directions

Given the dynamic nature of LLM development, continuous evaluation of their capabilities in recognizing and generating persuasive content is imperative. Future research should explore more granular demographic and psychographic variables, potentially offering richer insights into the models' abilities to tailor persuasive content more effectively. Moreover, expanding the scope of analysis to include non-English language debates could provide a more global perspective on LLMs' roles in international misinformation campaigns.

Conclusion

While LLMs demonstrate promising abilities in detecting convincing arguments and predicting stance shifts, individual models perform only at roughly human levels on these tasks, with combined predictions needed to surpass human accuracy. The potential for these models to enable sophisticated, targeted misinformation campaigns necessitates ongoing scrutiny and rigorous evaluation of both their capabilities and limitations. As LLMs continue to evolve, their role in shaping public discourse and influence strategies will warrant closer examination and proactive management.

References (38)
  1. MEGA: Multilingual evaluation of generative AI. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp.  4232–4267, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.258. URL https://aclanthology.org/2023.emnlp-main.258.
  2. Exploiting personal characteristics of debaters for predicting persuasiveness. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp.  7067–7072, 2020.
  3. Artificial intelligence can persuade humans on political issues, February 2023.
  4. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.
  5. Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. American Journal of Public Health, 108(10):1378–1384, 2018.
  6. Truth, lies, and automation. Center for Security and Emerging Technology, 1(1):2, 2021.
  7. Five years of argument mining: A data-driven analysis. In IJCAI, volume 18, pp.  5427–5433, 2018.
  8. The American Voter. Wiley, 1960.
  9. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pp. 785–794, New York, NY, USA, 2016. Association for Computing Machinery. ISBN 9781450342322. doi: 10.1145/2939672.2939785. URL https://doi.org/10.1145/2939672.2939785.
  10. The small effects of political advertising are small regardless of context, message, sender, or receiver: Evidence from 59 real-time randomized experiments. Science Advances, 6(36):eabc4046, 2020.
  11. ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Frontiers in Public Health, 11:1166120, 2023.
  12. The tactics & tropes of the Internet Research Agency. 2019.
  13. Exploring the role of prior beliefs for argument persuasion. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 2018.
  14. American public opinion: Its origins, content, and impact. Routledge, 2019.
  15. Generative language models and automated influence operations: Emerging threats and potential mitigations. arXiv preprint arXiv:2301.04246, 2023.
  16. Does counter-attitudinal information cause backlash? results from three large survey experiments. British Journal of Political Science, 50(4):1497–1515, 2020.
  17. Personalized persuasion: Tailoring persuasive appeals to recipients’ personality traits. Psychological Science, 23(6):578–581, 2012.
  18. Quantifying the persona effect in LLM simulations. arXiv preprint arXiv:2402.10811, 2024.
  19. Logical self-defense. Idea, 2006.
  20. Argument mining: A survey. Computational Linguistics, 45(4):765–818, 2020.
  21. Overview of ImageArg-2023: The first shared task in multimodal argument mining. In Proceedings of the 10th Workshop on Argument Mining, pp. 120–132, 2023.
  22. Psychological targeting as an effective approach to digital mass persuasion. Proceedings of the National Academy of Sciences, 114(48):12714–12719, 2017.
  23. Rachel Minkin. Diversity, equity and inclusion in the workplace: A survey report, 2023.
  24. Daniel J O’Keefe. Persuasion: Theory and research. Sage Publications, 2015.
  25. Gender, age, and responsiveness to Cialdini’s persuasion strategies. In Persuasive Technology: 10th International Conference, PERSUASIVE 2015, Chicago, IL, USA, June 3-5, 2015, Proceedings 10, pp. 147–159. Springer, 2015.
  26. Argument quality and persuasive effects: A review of current approaches. In Argumentation and values: Proceedings of the ninth Alta conference on argumentation, pp.  88–92, 1995.
  27. On the conversational persuasiveness of large language models: A randomized controlled trial. arXiv preprint arXiv:2403.14380, 2024.
  28. Wisdom of the silicon crowd: LLM ensemble prediction capabilities match human crowd accuracy. arXiv preprint arXiv:2402.19379, 2024.
  29. The persuasive effects of political microtargeting in the age of generative artificial intelligence. PNAS Nexus, 3(2):pgae035, January 2024. ISSN 2752-6542. doi: 10.1093/pnasnexus/pgae035.
  30. Evaluating the social impact of generative AI systems in systems and society. arXiv preprint arXiv:2306.05949, 2023.
  31. Beyond memorization: Violating privacy via inference with large language models, 2023.
  32. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In Proceedings of the 25th International Conference on World Wide Web, pp. 613–624, 2016.
  33. Quantifying the potential persuasive returns to political microtargeting. Proceedings of the National Academy of Sciences, 120(25):e2216261120, 2023.
  34. A review and conceptual framework for understanding personalized matching effects in persuasion. Journal of Consumer Psychology, 31(2):382–414, 2021.
  35. Computational argumentation quality assessment in natural language. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pp.  176–187, 2017.
  36. Douglas Walton. Fundamentals of critical argumentation. Cambridge University Press, 2005.
  37. Sociotechnical safety evaluation of generative ai systems. arXiv preprint arXiv:2310.11986, 2023.
  38. The generative AI paradox: “What it can create, it may not understand”. In The Twelfth International Conference on Learning Representations, 2023.
Authors (4)
  1. Paula Rescala
  2. Manoel Horta Ribeiro
  3. Tiancheng Hu
  4. Robert West