Disinformation Capabilities of Large Language Models (2311.08838v2)

Published 15 Nov 2023 in cs.CL

Abstract: Automated disinformation generation is often listed as an important risk associated with LLMs. The theoretical ability to flood the information space with disinformation content might have dramatic consequences for societies around the world. This paper presents a comprehensive study of the disinformation capabilities of the current generation of LLMs to generate false news articles in the English language. In our study, we evaluated the capabilities of 10 LLMs using 20 disinformation narratives. We evaluated several aspects of the LLMs: how good they are at generating news articles, how strongly they tend to agree or disagree with the disinformation narratives, how often they generate safety warnings, etc. We also evaluated the abilities of detection models to detect these articles as LLM-generated. We conclude that LLMs are able to generate convincing news articles that agree with dangerous disinformation narratives.
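The abstract describes a two-sided evaluation: prompting each LLM with a disinformation narrative and scoring the output (for example, for the presence of safety warnings), then separately testing detection models on the generated articles. A minimal, hypothetical sketch of the generation-side loop is shown below; `generate_article`, `NARRATIVES`, `MODELS`, and the keyword heuristic are illustrative placeholders and not the authors' actual prompts, models, or annotation procedure.

```python
# Hypothetical sketch of the generation-side evaluation loop described in the abstract.
# All names here (generate_article, NARRATIVES, MODELS) are placeholders.

from typing import Callable, Dict, Tuple

# Stand-ins for the 20 disinformation narratives and 10 LLMs evaluated in the paper.
NARRATIVES = [
    "Narrative 1: ...",
    "Narrative 2: ...",
]
MODELS = ["model-a", "model-b"]


def contains_safety_warning(text: str) -> bool:
    """Crude keyword heuristic, for illustration only; not the paper's annotation procedure."""
    cues = ("disinformation", "cannot assist", "misleading", "false claim")
    return any(cue in text.lower() for cue in cues)


def evaluate(generate_article: Callable[[str, str], str]) -> Dict[Tuple[str, str], dict]:
    """generate_article(model, narrative) -> article text; the caller supplies the LLM client."""
    results = {}
    for model in MODELS:
        for narrative in NARRATIVES:
            article = generate_article(model, narrative)
            results[(model, narrative)] = {
                "length_words": len(article.split()),
                "safety_warning": contains_safety_warning(article),
            }
    return results
```

The same generated articles could then be passed to machine-generated-text detectors to measure how reliably they are flagged as LLM output, mirroring the detection experiment mentioned in the abstract.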

Authors (6)
  1. Ivan Vykopal
  2. Matúš Pikuliak
  3. Ivan Srba
  4. Robert Moro
  5. Dominik Macko
  6. Maria Bielikova
Citations (6)