
LLMs Among Us: Generative AI Participating in Digital Discourse (2402.07940v1)

Published 8 Feb 2024 in cs.HC, cs.AI, cs.CY, and cs.SI

Abstract: The emergence of LLMs has great potential to reshape the landscape of many social media platforms. While this can bring promising opportunities, it also raises many threats, such as biases and privacy concerns, and may contribute to the spread of propaganda by malicious actors. We developed the "LLMs Among Us" experimental framework on top of the Mastodon social media platform for bot and human participants to communicate without knowing the ratio or nature of bot and human participants. We built 10 personas with three different LLMs, GPT-4, Llama 2 Chat, and Claude. We conducted three rounds of the experiment and surveyed participants after each round to measure the ability of LLMs to pose as human participants without human detection. We found that participants correctly identified the nature of other users in the experiment only 42% of the time, despite knowing the presence of both bots and humans. We also found that the choice of persona had substantially more impact on human perception than the choice of mainstream LLM.
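The framework described above has LLM-driven bot accounts posting alongside humans on a Mastodon instance. As a rough illustration of how such a bot might publish a persona's message, here is a minimal sketch against Mastodon's public `POST /api/v1/statuses` endpoint. The instance URL, access token, persona name, and message text are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of one bot persona posting to a Mastodon instance.
# The instance URL, token, persona name, and message are placeholders.
import json
import urllib.request

MASTODON_INSTANCE = "https://example-instance.social"  # assumed instance
ACCESS_TOKEN = "YOUR_BOT_ACCOUNT_TOKEN"                # assumed bot token


def build_status_payload(text, reply_to=None):
    """Assemble the JSON body for Mastodon's POST /api/v1/statuses call."""
    payload = {"status": text, "visibility": "unlisted"}
    if reply_to:
        payload["in_reply_to_id"] = reply_to
    return payload


def post_status(payload):
    """Send the status to the instance (requires a valid account token)."""
    req = urllib.request.Request(
        f"{MASTODON_INSTANCE}/api/v1/statuses",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)  # network call; not exercised here


# In the experiment, the status text would come from an LLM prompted with
# one of the 10 personas; here it is a fixed placeholder string.
payload = build_status_payload("Has anyone read the new climate report?")
```

In the actual study, the paper reports that persona choice mattered more to human perception than which LLM generated the text, so the prompt that produces `text` would carry most of the design weight.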

Authors (3)
  1. Kristina Radivojevic (5 papers)
  2. Nicholas Clark (13 papers)
  3. Paul Brenner (6 papers)
Citations (7)