You are a Bot! -- Studying the Development of Bot Accusations on Twitter (2302.00546v3)
Abstract: The characterization and detection of bots, with their presumed ability to manipulate society on social media platforms, have been the subject of many research endeavors over the last decade. In the absence of ground truth data (i.e., accounts that are labeled as bots by experts or that self-declare their automated nature), researchers interested in the characterization and detection of bots may want to tap into the wisdom of the crowd. But how many people need to accuse another user of being a bot before we can assume that the account is most likely automated? And more importantly, are bot accusations on social media a valid signal for the detection of bots at all? Our research presents the first large-scale study of bot accusations on Twitter and shows how the term bot has become an instrument of dehumanization in social media conversations, since it is predominantly used to deny the humanness of conversation partners. Consequently, bot accusations on social media should not be naively used as a signal to train or test bot detection models.
- Dennis Assenmacher
- Leon Fröhling
- Claudia Wagner