
Who Falls for Online Political Manipulation? (1808.03281v1)

Published 9 Aug 2018 in cs.SI, cs.HC, and physics.soc-ph

Abstract: Social media, once hailed as a vehicle for democratization and the promotion of positive social change across the globe, are under attack for becoming a tool of political manipulation and the spread of disinformation. A case in point is the alleged use of trolls by Russia to spread malicious content in Western elections. This paper examines the Russian interference campaign in the 2016 US presidential election on Twitter. Our aim is twofold: first, we test whether predicting users who spread trolls' content is feasible in order to gain insight into how to contain their influence in the future; second, we identify features that are most predictive of users who either intentionally or unintentionally play a vital role in spreading this malicious content. We collected a dataset with over 43 million election-related posts shared on Twitter between September 16 and November 9, 2016, by about 5.7 million users. This dataset includes accounts associated with the Russian trolls identified by the US Congress. The proposed models identify users who spread the trolls' content very accurately (average AUC score of 96%, using 10-fold cross-validation). We show that political ideology, bot likelihood scores, and some activity-related account metadata are the most predictive features of whether a user spreads trolls' content or not.

Analysis of Online Political Manipulation During the 2016 US Presidential Election

This paper presents a focused examination of the Russian interference in the 2016 US Presidential Election, analyzing how Russian trolls leveraged Twitter to disseminate political misinformation. The research concentrates on identifying individuals susceptible to sharing content from Russian-affiliated troll accounts and understanding the features that make certain users more likely to do so.

The authors collected a substantial dataset of over 43 million election-related tweets posted by about 5.7 million distinct Twitter users between September 16 and November 9, 2016. Within this dataset, they identified 221 accounts linked to Russian trolls, whose activity reveals how the campaign's content propagated. A pivotal result of the paper is that machine learning models can predict which users will spread troll content with a high degree of accuracy, achieving an average AUC score of 96% with Gradient Boosting under 10-fold cross-validation.
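
For concreteness, the following minimal Python sketch reproduces the shape of that evaluation: a gradient-boosting classifier scored by ROC AUC over 10 stratified folds. The feature matrix X and binary labels y (1 if a user retweeted troll content) are assumed to be prepared elsewhere, and scikit-learn's defaults stand in for hyperparameters the summary does not report.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

def spreader_auc(X: np.ndarray, y: np.ndarray) -> float:
    """Mean ROC AUC of a gradient-boosting spreader classifier over 10 folds."""
    model = GradientBoostingClassifier()  # library-default hyperparameters (assumption)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()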

The analysis shows that political ideology is the most significant predictor of whether a user will engage in spreading troll content. Specifically, users with a conservative stance were more likely to rebroadcast content from Russian trolls. Additionally, bot likelihood scores and various activity-related account metadata, such as the number of followers and the volume of tweets, also played crucial roles in predicting spreader behavior.

The research employs multiple machine learning classifiers to assess predictive factors, drawing on features grouped into metadata, LIWC (Linguistic Inquiry and Word Count), engagement, and activity categories, along with bot scores and political ideology. The variable-importance analysis confirms the dominance of political ideology, with follower counts and status counts also contributing significantly.
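
As an illustration of how such a variable-importance ranking can be produced, the sketch below fits the same kind of classifier and ranks features by impurity-based importance. The column names mentioned in the comment are hypothetical placeholders for the paper's feature groups, not the authors' exact feature set.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def ranked_importances(features: pd.DataFrame, labels: pd.Series) -> pd.Series:
    """Fit on the full feature table and rank features by importance score."""
    model = GradientBoostingClassifier().fit(features, labels)
    return (pd.Series(model.feature_importances_, index=features.columns)
              .sort_values(ascending=False))

# Hypothetical usage: with columns such as "ideology", "bot_score",
# "followers_count", and "statuses_count", the first two and the follower
# and status counts would be expected near the top if the paper's finding holds.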

By adopting comprehensive preprocessing and feature extraction methods, including bot detection through Botometer and political alignment via label propagation, the paper advances our understanding of how online political manipulation can be detected and potentially mitigated.
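
A hedged sketch of the label-propagation step follows, under the assumption that ideology labels spread over a user-interaction graph from a small seed set (for example, users who share links to outlets of known political alignment). The networkx graph and the majority-vote update rule are illustrative choices, not the authors' exact procedure.

import networkx as nx

def propagate_ideology(G: nx.Graph, seeds: dict, max_iter: int = 20) -> dict:
    """Spread 'left'/'right' labels from seed users to the rest of the graph."""
    labels = dict(seeds)
    for _ in range(max_iter):
        changed = False
        for node in G.nodes:
            if node in seeds:
                continue  # seed labels stay fixed
            votes = [labels[nbr] for nbr in G.neighbors(node) if nbr in labels]
            if votes:
                majority = max(set(votes), key=votes.count)
                if labels.get(node) != majority:
                    labels[node] = majority
                    changed = True
        if not changed:  # converged: no label changed in a full pass
            break
    return labels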

This research underscores the significance of understanding the interplay between social media behavior and political influence operations. The models and findings point toward preemptive strategies for safeguarding democratic processes: identifying at-risk users and content-spreading patterns before critical political events.

Looking forward, the methodological framework implemented in this paper could be applied to similar interference campaigns in other electoral contexts. The insights into the predictive features of spreaders lay the groundwork for counter-campaigns or modified ranking algorithms that could automatically suppress such manipulative activity.

In summary, while the findings are specific to the 2016 US election, the techniques and insights could prove useful in broader contexts. Further work is needed to tailor these predictive models so they respond dynamically to varying political and social landscapes. The paper emphasizes the need for ongoing vigilance and adaptation in online political communication, as the tactics and technologies employed by malicious actors continue to evolve.

Authors (3)
  1. Adam Badawy (8 papers)
  2. Kristina Lerman (197 papers)
  3. Emilio Ferrara (197 papers)
Citations (115)