
Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign (1802.04291v1)

Published 12 Feb 2018 in cs.SI and cs.CY

Abstract: Until recently, social media was seen to promote democratic discourse on social and political issues. However, this powerful communication platform has come under scrutiny for allowing hostile actors to exploit online discussions in an attempt to manipulate public opinion. A case in point is the ongoing U.S. Congress' investigation of Russian interference in the 2016 U.S. election campaign, with Russia accused of using trolls (malicious accounts created to manipulate) and bots to spread misinformation and politically biased information. In this study, we explore the effects of this manipulation campaign, taking a closer look at users who re-shared the posts produced on Twitter by the Russian troll accounts publicly disclosed by U.S. Congress investigation. We collected a dataset with over 43 million election-related posts shared on Twitter between September 16 and October 21, 2016, by about 5.7 million distinct users. This dataset included accounts associated with the identified Russian trolls. We use label propagation to infer the ideology of all users based on the news sources they shared. This method enables us to classify a large number of users as liberal or conservative with precision and recall above 90%. Conservatives retweeted Russian trolls about 31 times more often than liberals and produced 36x more tweets. Additionally, most retweets of troll content originated from two Southern states: Tennessee and Texas. Using state-of-the-art bot detection techniques, we estimated that about 4.9% and 6.2% of liberal and conservative users respectively were bots. Text analysis on the content shared by trolls reveals that they had a mostly conservative, pro-Trump agenda. Although an ideologically broad swath of Twitter users was exposed to Russian Trolls in the period leading up to the 2016 U.S. Presidential election, it was mainly conservatives who helped amplify their message.

Analysis of the 2016 Russian Interference Twitter Campaign

This paper by Badawy, Ferrara, and Lerman presents an in-depth examination of the digital traces left by Russian interference during the 2016 United States Presidential election, focusing specifically on Twitter. The research utilizes an extensive dataset comprising over 43 million election-related tweets shared by approximately 5.7 million users, including those generated by Russian troll accounts identified by the U.S. Congress.

Methodology

The paper employs sophisticated techniques for data collection, user ideology classification, bot detection, and geo-location analysis. The researchers first collected tweets using a wide array of election-specific hashtags and keywords. They then applied label propagation across the retweet network to infer user ideology, seeding the propagation with the partisan lean of the media outlets each user shared, and achieved precision and recall above 90%. Finally, they identified likely bots using Botometer, a tool that scores accounts on a large set of features describing an account's profile, content, network, and temporal activity.
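As a rough illustration, the sketch below shows how ideology inference by label propagation over a retweet network might look in practice. The graph construction, the signed seed labels (derived from the partisan lean of shared outlets), the neighbor-averaging update, and the convergence tolerance are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of ideology inference by label propagation on a retweet
# network. Seed-labeling rule and stopping criterion are assumptions.
import networkx as nx

def propagate_ideology(retweet_edges, seed_labels, max_iters=100):
    """retweet_edges: iterable of (retweeter, original_author) pairs.
    seed_labels: dict user -> +1 (conservative) or -1 (liberal), e.g.
    assigned from the partisan lean of news outlets the user shared."""
    g = nx.Graph()
    g.add_edges_from(retweet_edges)
    scores = {u: seed_labels.get(u, 0.0) for u in g.nodes()}

    for _ in range(max_iters):
        updated = {}
        for u in g.nodes():
            if u in seed_labels:            # seeds keep their label
                updated[u] = seed_labels[u]
                continue
            nbrs = list(g.neighbors(u))
            if nbrs:                        # average of neighbors' scores
                updated[u] = sum(scores[v] for v in nbrs) / len(nbrs)
            else:
                updated[u] = scores[u]
        if max(abs(updated[u] - scores[u]) for u in g.nodes()) < 1e-4:
            scores = updated
            break
        scores = updated

    # Threshold at zero to obtain the binary liberal/conservative label.
    return {u: ("conservative" if s > 0 else "liberal")
            for u, s in scores.items() if s != 0}
```

Seeds keep their labels throughout, unlabeled users inherit the sign of their neighborhood average, and thresholding at zero yields the binary classification described above; users whose score never moves away from zero remain unclassified.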

Key Findings

A central question of the research is whether political ideology shapes engagement with misinformation. The authors report that conservative users on Twitter were far more active in resharing Russian troll content than their liberal counterparts. The data reveals a striking disparity: conservatives retweeted Russian trolls roughly 31 times more often and produced 36 times as many tweets as liberals.
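For concreteness, the sketch below shows one way such per-ideology amplification ratios could be computed from a labeled tweet table. The column names and the toy data are hypothetical and are not drawn from the paper's dataset.

```python
# Minimal sketch of computing a per-ideology amplification ratio. Column
# names ("user", "ideology", "retweeted_troll") are hypothetical.
import pandas as pd

def amplification_ratio(tweets: pd.DataFrame) -> float:
    """Ratio of conservative to liberal retweets of troll content."""
    troll_rts = tweets[tweets["retweeted_troll"]]
    counts = troll_rts.groupby("ideology").size()
    return counts.get("conservative", 0) / max(counts.get("liberal", 0), 1)

# Toy example (not the paper's figures):
toy = pd.DataFrame({
    "user": ["a", "b", "c", "d", "e"],
    "ideology": ["conservative", "conservative", "liberal",
                 "conservative", "liberal"],
    "retweeted_troll": [True, True, True, False, False],
})
print(amplification_ratio(toy))  # -> 2.0 for this toy table
```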

Another significant finding is the geographical concentration of this activity: most retweets of Russian troll content originated from two Southern states, Tennessee and Texas.

The paper also addresses the role of bots in spreading misinformation. Approximately 4.9% of liberal users who engaged with Russian trolls were identified as bots, compared to 6.2% of conservative users. Although bots were slightly more prevalent among conservative accounts, bot participation was non-negligible on both sides, exceeding 8% of the activity within each group.
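A minimal sketch of how per-group bot fractions could be estimated with the botometer-python client is shown below. The credentials are placeholders, the 0.5 cutoff is an assumed threshold, and the layout of the result dictionary differs across Botometer API versions, so this is illustrative rather than the authors' published pipeline.

```python
# Sketch: estimating the bot fraction within one ideology group with the
# botometer-python client. Credentials, threshold, and result-field names
# are assumptions; check the Botometer API version you target.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"          # placeholder credential
twitter_app_auth = {
    "consumer_key": "...",
    "consumer_secret": "...",
    "access_token": "...",
    "access_token_secret": "...",
}
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

def bot_fraction(user_ids, threshold=0.5):
    """Fraction of accounts whose bot score exceeds the threshold."""
    flagged, scored = 0, 0
    for _, result in bom.check_accounts_in(user_ids):
        if "error" in result:               # suspended/protected accounts
            continue
        # Assumed field: complete-automation probability under "cap".
        score = result.get("cap", {}).get("universal", 0.0)
        scored += 1
        flagged += score > threshold
    return flagged / scored if scored else 0.0
```

Running bot_fraction separately over the conservative and liberal user lists would yield per-group estimates comparable to the 6.2% and 4.9% figures reported above.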

Implications and Future Directions

The authors underscore the implications of their findings on both theoretical and practical grounds. Theoretically, the research contributes to the understanding of how ideological inclinations affect misinformation consumption on social media platforms. Practically, the paper's insights could inform the development of more reliable methods to detect and mitigate the effects of misinformation campaigns orchestrated by hostile entities.

Given the evolving nature of social media manipulation, future work should focus on enhancing bot detection methods, especially in the context of increasingly sophisticated social bots. Additionally, further research could delve into the psychological and sociopolitical factors that render particular ideological groups more susceptible to misinformation.

In summary, this paper provides an extensive analysis of Russian interference on Twitter during the 2016 U.S. Presidential election, shedding light on the dynamics of misinformation propagation and the role political ideology plays in its amplification. As the integrity of democratic processes continues to be threatened by digital misinformation, the findings reinforce the need for vigilant research and proactive measures to counteract such manipulation.

Authors (3)
  1. Adam Badawy (8 papers)
  2. Emilio Ferrara (197 papers)
  3. Kristina Lerman (197 papers)
Citations (296)