
The spread of low-credibility content by social bots (1707.07592v4)

Published 24 Jul 2017 in cs.SI, cs.CY, and physics.soc-ph

Abstract: The massive spread of digital misinformation has been identified as a major global risk and has been alleged to influence elections and threaten democracies. Communication, cognitive, social, and computer scientists are engaged in efforts to study the complex causes for the viral diffusion of misinformation online and to develop solutions, while search and social media platforms are beginning to deploy countermeasures. With few exceptions, these efforts have been mainly informed by anecdotal evidence rather than systematic data. Here we analyze 14 million messages spreading 400 thousand articles on Twitter during and following the 2016 U.S. presidential campaign and election. We find evidence that social bots played a disproportionate role in amplifying low-credibility content. Accounts that actively spread articles from low-credibility sources are significantly more likely to be bots. Automated accounts are particularly active in amplifying content in the very early spreading moments, before an article goes viral. Bots also target users with many followers through replies and mentions. Humans are vulnerable to this manipulation, retweeting bots who post links to low-credibility content. Successful low-credibility sources are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.

The Spread of Low-Credibility Content by Social Bots

Summary and Key Insights

This paper, authored by Chengcheng Shao and colleagues from Indiana University, addresses the critical issue of digital misinformation proliferating through social media, with a specific focus on the role of social bots. The authors analyze an extensive dataset comprising 13.6 million tweets linked to 400,000 articles, capturing the spread of low-credibility content on Twitter during and following the 2016 U.S. presidential election. Their findings provide quantitative evidence highlighting the significant involvement of social bots in amplifying misinformation.
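As a minimal illustration of how such a disproportion is measured, the sketch below compares the share of accounts flagged as bots with the share of tweets those accounts post. All records and labels here are hypothetical toy data, not the paper's dataset or pipeline.

```python
# Hypothetical tweet records: one likely-bot account and four human accounts.
tweets = (
    [{"account": "bot1", "is_bot": True}] * 5
    + [{"account": a, "is_bot": False} for a in ("h1", "h2", "h3", "h4", "h1")]
)

# Map each account to its bot label, then compare account share vs. tweet share.
accounts = {t["account"]: t["is_bot"] for t in tweets}
bot_account_share = sum(accounts.values()) / len(accounts)        # 1 of 5 accounts
bot_tweet_share = sum(t["is_bot"] for t in tweets) / len(tweets)  # 5 of 10 tweets
print(bot_account_share, bot_tweet_share)  # 0.2 0.5
```

Even in this toy case, a small minority of accounts (20%) can produce a large share of the message volume (50%), which is the pattern the paper reports at scale.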

Major Findings

  1. Prevalence of Social Bots: The paper reveals that social bots are disproportionately responsible for disseminating articles from low-credibility sources. Specifically, active accounts spreading these articles are significantly more likely to be bots compared to accounts spreading fact-checking content. The data shows that while bots constituted 6% of the accounts that shared low-credibility sources, they were responsible for 31% of the related tweets and 34% of the shared articles.
  2. Early Amplification and Targeting Strategies: The researchers identify that social bots exhibit pronounced activity in the early stages of content dissemination, thereby increasing the chances of an article going viral. Additionally, bots strategically target influential users with high follower counts through replies and mentions, enhancing the content's visibility and perceived legitimacy.
  3. Human Interaction and Bot Amplification: Human users prove vulnerable to this manipulation, retweeting content posted by bots at roughly the same rate as content posted by other humans. This dynamic creates a feedback loop in which bot activity triggers substantial human engagement, producing super-linear amplification of low-credibility content.
  4. Popularity Distribution and Network Dismantling: The distribution of content popularity is highly skewed: most articles go largely unnoticed, while a small fraction achieves viral status. Importantly, the retweet networks for low-credibility content depend heavily on bot activity: a dismantling analysis shows that removing a small number of influential nodes, many of which are bots, disproportionately reduces the spread of misinformation.
  5. Source Analysis and Bot Support: The paper also compares bot support across source types, finding that popular low-credibility sites received greater bot support, in both volume and frequency, than satire sites or fact-checking organizations.
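The dismantling analysis mentioned in point 4 can be sketched as follows: remove high-degree hub accounts from a retweet network one at a time and track how the largest connected component shrinks. The toy graph, the account names, and the choice of degree as the removal criterion are illustrative assumptions, not the paper's exact procedure.

```python
from collections import defaultdict

def largest_component(edges, removed):
    """Size of the largest component, treating edges as undirected and
    ignoring any edge that touches a removed node."""
    adj = defaultdict(set)
    nodes = set()
    for u, v in edges:
        if u in removed or v in removed:
            continue
        adj[u].add(v)
        adj[v].add(u)
        nodes.update((u, v))
    best, seen = 0, set()
    for start in nodes:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:  # depth-first traversal of one component
            n = stack.pop()
            size += 1
            for m in adj[n] - seen:
                seen.add(m)
                stack.append(m)
        best = max(best, size)
    return best

# Toy retweet edges: (source account, retweeter).
edges = [
    ("bot1", "h1"), ("bot1", "h2"), ("bot1", "h3"),
    ("bot2", "h4"), ("bot2", "h5"), ("bot2", "h8"),
    ("h1", "h6"), ("h4", "h7"),
]

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Greedy dismantling: remove the two highest-degree hubs, one at a time.
removed = set()
for hub in sorted(degree, key=degree.get, reverse=True)[:2]:
    removed.add(hub)
    print(hub, largest_component(edges, removed))  # bot1 5, then bot2 2
```

Removing just the two hub accounts collapses the largest component from five nodes to two, mirroring the paper's observation that a few influential (often automated) nodes carry a disproportionate share of the diffusion.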

Implications and Future Directions

Theoretical and practical implications of these findings are notable. From a theoretical perspective, the paper underscores the importance of understanding the behavior of automated agents in the context of misinformation. The correlation between bot activity and the success of low-credibility content suggests that addressing the automation component could be critical in mitigating misinformation spread.

Practically, this research supports the notion that social media platforms could benefit from advancing their bot detection mechanisms. The deployment of tools like Botometer, developed by the authors' laboratory, can be pivotal in identifying and managing bot activity. The use of CAPTCHAs or similar challenge-response tests can help thwart bot activities without heavily impeding legitimate social media interactions.
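As a minimal sketch of how such a mechanism might be applied downstream, the snippet below thresholds per-account bot scores to flag likely automated accounts. The account handles, the scores, and the 0.5 cutoff are all hypothetical assumptions for illustration, not Botometer's actual output or recommended settings.

```python
# Hypothetical bot scores in [0, 1], e.g. as produced by a detector
# such as Botometer (values here are made up).
scores = {
    "@acct_a": 0.91,
    "@acct_b": 0.12,
    "@acct_c": 0.77,
    "@acct_d": 0.33,
}
THRESHOLD = 0.5  # assumed cutoff: accounts scoring above are flagged

likely_bots = sorted(a for a, s in scores.items() if s > THRESHOLD)
print(likely_bots)  # ['@acct_a', '@acct_c']
```

In practice the threshold trades off false positives against missed bots, which is exactly the tuning problem the next paragraph raises.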

Future work should aim to further refine bot detection algorithms to reduce false positives, allowing platforms to manage bot activity more effectively without encroaching on legitimate users. Additionally, extending these analyses to other social media platforms, which may face different manipulation strategies, would provide a more complete picture of the effectiveness of anti-misinformation measures.

In conclusion, this paper contributes significantly to our understanding of the mechanisms through which social bots amplify low-credibility content. By systematically analyzing the spread and amplification processes, the authors lay a solid foundation for developing informed strategies to counteract the detrimental effects of digital misinformation.

Authors (6)
  1. Chengcheng Shao (7 papers)
  2. Giovanni Luca Ciampaglia (23 papers)
  3. Onur Varol (33 papers)
  4. Kaicheng Yang (21 papers)
  5. Alessandro Flammini (67 papers)
  6. Filippo Menczer (102 papers)
Citations (959)