A Longitudinal Analysis of YouTube's Promotion of Conspiracy Videos (2003.03318v1)

Published 6 Mar 2020 in cs.CY, cs.HC, cs.IR, and cs.SI

Abstract: Conspiracy theories have flourished on social media, raising concerns that such content is fueling the spread of disinformation, supporting extremist ideologies, and in some cases, leading to violence. Under increased scrutiny and pressure from legislators and the public, YouTube announced efforts to change their recommendation algorithms so that the most egregious conspiracy videos are demoted and demonetized. To verify this claim, we have developed a classifier for automatically determining if a video is conspiratorial (e.g., the moon landing was faked, the pyramids of Giza were built by aliens, end of the world prophecies, etc.). We coupled this classifier with an emulation of YouTube's watch-next algorithm on more than a thousand popular informational channels to obtain a year-long picture of the videos actively promoted by YouTube. We also obtained trends of the so-called filter-bubble effect for conspiracy theories.

Analysis of YouTube’s Influence on Conspiracy Theory Promotion

This paper investigates YouTube's algorithmic promotion of conspiracy-theory videos, assessing the platform's claims of demoting such content over a year-long period. The focus is on measuring how often conspiratorial videos are recommended and on characterizing the filter-bubble effect produced by YouTube's recommendation algorithm.

Methodology and Implementation

The authors developed a classifier to determine whether a video is conspiratorial. They trained fastText text classifiers on several textual elements of YouTube videos: transcripts; snippets composed of the title, description, and tags; and comments, which were additionally scored with Google's Perspective API. More than 8 million watch-next recommendations, emitted from over 1,000 news-oriented channels, were analyzed from October 2018 to February 2020.
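The paper does not publish its training pipeline, but the per-field building block is a standard fastText supervised classifier. The sketch below is illustrative only: the training file, labels, decision threshold, and hyperparameters are assumptions, not the authors' exact configuration.

```python
# Minimal sketch of one per-field fastText classifier (e.g., over video
# snippets). File name, labels, threshold, and hyperparameters are
# illustrative assumptions, not the paper's exact setup.
import fasttext

# train.txt holds one example per line in fastText's supervised format:
#   __label__conspiracy  <snippet or transcript text>
#   __label__other       <snippet or transcript text>
model = fasttext.train_supervised(
    input="train.txt",
    epoch=25,       # assumed hyperparameters
    lr=0.5,
    wordNgrams=2,   # bigrams help with phrases like "flat earth"
)

# Score a new video's text; predict() returns (labels, probabilities).
labels, probs = model.predict("the moon landing was staged in a studio")
is_conspiratorial = labels[0] == "__label__conspiracy" and probs[0] >= 0.5
print(labels[0], float(probs[0]), is_conspiratorial)
```

A full system in the paper's spirit would train one such classifier per signal (transcript, snippet, comments) and combine the per-field scores into a single conspiracy likelihood for each video.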

Key Findings

  • Conspiratorial Trends: The study found a notable reduction in conspiratorial recommendations, especially after YouTube's 2019 announcements. The raw frequency of conspiratorial recommendations declined, although a resurgence appeared toward the end of the study period; the resurgence was less pronounced in estimates weighted by the popularity of the source videos (see the aggregation sketch after this list).
  • Classification and Analysis: The conspiracy classifier achieved an F1 score of 0.82. The model surfaced the words most statistically indicative of conspiratorial versus non-conspiratorial videos, providing a quantitative foundation for topic modeling. The analysis yielded three main conspiratorial themes: alternative science and history, prophecies and online cults, and political conspiracies.
  • Filter Bubble and Engagement: The filter-bubble effect, in which watching conspiratorial content leads to more conspiratorial recommendations, remained pronounced, though it decreased over the study period. The paper notes that while YouTube no longer recommends conspiratorial videos at a higher rate than the rate at which users initially view them, users with prior exposure still face a recommendation stream that perpetuates that exposure.
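To make the raw-versus-weighted distinction in the first finding concrete, here is a minimal sketch of the two aggregations. The record layout and field names are hypothetical; the key idea, weighting each recommendation by the popularity of the video it was recommended from, follows the paper's description.

```python
# Hedged sketch: raw vs. popularity-weighted rate of conspiratorial
# recommendations. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    source_views: int        # views of the video whose watch-next list was scraped
    is_conspiratorial: bool  # classifier verdict on the recommended video

def raw_rate(recs: list[Recommendation]) -> float:
    """Unweighted fraction of recommendations flagged conspiratorial."""
    return sum(r.is_conspiratorial for r in recs) / len(recs)

def weighted_rate(recs: list[Recommendation]) -> float:
    """Same fraction, but each recommendation counts in proportion to the
    popularity of its source video, so recommendations shown next to
    widely watched videos dominate the estimate."""
    total = sum(r.source_views for r in recs)
    hits = sum(r.source_views for r in recs if r.is_conspiratorial)
    return hits / total

recs = [
    Recommendation(1_000_000, False),
    Recommendation(5_000, True),
    Recommendation(20_000, True),
]
print(f"raw: {raw_rate(recs):.1%}, weighted: {weighted_rate(recs):.1%}")
# raw counts 2 of 3 hits (66.7%); weighting by source popularity drops
# the estimate to ~2.4%, since the flagged sources are little-watched.
```

This mirrors why the late-period resurgence looks smaller in the weighted view: if the renewed conspiratorial recommendations appear mostly next to low-view videos, they move the weighted estimate far less than the raw one.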

Implications and Future Directions

The findings suggest a dual role for YouTube's recommendation system: it has both curtailed and continued to amplify the prominence of conspiratorial content. While the overall trend indicates a reduction in conspiracy promotions, notable exceptions remain, suggesting that changes in algorithmic strategy or in user interaction patterns are influencing these outcomes.

Moreover, the research highlights particular channels, whether rising, established, or newly surfaced, that the recommender continues to favor and that propagate conspiratorial content. These channels either exploit algorithmic biases intentionally or benefit incidentally from the lack of stringent platform oversight.

Future work should explore personalized recommendations and the implications of algorithmic changes for different user demographics. Greater algorithmic transparency and public accountability in YouTube's content moderation policies will also be crucial.

Conclusion

The paper makes a significant contribution to understanding YouTube's role in promoting conspiracy theories. Beyond assessing algorithmic efficacy, it provides a framework for evaluating the effectiveness of content moderation and the application of platform policy. In doing so, it informs the broader conversation on online information dissemination, the responsibility of major platforms to manage disinformation, and policy discussions around algorithmic governance in social media.

Authors (3)
  1. Marc Faddoul (3 papers)
  2. Guillaume Chaslot (2 papers)
  3. Hany Farid (20 papers)
Citations (70)