Analyzing Right-wing YouTube Channels: Hate, Violence and Discrimination (1804.04096v1)

Published 11 Apr 2018 in cs.SI

Abstract: As of 2018, YouTube, the major online video sharing website, hosts multiple channels promoting right-wing content. In this paper, we observe issues related to hate, violence and discriminatory bias in a dataset containing more than 7,000 videos and 17 million comments. We investigate similarities and differences between users' comments and video content in a selection of right-wing channels and compare it to a baseline set using a three-layered approach, in which we analyze (a) lexicon, (b) topics and (c) implicit biases present in the texts. Among other results, our analyses show that right-wing channels tend to (a) contain a higher degree of words from "negative" semantic fields, (b) raise more topics related to war and terrorism, and (c) demonstrate more discriminatory bias against Muslims (in videos) and towards LGBT people (in comments). Our findings shed light not only into the collective conduct of the YouTube community promoting and consuming right-wing content, but also into the general behavior of YouTube users.

Authors (6)
  1. Raphael Ottoni (2 papers)
  2. Evandro Cunha (3 papers)
  3. Gabriel Magno (9 papers)
  4. Pedro Bernadina (1 paper)
  5. Wagner Meira Jr (15 papers)
  6. Virgilio Almeida (14 papers)
Citations (73)

Summary

  • The paper demonstrates that right-wing YouTube channels exhibit heightened negative sentiment and hate-driven language compared to mainstream news channels.
  • It employs a dataset of over 7,000 videos and 17 million comments, using lexical analysis, LDA, and WEAT to quantify implicit biases.
  • Results reveal significant biases against Muslims and LGBT individuals, highlighting challenges for content moderation and digital policy strategies.

Analyzing Right-wing YouTube Channels: Hate, Violence, and Discrimination

The paper "Analyzing Right-wing YouTube Channels: Hate, Violence, and Discrimination" investigates the dynamics of comment culture and content published by specific YouTube channels of right-wing political orientation. Designed by Raphael Ottoni et al., the research aims to shed light on potential propagation of hate, violence, and bias within these digital spaces compared to more general channels categorized under "news and politics."

Dataset and Methodology

This analysis draws on a comprehensive dataset comprising more than 7,000 videos and 17 million comments. The channels studied include prominent right-wing personalities and outlets, with Alex Jones' InfoWars serving as the seed for data collection. The baseline dataset consists of the most popular channels in YouTube's "news and politics" category, offering a point of comparison for both user engagement and content presentation.

To conduct this study, the authors employed a multi-layered analytical approach built on three core components: lexical analysis, topic modeling via Latent Dirichlet Allocation (LDA), and the measurement of implicit bias using the Word Embedding Association Test (WEAT).

Lexical Analysis

The lexical examination revealed distinct disparities in semantic fields between right-wing and baseline channels. Right-wing content was richer in words associated with negative emotions and actions such as aggression and violence. Interestingly, comments often amplified hateful rhetoric compared to video captions, displaying higher engagement with semantic fields related to disgust and swearing, among others.
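As an illustration of this lexical layer, the sketch below scores example comments against a few negative semantic categories using the Empath lexicon. The choice of tool, the category names, and the example texts are assumptions made for illustration, not necessarily the exact setup used in the paper.

```python
# Minimal sketch of category-based lexical scoring with Empath
# (pip install empath). Categories and texts are illustrative only.
from empath import Empath

lexicon = Empath()

# Hypothetical comments standing in for the real dataset.
comments = [
    "This is an outrage, they should all be punished.",
    "Great video, thanks for the news update!",
]

# A few "negative" semantic fields of interest; the available category
# names can be inspected via lexicon.cats.keys().
categories = ["aggression", "violence", "hate", "swearing_terms", "disgust"]

for text in comments:
    # normalize=True divides counts by the number of tokens, so scores
    # are comparable across texts of different lengths.
    scores = lexicon.analyze(text, categories=categories, normalize=True)
    print(scores)
```

Aggregating such per-text scores separately over captions and comments is one straightforward way to compare the prevalence of semantic fields across channel groups.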

Topic Modeling

Through LDA, the paper identified that right-wing channels frequently broached topics linked to terrorism and war, whereas baseline channels addressed a more expansive array of topics, including entertainment and general news. This specialization reflects the political focus of many right-wing channels, in contrast to the diverse subject matter found across the broader, more popular baseline channels.
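For concreteness, the following sketch fits a small LDA model to a toy corpus with scikit-learn; the documents, number of topics, and preprocessing are illustrative choices rather than those reported in the paper.

```python
# Toy LDA topic model over a handful of illustrative documents.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "breaking news on the election and the congress vote",
    "war in the middle east and the threat of terrorism",
    "celebrity interview at the music awards entertainment show",
]

# Bag-of-words counts, dropping English stop words.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Two topics for the toy corpus; a real study would tune this value.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Show the top words per topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```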

Implicit Bias Examination

The WEAT approach demonstrated that implicit biases against Muslims, immigrants, and LGBT people vary between captions and comments. Notably, right-wing channels showed stronger implicit bias against Muslims within their video content, while comments exhibited heightened bias against LGBT individuals. Among baseline channels, the biases followed a less clearly delineated pattern, though they remained significant.
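As a reference for how WEAT quantifies such bias: it compares how strongly two target word sets (e.g., words denoting social groups) associate with two attribute word sets (e.g., pleasant vs. unpleasant terms) in an embedding space. The sketch below computes the standard WEAT effect size over toy random vectors; in the study, the vectors would come from embeddings trained on the captions and comments, and the word sets here are placeholders.

```python
# Sketch of the WEAT effect size between target sets X, Y and
# attribute sets A, B. Vectors are random stand-ins for word embeddings.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
X = rng.normal(size=(4, dim))  # target words, group 1 (placeholder)
Y = rng.normal(size=(4, dim))  # target words, group 2 (placeholder)
A = rng.normal(size=(4, dim))  # attribute words, e.g. "pleasant"
B = rng.normal(size=(4, dim))  # attribute words, e.g. "unpleasant"

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def s(w, A, B):
    # Association of word w: mean cosine similarity to A minus to B.
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

assoc_X = np.array([s(x, A, B) for x in X])
assoc_Y = np.array([s(y, A, B) for y in Y])

# Effect size: difference in mean association, normalized by the
# standard deviation over all target words.
d = (assoc_X.mean() - assoc_Y.mean()) / np.concatenate([assoc_X, assoc_Y]).std(ddof=1)
print(f"WEAT effect size d = {d:.3f}")
```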

Implications

The findings have both theoretical and practical implications, emphasizing the nature of right-wing video content in reinforcing negative stereotypes and biases through semantic expressions. This influence potentially guides the discourse within comment sections, implicating how viewers process and react to content within these spheres. Furthermore, these conclusions underscore the challenging task social media platforms face in moderating and understanding the propagation of hate speech and discrimination across diverse cultural backgrounds.

Future Directions

Future research could benefit from integrating temporal analyses to assess the progression of, and causal relationships between, video content and comment behavior over time. Enhanced sentiment analysis that accounts for negations and context might further elucidate the complex interplay of language and bias. Expanding the scope to encompass channels with various political orientations may illuminate broader patterns and inform more effective content moderation strategies within digital ecosystems.

The paper by Ottoni et al. serves as a critical contribution to the growing field of computational social science, particularly in understanding negativity and bias within highly interactive platforms like YouTube. As online engagement continues to shape sociopolitical landscapes globally, studies like this are critical to informing ethical standards and policy strategies aimed at fostering healthier digital communities.
