
Disturbed YouTube for Kids: Characterizing and Detecting Inappropriate Videos Targeting Young Children (1901.07046v3)

Published 21 Jan 2019 in cs.SI and cs.CY

Abstract: A large number of the most-subscribed YouTube channels target children of a very young age. Hundreds of toddler-oriented channels on YouTube feature inoffensive, well-produced, and educational videos. Unfortunately, inappropriate content that targets this demographic is also common. YouTube's algorithmic recommendation system regrettably suggests inappropriate content because some of it mimics or is derived from otherwise appropriate content. Considering the risk for early childhood development, and an increasing trend in toddler's consumption of YouTube media, this is a worrisome problem. In this work, we build a classifier able to discern inappropriate content that targets toddlers on YouTube with 84.3% accuracy, and leverage it to perform a first-of-its-kind, large-scale, quantitative characterization that reveals some of the risks of YouTube media consumption by young children. Our analysis reveals that YouTube is still plagued by such disturbing videos and its currently deployed counter-measures are ineffective in terms of detecting them in a timely manner. Alarmingly, using our classifier we show that young children are not only able, but likely to encounter disturbing videos when they randomly browse the platform starting from benign videos.

Authors (8)
  1. Kostantinos Papadamou (8 papers)
  2. Antonis Papasavva (8 papers)
  3. Savvas Zannettou (55 papers)
  4. Jeremy Blackburn (76 papers)
  5. Nicolas Kourtellis (83 papers)
  6. Ilias Leontiadis (29 papers)
  7. Gianluca Stringhini (77 papers)
  8. Michael Sirivianos (24 papers)
Citations (92)

Summary

Characterizing and Detecting Inappropriate Videos Targeting Toddlers on YouTube

The growth of YouTube as a popular platform for children's content has created new challenges for monitoring young children's media consumption. The paper "Disturbed YouTube for Kids: Characterizing and Detecting Inappropriate Videos Targeting Young Children" examines the pervasive problem of inappropriate content that targets toddlers on YouTube and assesses the effectiveness of the platform's mechanisms for controlling such content. The authors combine qualitative and quantitative methods to analyze the problem and offer a technological solution that can help mitigate these risks.

The authors acknowledge that toddler-oriented channels on YouTube are widely consumed and that many of their videos offer stimulating educational or entertaining content. However, they highlight a troubling class of disturbing videos that exploit innocuous-looking thumbnails and titles to mislead toddlers and their guardians; repeated exposure to such content can hinder healthy child development.

To tackle the detection of disturbing videos, the researchers developed a deep learning classifier that distinguishes inappropriate videos from suitable ones with 84.3% accuracy. The classifier leverages metadata features such as the video title, tags, thumbnail, and viewer statistics, building a comprehensive picture of each video without requiring manual inspection of its content.

The findings of this paper are significant. The analysis reveals that 1.1% of Elsagate-related videos (videos associated with the Elsagate controversy over inappropriate child-targeted content) are unsuitable for toddlers. The researchers also found that YouTube's current counter-measures underperform: the platform struggles to detect and remove inappropriate videos in a timely manner. As a result, a toddler who browses recommended videos starting from a benign video faces a considerable probability (about 3.5%) of encountering inappropriate material within a mere ten navigational hops.
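The random-walk risk estimate can be sketched as a small Monte Carlo simulation over a recommendation graph. The toy graph and the resulting number below are illustrative, not the paper's data or methodology:

```python
import random

# Toy recommendation graph: video -> list of recommended videos.
# "D" marks a disturbing video; everything else is benign.
GRAPH = {
    "start": ["a", "b", "c"],
    "a": ["b", "D", "c"],
    "b": ["a", "c", "start"],
    "c": ["a", "b", "D"],
    "D": ["a", "b"],
}

def walk_hits_disturbing(graph, start, hops, rng):
    """Follow random recommendations for `hops` steps; report whether we
    ever land on a disturbing video along the way."""
    node = start
    for _ in range(hops):
        node = rng.choice(graph[node])
        if node == "D":
            return True
    return False

def estimate_risk(graph, start="start", hops=10, trials=10_000, seed=0):
    """Fraction of random walks that encounter a disturbing video."""
    rng = random.Random(seed)
    hits = sum(walk_hits_disturbing(graph, start, hops, rng) for _ in range(trials))
    return hits / trials

print(f"Estimated hit probability within 10 hops: {estimate_risk(GRAPH):.2%}")
```

The paper performs the analogous experiment on real YouTube recommendation data, with walks seeded at benign child-oriented videos and its classifier labeling the videos encountered.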

The implications of this research are multifaceted. Firstly, it underscores the need for more efficient content monitoring strategies on YouTube, particularly concerning automated systems like recommendation algorithms that can inadvertently propagate inappropriate content. Secondly, it stresses the importance of further development in AI-driven solutions for content moderation, which may support platforms like YouTube in curating safer environments for their younger audiences.

Finally, the research community might treat these results as a baseline on which more accurate and more generalizable AI models can be built. Future work could incorporate richer multimedia analysis, including audio and the text of user comments. As AI and machine learning models evolve, their applications to such socially impactful problems will only broaden, improving the prospects for digital safeguards on child-focused content platforms.
