Mean Birds: Detecting Aggression and Bullying on Twitter (1702.06877v3)

Published 22 Feb 2017 in cs.CY and cs.SI

Abstract: In recent years, bullying and aggression against users on social media have grown significantly, causing serious consequences to victims of all demographics. In particular, cyberbullying affects more than half of young social media users worldwide, and has also led to teenage suicides, prompted by prolonged and/or coordinated digital harassment. Nonetheless, tools and technologies for understanding and mitigating it are scarce and mostly ineffective. In this paper, we present a principled and scalable approach to detect bullying and aggressive behavior on Twitter. We propose a robust methodology for extracting text, user, and network-based attributes, studying the properties of cyberbullies and aggressors, and what features distinguish them from regular users. We find that bully users post less, participate in fewer online communities, and are less popular than normal users, while aggressors are quite popular and tend to include more negativity in their posts. We evaluate our methodology using a corpus of 1.6M tweets posted over 3 months, and show that machine learning classification algorithms can accurately detect users exhibiting bullying and aggressive behavior, achieving over 90% AUC.

Citations (404)

Summary

  • The paper introduces a scalable machine learning framework that categorizes user behavior into bully, aggressive, and normal groups.
  • It integrates data collection, crowdsourced labeling, and comprehensive feature extraction from textual, user, and network data.
  • The model achieves over 91% precision and recall, highlighting the critical role of network-based features in cyberbullying detection.

Detecting Aggression and Bullying on Twitter: A Methodological Approach

The paper "Mean Birds: Detecting Aggression and Bullying on Twitter," authored by Despoina Chatzakou et al., addresses the increasingly pertinent issue of detecting aggressive and bullying behaviors on the Twitter platform. More than half of the young users globally experience cyberbullying, making it critical to develop effective detection tools. This paper contributes a scalable and structured methodology aimed at identifying users who exhibit bullying or aggressive behavior by leveraging machine learning techniques.

Summary of Methodology

The authors propose a multi-phase process encompassing data collection, preprocessing, sessionization, ground-truth labeling, feature extraction, modeling, and classification. An extensive dataset of 1.6 million tweets collected over three months is used. Data are drawn both from a random sample and from streams likely to contain hate speech, seeded with hashtags strongly associated with bullying contexts. The methodology combines crowdsourced labeling with detailed feature extraction to segment user behavior into three categories, bully, aggressive, and normal, while also accounting for spam behavior.
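
To make the pipeline concrete, the following is a minimal sketch of the preprocessing and sessionization stages. The helper names and the activity threshold are hypothetical; they only illustrate the kind of per-user grouping the paper describes, not the authors' actual code.

```python
from collections import defaultdict

def preprocess_tweet(tweet: dict) -> dict:
    """Lowercase the text and drop URLs/mentions (simplified cleaning step)."""
    tokens = [
        tok for tok in tweet["text"].lower().split()
        if not tok.startswith(("http", "@"))
    ]
    return {**tweet, "tokens": tokens}

def sessionize(tweets: list, min_tweets_per_user: int = 5) -> dict:
    """Group cleaned tweets by user; keep only users with enough activity."""
    sessions = defaultdict(list)
    for tweet in tweets:
        sessions[tweet["user_id"]].append(preprocess_tweet(tweet))
    return {u: ts for u, ts in sessions.items() if len(ts) >= min_tweets_per_user}
```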

Feature Extraction and Analysis

Chatzakou et al.'s approach encompasses user-based, textual, and network-based attributes. Notably, the strong emphasis on network-based features distinguishes this work from prior research centered mainly on linguistic or content features. The authors found that network features, such as the number of friends, followers, and centrality measures, were instrumental in distinguishing aggressive behavior.
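
As an illustration of this feature group, the sketch below computes a few network-style attributes from a directed follower graph with networkx. The specific measures and the helper name are assumptions chosen for demonstration rather than the paper's exact feature set.

```python
import networkx as nx

def network_features(graph: nx.DiGraph, user: str) -> dict:
    """Per-user network features; edges point follower -> followee."""
    return {
        "followers": graph.in_degree(user),
        "friends": graph.out_degree(user),
        # Centrality and clustering measures akin to those discussed in the paper.
        "closeness_centrality": nx.closeness_centrality(graph, u=user),
        "clustering_coefficient": nx.clustering(graph.to_undirected(), user),
    }

# Tiny usage example on a toy graph.
g = nx.DiGraph([("alice", "bob"), ("carol", "bob"), ("bob", "alice")])
print(network_features(g, "bob"))
```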

User-based features, such as the number of posts and account age, provided additional behavioral insight. Textual features, including sentiment and the use of hashtags and URLs, played a supporting role, especially in gauging the tone of user interactions. The authors observed distinct patterns: for instance, bullying users exhibited lower popularity and engagement, while aggressive users occupied more central positions within their networks.
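
A comparable sketch for the user- and text-based attributes is shown below. The tiny word list is only a stand-in for the sentiment tooling the authors actually employed, and the field names assume a simplified tweet/profile schema.

```python
NEGATIVE_WORDS = {"hate", "stupid", "ugly", "loser"}  # illustrative placeholder lexicon

def text_features(tweets: list) -> dict:
    """Aggregate simple textual signals (hashtags, URLs, negativity) per user."""
    n_tweets = len(tweets)
    n_words = sum(len(t["text"].split()) for t in tweets)
    hashtags = sum(t["text"].count("#") for t in tweets)
    urls = sum(t["text"].count("http") for t in tweets)
    negatives = sum(
        1 for t in tweets for w in t["text"].lower().split() if w in NEGATIVE_WORDS
    )
    return {
        "avg_hashtags": hashtags / n_tweets,
        "avg_urls": urls / n_tweets,
        "negative_word_ratio": negatives / max(1, n_words),
    }

def user_features(profile: dict) -> dict:
    """Account-level signals such as posting volume and account age."""
    return {
        "num_posts": profile["statuses_count"],
        "account_age_days": profile["account_age_days"],
    }
```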

Classification and Results

The machine learning model, utilizing a Random Forest classifier, demonstrated robust performance. It achieved over 91% precision and recall in distinguishing bully and aggressive users from normal users when focusing exclusively on non-spam data. Moreover, network attributes were found to be dominant in overall feature importance, underscoring the necessity to consider social connectivity and interactions in behavioral analyses.
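
A minimal sketch of this classification step using scikit-learn's RandomForestClassifier is given below. The random feature matrix and labels are placeholders for the real user, text, and network features and the crowdsourced annotations, so the printed scores only demonstrate the evaluation mechanics.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))        # placeholder: stacked user/text/network features
y = rng.integers(0, 3, size=300)      # placeholder labels: 0=normal, 1=bully, 2=aggressive

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_validate(
    clf, X, y, cv=5, scoring=["precision_macro", "recall_macro"]
)
print(scores["test_precision_macro"].mean(), scores["test_recall_macro"].mean())

# Fit once to inspect which feature columns dominate, echoing the paper's
# finding that network-based attributes carry the most weight.
clf.fit(X, y)
print(clf.feature_importances_)
```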

Implications and Future Directions

The research underscores the intrinsic complexity of accurately detecting nuanced cases of cyberbullying and aggression on platforms that offer limited contextual data per post, like Twitter. The paper highlights the potential of network-based features to significantly enhance the efficacy of detection algorithms, offering a pathway toward progressively refined real-time monitoring systems.

Future work should integrate advanced natural language processing techniques to better interpret indirect or implicit aggressive content on public platforms like Twitter. Since bullying behaviors are often transient and subject to rapid contextual shifts, adaptive machine learning models capable of retraining on evolving data are recommended. Furthermore, ongoing collaboration with social media companies is essential to align detection systems with the societal and legal standards governing online speech.

In conclusion, this paper lays a foundation for advanced cyberbullying and aggression detection on social networks, providing a comprehensive methodological approach that balances textual and network parameters. This positions the research as an authoritative guide for subsequent investigations aiming to mitigate the negative experiences many users face in digital communities.