
Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior (1802.00393v3)

Published 1 Feb 2018 in cs.SI

Abstract: In recent years, offensive, abusive and hateful language, sexism, racism and other types of aggressive and cyberbullying behavior have been manifesting with increased frequency, and in many online social media platforms. In fact, past scientific work focused on studying these forms in popular media, such as Facebook and Twitter. Building on such work, we present an 8-month study of the various forms of abusive behavior on Twitter, in a holistic fashion. Departing from past work, we examine a wide variety of labeling schemes, which cover different forms of abusive behavior, at the same time. We propose an incremental and iterative methodology, that utilizes the power of crowdsourcing to annotate a large scale collection of tweets with a set of abuse-related labels. In fact, by applying our methodology including statistical analysis for label merging or elimination, we identify a reduced but robust set of labels. Finally, we offer a first overview and findings of our collected and annotated dataset of 100 thousand tweets, which we make publicly available for further scientific exploration.

Citations (661)

Summary

  • The paper introduces a novel crowdsourcing methodology with iterative annotation rounds to refine labels for abusive language.
  • It quantitatively analyzes 80,000 tweets, revealing that approximately 11% exhibit abusive behavior while 7.5% show hateful speech.
  • The study provides an open-source dataset and enhanced annotations that advance algorithmic detection of online abusive content.

Analysis of "Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior"

In this comprehensive paper, the authors present a detailed examination of abusive behavior on Twitter through an innovative crowdsourcing methodology. Utilizing a dataset of 80,000 tweets, they propose a refined strategy to annotate abusive content, which addresses several challenges associated with the variability and ambiguity of abusive language online.

Methodological Overview

The research offers a clear methodological advancement by employing an incremental approach, which includes multiple annotation rounds to assess and refine labels for abusive content. Initially, they consider a broad spectrum of abusive behaviors, including offensive, hateful, aggressive, and cyberbullying speech. Through preliminary rounds, they systematically identify and resolve label confusion to ensure high fidelity in their final annotations.

The methodology rests on three primary steps:

  1. Data Collection and Pre-processing: Utilizing the Twitter Stream API, the paper begins by filtering and annotating a vast pool of tweets. They strategically apply boosted sampling to amplify the presence of tweets likely to contain abusive speech.
  2. Exploratory Annotation Rounds: Initial rounds focus on understanding label confusion and adjusting parameters to maximize annotation accuracy. Through statistical analysis, correlations, and co-occurrences, they iterate on their labeling scheme, leading to a more robust final set of labels.
  3. Large-scale Annotation: With the optimized labeling schema, they annotate 80,000 tweets, leveraging their enhanced crowdsourcing platform to maintain control over quality and cost.
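The boosted-sampling idea in step 1 can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the offensive-term lexicon, helper names, and the 50% boost ratio are all hypothetical stand-ins.

```python
import random

# Hypothetical offensive-term lexicon; the paper's actual word lists and
# sampling criteria are not reproduced here.
OFFENSIVE_TERMS = {"idiot", "moron", "trash"}

def is_likely_abusive(text):
    """Crude lexicon match used only to boost sampling, not to label."""
    tokens = text.lower().split()
    return any(term in tokens for term in OFFENSIVE_TERMS)

def boosted_sample(tweets, boost_ratio=0.5, n=10, seed=0):
    """Mix randomly sampled tweets with lexicon-matched ones so that
    roughly boost_ratio of the sample likely contains abusive speech."""
    rng = random.Random(seed)
    flagged = [t for t in tweets if is_likely_abusive(t)]
    normal = [t for t in tweets if not is_likely_abusive(t)]
    n_boost = min(int(n * boost_ratio), len(flagged))
    sample = rng.sample(flagged, n_boost) + rng.sample(normal, n - n_boost)
    rng.shuffle(sample)
    return sample
```

The point of boosting is that abusive tweets are rare in a uniform stream sample, so a plain random draw would leave annotators labeling mostly benign content.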

Empirical Findings and Implications

The paper yields several key insights. Most importantly, the strong correlation among offensive, abusive, and aggressive labels justified their consolidation into a single category, while hateful speech remained distinct. In the final annotated dataset, abusive speech appears in approximately 11% of sampled tweets and hateful speech in about 7.5%, indicating that the two categories are well differentiated.
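The label-merging decision can be illustrated with a small sketch: given the set of tweet ids each label was assigned to, measure pairwise overlap and flag heavily overlapping labels as merge candidates. Jaccard overlap and the 0.6 threshold here are illustrative assumptions, not the paper's exact statistical procedure.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard overlap between two sets of tweet ids carrying a label."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def merge_candidates(label_to_tweets, threshold=0.6):
    """Return label pairs whose annotated tweet sets overlap enough
    to justify consolidating them into one category."""
    pairs = []
    for l1, l2 in combinations(sorted(label_to_tweets), 2):
        j = jaccard(label_to_tweets[l1], label_to_tweets[l2])
        if j >= threshold:
            pairs.append((l1, l2, round(j, 2)))
    return pairs
```

On toy data such as `{"abusive": {1, 2, 3, 4}, "offensive": {2, 3, 4, 5}, "hateful": {7, 8}}`, the abusive/offensive pair overlaps heavily and would be merged, while hateful stays separate, mirroring the paper's qualitative finding.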

These nuanced findings suggest practical implications in designing algorithms for automatic detection of abusive language on social media. The open-source nature of the dataset and platform provides valuable resources for further research in computational social linguistics and can aid in improving the robustness of content moderation systems.

Future Directions

While the dataset offers an extensive foundation, the research suggests expanding the corpus to include more tweets with flagged abusive potential. Future developments could explore integrating context-aware models to better understand the nuances of sarcasm or indirect threats, which remain challenging even to trained annotators.

Overall, this paper contributes significantly to the methodologies for handling complex social media datasets and sets a precedent for future studies aiming to combat cyber abuse through technological solutions.
