
Predicting the Type and Target of Offensive Posts in Social Media (1902.09666v2)

Published 25 Feb 2019 in cs.CL

Abstract: As offensive content has become pervasive in social media, there has been much research in identifying potentially offensive messages. However, previous work on this topic did not consider the problem as a whole, but rather focused on detecting very specific types of offensive content, e.g., hate speech, cyberbullying, or cyber-aggression. In contrast, here we target several different kinds of offensive content. In particular, we model the task hierarchically, identifying the type and the target of offensive messages in social media. For this purpose, we compiled the Offensive Language Identification Dataset (OLID), a new dataset with tweets annotated for offensive content using a fine-grained three-layer annotation scheme, which we make publicly available. We discuss the main similarities and differences between OLID and pre-existing datasets for hate speech identification, aggression detection, and similar tasks. We further experiment with and compare the performance of different machine learning models on OLID.


Authors: Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, Ritesh Kumar

The paper "Predicting the Type and Target of Offensive Posts in Social Media" addresses the complex issue of recognizing various forms of offensive content in online interactions, with a specific focus on social media platforms such as Twitter. This research overcomes the limitations of previous work that primarily focused on specific kinds of offensive language (e.g., hate speech, cyberbullying) by introducing a multi-faceted approach to offensive content detection. The authors propose a hierarchical model for classifying offensive posts, identifying both the type and target of the offense, thus providing a more comprehensive framework.

Hierarchical Annotation Schema

The authors introduce the Offensive Language Identification Dataset (OLID), annotated using a detailed three-level hierarchical schema:

  1. Level A: Offensive Language Detection
    • NOT (Not Offensive): Posts devoid of any offensive language or profanity.
    • OFF (Offensive): Posts containing unacceptable language, either targeted or untargeted.
  2. Level B: Categorization of Offensive Language
    • TIN (Targeted Insult and Threat): Posts containing an insult or threat directed at an individual, a group, or another entity.
    • UNT (Untargeted): Posts with general profanity or swearing without a specific target.
  3. Level C: Offensive Language Target Identification
    • IND (Individual): Posts targeting specific individuals.
    • GRP (Group): Posts aimed at groups based on characteristics such as ethnicity, gender, or religious beliefs.
    • OTH (Other): Posts targeting entities other than individuals or groups, such as organizations or events.

This annotation framework allows for detailed categorization and differentiation of offensive content, providing significant practical utility for social media platforms in moderating and managing content.
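
To make the hierarchy concrete, here is a minimal Python sketch of how the three levels could be chained at inference time. The three classifier objects and their scikit-learn-style predict method are hypothetical placeholders, not the paper's implementation; the sketch only encodes the schema's constraint that Level B applies only to OFF posts and Level C only to TIN posts.

```python
def classify_post(text, clf_a, clf_b, clf_c):
    """Return (level_a, level_b, level_c) labels for one post.

    clf_a, clf_b, clf_c are hypothetical trained classifiers exposing a
    scikit-learn-style .predict() method. Levels B and C are only defined
    when the previous level applies: B requires OFF at Level A, and C
    requires TIN at Level B.
    """
    level_a = clf_a.predict([text])[0]   # NOT or OFF
    if level_a == "NOT":
        return ("NOT", None, None)

    level_b = clf_b.predict([text])[0]   # TIN or UNT
    if level_b == "UNT":
        return ("OFF", "UNT", None)

    level_c = clf_c.predict([text])[0]   # IND, GRP, or OTH
    return ("OFF", "TIN", level_c)
```

This cascade mirrors the annotation itself: posts labelled NOT carry no Level B or C labels, and only TIN posts carry a target label.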

Data Collection and Annotation

The dataset was compiled using the Twitter API, targeting keywords typically associated with offensive language. The authors stratified the collection to ensure a balanced representation of political and non-political content, given the higher propensity for offensive language in political contexts. Notably, the annotation process employed crowdsourcing through Figure Eight, with strict annotator selection and agreement protocols used to ensure high-quality labels.
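
As a rough illustration of the collection strategy, the sketch below filters a pool of tweets by keyword and draws a fixed quota from the political and non-political strata. The keyword lists are invented placeholders; the paper's actual search terms and sampling ratios are not reproduced here.

```python
import random

# Invented placeholder keywords; the paper's actual search terms differ.
POLITICAL_KEYWORDS = {"politics keyword 1", "politics keyword 2"}
GENERAL_KEYWORDS = {"general keyword 1", "general keyword 2"}

def matches(text, keywords):
    """True if the tweet text contains any of the given keywords."""
    lowered = text.lower()
    return any(kw in lowered for kw in keywords)

def stratified_sample(tweets, n_political, n_general, seed=0):
    """Draw fixed quotas of political and non-political keyword matches."""
    political = [t for t in tweets if matches(t, POLITICAL_KEYWORDS)]
    general = [t for t in tweets
               if matches(t, GENERAL_KEYWORDS) and t not in political]
    rng = random.Random(seed)
    return rng.sample(political, n_political) + rng.sample(general, n_general)
```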

Key statistics include:

  • Training Set Size: 13,240 tweets
  • Test Set Size: 860 tweets
  • Distribution of Offensive Content: Approximately 30% offensive to 70% non-offensive
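
Assuming the official OLID release layout (a tab-separated file, conventionally named olid-training-v1.0.tsv, with a subtask_a column holding the Level A labels), these statistics can be checked with a few lines of pandas; verify the file and column names against the release's documentation.

```python
import pandas as pd

# Load the OLID training split; file and column names are assumptions
# based on the official release and should be verified.
df = pd.read_csv("olid-training-v1.0.tsv", sep="\t")

print(len(df))  # expected: 13,240 rows
print(df["subtask_a"].value_counts(normalize=True))  # roughly 70% NOT, 30% OFF
```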

Experimental Evaluation

The performance of different machine learning models, including SVM, BiLSTM, and CNN, was evaluated on the OLID dataset. Here are the notable findings:

  1. Offensive Language Detection (Level A):
    • CNN achieved the highest macro-F1 score (0.80), outperforming the BiLSTM and SVM models.
  2. Categorization of Offensive Language (Level B):
    • CNN again showed superior performance with a macro-F1 score (0.69), particularly excelling in identifying targeted insults (TIN).
  3. Offensive Language Target Identification (Level C):
    • Despite challenges due to the heterogeneous nature of the OTH category, the CNN and BiLSTM models performed comparably, with macro-F1 scores indicating moderate success (0.47).
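
The paper's exact model configurations are not reproduced here; as a simplified stand-in for the Level A SVM baseline, the following scikit-learn pipeline trains a linear SVM on word-unigram TF-IDF features and scores it with the macro-averaged F1 used throughout the paper's result tables. The toy texts and labels are placeholders for the OLID train/test splits.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy placeholder data; in practice use the OLID train/test splits.
train_texts = ["have a great day", "you are awful", "nice talk today", "what an idiot"]
train_labels = ["NOT", "OFF", "NOT", "OFF"]
test_texts = ["awful idiot", "great talk"]
test_labels = ["OFF", "NOT"]

# Word-unigram TF-IDF features feeding a linear SVM; a simplified baseline,
# not the paper's exact feature set or hyperparameters.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_texts, train_labels)

# Macro-averaged F1, the metric reported for all three levels.
predictions = model.predict(test_texts)
print(f1_score(test_labels, predictions, average="macro"))
```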

Implications and Future Directions

The hierarchical approach delineated in this research provides a robust framework for handling offensive language detection at multiple levels of granularity. Practically, OLID's schema and the associated machine learning baselines can enhance the moderation capabilities of social media platforms, enabling more nuanced and effective handling of offensive content.

Future research should further explore cross-corpus comparisons with other datasets on related tasks such as aggression and hate speech identification. Expanding OLID to include other languages while adhering to the structured hierarchical annotation can pave the way for more generalizable and internationally applicable models. The work opens avenues for refining offensive content detection mechanisms, contributing to the broader goal of maintaining healthier online discourse.
