HateBERT: Retraining BERT for Abusive Language Detection in English (2010.12472v2)

Published 23 Oct 2020 in cs.CL

Abstract: In this paper, we introduce HateBERT, a re-trained BERT model for abusive language detection in English. The model was trained on RAL-E, a large-scale dataset of Reddit comments in English from communities banned for being offensive, abusive, or hateful that we have collected and made available to the public. We present the results of a detailed comparison between a general pre-trained language model and the abuse-inclined version obtained by retraining with posts from the banned communities on three English datasets for offensive, abusive language and hate speech detection tasks. In all datasets, HateBERT outperforms the corresponding general BERT model. We also discuss a battery of experiments comparing the portability of the generic pre-trained language model and its corresponding abusive language-inclined counterpart across the datasets, indicating that portability is affected by compatibility of the annotated phenomena.

Authors (4)
  1. Tommaso Caselli
  2. Valerio Basile
  3. Jelena Mitrović
  4. Michael Granitzer
Citations (317)

Summary

The paper "HateBERT: Retraining BERT for Abusive Language Detection in English" presents a focused paper on developing a BERT-based model specifically for the task of detecting abusive language phenomena in the English language. Recognizing the challenges posed by general-purpose LLMs when applied to domain-specific tasks such as abusive language detection, the authors address this by introducing HateBERT—an adapted version of BERT retrained on a dataset comprising abusive language content from banned Reddit communities.

Methodology and Datasets

HateBERT was created by further pre-training the base BERT model on the Reddit Abusive Language English (RAL-E) dataset, a collection of roughly 1.5 million messages drawn from subreddits banned for offensive and abusive content, which makes it well suited to adapting a model toward such language. Retraining used the Masked Language Model (MLM) objective, shifting the model's representations toward the abusive and hateful language found in online interactions.
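
The paper does not include a reference training script, but this style of continued MLM pre-training can be sketched with the HuggingFace Transformers library. The corpus file name, batch size, and epoch count below are illustrative placeholders, not the authors' exact setup.

```python
# Sketch of continued MLM pre-training of bert-base-uncased on an
# abusive-language corpus, in the spirit of HateBERT.
from datasets import load_dataset
from transformers import (
    BertTokenizerFast,
    BertForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# "ral_e.txt" is a placeholder: one Reddit comment per line.
dataset = load_dataset("text", data_files={"train": "ral_e.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens, the standard BERT MLM setting.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="hatebert-retrain",
    per_device_train_batch_size=64,
    num_train_epochs=3,  # illustrative; the actual retraining ran far longer
    learning_rate=5e-5,
)

Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
```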

HateBERT's effectiveness was evaluated on three well-defined datasets (a fine-tuning sketch follows the list):

  1. OffensEval 2019: A dataset consisting of Twitter posts labeled for offensive content.
  2. AbusEval: Derived from OffensEval with additional annotations classifying overt abusive language.
  3. HatEval: An annotated collection tailored towards recognizing hate speech, focusing explicitly on hateful language against specific groups like women and migrants.
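
As a concrete illustration of this evaluation setup, the sketch below fine-tunes the released GroNLP/hateBERT checkpoint for binary offensive-language classification. The CSV file names and the assumed text/label column layout are hypothetical, and the hyperparameters are illustrative rather than those reported in the paper.

```python
# Sketch of fine-tuning HateBERT for OffensEval-style binary classification.
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("GroNLP/hateBERT")
model = AutoModelForSequenceClassification.from_pretrained("GroNLP/hateBERT", num_labels=2)

# Placeholder CSV files with "text" and "label" columns.
data = load_dataset("csv", data_files={"train": "offenseval_train.csv",
                                       "test": "offenseval_test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

data = data.map(tokenize, batched=True)

def accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="hatebert-offenseval",
                           num_train_epochs=3,
                           per_device_train_batch_size=32),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    compute_metrics=accuracy,
)
trainer.train()
print(trainer.evaluate())
```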

Results and Implications

The experimental results affirm that HateBERT consistently outperforms the baseline BERT model across all evaluated datasets. This outcome highlights HateBERT's suitability and robustness in recognizing different forms of abusive language, offering improved performance not only in the general detection of offensive content but also in more specific tasks such as identifying abusive or hateful speech.

The paper also examines the portability of HateBERT across different abusive language phenomena, testing its capacity to generalize from one dataset to another. HateBERT demonstrated enhanced portability, particularly when trained on datasets with more general annotations. These results suggest that adaptation through further pre-training yields representations that better internalize the linguistic nuances of abusive content.
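
The cross-dataset protocol amounts to fine-tuning on one corpus and evaluating zero-shot on the others. The sketch below expresses that loop; fine_tune() and evaluate() are hypothetical wrappers around Trainer code like the fine-tuning sketch above, and the file naming scheme is invented for illustration.

```python
# Sketch of the cross-dataset "portability" protocol: fine-tune on one corpus,
# then evaluate the resulting classifier on each other corpus without retraining.
# fine_tune() and evaluate() are hypothetical helpers, not a published API.

def portability_grid(datasets, checkpoint="GroNLP/hateBERT"):
    """Return {(source, target): score} for every cross-dataset pair."""
    scores = {}
    for source in datasets:
        model = fine_tune(checkpoint, train_file=f"{source}_train.csv")
        for target in datasets:
            if target != source:
                scores[(source, target)] = evaluate(model, test_file=f"{target}_test.csv")
    return scores

# Illustrative identifiers for the three evaluation datasets.
print(portability_grid(["offenseval", "abuseval", "hateval"]))
```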

Future Directions

The development of HateBERT underscores the importance and viability of adapting models to specific sub-domains within NLP, such as abusive language detection, and invites future work extending the approach to other specialized areas of natural language understanding. Future studies might compare the embedding representations HateBERT derives with those of the general BERT model, and assess its performance in varied real-world abusive language scenarios.
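
One simple way to probe such representational differences, not taken from the paper, is to encode the same input with both checkpoints and compare the pooled hidden states. The mean-pooling choice and example text below are arbitrary.

```python
# Sketch of comparing general BERT and HateBERT representations for one input.
import torch
from transformers import AutoTokenizer, AutoModel

text = "example comment to encode"  # placeholder input

def mean_pooled(model_name, text):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)            # average over tokens

bert_vec = mean_pooled("bert-base-uncased", text)
hate_vec = mean_pooled("GroNLP/hateBERT", text)
cos = torch.nn.functional.cosine_similarity(bert_vec, hate_vec, dim=0)
print(f"cosine similarity: {cos.item():.3f}")
```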

In conclusion, HateBERT sets a precedent for enhancing pre-trained language models through targeted retraining, advancing the field's ability to handle a specific yet pervasive problem in online discourse. The research outlines a clear trajectory for improving automatic abusive language detection, encouraging refined methodologies for more effective monitoring and mitigation of hostile language in digital spaces.