Fight Fire with Fire: Fine-tuning Hate Detectors using Large Samples of Generated Hate Speech (2109.00591v1)

Published 1 Sep 2021 in cs.CL and cs.AI

Abstract: Automatic hate speech detection is hampered by the scarcity of labeled datasets, leading to poor generalization. We employ pretrained language models (LMs) to alleviate this data bottleneck. We utilize the GPT LM for generating large amounts of synthetic hate speech sequences from available labeled examples, and leverage the generated data in fine-tuning large pretrained LMs on hate detection. An empirical study using the models of BERT, RoBERTa and ALBERT shows that this approach improves generalization significantly and consistently within and across data distributions. In fact, we find that generating relevant labeled hate speech sequences is preferable to using out-of-domain, and sometimes also within-domain, human-labeled examples.
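The augmentation scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' code: `generate_fn` stands in for a GPT LM prompted with a seed example (its name and signature are hypothetical), and the resulting augmented set would then be used to fine-tune a classifier such as BERT, RoBERTa, or ALBERT.

```python
def augment_with_generated(labeled_examples, generate_fn, n_per_seed=3):
    """Expand a labeled dataset with synthetic sequences.

    labeled_examples: list of (text, label) pairs.
    generate_fn: hypothetical callable(text) -> list[str]; in the paper
        this role is played by a GPT LM conditioned on the seed example.
    n_per_seed: how many generated sequences to keep per seed example.
    """
    augmented = list(labeled_examples)
    for text, label in labeled_examples:
        for generated in generate_fn(text)[:n_per_seed]:
            # Each generated sequence inherits its seed example's label.
            augmented.append((generated, label))
    return augmented
```

The key design point is that labels are propagated from seed to generated text, so the synthetic sequences arrive pre-labeled at fine-tuning time.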

Authors (3)
  1. Tomer Wullach (7 papers)
  2. Amir Adler (16 papers)
  3. Einat Minkov (10 papers)
Citations (36)