Intersectional Bias in Hate Speech and Abusive Language Datasets (2005.05921v3)

Published 12 May 2020 in cs.CL and cs.SI

Abstract: Algorithms are widely applied to detect hate speech and abusive language in social media. We investigated whether the human-annotated data used to train these algorithms are biased. We utilized a publicly available annotated Twitter dataset (Founta et al. 2018) and classified the racial, gender, and party identification dimensions of 99,996 tweets. The results showed that African American tweets were up to 3.7 times more likely to be labeled as abusive, and African American male tweets were up to 77% more likely to be labeled as hateful compared to the others. These patterns were statistically significant and robust even when party identification was added as a control variable. This study provides the first systematic evidence on intersectional bias in datasets of hate speech and abusive language.
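The abstract's headline numbers ("up to 3.7 times more likely to be labeled as abusive") are relative-likelihood comparisons between annotated groups. A minimal sketch of that core comparison, using entirely synthetic data (the paper's actual analysis is a regression on the Founta et al. dataset with party identification as a control variable, not this toy computation):

```python
# Hypothetical illustration of comparing label rates across groups.
# The (group, label) pairs below are synthetic, not the Founta et al. data.
tweets = [
    ("african_american", "abusive"), ("african_american", "abusive"),
    ("african_american", "normal"),
    ("other", "abusive"),
    ("other", "normal"), ("other", "normal"), ("other", "normal"),
]

def label_rate(group: str, label: str) -> float:
    """Share of a group's tweets carrying the given annotation label."""
    labels = [l for g, l in tweets if g == group]
    return labels.count(label) / len(labels)

# Ratio of labeling rates: how much more often one group's tweets
# receive the "abusive" label relative to the other group's tweets.
rate_aa = label_rate("african_american", "abusive")    # 2/3
rate_other = label_rate("other", "abusive")            # 1/4
relative_likelihood = rate_aa / rate_other
print(f"{relative_likelihood:.2f}x more likely to be labeled abusive")
```

A control variable such as party identification would, in the paper's setting, be added by fitting a logistic regression of the label on group membership plus the control, rather than taking a raw rate ratio as above.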

Authors (5)
  1. Jae Yeon Kim
  2. Carlos Ortiz
  3. Sarah Nam
  4. Sarah Santiago
  5. Vivek Datta
Citations (40)
