
Challenges in Automated Debiasing for Toxic Language Detection (2102.00086v1)

Published 29 Jan 2021 in cs.CL

Abstract: Biased associations have been a challenge in the development of classifiers for detecting toxic language, hindering both fairness and accuracy. As potential solutions, we investigate recently introduced debiasing methods for text classification datasets and models, as applied to toxic language detection. Our focus is on lexical (e.g., swear words, slurs, identity mentions) and dialectal markers (specifically African American English). Our comprehensive experiments establish that existing methods are limited in their ability to prevent biased behavior in current toxicity detectors. We then propose an automatic, dialect-aware data correction method, as a proof-of-concept. Despite the use of synthetic labels, this method reduces dialectal associations with toxicity. Overall, our findings show that debiasing a model trained on biased toxic language data is not as effective as simply relabeling the data to remove existing biases.
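The abstract refers to biased lexical associations (e.g., identity mentions correlating with the toxic label) without spelling out how such associations are quantified. The snippet below is a minimal illustrative sketch, not the paper's actual pipeline: it assumes a toy list of (text, label) pairs and an ad-hoc marker lexicon, and scores each marker with a simple PMI-style association against the toxic label.

```python
# Hypothetical sketch: quantifying lexical association bias in a toxicity dataset.
# The corpus, marker lexicon, and PMI score below are illustrative assumptions,
# not the dataset or method used in the paper.
import math

# Toy corpus of (text, toxic_label) pairs standing in for a real dataset.
corpus = [
    ("you are an idiot", 1),
    ("have a great day", 0),
    ("that movie was great", 0),
    ("idiot drivers everywhere", 1),
    ("the gay community celebrated today", 0),
    ("gay people deserve respect", 0),
]

# Illustrative markers whose association with the toxic label we check.
markers = {"idiot", "gay"}

def association(marker: str, data) -> float:
    """Pointwise mutual information between a marker and the toxic label."""
    n = len(data)
    n_marker = sum(1 for text, _ in data if marker in text.split())
    n_toxic = sum(label for _, label in data)
    n_both = sum(label for text, label in data if marker in text.split())
    if n_marker == 0 or n_toxic == 0 or n_both == 0:
        return float("-inf")
    return math.log2((n_both / n) / ((n_marker / n) * (n_toxic / n)))

for m in sorted(markers):
    print(f"PMI({m!r}, toxic) = {association(m, corpus):.2f}")
```

A high positive score for a marker indicates it co-occurs with the toxic label more than chance would predict; the relabeling approach described in the abstract aims to reduce such spurious associations in the training data itself rather than in the trained model.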

Authors (5)
  1. Xuhui Zhou (33 papers)
  2. Maarten Sap (86 papers)
  3. Swabha Swayamdipta (49 papers)
  4. Noah A. Smith (224 papers)
  5. Yejin Choi (287 papers)
Citations (128)
