
Detoxifying Language Models Risks Marginalizing Minority Voices (2104.06390v1)

Published 13 Apr 2021 in cs.CL and cs.LG

Abstract: Language models (LMs) must be both safe and equitable to be responsibly deployed in practice. With safety in mind, numerous detoxification techniques (e.g., Dathathri et al. 2020; Krause et al. 2020) have been proposed to mitigate toxic LM generations. In this work, we show that current detoxification techniques hurt equity: they decrease the utility of LMs on language used by marginalized groups (e.g., African-American English and minority identity mentions). In particular, we perform automatic and human evaluations of text generation quality when LMs are conditioned on inputs with different dialects and group identifiers. We find that detoxification makes LMs more brittle to distribution shift, especially on language used by marginalized groups. We identify that these failures stem from detoxification methods exploiting spurious correlations in toxicity datasets. Overall, our results highlight the tension between the controllability and distributional robustness of LMs.

Authors (6)
  1. Albert Xu (9 papers)
  2. Eshaan Pathak (3 papers)
  3. Eric Wallace (42 papers)
  4. Suchin Gururangan (29 papers)
  5. Maarten Sap (86 papers)
  6. Dan Klein (99 papers)
Citations (112)