Take its Essence, Discard its Dross! Debiasing for Toxic Language Detection via Counterfactual Causal Effect (2406.00983v1)

Published 3 Jun 2024 in cs.CL and cs.AI

Abstract: Current methods of toxic language detection (TLD) typically rely on specific tokens to make decisions, which makes them suffer from lexical bias, leading to inferior performance and generalization. Lexical bias has both "useful" and "misleading" impacts on understanding toxicity. Unfortunately, instead of distinguishing between these impacts, current debiasing methods typically eliminate them indiscriminately, resulting in a degradation in the detection accuracy of the model. To this end, we propose a Counterfactual Causal Debiasing Framework (CCDF) to mitigate lexical bias in TLD. It preserves the "useful impact" of lexical bias and eliminates the "misleading impact". Specifically, we first represent the total effect of the original sentence and biased tokens on decisions from a causal view. We then conduct counterfactual inference to exclude the direct causal effect of lexical bias from the total effect. Empirical evaluations demonstrate that the debiased TLD model incorporating CCDF achieves state-of-the-art performance in both accuracy and fairness compared to competitive baselines applied to several vanilla models. Our model also outperforms current debiased models in generalization to out-of-distribution data.
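
To make the mechanism described in the abstract concrete, the sketch below shows one way a counterfactual-inference debiasing step can look in PyTorch: the factual (total-effect) prediction fuses a sentence branch with a biased-token branch, and the direct effect of the bias branch alone is subtracted at inference so that only the content-mediated effect drives the decision. This is a minimal illustration under assumed choices (the class name `CounterfactualDebiasHead`, the additive fusion, and the zero-logit counterfactual are all hypothetical), not the paper's actual CCDF implementation.

```python
import torch
import torch.nn as nn


class CounterfactualDebiasHead(nn.Module):
    """Illustrative two-branch scorer for counterfactual debiasing.

    One branch scores the full sentence representation, the other scores
    only the representation of flagged (biased) tokens. At inference, the
    direct effect of the bias-only branch is subtracted from the fused
    total-effect score, keeping the indirect (content-mediated) effect.
    """

    def __init__(self, hidden_dim: int, num_classes: int = 2):
        super().__init__()
        self.sentence_clf = nn.Linear(hidden_dim, num_classes)  # full-sentence branch
        self.bias_clf = nn.Linear(hidden_dim, num_classes)      # biased-token branch

    def fuse(self, sent_logits: torch.Tensor, bias_logits: torch.Tensor) -> torch.Tensor:
        # Additive fusion of the two branches (an illustrative choice;
        # the paper's fusion function may differ).
        return sent_logits + bias_logits

    def forward(self, sent_repr: torch.Tensor, bias_repr: torch.Tensor):
        sent_logits = self.sentence_clf(sent_repr)
        bias_logits = self.bias_clf(bias_repr)

        # Total effect: factual prediction with both branches active.
        total_effect = self.fuse(sent_logits, bias_logits)

        # Counterfactual pass: the sentence branch is blanked out
        # (a zero logit here), so only the direct path from the biased
        # tokens to the decision remains.
        direct_effect = self.fuse(torch.zeros_like(sent_logits), bias_logits)

        # Debiased score: total effect minus the direct effect of the
        # biased tokens, i.e. the remaining indirect effect.
        debiased = total_effect - direct_effect
        return debiased, total_effect, direct_effect


if __name__ == "__main__":
    head = CounterfactualDebiasHead(hidden_dim=768)
    sent_repr = torch.randn(4, 768)   # e.g. pooled encoder output per sentence
    bias_repr = torch.randn(4, 768)   # e.g. pooled embeddings of the biased tokens
    debiased_logits, _, _ = head(sent_repr, bias_repr)
    print(debiased_logits.argmax(dim=-1))
```

The key design point the sketch tries to capture is that debiasing happens by effect subtraction at inference time rather than by removing the biased tokens from the input, which is how the useful part of lexical information can be retained while the shortcut (direct) path is cancelled.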

Authors (7)
  1. Junyu Lu (32 papers)
  2. Bo Xu (212 papers)
  3. Xiaokun Zhang (29 papers)
  4. Kaiyuan Liu (8 papers)
  5. Dongyu Zhang (32 papers)
  6. Liang Yang (102 papers)
  7. Hongfei Lin (34 papers)
