
Word Embeddings via Causal Inference: Gender Bias Reducing and Semantic Information Preserving (2112.05194v1)

Published 9 Dec 2021 in cs.CL and cs.CY

Abstract: With widening deployments of NLP in daily life, inherited social biases from NLP models have become more severe and problematic. Previous studies have shown that word embeddings trained on human-generated corpora have strong gender biases that can produce discriminative results in downstream tasks. Previous debiasing methods focus mainly on modeling bias and only implicitly consider semantic information while completely overlooking the complex underlying causal structure among bias and semantic components. To address these issues, we propose a novel methodology that leverages a causal inference framework to effectively remove gender bias. The proposed method allows us to construct and analyze the complex causal mechanisms facilitating gender information flow while retaining oracle semantic information within word embeddings. Our comprehensive experiments show that the proposed method achieves state-of-the-art results in gender-debiasing tasks. In addition, our methods yield better performance in word similarity evaluation and various extrinsic downstream NLP tasks.
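The abstract describes removing the gender component of word embeddings while preserving their semantic content, framed through a causal-inference lens. As a rough illustration only (not the paper's actual causal mechanism), the sketch below regresses every word vector on a small set of gender-definitional proxy words and keeps the residual, in the spirit of half-sibling-regression style debiasing; the function name `hsr_style_debias`, the ridge penalty `alpha`, and the toy word lists are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hsr_style_debias(E, vocab, gender_words, alpha=1.0):
    """Remove the part of each word vector that is linearly predictable
    from gender-definitional proxy words (ridge-regression residual)."""
    idx = [vocab[w] for w in gender_words if w in vocab]
    G = E[idx].T                                   # (d, k): proxy vectors as columns
    k = G.shape[1]
    # Ridge fit: express every word vector as a combination of the proxies.
    W = np.linalg.solve(G.T @ G + alpha * np.eye(k), G.T @ E.T)   # (k, V)
    gender_component = (G @ W).T                   # (V, d): predictable-from-gender part
    debiased = E - gender_component                # keep only the semantic residual
    debiased[idx] = E[idx]                         # leave the proxy words themselves intact
    return debiased

# Toy usage with random vectors; real use would load pretrained embeddings.
rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate(["he", "she", "doctor", "nurse", "engineer"])}
E = rng.normal(size=(len(vocab), 50))
E_debiased = hsr_style_debias(E, vocab, gender_words=["he", "she"])
```

A projection-style step like this only illustrates the general "strip the gender-predictable component" idea; the paper's contribution is to model the causal structure linking bias and semantic components rather than relying on a purely linear removal.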

Authors (10)
  1. Lei Ding (58 papers)
  2. Dengdeng Yu (9 papers)
  3. Jinhan Xie (8 papers)
  4. Wenxing Guo (3 papers)
  5. Shenggang Hu (3 papers)
  6. Meichen Liu (7 papers)
  7. Linglong Kong (55 papers)
  8. Hongsheng Dai (10 papers)
  9. Yanchun Bao (2 papers)
  10. Bei Jiang (36 papers)
Citations (28)