Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function (1905.12801v2)

Published 30 May 2019 in cs.CL

Abstract: Gender bias exists in natural language datasets, which neural language models tend to learn, resulting in biased text generation. In this research, we propose a debiasing approach based on modifying the loss function. We introduce a new term to the loss function which attempts to equalize the probabilities of male and female words in the output. Using an array of bias evaluation metrics, we provide empirical evidence that our approach successfully mitigates gender bias in language models without increasing perplexity. In comparison to existing debiasing strategies, namely data augmentation and word embedding debiasing, our method performs better in several aspects, especially in reducing gender bias in occupation words. Finally, we introduce a combination of data augmentation and our approach, and show that it outperforms existing strategies in all bias evaluation metrics.
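
The abstract only states that an extra term is added to the training loss to equalize the probabilities assigned to male and female words. Below is a minimal PyTorch sketch of one way such a term could be combined with the standard cross-entropy objective; the function name, the paired vocabulary indices (female_ids/male_ids), and the weight lam are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gender_equalizing_loss(logits, targets, female_ids, male_ids, lam=1.0):
    """Sketch of a cross-entropy loss with a gender-equalizing term.

    logits:     (batch, seq_len, vocab_size) raw model outputs
    targets:    (batch, seq_len) gold next-word indices
    female_ids: (num_pairs,) vocabulary indices of female words (e.g. "she")
    male_ids:   (num_pairs,) vocabulary indices of the paired male words (e.g. "he")
    lam:        weight of the equalizing term (assumed hyperparameter)
    """
    vocab_size = logits.size(-1)

    # Standard word-level language-modeling objective.
    ce = F.cross_entropy(logits.view(-1, vocab_size), targets.view(-1))

    # Predicted log-probabilities over the vocabulary at every position.
    log_probs = F.log_softmax(logits, dim=-1)  # (batch, seq_len, vocab_size)

    # Penalize |log p(female_i) - log p(male_i)|, i.e. the log-ratio of the
    # probabilities given to each gendered word pair, averaged over pairs
    # and output positions.
    diff = log_probs[..., female_ids] - log_probs[..., male_ids]
    equalize = diff.abs().mean()

    return ce + lam * equalize
```

A larger lam pushes the model harder toward assigning equal probability mass to each gendered pair at every step, while lam = 0 recovers the plain language-modeling loss; the abstract's claim is that an appropriately weighted term reduces measured gender bias without hurting perplexity.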

Authors (4)
  1. Yusu Qian (6 papers)
  2. Urwa Muaz (3 papers)
  3. Ben Zhang (4 papers)
  4. Jae Won Hyun (1 paper)
Citations (90)