Gender Bias in Contextualized Word Embeddings (1904.03310v1)

Published 5 Apr 2019 in cs.CL

Abstract: In this paper, we quantify, analyze and mitigate gender bias exhibited in ELMo's contextualized word vectors. First, we conduct several intrinsic analyses and find that (1) training data for ELMo contains significantly more male than female entities, (2) the trained ELMo embeddings systematically encode gender information and (3) ELMo unequally encodes gender information about male and female entities. Then, we show that a state-of-the-art coreference system that depends on ELMo inherits its bias and demonstrates significant bias on the WinoBias probing corpus. Finally, we explore two methods to mitigate such gender bias and show that the bias demonstrated on WinoBias can be eliminated.

Gender Bias in Contextualized Word Embeddings: An Expert Overview

The paper "Gender Bias in Contextualized Word Embeddings" by Jieyu Zhao et al. focuses on the persistent issue of gender bias in NLP systems, particularly in the contextualized word embeddings generated by ELMo. This paper conducts a detailed exploration into the origins, manifestations, and methods of mitigating gender bias, while providing significant insights into their propagation in downstream tasks such as coreference resolution.

Quantifying and Analyzing Bias in ELMo

The paper begins by identifying gender bias inherent in the training data of ELMo, a prominent model built on deep contextualized word embeddings. The authors show that the One Billion Word Benchmark, ELMo's training corpus, contains a heavily skewed ratio of male to female pronouns (5.3 million male versus 1.6 million female), which naturally leads to biased embeddings. Moreover, occupation words co-occur more often with male pronouns than with female pronouns, regardless of the traditional gender stereotypes tied to those professions.
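
This kind of imbalance can be verified by counting gendered pronouns directly in the corpus. Below is a minimal sketch, assuming a plain-text corpus file and simple illustrative pronoun lists (not the authors' exact counting script):

```python
from collections import Counter
import re

# Illustrative pronoun sets; the paper's analysis counts gendered words
# in the One Billion Word Benchmark (ELMo's training corpus).
MALE_PRONOUNS = {"he", "him", "his", "himself"}
FEMALE_PRONOUNS = {"she", "her", "hers", "herself"}

def pronoun_counts(lines):
    """Count male vs. female pronoun tokens in an iterable of sentences."""
    counts = Counter(male=0, female=0)
    for line in lines:
        for token in re.findall(r"[a-z']+", line.lower()):
            if token in MALE_PRONOUNS:
                counts["male"] += 1
            elif token in FEMALE_PRONOUNS:
                counts["female"] += 1
    return counts

# Hypothetical usage with a local copy of the corpus:
# with open("training_corpus.txt") as f:
#     print(pronoun_counts(f))
```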

To further dissect this bias, the paper applies principal component analysis (PCA) to ELMo embeddings. The analysis reveals a low-dimensional subspace that captures gender information, with separate components reflecting contextual and occupational gender. This finding indicates that the geometry of ELMo embeddings systematically encodes gender and can therefore influence predictions that rely on contextual cues in text.
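
A minimal sketch of this type of geometric analysis, assuming pre-extracted contextual vectors for the same occupation words in male- and female-context sentences (the arrays below are random placeholders, and the exact procedure differs from the paper's):

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder contextual vectors: row i holds the embedding of an occupation
# word taken from a sentence with a male entity (male_ctx) and from the same
# sentence with the entity swapped to female (female_ctx).
rng = np.random.default_rng(0)
male_ctx = rng.normal(size=(200, 1024))
female_ctx = rng.normal(size=(200, 1024))

# Difference vectors isolate directions that change when only the gender of
# the context changes; PCA then shows whether gender occupies a small subspace.
diffs = male_ctx - female_ctx
pca = PCA(n_components=10).fit(diffs)

# If the first one or two components dominate the explained variance, gender
# information is concentrated in a low-dimensional subspace of the embeddings.
print(pca.explained_variance_ratio_)
```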

Bias Propagation in Coreference Resolution

The paper then evaluates a state-of-the-art coreference resolution system that relies on ELMo embeddings against the WinoBias dataset, which is designed to gauge gender bias. The analysis reveals a significant performance gap between pro-stereotypical and anti-stereotypical examples, implying a tendency to fall back on traditional gender roles; this gap is measurably larger than that of a comparable system using GloVe embeddings.
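
In practice, the WinoBias diagnosis reduces to comparing accuracy (or F1) on the pro-stereotypical and anti-stereotypical splits; the helper below is a hedged sketch with toy inputs, not the paper's evaluation code:

```python
def split_accuracy(predictions):
    """predictions: iterable of (split, correct) pairs, where split is 'pro'
    or 'anti' and correct says whether the coreference link was resolved
    correctly. Returns per-split accuracy."""
    totals = {"pro": [0, 0], "anti": [0, 0]}
    for split, correct in predictions:
        totals[split][0] += int(correct)
        totals[split][1] += 1
    return {s: hits / n for s, (hits, n) in totals.items() if n > 0}

# Toy illustration only (not the paper's reported numbers):
preds = [("pro", True), ("pro", True), ("pro", True), ("anti", True),
         ("anti", False), ("anti", False)]
acc = split_accuracy(preds)
print(acc, "gap:", acc["pro"] - acc["anti"])  # a large gap signals stereotype reliance
```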

Mitigating Gender Bias

To address these biases, the paper explores two mitigation strategies: data augmentation and representation neutralization. The data augmentation approach retrains the system on training data extended with gender-swapped copies of each sentence, while the neutralization strategy averages the embeddings of the original and gender-swapped sentences at inference time. Notably, data augmentation yields a substantial reduction in bias, achieving near parity in system performance across the pro- and anti-stereotypical conditions, especially in semantically challenging scenarios. The authors note that complete bias removal remains complex and context-dependent, in line with recent literature on bias in embeddings.
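
A minimal sketch of both strategies, assuming a simple word-level swap list and a generic `embed(tokens)` function that returns one vector per token; both the swap list and `embed` are illustrative assumptions rather than the authors' released implementation:

```python
import numpy as np

# Simplified swap list; the paper's augmentation swaps a curated set of
# gendered words (pronouns, kinship terms, etc.). Mapping "her" -> "his"
# ignores the her/him ambiguity and is kept deliberately simple here.
SWAP = {"he": "she", "she": "he", "his": "her", "her": "his",
        "him": "her", "himself": "herself", "herself": "himself"}

def gender_swap(tokens):
    """Data augmentation: build the gender-swapped variant of a sentence."""
    return [SWAP.get(t.lower(), t) for t in tokens]

def neutralized_embeddings(tokens, embed):
    """Test-time neutralization: average the contextual embeddings of the
    original sentence and its gender-swapped counterpart (same length assumed)."""
    original = np.asarray(embed(tokens))              # shape: (len(tokens), dim)
    swapped = np.asarray(embed(gender_swap(tokens)))  # shape: (len(tokens), dim)
    return (original + swapped) / 2.0
```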

Concluding Remarks and Implications

This paper offers a critical lens on ELMo's gender bias, elucidating its deep-seated presence in NLP systems and its far-reaching implications for real-world applications. It acknowledges that while data augmentation can substantially reduce bias, contextualized embeddings in modern NLP models warrant ongoing scrutiny and systematic bias evaluation. The work invites future research to extend this analysis to other embedding models such as BERT and argues that mitigating bias in NLP is not merely a technical challenge but a crucial step toward equitable AI.

Such efforts towards bias mitigation are vital for developing NLP systems that reflect and respect the diversity and nuances of human language, ensuring their ethical and inclusive deployment in diverse linguistic contexts.

Authors (6)
  1. Jieyu Zhao (54 papers)
  2. Tianlu Wang (33 papers)
  3. Mark Yatskar (38 papers)
  4. Ryan Cotterell (226 papers)
  5. Vicente Ordonez (52 papers)
  6. Kai-Wei Chang (292 papers)
Citations (400)