
On Measures of Biases and Harms in NLP (2108.03362v2)

Published 7 Aug 2021 in cs.CL and cs.CY

Abstract: Recent studies show that NLP technologies propagate societal biases about demographic groups associated with attributes such as gender, race, and nationality. To create interventions and mitigate these biases and associated harms, it is vital to be able to detect and measure such biases. While existing works propose bias evaluation and mitigation methods for various tasks, there remains a need to cohesively understand the biases and the specific harms they measure, and how different measures compare with each other. To address this gap, this work presents a practical framework of harms and a series of questions that practitioners can answer to guide the development of bias measures. As a validation of our framework and documentation questions, we also present several case studies of how existing bias measures in NLP -- both intrinsic measures of bias in representations and extrinsic measures of bias of downstream applications -- can be aligned with different harms, and how our proposed documentation questions facilitate a more holistic understanding of what bias measures are measuring.
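
The intrinsic measures the abstract refers to operate directly on representations, for example association tests over word embeddings such as WEAT (Caliskan et al., 2017). As a rough illustration only (this is not code from the paper), the sketch below computes a WEAT-style effect size with NumPy; the random embeddings and the word lists are hypothetical placeholders standing in for real pretrained vectors and curated target/attribute sets.

```python
# Minimal sketch of a WEAT-style intrinsic bias measure.
# NOT the paper's implementation; embeddings and word lists are placeholders.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    # s(w, A, B): mean similarity of word w to attribute set A
    # minus its mean similarity to attribute set B.
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    # Effect size: difference in mean associations of the two target
    # sets X and Y, normalized by the std. dev. of all association scores.
    sx = [association(x, A, B, emb) for x in X]
    sy = [association(y, A, B, emb) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = ["doctor", "nurse", "he", "him", "she", "her"]
    # Stand-in for real pretrained embeddings.
    emb = {w: rng.normal(size=50) for w in vocab}
    # Hypothetical target/attribute sets; real studies use curated lists.
    print(weat_effect_size(["doctor"], ["nurse"],
                           ["he", "him"], ["she", "her"], emb))
```

Extrinsic measures, by contrast, score the behavior of a downstream system (e.g., differences in classifier error rates across demographic groups); the paper's framework and documentation questions are meant to connect both kinds of measures to the specific harms they capture.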

Authors (11)
  1. Sunipa Dev (28 papers)
  2. Emily Sheng (17 papers)
  3. Jieyu Zhao (54 papers)
  4. Aubrie Amstutz (1 paper)
  5. Jiao Sun (29 papers)
  6. Yu Hou (43 papers)
  7. Mattie Sanseverino (1 paper)
  8. Jiin Kim (5 papers)
  9. Akihiro Nishi (6 papers)
  10. Nanyun Peng (205 papers)
  11. Kai-Wei Chang (292 papers)
Citations (75)
