Intrinsic Bias Metrics Do Not Correlate with Application Bias (2012.15859v5)

Published 31 Dec 2020 in cs.CL

Abstract: NLP systems learn harmful societal biases that cause them to amplify inequality as they are deployed in more and more situations. To guide efforts at debiasing these systems, the NLP community relies on a variety of metrics that quantify bias in models. Some of these metrics are intrinsic, measuring bias in word embedding spaces, and some are extrinsic, measuring bias in downstream tasks that the word embeddings enable. Do these intrinsic and extrinsic metrics correlate with each other? We compare intrinsic and extrinsic metrics across hundreds of trained models covering different tasks and experimental conditions. Our results show no reliable correlation between these metrics that holds in all scenarios across tasks and languages. We urge researchers working on debiasing to focus on extrinsic measures of bias, and to make using these measures more feasible via creation of new challenge sets and annotated test data. To aid this effort, we release code, a new intrinsic metric, and an annotated test set focused on gender bias in hate speech.
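The core analysis described in the abstract asks whether an intrinsic score measured on a model's embeddings predicts an extrinsic score measured on a downstream task, across many trained models. Below is a minimal sketch of such a correlation check; the metric names and numbers are hypothetical placeholders, not the authors' metrics or data.

```python
# Hypothetical illustration: correlate an intrinsic bias score with an
# extrinsic one across trained models. The values below are made up and
# the metric names are placeholders, not the paper's data.
from scipy.stats import pearsonr, spearmanr

# One entry per trained model (e.g., different seeds or debiasing settings).
intrinsic_scores = [0.42, 0.35, 0.51, 0.28, 0.47, 0.33]  # e.g., an embedding-space bias score
extrinsic_scores = [0.10, 0.12, 0.08, 0.15, 0.09, 0.11]  # e.g., a downstream-task bias gap

r, r_p = pearsonr(intrinsic_scores, extrinsic_scores)
rho, rho_p = spearmanr(intrinsic_scores, extrinsic_scores)
print(f"Pearson r = {r:.2f} (p = {r_p:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f})")
```

A correlation that fails to hold consistently across tasks and languages is what the abstract reports, which motivates the authors' recommendation to favor extrinsic measures when evaluating debiasing.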

Authors (5)
  1. Seraphina Goldfarb-Tarrant (17 papers)
  2. Rebecca Marchant (1 paper)
  3. Ricardo Muñoz Sanchez (1 paper)
  4. Mugdha Pandya (5 papers)
  5. Adam Lopez (29 papers)
Citations (157)
