Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models (2112.07447v1)

Published 14 Dec 2021 in cs.CL, cs.CY, and cs.LG

Abstract: An increasing awareness of biased patterns in natural language processing resources, like BERT, has motivated many metrics to quantify 'bias' and 'fairness'. But comparing the results of different metrics, and the works that evaluate with such metrics, remains difficult, if not outright impossible. We survey the existing literature on fairness metrics for pretrained LLMs and experimentally evaluate their compatibility, covering both biases in LLMs and biases in their downstream tasks. We do this through a mixture of traditional literature survey, correlation analysis, and empirical evaluations. We find that many metrics are not compatible and depend strongly on (i) templates, (ii) attribute and target seeds, and (iii) the choice of embeddings. These results indicate that fairness or bias evaluation remains challenging for contextualized LLMs, if not highly subjective. To improve future comparisons and fairness evaluations, we recommend avoiding embedding-based metrics and focusing on fairness evaluations in downstream tasks.
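The sensitivity to attribute and target seeds that the abstract highlights is easiest to see in embedding-association metrics of the WEAT family. The following is a minimal sketch of such a metric (in the style of Caliskan et al.'s effect size), not the paper's exact evaluation procedure; the toy vectors in the usage example are illustrative assumptions.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """s(w, A, B): mean similarity of word vector w to attribute set A
    minus its mean similarity to attribute set B."""
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

def weat_effect_size(X, Y, A, B):
    """Effect size of the differential association of target sets X, Y
    with attribute sets A, B. Swapping or changing a few seed words in
    A/B or X/Y can move this number substantially, which is the seed
    sensitivity the survey measures."""
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    pooled = np.std(sx + sy, ddof=1)
    return (np.mean(sx) - np.mean(sy)) / pooled

# Toy 2-d "embeddings": targets X lean toward attribute A, Y toward B,
# so the effect size comes out positive.
A = [np.array([1.0, 0.0])]
B = [np.array([0.0, 1.0])]
X = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]
Y = [np.array([0.1, 0.9]), np.array([0.2, 0.8])]
print(weat_effect_size(X, Y, A, B))
```

Because the score is computed from small hand-picked seed sets, replacing even one seed word (one vector above) changes every pairwise similarity entering the mean, which is one reason the survey finds embedding-based metrics hard to compare across works.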

Authors (4)
  1. Pieter Delobelle (15 papers)
  2. Ewoenam Kwaku Tokpo (4 papers)
  3. Toon Calders (17 papers)
  4. Bettina Berendt (20 papers)
Citations (24)