COBIAS: Contextual Reliability in Bias Assessment (2402.14889v3)

Published 22 Feb 2024 in cs.CL and cs.AI

Abstract: LLMs often inherit biases from the web data they are trained on, which contains stereotypes and prejudices. Current methods for evaluating and mitigating these biases rely on bias-benchmark datasets, which measure bias by observing an LLM's behavior on biased statements. However, these statements lack contextual consideration of the situations they aim to present. To address this, we introduce a contextual reliability framework, which evaluates model robustness to biased statements by considering the various contexts in which they may appear. We develop the Context-Oriented Bias Indicator and Assessment Score (COBIAS) to measure a biased statement's reliability in detecting bias, based on the variance in model behavior across different contexts. To evaluate the metric, we augment 2,291 stereotyped statements from two existing benchmark datasets with contextual information. We show that COBIAS aligns with human judgment on the contextual reliability of biased statements (Spearman's $\rho = 0.65$, $p = 3.4 \times 10^{-60}$) and can be used to create reliable datasets, which would assist bias mitigation efforts.
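
The abstract describes scoring each biased statement under several contexts and treating the variance in model behavior as a reliability signal, validated against human judgments with Spearman's rank correlation. The sketch below is a minimal illustration of that idea only: the `model_bias_score` function and the plain variance aggregation are assumptions for illustration, not the paper's actual COBIAS definition.

```python
# Hypothetical sketch of a context-variance reliability score, loosely following
# the abstract: score a biased statement under several contexts and use the
# spread of model behavior as a reliability indicator. The real COBIAS metric
# in the paper may be defined differently.
from typing import Callable, Sequence

import numpy as np
from scipy.stats import spearmanr


def context_variance_score(
    statement: str,
    contexts: Sequence[str],
    model_bias_score: Callable[[str], float],  # assumed: maps a prompt to a scalar bias score
) -> float:
    """Variance of the model's bias score for `statement` across contexts."""
    scores = np.array([model_bias_score(f"{ctx} {statement}") for ctx in contexts])
    return float(np.var(scores))


def validate_against_humans(
    metric_scores: Sequence[float],
    human_scores: Sequence[float],
) -> tuple[float, float]:
    """Spearman rank correlation between the metric and human reliability judgments."""
    rho, p_value = spearmanr(metric_scores, human_scores)
    return float(rho), float(p_value)
```

A usage pattern would be to compute `context_variance_score` for each of the augmented statements and then check agreement with human annotations via `validate_against_humans`, analogous to the Spearman's ρ reported in the abstract.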

Authors (7)
  1. Priyanshul Govil (3 papers)
  2. Vamshi Krishna Bonagiri (6 papers)
  3. Manas Gaur (59 papers)
  4. Ponnurangam Kumaraguru (129 papers)
  5. Sanorita Dey (2 papers)
  6. Hemang Jain (2 papers)
  7. Aman Chadha (110 papers)
Citations (1)

Summary

We haven't generated a summary for this paper yet.