Identifying and Correcting Label Bias in Machine Learning (1901.04966v1)

Published 15 Jan 2019 in cs.LG, cs.AI, and stat.ML

Abstract: Datasets often contain biases which unfairly disadvantage certain groups, and classifiers trained on such datasets can inherit these biases. In this paper, we provide a mathematical formulation of how this bias can arise. We do so by assuming the existence of underlying, unknown, and unbiased labels which are overwritten by an agent who intends to provide accurate labels but may have biases against certain groups. Despite the fact that we only observe the biased labels, we are able to show that the bias may nevertheless be corrected by re-weighting the data points without changing the labels. We show, with theoretical guarantees, that training on the re-weighted dataset corresponds to training on the unobserved but unbiased labels, thus leading to an unbiased machine learning classifier. Our procedure is fast and robust and can be used with virtually any learning algorithm. We evaluate on a number of standard machine learning fairness datasets and a variety of fairness notions, finding that our method outperforms standard approaches in achieving fair classification.

Insights into Identifying and Correcting Label Bias in Machine Learning

This paper addresses the crucial topic of label bias in machine learning datasets, proposing a mathematical framework to detect and correct such biases. The authors conceptualize label bias as arising when an agent who intends to label accurately nevertheless assigns labels that are systematically biased against certain groups. The crux of the work is a re-weighting approach that trains classifiers on the biased data without altering the observed labels, so that the classifiers learn as if from the unbiased labels. The approach is evaluated on several standard fairness datasets, demonstrating improved fairness in classification tasks.

The paper's mathematical formulation is grounded in the assumption of an unknown, unbiased label function that a biased agent corrupts. The authors model the biased labeler as producing the label distribution closest in KL divergence to the true one subject to its bias, which yields a closed-form expression for the observed label function, and they introduce a method that corrects the bias by re-weighting data points. The re-weighting approach comes with theoretical guarantees that training on the weighted dataset is equivalent to training on the unobserved, unbiased labels.
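Concretely, the closed form follows from treating the biased labeler as a KL-projection: the observed distribution minimizes KL divergence to the true one subject to bias constraints. A reconstruction of the key identities is sketched below; the symbols (P_true, P_bias, constraint functions c_k, multipliers λ_k) follow the standard Lagrangian-duality derivation and are assumptions here rather than the paper's verbatim notation.

```latex
% Biased labels as the KL-closest distribution to the true labels,
% subject to bias constraints of the form E[c_k(x, y)] \ge b_k;
% Lagrangian duality gives the exponential-family closed form:
P_{\mathrm{bias}}(y \mid x) \;\propto\; P_{\mathrm{true}}(y \mid x)\,
  \exp\!\Big(\sum_{k=1}^{K} \lambda_k\, c_k(x, y)\Big).

% Inverting this relation suggests corrective example weights
w(x, y) \;\propto\; \exp\!\Big(-\sum_{k=1}^{K} \lambda_k\, c_k(x, y)\Big),

% under which the weighted expected loss on the biased labels matches
% the expected loss on the unobserved, unbiased labels.
```

Since the multipliers λ_k are unknown in practice, the paper estimates them iteratively from the trained classifier's measured fairness violations.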

Across multiple fairness definitions (demographic parity, equal opportunity, equalized odds, and disparate impact), the method proves practical and versatile. The authors show that it surpasses traditional methods such as post-processing and Lagrangian approaches by better reconciling fairness with accuracy. In particular, their approach avoids the complexity and instability associated with the Lagrangian method, which typically requires approximating non-convex fairness constraints.
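To make the iterative estimation concrete, here is a minimal sketch in Python for the demographic-parity case, assuming scikit-learn's `LogisticRegression` with `sample_weight`. The loop structure follows the idea described above, but the specific multiplier update and weight formula are simplified assumptions for illustration, not the paper's exact Algorithm 1.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def reweight_for_demographic_parity(X, y, groups, n_iters=50, lr=1.0):
    """Train a classifier whose positive-prediction rate is equalized
    across protected groups by re-weighting examples, never relabeling.

    X:      (n, d) feature matrix
    y:      (n,) binary labels in {0, 1} (possibly biased)
    groups: (n,) integer protected-group id per example
    """
    group_ids = np.unique(groups)
    lambdas = {g: 0.0 for g in group_ids}  # one multiplier per group
    weights = np.ones(len(y))
    clf = LogisticRegression()

    for _ in range(n_iters):
        clf.fit(X, y, sample_weight=weights)
        preds = clf.predict(X)
        overall_rate = preds.mean()
        for g in group_ids:
            mask = groups == g
            # Demographic-parity violation: group rate vs. overall rate.
            violation = preds[mask].mean() - overall_rate
            lambdas[g] -= lr * violation
            # Up-weight positives in under-predicted groups and
            # down-weight them in over-predicted ones; negatives get the
            # reciprocal weight. This particular weight form is an
            # assumption, simplified from the paper's derivation.
            weights[mask & (y == 1)] = np.exp(lambdas[g])
            weights[mask & (y == 0)] = np.exp(-lambdas[g])
        weights /= weights.mean()  # keep the overall weight scale stable
    return clf
```

The key design choice, as in the paper, is that only the example weights change across iterations; the features and the observed labels are never modified.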

The implications of this research are significant: it enables fairer machine learning models without modifying the observed data, which sidesteps potential legal concerns about data alteration. Future work could build on this framework to extend the method to multi-label settings or explore its applicability in other data domains.

In summary, this paper presents a theoretically grounded and empirically validated method for addressing fairness in machine learning by targeting label bias at its source, offering an effective means of improving model equity in diverse real-world applications.

Authors (2)
  1. Heinrich Jiang (32 papers)
  2. Ofir Nachum (64 papers)
Citations (263)