
A Survey on Bias and Fairness in Machine Learning (1908.09635v3)

Published 23 Aug 2019 in cs.LG

Abstract: With the widespread use of AI systems and applications in our everyday lives, it is important to take fairness issues into consideration while designing and engineering these types of systems. Such systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that the decisions do not reflect discriminatory behavior toward certain groups or populations. We have recently seen work in machine learning, natural language processing, and deep learning that addresses such challenges in different subdomains. With the commercialization of these systems, researchers are becoming aware of the biases that these applications can contain and have attempted to address them. In this survey we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can affect AI applications. We then created a taxonomy for fairness definitions that machine learning researchers have defined in order to avoid the existing bias in AI systems. In addition to that, we examined different domains and subdomains in AI showing what researchers have observed with regard to unfair outcomes in the state-of-the-art methods and how they have tried to address them. There are still many future directions and solutions that can be taken to mitigate the problem of bias in AI systems. We are hoping that this survey will motivate researchers to tackle these issues in the near future by observing existing work in their respective fields.


Overview

The paper "A Survey on Bias and Fairness in Machine Learning," authored by Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan of USC-ISI, provides a comprehensive review of existing biases in AI systems and proposes methods to ensure fairness in ML models. This survey explores how biases can emerge in data, algorithms, and user interactions, creating unfairness in AI outcomes. It introduces a taxonomy for defining fairness in ML and presents approaches to mitigate observed biases in different AI subdomains.

Real-World Implications of Bias

The paper highlights several high-profile instances in which biased AI systems produced discriminatory and unfair outcomes. For example, the COMPAS tool used for recidivism prediction in U.S. courts has shown higher false positive rates for African-American defendants than for Caucasian defendants. Biases have also been documented in facial recognition systems and in the gender-based targeting of job advertisements.
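Disparities of this kind are typically audited by computing error rates conditioned on group membership. As a minimal sketch (using synthetic labels and predictions, not the actual COMPAS data), a per-group false positive rate audit might look like:

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): fraction of true negatives predicted positive."""
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

# Synthetic stand-ins for outcomes, predictions, and group membership.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: FPR = {false_positive_rate(y_true[mask], y_pred[mask]):.2f}")
```

A large gap between the two printed rates is the kind of disparity reported for COMPAS.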

Taxonomy of Bias Types

The authors categorize bias into three primary sources: data, algorithms, and user interactions. This classification elucidates the complexities of biases in AI systems. Key types of bias include:

  • Data Bias: Measurement bias, omitted variable bias, representation bias, aggregation bias, sampling bias, longitudinal data fallacy, and linking bias.
  • Algorithmic Bias: Bias arising from the algorithm itself, together with user interaction bias and its subdivisions, presentation bias and ranking bias.
  • User Interaction Bias: Popularity bias, emergent bias, and evaluation bias.

Fairness Definitions

Various definitions of fairness are discussed, such as:

  • Equalized Odds: Ensuring true positive and false positive rates are equal across different groups.
  • Equal Opportunity: Ensuring true positive rates are equal for protected and unprotected groups.
  • Demographic Parity: The likelihood of positive outcomes should be independent of protected attributes.
  • Fairness Through Awareness: Similar individuals, under a task-specific similarity metric, should receive similar outcomes.
  • Fairness Through Unawareness: Sensitive attributes are not explicitly used in the decision process.
  • Counterfactual Fairness: Predictions should remain unchanged under hypothetical scenarios where the individual's demographic group changes.

The paper also distinguishes individual fairness, group fairness, and subgroup fairness, emphasizing that these definitions must be applied in a context-sensitive way.
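To make these definitions concrete, here is a minimal sketch (the function names and synthetic data are illustrative, not from the paper) of how the demographic parity and equal opportunity gaps can be measured:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(yhat = 1 | group 0) - P(yhat = 1 | group 1)|."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true positive rates between the two groups."""
    def tpr(g):
        return y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))
```

Both statistics are zero when the corresponding fairness criterion holds exactly; in practice, a gap below some chosen tolerance is accepted.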

Methods for Fair Machine Learning

The survey groups bias mitigation methods into three primary categories:

  • Pre-Processing: Transforming data to remove biases before model training.
  • In-Processing: Adjusting learning algorithms to incorporate fairness considerations during training.
  • Post-Processing: Modifying the prediction outputs to ensure fairness without altering the underlying model or data.

Key techniques and their applications across various domains are reviewed, including fair classification, regression, principal component analysis (PCA), community detection, and NLP.
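As one illustration of the post-processing category (the threshold rule and target rate below are illustrative choices, not a method prescribed by the survey), group-specific decision thresholds can be tuned so that positive prediction rates match across groups:

```python
import numpy as np

def fit_group_thresholds(scores, group, target_rate=0.3):
    """Choose a per-group score threshold so each group is predicted
    positive at target_rate (demographic parity via post-processing;
    the trained model and the data are left untouched)."""
    return {g: np.quantile(scores[group == g], 1 - target_rate)
            for g in np.unique(group)}

def apply_thresholds(scores, group, thresholds):
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, group)])

# Synthetic model scores whose distribution differs by group.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)
scores = rng.normal(loc=group * 0.5, scale=1.0)  # group 1 scores skew higher

thresholds = fit_group_thresholds(scores, group)
y_pred = apply_thresholds(scores, group, thresholds)
for g in (0, 1):
    print(f"group {g}: positive rate = {y_pred[group == g].mean():.2f}")
```

Both groups end up with roughly the target positive rate, at the cost of using the sensitive attribute at decision time, a trade-off characteristic of post-processing approaches.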

Domain-Specific Mitigation Strategies

The paper surveys domain-specific methods for bias mitigation, including:

  • Variational Autoencoders (VAEs): Learning fair representations by treating protected attributes as nuisance variables.
  • Adversarial Learning: Utilizing adversarial networks to maximize prediction accuracy while minimizing the adversary's ability to predict sensitive variables.
  • Fair NLP: Addressing biases in word embeddings, coreference resolution, LLMs, sentence encoders, machine translation, and named entity recognition (NER).
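To illustrate the adversarial-learning idea above, here is a minimal PyTorch sketch (synthetic data, hypothetical layer sizes, and an alternating update scheme; the architectures in the surveyed papers differ). The predictor is trained to solve its task while making its outputs uninformative to an adversary that tries to recover the sensitive attribute:

```python
import torch
import torch.nn as nn

# Synthetic features, task labels, and a binary sensitive attribute.
torch.manual_seed(0)
n, d = 512, 10
x = torch.randn(n, d)
y = torch.randint(0, 2, (n, 1)).float()   # task label
a = torch.randint(0, 2, (n, 1)).float()   # sensitive attribute

predictor = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # accuracy/fairness trade-off weight

for step in range(200):
    # 1) Train the adversary to recover the sensitive attribute
    #    from the predictor's (detached) outputs.
    adv_loss = bce(adversary(predictor(x).detach()), a)
    opt_a.zero_grad(); adv_loss.backward(); opt_a.step()

    # 2) Train the predictor to fit the task while fooling the adversary:
    #    subtracting the adversary's loss pushes the predictions toward
    #    carrying no information about the sensitive attribute.
    logits = predictor(x)
    loss = bce(logits, y) - lam * bce(adversary(logits), a)
    opt_p.zero_grad(); loss.backward(); opt_p.step()
```

This alternating minimax pattern is the core of the adversarial debiasing line of work the survey reviews; the trade-off weight lam controls how much task accuracy is sacrificed for fairness.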

Future Research Directions

The paper identifies open challenges and potential research opportunities, such as:

  • Developing a unified definition of fairness to streamline evaluation.
  • Shifting the focus from equality to equity, ensuring resources are allocated based on individual or group needs.
  • Designing methods to automatically detect unfairness in datasets or algorithms.

Conclusion

Overall, this survey provides a detailed examination of biases and fairness in machine learning, categorizing the sources of bias and introducing diverse definitions of fairness. It underscores the importance of context-sensitive applications of fairness principles and illustrates various methods to mitigate bias across multiple AI subdomains. This extensive review serves as a valuable resource for researchers looking to design fair AI systems and navigate the complexities of bias in machine learning.
