Mathematical Notions vs. Human Perception of Fairness: A Descriptive Approach to Fairness for Machine Learning (1902.04783v4)

Published 13 Feb 2019 in cs.CY

Abstract: Fairness for Machine Learning has received considerable attention recently. Various mathematical formulations of fairness have been proposed, and it has been shown that it is impossible to satisfy all of them simultaneously. The literature so far has dealt with these impossibility results by quantifying the tradeoffs between different formulations of fairness. Our work takes a different perspective on this issue. Rather than requiring all notions of fairness to (partially) hold at the same time, we ask which one of them is the most appropriate given the societal domain in which the decision-making model is to be deployed. We take a descriptive approach and set out to identify the notion of fairness that best captures \emph{lay people's perception of fairness}. We run adaptive experiments designed to pinpoint the most compatible notion of fairness with each participant's choices through a small number of tests. Perhaps surprisingly, we find that the most simplistic mathematical definition of fairness---namely, demographic parity---most closely matches people's idea of fairness in two distinct application scenarios. This conclusion remains intact even when we explicitly tell the participants about the alternative, more complicated definitions of fairness, and we reduce the cognitive burden of evaluating those notions for them. Our findings have important implications for the Fair ML literature and the discourse on formalizing algorithmic fairness.

A Descriptive Examination of Fairness in Machine Learning: Aligning Mathematical Definitions with Human Perception

The paper "Mathematical Notions vs. Human Perception of Fairness: A Descriptive Approach to Fairness for Machine Learning" by Megha Srivastava, Hoda Heidari, and Andreas Krause explores a critical issue in the development and deployment of ML models: fairness. As algorithms increasingly influence decisions in sensitive domains such as criminal justice, medicine, and credit lending, how fairness should be defined and measured becomes an essential question. The authors challenge the capacity of existing mathematical definitions of fairness to comprehensively reflect societal perceptions and conduct an empirical investigation into how well these definitions align with human ethical judgments.

Key Findings and Methodology

The paper employs a descriptive ethics approach to determine which mathematical formulation of fairness resonates most with lay individuals' perceptions across different contextual scenarios. The authors recognize the inherent tension between formal fairness definitions such as demographic parity, equalized odds, and calibration, noting that, outside of degenerate cases such as equal base rates across groups, they cannot all be satisfied simultaneously. Instead of proposing a universally applicable solution, the paper advocates for selecting the most contextually relevant notion of fairness.
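For reference, the three notions named above are usually formalized as follows. These are the standard Fair ML definitions, stated here in generic notation for a binary predictor $\hat{Y}$, score $\hat{S}$, outcome $Y$, and sensitive attribute $A$; the paper's own notation may differ.

```latex
% Demographic parity: positive prediction rates match across groups
P(\hat{Y}=1 \mid A=a) \;=\; P(\hat{Y}=1 \mid A=b)

% Equalized odds: error rates match across groups, conditional on the true outcome
P(\hat{Y}=1 \mid Y=y,\, A=a) \;=\; P(\hat{Y}=1 \mid Y=y,\, A=b) \quad \text{for } y \in \{0,1\}

% Calibration: among individuals assigned score s, a fraction s is positive, in every group
P(Y=1 \mid \hat{S}=s,\, A=a) \;=\; s \quad \text{for all } s, a
```

The impossibility results referenced above say, roughly, that calibration and the error-rate parities cannot hold together unless the groups have equal base rates, which is why the paper asks which single notion to privilege in a given domain.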

Through an adaptive experimental design leveraging active learning, the authors assessed participants' alignment with four fairness notions: demographic parity (DP), error parity (EP), false discovery rate parity (FDR), and false negative rate parity (FNR). The experiments reveal that demographic parity, the simplest of the considered metrics, aligns most closely with human perceptions in both application scenarios examined, even when participants are informed of the more complex definitions. This outcome suggests an intriguing divergence: despite its simplicity, DP may capture a more intuitive, human-centric view of fairness in algorithmic contexts.
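To make concrete what the four notions measure, here is a minimal illustrative sketch, not the authors' experimental code; the function and variable names are my own. It computes the between-group gap for each criterion on binary predictions, where a gap of zero means the corresponding parity notion holds exactly:

```python
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """Between-group gaps for the four notions studied in the paper:
    demographic parity (DP), error parity (EP), false discovery rate
    parity (FDR), and false negative rate parity (FNR)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {"DP": [], "EP": [], "FDR": [], "FNR": []}
    for g in np.unique(group):
        t, p = y_true[group == g], y_pred[group == g]
        rates["DP"].append(p.mean())          # P(Yhat = 1 | G = g)
        rates["EP"].append((p != t).mean())   # P(Yhat != Y | G = g)
        pred_pos = p == 1
        true_pos = t == 1
        # FDR: fraction of positive predictions that are wrong, per group
        rates["FDR"].append((t[pred_pos] == 0).mean() if pred_pos.any() else np.nan)
        # FNR: fraction of true positives the model misses, per group
        rates["FNR"].append((p[true_pos] == 0).mean() if true_pos.any() else np.nan)
    # Max-minus-min rate across groups; zero gap = exact parity.
    return {k: float(np.nanmax(v) - np.nanmin(v)) for k, v in rates.items()}

# Toy example with two groups of four individuals each.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(fairness_gaps(y_true, y_pred, group))
# {'DP': 0.0, 'EP': 0.0, 'FDR': 0.5, 'FNR': 0.3333333333333333}
```

The toy output illustrates the impossibility results in miniature: the two groups receive positive predictions at identical rates (DP gap of zero) while the error-type parities are violated.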

Implications for Fair ML and Future Research

This insight has substantial implications for the algorithmic fairness discourse. It underscores the need for a nuanced understanding of how formal fairness notions interact with human ethical perspectives, a consideration crucial not only for developing fair algorithms but also for fostering public trust and acceptance. Furthermore, the paper suggests that engaging the people subject to algorithmic decisions in the process of choosing a fairness notion can lead to fairer and more socially aligned model deployments.

The research invites further exploration: expanding the adaptive experimental paradigm to broader demographic diversity, different contexts, and additional fairness definitions. Future investigations could also compare informed expert opinions with lay perceptions and examine how personal stakes or direct exposure to a model's decisions influence fairness judgments.

Conclusion

The paper by Srivastava, Heidari, and Krause provides a valuable contribution to the ongoing discussion of fairness in machine learning by highlighting the necessity of aligning mathematical fairness models with human perceptions. Their findings advocate for a collaborative approach in which lay perspectives are integral to defining and implementing fair ML practices, thereby promoting models that are both ethically sound and socially palatable. As the field of AI evolves, embracing this intersection of empirical human-centered research and theoretical algorithmic development will be crucial for the responsible integration of AI in societal frameworks.

Authors (3)
  1. Megha Srivastava (15 papers)
  2. Hoda Heidari (46 papers)
  3. Andreas Krause (269 papers)
Citations (182)