From Parity to Preference-based Notions of Fairness in Classification (1707.00010v2)

Published 30 Jun 2017 in stat.ML and cs.LG

Abstract: The adoption of automated, data-driven decision making in an ever expanding range of applications has raised concerns about its potential unfairness towards certain social groups. In this context, a number of recent studies have focused on defining, detecting, and removing unfairness from data-driven decision systems. However, the existing notions of fairness, based on parity (equality) in treatment or outcomes for different social groups, tend to be quite stringent, limiting the overall decision making accuracy. In this paper, we draw inspiration from the fair-division and envy-freeness literature in economics and game theory and propose preference-based notions of fairness -- given the choice between various sets of decision treatments or outcomes, any group of users would collectively prefer its treatment or outcomes, regardless of the (dis)parity as compared to the other groups. Then, we introduce tractable proxies to design margin-based classifiers that satisfy these preference-based notions of fairness. Finally, we experiment with a variety of synthetic and real-world datasets and show that preference-based fairness allows for greater decision accuracy than parity-based fairness.

An Examination of Preference-Based Fairness in Automated Classification

The paper "From Parity to Preference-based Notions of Fairness in Classification" shifts the focus of fairness in automated decision systems from traditional parity-based criteria to preference-based ones. The work is motivated by the difficulty of achieving fairness while maintaining high decision accuracy in machine learning systems used for sensitive tasks such as credit scoring or recidivism prediction in criminal justice.

Overview

The authors identify the limitations of parity-based fairness: requiring equality in treatment or outcomes has proven too restrictive and can lead to a notable decline in decision accuracy. Inspired by the concepts of fair division and envy-freeness from economics and game theory, the paper introduces preference-based fairness notions, under which each demographic group collectively prefers the treatment or outcomes it receives, regardless of any disparity relative to other groups.

Methodology

The paper presents two new fairness criteria: preferred treatment and preferred impact. Under preferred treatment, each group receives at least as much benefit from its own decision rule as it would from the rule applied to any other group, an envy-freeness-style condition. Under preferred impact, each group receives at least as much benefit as it would under a classifier constrained to satisfy impact parity. A small sketch of how both conditions could be checked empirically follows below.
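The following sketch checks the two criteria for group-conditional linear classifiers, taking a group's benefit to be the fraction of its members that receive the positive decision, assuming the positive class is the beneficial outcome. The function names and data layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def benefit(theta, X):
    """Fraction of rows in X assigned the positive decision by linear model theta = (w, b)."""
    w, b = theta
    return np.mean(X @ w + b >= 0)

def satisfies_preferred_treatment(theta_by_group, X_by_group):
    """Each group gets at least as much benefit from its own classifier as from any other group's."""
    return all(
        benefit(theta_by_group[z], X_by_group[z]) >= benefit(theta_by_group[z2], X_by_group[z])
        for z in theta_by_group
        for z2 in theta_by_group
    )

def satisfies_preferred_impact(theta_by_group, theta_parity, X_by_group):
    """Each group gets at least the benefit it would receive under the impact-parity classifier."""
    return all(
        benefit(theta_by_group[z], X_by_group[z]) >= benefit(theta_parity, X_by_group[z])
        for z in X_by_group
    )
```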

To operationalize these criteria, the paper introduces convex, margin-based classifiers trained with additional constraints that act as tractable proxies for the preference-based notions. The methodology is evaluated on synthetic data and on real-world datasets, including the ProPublica COMPAS recidivism data, the Adult income dataset from the UCI repository, and the NYPD Stop-Question-and-Frisk dataset.
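The constrained training can be sketched as a convex program. The example below adds a preferred-impact-style constraint for a single group, using a logistic loss and a linear benefit proxy (the group's mean signed distance to the decision boundary) in place of the non-convex fraction-of-positive-decisions benefit. It is a simplified illustration under those assumptions, not the authors' exact proxy or solver setup, and the names are hypothetical.

```python
import cvxpy as cp
import numpy as np

def fit_preferred_impact(X_z, y_z, baseline_benefit, reg=1.0):
    """Fit a linear classifier for group z whose proxy benefit on its own data is at
    least baseline_benefit, e.g. the proxy benefit the group receives under a
    parity-constrained baseline classifier. Labels y_z are assumed to be in {-1, +1}."""
    n, d = X_z.shape
    w = cp.Variable(d)
    b = cp.Variable()
    margins = X_z @ w + b

    # Standard regularized logistic loss (convex).
    loss = cp.sum(cp.logistic(-cp.multiply(y_z, margins))) / n + reg * cp.sum_squares(w)

    # Convex proxy for group benefit: mean signed distance to the boundary.
    proxy_benefit = cp.sum(margins) / n

    problem = cp.Problem(cp.Minimize(loss), [proxy_benefit >= baseline_benefit])
    problem.solve()
    return w.value, b.value
```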

Results

The experimental comparisons show that preference-based classifiers generally achieve higher decision accuracy than parity-based approaches across the datasets studied. Classifiers satisfying the new criteria improved accuracy while respecting each group's treatment and outcome preferences. In particular, the preference-based models mitigated the accuracy costs imposed by parity requirements without significantly disadvantaging any group.

These outcomes suggest that when predictive features vary greatly between groups, preference-based fairness allows classifiers to provide more tailored and accurate decisions. However, satisfying preferred treatment and preferred impact simultaneously can still incur accuracy trade-offs, hinting at the inherent complexity of balancing multiple fairness dimensions.

Implications and Future Directions

The introduction of preference-based fairness marks a significant step towards more nuanced criteria that may better align with societal expectations of equity. Practically, this framework suggests that preference-based approaches could enhance decision-making fairness without unduly compromising accuracy, thus offering a better-suited alternative for applications with sensitive social implications.

Theoretically, this work opens avenues to explore group fairness concepts in decision theory, where individual and group-level preferences could reshape existing fairness paradigms. Future research could explore the applicability of these methods to non-convex classifiers and examine further extensions into multi-class settings. One intriguing direction is to analyze how these concepts can align with individual preference-based fairness.

In conclusion, the preference-based framework provides flexibility by accommodating group preferences, thereby achieving more practical decision-making fairness. This could potentially harmonize machine learning fairness with broader societal values, improving trust and acceptance of automated decision systems among different demographic groups.

Authors (5)
  1. Muhammad Bilal Zafar (27 papers)
  2. Isabel Valera (46 papers)
  3. Manuel Gomez Rodriguez (30 papers)
  4. Krishna P. Gummadi (68 papers)
  5. Adrian Weller (150 papers)
Citations (204)