
An Intersectional Definition of Fairness (1807.08362v3)

Published 22 Jul 2018 in cs.LG, cs.CY, and stat.ML

Abstract: We propose definitions of fairness in machine learning and artificial intelligence systems that are informed by the framework of intersectionality, a critical lens arising from the Humanities literature which analyzes how interlocking systems of power and oppression affect individuals along overlapping dimensions including gender, race, sexual orientation, class, and disability. We show that our criteria behave sensibly for any subset of the set of protected attributes, and we prove economic, privacy, and generalization guarantees. We provide a learning algorithm which respects our intersectional fairness criteria. Case studies on census data and the COMPAS criminal recidivism dataset demonstrate the utility of our methods.

Authors (4)
  1. James Foulds
  2. Rashidul Islam
  3. Kamrun Naher Keya
  4. Shimei Pan
Citations (174)

Summary

An Intersectional Definition of Fairness

This paper proposes an intersectional framework for defining fairness in AI and ML systems, addressing how power and oppression operate across intersecting social categories such as gender, race, and class. The research introduces differential fairness (DF), a statistical criterion designed to align with intersectionality principles and support equitable AI/ML systems. The fairness definitions are motivated by legal, economic, and societal considerations so that they apply broadly across sectors influenced by AI.

Key Contributions

  1. Fairness Metrics: The paper introduces three novel metrics for assessing fairness (a sketch of an empirical estimator follows this list):
    • Differential Fairness (DF), which takes an intersectional approach by bounding the ratios of outcome probabilities across all intersecting protected groups.
    • DF bias amplification, which measures how much an algorithm increases the bias already present in the data, serving as a more conservative fairness criterion.
    • Differential Fairness with Confounders (DFC), which accounts for external variables that impact outcomes.
  2. Algorithm Development: The authors develop a learning algorithm that enforces the DF criteria, trading off fairness against accuracy through a regularization term. This addresses the balance required when retrofitting fairness onto existing ML systems.
  3. Case Studies: Empirical validation uses census data and the widely studied COMPAS criminal recidivism dataset. These studies confirm the practicality and benefits of the intersectional fairness measures, showing improvements over existing subgroup fairness methods.
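
To make the first contribution concrete, here is a minimal sketch of how the ε parameter of differential fairness might be estimated empirically for a dataset or for a classifier's predictions. The function name, column names, and the simple symmetric Dirichlet-style smoothing are illustrative assumptions, not the paper's exact estimator (the paper uses a smoothed empirical estimate whose details may differ):

```python
import numpy as np
import pandas as pd

def empirical_epsilon(df: pd.DataFrame, protected_cols: list, outcome_col: str,
                      concentration: float = 1.0) -> float:
    """Estimate the smallest epsilon for which the data satisfy
    epsilon-differential fairness (DF).

    DF requires, for every pair of intersectional groups (s_i, s_j)
    and every outcome y:
        exp(-eps) <= P(y | s_i) / P(y | s_j) <= exp(eps),
    so the tightest eps is the maximum, over outcomes and group pairs,
    of |log P(y | s_i) - log P(y | s_j)|.
    """
    outcomes = df[outcome_col].unique()
    log_probs = []
    for _, group in df.groupby(protected_cols):
        # Dirichlet-style smoothing keeps small groups from producing
        # zero probabilities and hence infinite ratios.
        counts = np.array([(group[outcome_col] == y).sum() for y in outcomes],
                          dtype=float)
        probs = (counts + concentration) / (counts.sum() + concentration * len(outcomes))
        log_probs.append(np.log(probs))
    log_probs = np.vstack(log_probs)  # rows: intersectional groups; cols: outcomes
    # Worst-case log-ratio across all group pairs, per outcome, then overall.
    return float((log_probs.max(axis=0) - log_probs.min(axis=0)).max())

# Hypothetical usage on COMPAS-style data, treating race and sex as protected:
#   eps_data  = empirical_epsilon(df, ["race", "sex"], "two_year_recid")
#   eps_model = empirical_epsilon(df.assign(pred=preds), ["race", "sex"], "pred")
#   amplification = eps_model - eps_data  # the spirit of DF bias amplification
```

A fairness-regularized learner in the spirit of the second contribution could then add a penalty such as λ · max(0, ε̂ − ε_target) to the training loss, though in practice a differentiable surrogate for ε̂ would be needed.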

Theoretical Framework

The differential fairness metric is constructed by analogy to differential privacy. By bounding the ratios of outcome distributions across different groups, the approach enforces fairness at the intersections of social attributes, which traditional single-attribute methods fail to address adequately. The theoretical development also draws on causal inference principles to adjust for confounding variables, yielding robustness across socioeconomic contexts.
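
Concretely, the DF criterion can be written in a form that mirrors the ratio bound of differential privacy (notation lightly adapted from the paper): a mechanism M(x) is ε-differentially fair if, for all pairs of intersectional groups (s_i, s_j) with positive probability, all outcomes y, and all data distributions θ under consideration,

```latex
e^{-\epsilon} \;\leq\; \frac{P\left(M(x) = y \mid s_i, \theta\right)}{P\left(M(x) = y \mid s_j, \theta\right)} \;\leq\; e^{\epsilon}
```

Setting ε = 0 demands identical outcome distributions for every intersectional group, while larger ε relaxes the bound. Because the guarantee is quantified over every pair of groups, it extends to coarser groupings over subsets of the protected attributes, which is one way to read the abstract's claim that the criteria "behave sensibly for any subset" of protected attributes.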

Implications and Future Directions

This intersectional approach has significant implications for addressing AI bias in socially sensitive domains such as criminal justice, healthcare, and employment. The methods suggest a recalibration of how fairness is mathematically encoded into AI systems, potentially reshaping policy and practice through a more nuanced treatment of complex social realities. Future work could refine these metrics for broader contexts, improve the algorithm's efficiency on larger datasets, and further ground the metrics in diverse real-world applications. This research lays groundwork for policies and standards in fair AI governance, helping to ensure that AI systems contribute positively to social justice outcomes.