Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness (1711.05144v5)

Published 14 Nov 2017 in cs.LG, cs.DS, and cs.GT

Abstract: The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of pre-defined groups, and then ask for parity of some statistic of the classifier across these groups. Constraints of this form are susceptible to intentional or inadvertent "fairness gerrymandering", in which a classifier appears to be fair on each individual group, but badly violates the fairness constraint on one or more structured subgroups defined over the protected attributes. We propose instead to demand statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. This interpolates between statistical definitions of fairness and recently proposed individual notions of fairness, but raises several computational challenges. It is no longer clear how to audit a fixed classifier to see if it satisfies such a strong definition of fairness. We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses. We then derive two algorithms that provably converge to the best fair classifier, given access to oracles which can solve the agnostic learning problem. The algorithms are based on a formulation of subgroup fairness as a two-player zero-sum game between a Learner and an Auditor. Our first algorithm provably converges in a polynomial number of steps. Our second algorithm enjoys only provably asymptotic convergence, but has the merit of simplicity and faster per-step computation. We implement the simpler algorithm using linear regression as a heuristic oracle, and show that we can effectively both audit and learn fair classifiers on real datasets.

An Essay on "Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness"

The paper "Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness," authored by Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu, addresses critical challenges in ensuring fairness in machine learning systems. Specifically, it tackles the susceptibility of popular statistical fairness notions to fairness gerrymandering, where classifiers appear fair on pre-defined groups but exhibit significant unfairness on more granular subgroups.

Context and Motivation

Machine learning models are increasingly applied in high-stakes domains such as policing, criminal sentencing, and lending, making fairness in their decisions critical. Common fairness definitions, such as demographic parity and equal opportunity, often focus on a few pre-defined high-level groups like race or gender. However, this approach is vulnerable to fairness gerrymandering, wherein a model satisfies fairness criteria for these coarse groups while potentially violating them for numerous, more granular subgroups defined over combinations of attributes.
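This failure mode is easy to reproduce on synthetic data. The sketch below is our own illustration (not code from the paper): it builds a classifier whose positive rate is identical across race groups and across gender groups, yet maximally disparate on the four intersectional subgroups.

```python
# Fairness gerrymandering on synthetic data: the classifier predicts
# race XOR gender, so statistical parity holds on every marginal group
# but fails badly on every intersectional subgroup.
import itertools

import numpy as np

rng = np.random.default_rng(0)
race = rng.integers(0, 2, size=10_000)
gender = rng.integers(0, 2, size=10_000)
y_hat = race ^ gender  # predict positive iff the two attributes differ

# Statistical parity looks perfect on each marginal group...
for name, attr in [("race", race), ("gender", gender)]:
    for v in (0, 1):
        print(f"P(y_hat=1 | {name}={v}) = {y_hat[attr == v].mean():.3f}")

# ...but the four intersectional subgroups are maximally disparate.
for r, g in itertools.product((0, 1), repeat=2):
    mask = (race == r) & (gender == g)
    print(f"P(y_hat=1 | race={r}, gender={g}) = {y_hat[mask].mean():.3f}")
```

Each marginal group receives positives at a rate of roughly 0.5, while the subgroups receive them at rates of exactly 0 or 1; an audit restricted to the marginal groups would certify this classifier as fair.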

Contributions

The authors make several significant contributions to address these challenges:

  1. Problem Formalization and Computational Hardness:
    • The paper formalizes the auditing and learning problems for subgroup fairness over large (even exponentially sized) classes of subgroups. It proves that auditing subgroup fairness, for both equality of false positive rates and statistical parity, is computationally equivalent to weak agnostic learning; auditing therefore inherits the worst-case hardness of agnostic learning, even for simple structured subgroup classes.
  2. Game-Theoretic Learning Algorithms:
    • The authors propose two algorithms for learning classifiers that satisfy subgroup fairness, framing the problem as a two-player zero-sum game between a Learner and an Auditor. The first algorithm uses Follow the Perturbed Leader (FTPL) for the Learner and best response for the Auditor, and provably converges to an approximate Nash equilibrium in a polynomial number of steps. The second is a simpler, often more practical, algorithm based on Fictitious Play; it carries only asymptotic convergence guarantees, but its per-step computation is cheap and it converges rapidly in practice (a simplified sketch of this dynamic appears after this list).
  3. Implementation and Empirical Validation:
    • The paper provides an empirical evaluation of their Fictitious Play algorithm on real datasets, demonstrating its effectiveness in enforcing subgroup fairness. It shows that the algorithm can train fair classifiers concerning a large class of subgroups while maintaining non-trivial accuracy.
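The Fictitious Play dynamic referenced in the second contribution can be sketched in a few dozen lines. The version below is a heavily simplified illustration under our own assumptions: it targets statistical parity only, implements the Learner's cost-sensitive best response via a standard relabeling trick, and follows the paper's experimental choice of linear regression as a heuristic Auditor oracle. The function name, the penalty parameter lam, and the thresholding of the regression fit are illustrative choices, not the paper's specification.

```python
# Simplified Fictitious Play between a Learner and an Auditor for
# statistical parity. Each player best-responds to the empirical history
# of the opponent: avg_pred carries the Learner's mixture of classifiers,
# cum_penalty carries the Auditor's accumulated subgroup penalties.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression


def fictitious_play(X, y, protected, rounds=20, lam=1.0):
    """X: (n, k) features; y: (n,) 0/1 labels; protected: (n, d) attrs."""
    n = len(y)
    cum_penalty = np.zeros(n)  # Auditor's accumulated penalties
    avg_pred = np.zeros(n)     # empirical mixture of Learner classifiers

    for t in range(1, rounds + 1):
        # Learner: cost-sensitive best response to the average penalty.
        # Cost of predicting 1 = misclassification cost + fairness penalty.
        cost1 = (y != 1).astype(float) + cum_penalty / t
        cost0 = (y != 0).astype(float)
        z = (cost0 > cost1).astype(int)    # cheaper label per example
        w = np.abs(cost0 - cost1) + 1e-12  # how much the choice matters
        if z.min() == z.max():             # degenerate costs: use raw labels
            z, w = y, np.ones(n)
        clf = LogisticRegression(max_iter=1000).fit(X, z, sample_weight=w)
        avg_pred = ((t - 1) * avg_pred + clf.predict(X)) / t

        # Auditor: heuristic best response via linear regression. Thresholding
        # the fit to the mixture's centered predictions proposes a subgroup g.
        fit = LinearRegression().fit(protected, avg_pred - avg_pred.mean())
        g = (fit.predict(protected) > 0).astype(float)
        gap = (avg_pred[g == 1].mean() - avg_pred.mean()) if g.any() else 0.0

        # Penalize predicting 1 inside an over-favored subgroup next round.
        cum_penalty += lam * max(gap, 0.0) * g
        print(f"round {t:2d}  |g|={int(g.sum())}  parity gap={gap:+.3f}")
    return avg_pred
```

The relabeling trick (train on the cheaper label, weighted by the cost difference) is a common reduction from cost-sensitive to weighted classification; the paper's own formulation is more general, covering false-positive-rate fairness as well.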

Key Technical Results and Implications

The authors prove that any subgroup certificate g witnessing the unfairness of a classifier D must itself have significant predictive power, thereby reducing the detection of fairness violations to agnostic learning of g. This insight yields a practical heuristic: an algorithm that learns g accurately also detects fairness violations effectively. Despite the worst-case computational hardness, heuristic learning algorithms used in practice (such as boosting or SVMs) can therefore often solve the auditing problem efficiently on real-world datasets.
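This reduction suggests a concrete auditing recipe: regress the classifier's centered predictions on the protected attributes and read a candidate violating subgroup off the fit. The sketch below is our simplification for the statistical-parity case, mirroring the paper's use of linear regression as a heuristic oracle; the function name and the size-weighted gap formula are illustrative assumptions.

```python
# Heuristic subgroup-fairness audit for statistical parity: if some subgroup g
# violates parity, then g is correlated with the classifier's output, so a
# regression on the protected attributes that fits the centered predictions
# well exposes a candidate violating subgroup.
import numpy as np
from sklearn.linear_model import LinearRegression


def audit_statistical_parity(preds, protected):
    """preds: (n,) 0/1 predictions; protected: (n, d) protected attributes."""
    centered = preds - preds.mean()
    oracle = LinearRegression().fit(protected, centered)
    g = oracle.predict(protected) > 0  # candidate violating subgroup
    if not g.any():
        return g, 0.0
    # Weight the parity gap by subgroup size, echoing the paper's
    # size-weighted notion of a subgroup fairness violation.
    gap = g.mean() * abs(preds[g].mean() - preds.mean())
    return g, gap
```

In use, a fixed classifier would be flagged as unfair whenever the returned gap exceeds the chosen fairness tolerance.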

Theoretical and Practical Implications

The theoretical implications are profound. By establishing a connection between fairness auditing and weak agnostic learning, the paper leverages extensive learning theory literature to understand and address fairness concerns in machine learning models. Practically, the development of polynomial-time algorithms for learning fair classifiers, corroborated by empirical evaluations, provides a robust framework for deploying fair machine learning systems in real-world applications.

Future Directions

Future research could focus on:

  • Exploring Richer Model Classes: Extending the model classes available to the Auditor and Learner could uncover more intricate fairness violations.
  • Refined Trade-offs: Investigating finer-grained trade-offs between accuracy and fairness to optimize outcomes in various practical settings.
  • Scalability and Efficiency: Enhancing the scalability of the proposed algorithms to handle larger datasets and more complex subgroup classes efficiently.
  • Theoretical Guarantees: Strengthening the convergence guarantees of practical methods such as Fictitious Play.

In conclusion, the paper sets a high bar for subsequent research by providing both meaningful theoretical insights and practical algorithms to mitigate fairness gerrymandering in machine learning models. The proposed frameworks and methodologies pave the way for developing more fair and equitable AI systems, crucial for their trustworthiness and acceptance in sensitive application domains.

Authors (4)
  1. Michael Kearns (65 papers)
  2. Seth Neel (27 papers)
  3. Aaron Roth (138 papers)
  4. Zhiwei Steven Wu (143 papers)
Citations (736)