
Robust Optimization for Fairness with Noisy Protected Groups (2002.09343v3)

Published 21 Feb 2020 in cs.LG and stat.ML

Abstract: Many existing fairness criteria for machine learning involve equalizing some metric across protected groups such as race or gender. However, practitioners trying to audit or enforce such group-based criteria can easily face the problem of noisy or biased protected group information. First, we study the consequences of naively relying on noisy protected group labels: we provide an upper bound on the fairness violations on the true groups $G$ when the fairness criteria are satisfied on noisy groups $\hat{G}$. Second, we introduce two new approaches using robust optimization that, unlike the naive approach of only relying on $\hat{G}$, are guaranteed to satisfy fairness criteria on the true protected groups $G$ while minimizing a training objective. We provide theoretical guarantees that one such approach converges to an optimal feasible solution. Using two case studies, we show empirically that the robust approaches achieve better true group fairness guarantees than the naive approach.
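The core problem the abstract describes can be illustrated with a small simulation. The sketch below (not the paper's method; all parameters are hypothetical) builds a classifier whose positive-prediction rate differs across true groups $G$, then audits a demographic-parity-style gap on both the true groups and on noisy groups $\hat{G}$ obtained by randomly flipping group labels. Group-label noise attenuates the measured gap, so an audit run on $\hat{G}$ understates the violation on $G$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True protected group G (binary, for simplicity) and a noisy observed
# version G_hat in which 30% of group labels are flipped.
G = rng.integers(0, 2, size=n)
flip = rng.random(n) < 0.3
G_hat = np.where(flip, 1 - G, G)

# A classifier whose positive rate depends on the true group:
# 40% positive for group 0, 60% positive for group 1.
pred = rng.random(n) < np.where(G == 0, 0.4, 0.6)

def max_violation(groups, preds):
    """Demographic-parity gap: the largest deviation of any group's
    positive-prediction rate from the overall positive rate."""
    overall = preds.mean()
    return max(abs(preds[groups == g].mean() - overall)
               for g in np.unique(groups))

# The gap measured on noisy groups is smaller than the true gap,
# so naively enforcing fairness on G_hat can leave real violations on G.
print(f"violation on noisy groups: {max_violation(G_hat, pred):.3f}")
print(f"violation on true groups:  {max_violation(G, pred):.3f}")
```

This is the motivation for the paper's robust-optimization approaches: rather than trusting $\hat{G}$ directly, they enforce constraints that hold for every true-group assignment consistent with the assumed noise level.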

Authors (6)
  1. Serena Wang (20 papers)
  2. Wenshuo Guo (22 papers)
  3. Harikrishna Narasimhan (30 papers)
  4. Andrew Cotter (19 papers)
  5. Maya Gupta (22 papers)
  6. Michael I. Jordan (438 papers)
Citations (116)
