On preserving non-discrimination when combining expert advice (1810.11829v2)

Published 28 Oct 2018 in cs.LG, cs.DS, and stat.ML

Abstract: We study the interplay between sequential decision making and avoiding discrimination against protected groups, when examples arrive online and do not follow distributional assumptions. We consider the most basic extension of classical online learning: "Given a class of predictors that are individually non-discriminatory with respect to a particular metric, how can we combine them to perform as well as the best predictor, while preserving non-discrimination?" Surprisingly, we show that this task is unachievable for the prevalent notion of "equalized odds" that requires equal false negative rates and equal false positive rates across groups. On the positive side, for another notion of non-discrimination, "equalized error rates", we show that running separate instances of the classical multiplicative weights algorithm for each group achieves this guarantee. Interestingly, even for this notion, we show that algorithms with stronger performance guarantees than multiplicative weights cannot preserve non-discrimination.
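The positive result describes a concrete recipe: partition the online stream by group and run an independent copy of multiplicative weights for each group, so that each group's cumulative error tracks its own best expert. Below is a minimal sketch of that group-wise scheme, assuming binary labels, 0/1 loss, and a fixed learning rate eta; the names (GroupwiseMW, mw_step) are illustrative and not taken from the paper.

    import numpy as np

    def mw_step(weights, losses, eta):
        # Multiplicative-weights update: exponentially down-weight each
        # expert by its loss on this round, then renormalize.
        weights = weights * np.exp(-eta * losses)
        return weights / weights.sum()

    class GroupwiseMW:
        """One independent multiplicative-weights learner per group.

        On each round, only the arriving example's group is queried and
        updated, so each group's error rate tracks the best expert on
        that group's subsequence (the "equalized error rates" idea)."""

        def __init__(self, n_experts, groups, eta=0.1):
            self.eta = eta
            self.weights = {g: np.ones(n_experts) / n_experts for g in groups}

        def predict(self, group, expert_preds):
            # Weighted-majority vote using this group's weights only.
            return int(np.dot(self.weights[group], expert_preds) >= 0.5)

        def update(self, group, expert_preds, label):
            # 0/1 loss of each expert on this round.
            losses = (np.asarray(expert_preds) != label).astype(float)
            self.weights[group] = mw_step(self.weights[group], losses, self.eta)

For instance, learner = GroupwiseMW(n_experts=3, groups=("A", "B")) would predict with learner.predict("A", [1, 0, 1]) and update with learner.update("A", [1, 0, 1], label=1), touching only group A's weights.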

Authors (4)
  1. Avrim Blum (70 papers)
  2. Suriya Gunasekar (34 papers)
  3. Thodoris Lykouris (22 papers)
  4. Nathan Srebro (145 papers)
Citations (29)
