Individual Fairness under Uncertainty (2302.08015v2)

Published 16 Feb 2023 in cs.LG, cs.AI, and cs.CY

Abstract: Algorithmic fairness, the research field of making ML algorithms fair, is an established area in ML. As ML technologies expand their application domains, including ones with high societal impact, it becomes essential to take fairness into consideration when building ML systems. Yet, despite its wide range of socially sensitive applications, most work treats the issue of algorithmic bias as an intrinsic property of supervised learning, i.e., the class label is given as a precondition. Unlike prior studies in fairness, we propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels, while enforcing that similar individuals are treated similarly from a ranking perspective, free of the Lipschitz condition in the conventional individual fairness definition. We argue that this perspective represents a more realistic model of fairness research for real-world deployment, and we show how learning with such a relaxed precondition yields new insights that better explain algorithmic fairness. Experiments on four real-world datasets comparing our proposed method to other fairness models demonstrate its superiority in minimizing discrimination while maintaining predictive performance in the presence of uncertainty.
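To make the ranking-based notion of individual fairness concrete, below is a minimal hypothetical sketch, not the paper's actual measure: it penalizes pairs of individuals who are close in feature space but far apart in the model's predicted ranking, which sidesteps a Lipschitz constraint on raw scores. The function name, the Gaussian similarity kernel, and the normalized-rank distance are all assumptions made for illustration.

    import numpy as np

    def individual_ranking_unfairness(scores, features, sigma=1.0):
        """Hypothetical sketch: similar individuals should occupy similar
        positions in the predicted ranking. An illustration of the general
        idea only, not the measure proposed in the paper."""
        n = len(scores)
        # Convert raw scores to normalized ranks in (0, 1]; comparing ranks
        # instead of raw scores avoids any Lipschitz condition on the model.
        ranks = (np.argsort(np.argsort(scores)) + 1) / n
        total, weight = 0.0, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                # Feature-space similarity (Gaussian kernel is an assumption).
                sim = np.exp(-np.linalg.norm(features[i] - features[j]) ** 2 / sigma)
                # Weight the rank gap by how similar the pair is.
                total += sim * abs(ranks[i] - ranks[j])
                weight += sim
        return total / max(weight, 1e-12)

    # Toy usage on synthetic data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 5))
    s = X @ rng.normal(size=5)  # stand-in for model risk scores
    print(individual_ranking_unfairness(s, X))

A lower value indicates that individuals with similar features receive similar ranks; handling censored labels, as the paper does, would additionally restrict or reweight which pairs are comparable.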

Authors (7)
  1. Wenbin Zhang (71 papers)
  2. Zichong Wang (14 papers)
  3. Juyong Kim (4 papers)
  4. Cheng Cheng (188 papers)
  5. Thomas Oommen (2 papers)
  6. Pradeep Ravikumar (101 papers)
  7. Jeremy Weiss (3 papers)
Citations (8)
