
Towards Reducing Biases in Combining Multiple Experts Online (1908.07009v4)

Published 19 Aug 2019 in cs.LG and stat.ML

Abstract: In many real-life situations, including job and loan applications, gatekeepers must make justified and fair real-time decisions about a person's fitness for a particular opportunity. In this paper, we aim to accomplish approximate group fairness in an online stochastic decision-making process, where the fairness metric we consider is equalized odds. Our work follows the classical learning-from-experts scheme, assuming a finite set of classifiers (human experts, rules, options, etc.) that cannot be modified. We run separate instances of the algorithm for each label class as well as each sensitive group, where the probability of choosing each instance is optimized for both fairness and regret. Our theoretical results show that approximately equalized odds can be achieved without sacrificing much regret. We also demonstrate the performance of the algorithm on real data sets commonly used by the fairness community.
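The abstract's core construction, running a separate learning-from-experts instance per (sensitive group, label class) pair, can be sketched with the standard Hedge (multiplicative-weights) update. This is an illustrative sketch only, not the paper's exact algorithm: the `GroupwiseHedge` class, its method names, and the fixed learning rate `eta` are assumptions for the example, and the paper's fairness-aware mixing of instances is omitted.

```python
import numpy as np

def hedge_update(weights, losses, eta):
    """One multiplicative-weights (Hedge) step: experts with higher
    loss are exponentially downweighted, then weights are renormalized."""
    w = weights * np.exp(-eta * losses)
    return w / w.sum()

class GroupwiseHedge:
    """Illustrative sketch: one Hedge instance per (group, label) pair,
    so expert weights can evolve separately for each sensitive group
    and label class, as in the paper's per-instance setup."""

    def __init__(self, n_experts, groups, labels, eta=0.5):
        self.eta = eta
        # Start every instance from the uniform distribution over experts.
        self.weights = {
            (g, y): np.ones(n_experts) / n_experts
            for g in groups for y in labels
        }

    def predict_probs(self, group, label):
        """Current probability of following each expert for this instance."""
        return self.weights[(group, label)]

    def update(self, group, label, expert_losses):
        """Update only the instance matching the observed group and label."""
        self.weights[(group, label)] = hedge_update(
            self.weights[(group, label)], np.asarray(expert_losses, float),
            self.eta,
        )
```

For example, after observing a round from group 0 with true label 1 where expert 1 incurred zero loss, only that instance's weights shift toward expert 1; the other instances are untouched.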

Authors (4)
  1. Yi Sun (146 papers)
  2. Ivan Ramirez (21 papers)
  3. Alfredo Cuesta-Infante (10 papers)
  4. Kalyan Veeramachaneni (38 papers)
