
Group Retention when Using Machine Learning in Sequential Decision Making: the Interplay between User Dynamics and Fairness (1905.00569v2)

Published 2 May 2019 in cs.LG and stat.ML

Abstract: Machine Learning (ML) models trained on data from multiple demographic groups can inherit representation disparity (Hashimoto et al., 2018) that may exist in the data: the model may be less favorable to groups contributing less to the training process; this in turn can degrade population retention in these groups over time, and exacerbate representation disparity in the long run. In this study, we seek to understand the interplay between ML decisions and the underlying group representation, how they evolve in a sequential framework, and how the use of fairness criteria plays a role in this process. We show that the representation disparity can easily worsen over time under a natural user dynamics (arrival and departure) model when decisions are made based on a commonly used objective and fairness criteria, resulting in some groups diminishing entirely from the sample pool in the long run. It highlights the fact that fairness criteria have to be defined while taking into consideration the impact of decisions on user dynamics. Toward this end, we explain how a proper fairness criterion can be selected based on a general user dynamics model.
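To make the described dynamic concrete, below is a minimal simulation sketch (not from the paper) of the feedback loop the abstract outlines: a single model is fit to minimize average loss, the group with higher loss departs at a higher rate, its share of the training pool shrinks, and it can eventually vanish. All group sizes, targets, and the departure-rate function are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Illustrative simulation (assumptions, not the paper's model): two groups with
# different "ground truth" targets share one scalar model fit by average-loss
# minimization. Users depart with probability increasing in their group's loss,
# so the minority group can shrink and eventually vanish from the sample pool.

theta_true = np.array([0.0, 2.0])   # per-group optimal decision (assumed)
n = np.array([800.0, 200.0])        # initial group sizes (assumed)
arrivals = np.array([8.0, 8.0])     # constant arrivals per step (assumed)

for t in range(50):
    weights = n / n.sum()
    # Average-loss-minimizing scalar model: weighted mean of the group optima.
    theta = weights @ theta_true
    # Per-group loss: squared distance from each group's own optimum.
    loss = (theta_true - theta) ** 2
    # Departure rate grows with loss (logistic link, purely illustrative).
    depart_rate = 1.0 / (1.0 + np.exp(-(loss - 1.0)))
    n = n * (1.0 - depart_rate) + arrivals
    if t % 10 == 0:
        print(f"t={t:2d}  sizes={np.round(n, 1)}  losses={np.round(loss, 3)}")
```

Running this, the minority group's size shrinks toward the arrival floor while the majority's loss stays near zero, which is the representation-disparity feedback the paper analyzes under more general user dynamics and fairness criteria.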

Authors (4)
  1. Xueru Zhang (31 papers)
  2. Mohammad Mahdi Khalili (22 papers)
  3. Cem Tekin (47 papers)
  4. Mingyan Liu (70 papers)
Citations (60)
