
Learning under selective labels in the presence of expert consistency (1807.00905v2)

Published 2 Jul 2018 in cs.LG and stat.ML

Abstract: We explore the problem of learning under selective labels in the context of algorithm-assisted decision making. Selective labels is a pervasive selection bias problem that arises when historical decision making blinds us to the true outcome for certain instances. Examples of this are common in many applications, ranging from predicting recidivism using pre-trial release data to diagnosing patients. In this paper we discuss why selective labels often cannot be effectively tackled by standard methods for adjusting for sample selection bias, even if there are no unobservables. We propose a data augmentation approach that can be used to either leverage expert consistency to mitigate the partial blindness that results from selective labels, or to empirically validate whether learning under such a framework may lead to unreliable models prone to systemic discrimination.
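
The abstract does not spell out the augmentation procedure, but the core idea it describes is to use agreement among historical human decision makers to impute outcomes for instances whose labels were censored by the decision itself. Below is a minimal, hypothetical sketch of that kind of expert-consistency-based augmentation under assumed pre-trial-release semantics (decision d = 1 means released, so the outcome y is observed only when d = 1). The synthetic data, model choices, consensus threshold, and label-imputation rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical synthetic data (assumption, not from the paper):
# X: case features, d: historical decision (1 = released, outcome observed),
# y: adverse outcome, observed only where d == 1 (selective labels).
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))
d = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
y_true = (X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
y = np.where(d == 1, y_true, -1)  # -1 marks censored (unobserved) outcomes

# Step 1: model the historical human decision to estimate where experts
# behaved consistently (near-unanimous decisions for similar cases).
decision_model = GradientBoostingClassifier().fit(X, d)
p_release = decision_model.predict_proba(X)[:, 1]

# Step 2: data augmentation. For censored instances where experts were
# nearly unanimous in deciding against release, impute the label implied
# by that consensus (here, the adverse outcome). The 0.05 cutoff is an
# arbitrary illustrative choice.
consensus_threshold = 0.05
augment_mask = (d == 0) & (p_release < consensus_threshold)
y_aug = y.copy()
y_aug[augment_mask] = 1  # imputed label under assumed expert consensus

# Step 3: train the outcome model on observed plus augmented labels only;
# censored instances without a consensus-based imputation are left out.
train_mask = (d == 1) | augment_mask
outcome_model = GradientBoostingClassifier().fit(X[train_mask], y_aug[train_mask])
```

The same augmented set could instead be held out to check whether a model trained only on observed labels contradicts near-unanimous expert judgments, which is the validation use the abstract mentions.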

Authors (3)
  1. Maria De-Arteaga (36 papers)
  2. Artur Dubrawski (67 papers)
  3. Alexandra Chouldechova (46 papers)
Citations (35)
