When the Oracle Misleads: Modeling the Consequences of Using Observable Rather than Potential Outcomes in Risk Assessment Instruments (2104.01921v1)
Abstract: Risk Assessment Instruments (RAIs) are widely used to forecast adverse outcomes in domains such as healthcare and criminal justice. RAIs are commonly trained on observational data and are optimized to predict observable outcomes rather than potential outcomes, the outcomes that would occur absent a particular intervention. Examples of relevant potential outcomes include whether a patient's condition would worsen without treatment or whether a defendant would recidivate if released pretrial. We illustrate how RAIs trained to predict observable outcomes can lead to worse decision making, causing precisely the types of harm they are intended to prevent. This can occur even when the predictors are Bayes-optimal and there is no unmeasured confounding.
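To make the abstract's central distinction concrete, below is a minimal simulation sketch. It is not taken from the paper; the binary covariate, outcome rates, and historical treatment policy are assumed purely for illustration. It shows how a Bayes-optimal predictor of the observable outcome can invert the risk ranking implied by the potential outcome under no treatment, even though the covariate driving treatment is fully observed (no unmeasured confounding).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Binary covariate: 1 = "high underlying risk"; fully observed, so there is
# no unmeasured confounding. (All parameter values below are assumptions.)
x = rng.binomial(1, 0.5, n)

# Potential outcome under no treatment, Y(0): adverse-event rate depends on x.
p_y0 = np.where(x == 1, 0.8, 0.2)
y0 = rng.binomial(1, p_y0)

# Historical policy: high-risk individuals were usually treated, and treatment
# prevents the adverse event entirely in this toy model.
p_treat = np.where(x == 1, 0.9, 0.1)
t = rng.binomial(1, p_treat)
y_obs = y0 * (1 - t)  # observed outcome: event occurs only if untreated and Y(0) = 1

# Bayes-optimal predictor of the OBSERVABLE outcome, E[Y_obs | x],
# versus the POTENTIAL outcome, E[Y(0) | x], estimated by group means.
pred_obs = np.array([y_obs[x == v].mean() for v in (0, 1)])
pred_pot = np.array([y0[x == v].mean() for v in (0, 1)])

print("E[Y_obs | x=0], E[Y_obs | x=1]:", pred_obs.round(3))  # ~[0.18, 0.08]
print("E[Y(0)  | x=0], E[Y(0)  | x=1]:", pred_pot.round(3))  # ~[0.20, 0.80]

# The observable-outcome RAI ranks the high-risk group (x = 1) as SAFER,
# because past treatment masked their adverse outcomes. A decision maker who
# allocates treatment by this predicted risk withholds it from exactly the
# group that would benefit most.
```

In this sketch the ranking inversion arises solely because the historical intervention suppressed the observable outcome for the group most likely to be treated, which is the mechanism the abstract describes.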