
The impossibility of "fairness": a generalized impossibility result for decisions (1707.01195v3)

Published 5 Jul 2017 in stat.AP, cs.AI, and stat.ML

Abstract: Various measures can be used to estimate bias or unfairness in a predictor. Previous work has already established that some of these measures are incompatible with each other. Here we show that, when groups differ in prevalence of the predicted event, several intuitive, reasonable measures of fairness (probability of positive prediction given occurrence or non-occurrence; probability of occurrence given prediction or non-prediction; and ratio of predictions over occurrences for each group) are all mutually exclusive: if one of them is equal among groups, the other two must differ. The only exceptions are for perfect, or trivial (always-positive or always-negative) predictors. As a consequence, any non-perfect, non-trivial predictor must necessarily be "unfair" under two out of three reasonable sets of criteria. This result readily generalizes to a wide range of well-known statistical quantities (sensitivity, specificity, false positive rate, precision, etc.), all of which can be divided into three mutually exclusive groups. Importantly, the result applies to all predictors, whether algorithmic or human. We conclude with possible ways to handle this effect when assessing and designing prediction methods.
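
The abstract's central claim can be checked numerically. The sketch below, which is not from the paper and uses hypothetical prevalence and accuracy values chosen for illustration, equalizes sensitivity and specificity across two groups with different base rates and shows that precision (PPV) and the predictions-per-occurrence ratio then necessarily diverge.

```python
# Minimal sketch (assumed illustrative numbers, not the paper's code):
# two groups differing only in prevalence, with sensitivity and specificity
# equalized across groups. PPV and predictions/occurrence then differ,
# illustrating the mutual exclusivity the abstract describes.

def group_metrics(prevalence, sensitivity, specificity):
    """Return (ppv, predictions_per_occurrence) for a group."""
    tp = prevalence * sensitivity
    fp = (1 - prevalence) * (1 - specificity)
    ppv = tp / (tp + fp)                       # P(occurrence | positive prediction)
    pred_per_occurrence = (tp + fp) / prevalence
    return ppv, pred_per_occurrence

# Hypothetical groups: same predictor accuracy, different prevalence.
for name, prev in [("group A", 0.10), ("group B", 0.30)]:
    ppv, ratio = group_metrics(prev, sensitivity=0.8, specificity=0.8)
    print(f"{name}: prevalence={prev:.2f}  PPV={ppv:.3f}  "
          f"predictions/occurrence={ratio:.3f}")
```

Running this prints a PPV of about 0.31 for group A versus about 0.63 for group B, and predictions-per-occurrence ratios of 2.6 versus about 1.27, even though sensitivity and specificity are identical across groups.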

Authors (1)
  1. Thomas Miconi (16 papers)
Citations (24)
