Algorithmic decision making and the cost of fairness (1701.08230v4)

Published 28 Jan 2017 in cs.CY and stat.AP

Abstract: Algorithms are now regularly used to decide whether defendants awaiting trial are too dangerous to be released back into the community. In some cases, black defendants are substantially more likely than white defendants to be incorrectly classified as high risk. To mitigate such disparities, several techniques recently have been proposed to achieve algorithmic fairness. Here we reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. We show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific risk thresholds. We further show that the optimal unconstrained algorithm requires applying a single, uniform threshold to all defendants. The unconstrained algorithm thus maximizes public safety while also satisfying one important understanding of equality: that all individuals are held to the same standard, irrespective of race. Because the optimal constrained and unconstrained algorithms generally differ, there is tension between improving public safety and satisfying prevailing notions of algorithmic fairness. By examining data from Broward County, Florida, we show that this trade-off can be large in practice. We focus on algorithms for pretrial release decisions, but the principles we discuss apply to other domains, and also to human decision makers carrying out structured decision rules.

Algorithmic Decision Making and the Cost of Fairness

The paper "Algorithmic Decision Making and the Cost of Fairness" by Corbett-Davies et al. investigates the interplay between algorithmic fairness and public safety within the context of pretrial release decisions. It specifically focuses on the inherent trade-offs policymakers face when integrating fairness constraints into algorithmic decision-making systems designed to assess the risk of defendants reoffending.

Reformulation of Algorithmic Fairness

The authors propose viewing algorithmic fairness as a problem of constrained optimization: the objective is to maximize public safety while adhering to formal fairness constraints that aim to mitigate racial disparities. This reformulation allows for a structured assessment of how different fairness definitions shape decision-making algorithms. The paper examines three prominent definitions of fairness, each illustrated in the sketch after the list:

  • Statistical parity: detention rates are equal across racial groups.
  • Conditional statistical parity: detention rates are equal across groups after conditioning on a set of "legitimate" risk factors, such as the number of prior convictions.
  • Predictive equality: false positive rates are equal across groups; that is, defendants who would not have gone on to reoffend are detained at the same rate regardless of race.
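
As a minimal sketch of how these criteria could be checked on held-out data, assume NumPy arrays of binary detention decisions `d`, observed reoffense outcomes `y`, a binary group indicator `g`, and a discrete "legitimate" attribute `legit` such as a prior-conviction bucket; the function names and the binary group encoding are illustrative, not from the paper.

```python
import numpy as np

def detention_rate(d, mask):
    """Fraction of defendants in `mask` who are detained (d == 1)."""
    return d[mask].mean()

def statistical_parity_gap(d, g):
    """Absolute difference in detention rates between the two groups."""
    return abs(detention_rate(d, g == 0) - detention_rate(d, g == 1))

def conditional_statistical_parity_gap(d, g, legit):
    """Largest between-group detention-rate gap within any stratum of the
    legitimate attribute (e.g., defendants with the same prior record)."""
    gaps = []
    for value in np.unique(legit):
        stratum = legit == value
        gaps.append(abs(detention_rate(d, stratum & (g == 0)) -
                        detention_rate(d, stratum & (g == 1))))
    return max(gaps)

def predictive_equality_gap(d, y, g):
    """Absolute difference in false positive rates: the detention rate among
    defendants who did not go on to reoffend, compared across groups."""
    fpr = lambda grp: d[(y == 0) & (g == grp)].mean()
    return abs(fpr(0) - fpr(1))
```

Each definition is satisfied exactly when the corresponding gap is zero, and approximately when it falls below a chosen tolerance.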

Key Findings on Optimal Decision Rules

The paper demonstrates that, for each fairness definition, the optimal algorithm often differs from the unconstrained optimal algorithm, which applies a single threshold to all defendants. In particular:

  • Statistical parity and predictive equality: Require race-specific thresholds for detention.
  • Conditional statistical parity: Requires thresholds that depend on both group membership and legitimate attributes.
  • The unconstrained optimal algorithm: Applies a single, uniform threshold, maximizing public safety while holding all individuals to the same standard irrespective of race.

The paper further proves that threshold-based rules remain optimal even when the fairness constraints are only required to hold approximately. Because the constrained thresholds generally differ from the single unconstrained threshold, satisfying any of these fairness definitions typically comes at some cost to public safety.
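
The following is a rough sketch of this structural contrast, not the authors' implementation: given estimated risk scores `risk`, observed outcomes `y`, and a binary group indicator `g`, the unconstrained rule detains everyone whose risk exceeds a single cost-derived threshold, while an approximately predictive-equality-constrained rule is found here by brute-force search over group-specific thresholds. The detention cost `cost`, the tolerance `tol`, and the grid search itself are assumptions made for illustration.

```python
import numpy as np

def unconstrained_rule(risk, cost=0.3):
    """Single uniform threshold: detain iff estimated risk of reoffense
    exceeds the (assumed) social cost of one detention, measured in units
    of one prevented violent crime."""
    return (risk >= cost).astype(int)

def predictive_equality_rule(risk, y, g, cost=0.3, tol=0.01):
    """Brute-force search over group-specific thresholds (t0, t1): among
    pairs whose false-positive-rate gap is within `tol`, return the pair
    with the lowest total loss (reoffenses by released defendants plus
    `cost` per detention)."""
    grid = np.linspace(0.0, 1.0, 101)
    best, best_loss = None, np.inf
    for t0 in grid:
        for t1 in grid:
            d = np.where(g == 0, risk >= t0, risk >= t1).astype(int)
            fpr0 = d[(y == 0) & (g == 0)].mean()
            fpr1 = d[(y == 0) & (g == 1)].mean()
            if abs(fpr0 - fpr1) > tol:
                continue
            loss = ((d == 0) & (y == 1)).sum() + cost * d.sum()
            if loss < best_loss:
                best, best_loss = (t0, t1), loss
    return best
```

Whenever the groups' risk distributions differ, the constrained search settles on two different thresholds and incurs a higher loss than the single-threshold rule, which is the trade-off the paper quantifies empirically.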

Real-world Implications and Trade-offs

Using data from Broward County, Florida, the authors empirically demonstrate the practical implications of these theoretical findings. Utilizing optimized thresholds for statistical parity, predictive equality, or conditional statistical parity results in releasing more high-risk defendants and detaining more low-risk defendants compared to the unconstrained algorithm. The paper highlights that:

  • Enforcing statistical parity leads to an estimated 9% increase in violent recidivism, with 17% of detained defendants being low risk.
  • Enforcing predictive equality leads to an estimated 7% increase in violent recidivism, with 14% of detained defendants being low risk.
  • Enforcing conditional statistical parity leads to an estimated 4% increase in violent recidivism, with 10% of detained defendants being low risk.

These findings indicate significant public safety costs associated with adhering to popular notions of fairness in algorithmic decision-making.
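
As a hedged sketch of how such figures could be tabulated from a decision rule and estimated risk scores (the paper's exact estimation procedure is not reproduced here), the low-risk cutoff below is an assumed value:

```python
import numpy as np

def tradeoff_summary(d_constrained, d_unconstrained, y, risk, low_risk_cut=0.2):
    """Relative increase in reoffenses among released defendants under the
    constrained rule, and the share of its detainees whose estimated risk
    falls below an (assumed) low-risk cutoff."""
    released_reoffenses = lambda d: ((d == 0) & (y == 1)).sum()
    recid_increase = (released_reoffenses(d_constrained)
                      / released_reoffenses(d_unconstrained) - 1.0)
    low_risk_share = (risk[d_constrained == 1] < low_risk_cut).mean()
    return recid_increase, low_risk_share
```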

The Challenge of Fair and Accurate Risk Scores

The paper also addresses the difficulty of ensuring that risk scores are not discriminatory. Calibration, often used to assess fairness, is shown to be insufficient by itself. It fails to detect cases where risk scores are manipulated to favor certain groups without altering their calibration properties. Thus, ensuring that risk scores accurately reflect true risks without racial bias requires careful scrutiny beyond calibration measures.
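
One simple way to see the failure mode (a synthetic illustration, not necessarily the paper's construction): replacing every score in one group with that group's observed base rate discards all individual risk information for its members, yet the coarsened scores can still pass a standard binned calibration check such as the one sketched below.

```python
import numpy as np

def calibration_error(score, y, n_bins=10):
    """Mean absolute gap between predicted and observed reoffense rates,
    computed over equal-width score bins (a simple calibration check)."""
    bins = np.clip((score * n_bins).astype(int), 0, n_bins - 1)
    gaps = [abs(score[bins == b].mean() - y[bins == b].mean())
            for b in range(n_bins) if (bins == b).any()]
    return float(np.mean(gaps))

def coarsen_group(score, y, g, group=1):
    """Replace one group's scores with its observed base rate: the result
    carries no individual risk information for that group yet remains
    (approximately) calibrated."""
    manipulated = score.copy()
    manipulated[g == group] = y[g == group].mean()
    return manipulated
```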

Implications and Future Directions

The analysis underscores the complex trade-offs between fairness and public safety that policymakers must navigate. It also highlights the legal and ethical considerations of implementing race-specific decision thresholds, given the potential for such measures to trigger strict scrutiny under constitutional law.

Looking forward, enhancing the accuracy and fairness of risk assessments through better data collection and model improvements can help mitigate these trade-offs. Future research could focus on developing decision frameworks that better balance the dual objectives of fairness and public safety, and ensuring that risk assessments are transparent and robust against manipulation.

By elucidating the costs associated with different fairness frameworks, this paper makes a significant contribution to the field of algorithmic decision-making and sets the stage for further exploration of how to craft equitable and effective decision systems.

Authors (5)
  1. Sam Corbett-Davies (12 papers)
  2. Emma Pierson (38 papers)
  3. Avi Feller (38 papers)
  4. Sharad Goel (27 papers)
  5. Aziz Huq (2 papers)
Citations (1,176)