Algorithmic Decision Making and the Cost of Fairness
The paper "Algorithmic Decision Making and the Cost of Fairness" by Corbett-Davies et al. investigates the interplay between algorithmic fairness and public safety within the context of pretrial release decisions. It specifically focuses on the inherent trade-offs policymakers face when integrating fairness constraints into algorithmic decision-making systems designed to assess the risk of defendants reoffending.
Reformulation of Algorithmic Fairness
The authors propose viewing algorithmic fairness as a problem of constrained optimization. The objective is to maximize public safety while adhering to formal fairness constraints, which aim to mitigate racial disparities. This reformulation allows for a structured assessment of how different fairness definitions impact decision-making algorithms. The paper examines three prominent definitions of fairness:
- Statistical parity: Detaining an equal proportion of defendants in each racial group.
- Conditional statistical parity: Equalizing detention rates across groups among defendants who share specified legitimate attributes, such as the number of prior convictions.
- Predictive equality: Ensuring equal false positive rates across groups, i.e., equal detention rates among defendants who would not have gone on to reoffend (all three criteria are illustrated in the sketch below).
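As a concrete illustration (not code from the paper), the three criteria reduce to simple checks over a set of decisions. The arrays `detained`, `group`, `priors`, and `reoffends` below are hypothetical stand-ins for the kinds of variables the paper works with:

```python
import numpy as np

def detention_rates_by_group(detained, group):
    """Statistical parity holds when these rates are (approximately) equal."""
    return {g: detained[group == g].mean() for g in np.unique(group)}

def detention_rates_by_stratum(detained, group, priors):
    """Conditional statistical parity: equal rates across groups within each
    stratum of legitimate attributes (here, number of prior convictions)."""
    return {(g, k): detained[(group == g) & (priors == k)].mean()
            for g in np.unique(group) for k in np.unique(priors)}

def false_positive_rates(detained, reoffends, group):
    """Predictive equality: equal detention rates among defendants who
    would not have reoffended if released."""
    return {g: detained[(group == g) & ~reoffends].mean()
            for g in np.unique(group)}
```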
Key Findings on Optimal Decision Rules
The paper demonstrates that, under each fairness definition, the optimal algorithm generally differs from the unconstrained optimum, which applies a single threshold to all defendants. In particular:
- Statistical parity and predictive equality: Require race-specific thresholds for detention.
- Conditional statistical parity: Requires thresholds that depend on both group membership and legitimate attributes.
- The unconstrained optimal algorithm: Applies a single, uniform threshold to all defendants, treating equally risky individuals identically regardless of race (the threshold forms are sketched after this list).
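For reference, the optimization can be sketched as follows. The notation is reconstructed from the paper's setup and may differ in detail: p(x) is the probability that a defendant with features x commits a violent crime if released, c is the relative cost of detention, g(x) the defendant's group, and ℓ(x) the stratum of legitimate attributes:

```latex
% Immediate utility of a decision rule d(x) \in \{0, 1\}, where d(x) = 1
% means detain:
u(d) = \mathbb{E}\!\left[ d(X)\,\bigl(p(X) - c\bigr) \right]

% Unconstrained optimum: a single threshold for all defendants.
d^{*}(x) = \mathbf{1}\{\, p(x) \ge c \,\}

% Under statistical parity or predictive equality: group-specific
% thresholds t_{g} for each racial group g.
d(x) = \mathbf{1}\{\, p(x) \ge t_{g(x)} \,\}

% Under conditional statistical parity: thresholds indexed by both the
% group g and the stratum \ell of legitimate attributes.
d(x) = \mathbf{1}\{\, p(x) \ge t_{g(x),\,\ell(x)} \,\}
```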
The paper further proves that these threshold rules remain optimal even when the fairness constraints are only required to hold approximately. Because any binding constraint pushes the rule away from the single utility-maximizing threshold, satisfying a fairness definition necessarily entails some loss of public safety.
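To make that utility loss concrete, here is a minimal sketch using synthetic data; the cost constant, the score distribution, and the two groups are all assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
c = 0.3                                    # assumed relative cost of detention
p = rng.beta(2, 5, size=n)                 # synthetic risk scores p(x)
group = rng.integers(0, 2, size=n)         # two hypothetical groups
p[group == 1] = np.clip(p[group == 1] * 1.3, 0, 1)  # induce a risk gap

def utility(detain):
    """Immediate utility E[d(X)(p(X) - c)] of a detention rule."""
    return np.mean(detain * (p - c))

# Unconstrained optimum: detain exactly when p(x) >= c.
d_star = p >= c

# Statistical parity: per-group thresholds equalizing detention rates,
# here matched to the unconstrained rule's overall detention rate.
rate = d_star.mean()
d_parity = np.zeros(n, dtype=bool)
for g in (0, 1):
    t_g = np.quantile(p[group == g], 1 - rate)
    d_parity[group == g] = p[group == g] >= t_g

print(f"unconstrained utility:      {utility(d_star):.4f}")
print(f"parity-constrained utility: {utility(d_parity):.4f}")  # weakly lower
```

The parity-constrained rule's utility is weakly lower by construction, since the single-threshold rule maximizes the utility pointwise over all possible rules.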
Real-world Implications and Trade-offs
Using data from Broward County, Florida, the authors empirically demonstrate the practical implications of these theoretical findings. Optimizing under statistical parity, predictive equality, or conditional statistical parity releases more high-risk defendants and detains more low-risk defendants than the unconstrained algorithm. The paper highlights that:
- Enforcing statistical parity leads to an estimated 9% increase in violent recidivism, with 17% of detained defendants being low risk.
- Enforcing predictive equality leads to an estimated 7% increase in violent recidivism, with 14% of detained defendants being low risk.
- Enforcing conditional statistical parity leads to an estimated 4% increase in violent recidivism, with 10% of detained defendants being low risk.
These findings indicate significant public safety costs associated with adhering to popular notions of fairness in algorithmic decision-making.
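Quantities like these can be estimated from any set of release decisions. The helpers below are hypothetical bookkeeping, and the 0.2 low-risk cutoff is an assumption for illustration, not the paper's definition of low risk:

```python
import numpy as np

def violent_recidivism(detain, p):
    """Expected violent crimes committed by released defendants."""
    return p[~detain].sum()

def share_low_risk_detained(detain, p, cutoff=0.2):
    """Fraction of detained defendants below a low-risk cutoff."""
    return (p[detain] < cutoff).mean()

# Holding the number detained fixed, the relative cost of a constrained
# rule d_fair versus the unconstrained rule d_star is:
#   violent_recidivism(d_fair, p) / violent_recidivism(d_star, p) - 1
```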
The Challenge of Fair and Accurate Risk Scores
The paper also addresses the difficulty of ensuring that risk scores themselves are not discriminatory. Calibration, the property that defendants assigned a given score reoffend at a rate equal to that score, is often used to assess fairness, but the authors show it is insufficient on its own: scores can be manipulated to favor or disadvantage particular groups, for instance by relying on less individualized information for one group, while remaining calibrated. Ensuring that risk scores accurately reflect true risk without racial bias therefore requires scrutiny beyond calibration measures.
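A hedged illustration of this point, using synthetic data rather than the paper's own construction: replacing one group's individualized scores with that group's average preserves calibration in aggregate while destroying the ranking that detention decisions rely on:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
p = rng.beta(2, 5, size=n)                 # true individual risks
y = rng.random(n) < p                      # realized reoffense outcomes
group = rng.integers(0, 2, size=n)

scores = p.copy()
scores[group == 1] = p[group == 1].mean()  # coarsen one group's scores

# The coarsened group still looks calibrated: its single score matches
# its observed reoffense rate.
print(scores[group == 1][0])               # the one score everyone gets
print(y[group == 1].mean())                # approximately the same value

# Yet any threshold rule applied to `scores` detains either all of that
# group or none of it, regardless of individual risk.
```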
Implications and Future Directions
The analysis underscores the complex trade-offs between fairness and public safety that policymakers must navigate. It also highlights the legal and ethical considerations of implementing race-specific decision thresholds, given the potential for such measures to trigger strict scrutiny under constitutional law.
Looking forward, enhancing the accuracy and fairness of risk assessments through better data collection and model improvements can help mitigate these trade-offs. Future research could focus on developing decision frameworks that better balance the dual objectives of fairness and public safety, and ensuring that risk assessments are transparent and robust against manipulation.
By elucidating the costs associated with different fairness frameworks, this paper makes a significant contribution to the field of algorithmic decision-making and sets the stage for further exploration of how to craft equitable and effective decision systems.