Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction (1802.09548v1)

Published 26 Feb 2018 in stat.ML, cs.CY, and cs.LG

Abstract: As algorithms are increasingly used to make important decisions that affect human lives, ranging from social benefit assignment to predicting risk of criminal recidivism, concerns have been raised about the fairness of algorithmic decision making. Most prior works on algorithmic fairness normatively prescribe how fair decisions ought to be made. In contrast, here, we descriptively survey users for how they perceive and reason about fairness in algorithmic decision making. A key contribution of this work is the framework we propose to understand why people perceive certain features as fair or unfair to be used in algorithms. Our framework identifies eight properties of features, such as relevance, volitionality and reliability, as latent considerations that inform people's moral judgments about the fairness of feature use in decision-making algorithms. We validate our framework through a series of scenario-based surveys with 576 people. We find that, based on a person's assessment of the eight latent properties of a feature in our exemplar scenario, we can accurately (> 85%) predict if the person will judge the use of the feature as fair. Our findings have important implications. At a high-level, we show that people's unfairness concerns are multi-dimensional and argue that future studies need to address unfairness concerns beyond discrimination. At a low-level, we find considerable disagreements in people's fairness judgments. We identify root causes of the disagreements, and note possible pathways to resolve them.

An Expert Analysis of "Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction"

The intersection of algorithmic decision making and societal norms is a growing area of interest within computer science. The paper "Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction" examines how people perceive the fairness of algorithmic risk assessments used in judicial settings, focusing on the COMPAS tool. The question is timely given the increasing role algorithms play in consequential decisions.

Major Contributions

The authors present a descriptive study assessing human perceptions of fairness when algorithms, specifically COMPAS, are used to predict the risk of criminal recidivism. Their approach diverges from the predominantly normative focus of the existing literature, which prescribes how fairness should be operationalized. Instead, they survey 576 participants to map how the features used by such algorithms are perceived in terms of fairness.

A significant contribution is the authors' framework for understanding why people perceive certain features as fair or unfair. They propose eight latent properties of a feature (relevance, reliability, privacy, volitionality, causal relationship with the outcome, potential to induce vicious cycles, potential to cause disparity in outcomes, and being caused by sensitive group membership) as the dimensions along which fairness perceptions are formed. Notably, the framework accounts for considerations extending beyond traditional discrimination.

Their results empirically validate the framework's predictive power: a respondent's assessments of the eight latent properties predict their fairness judgment with over 85% accuracy. Interestingly, respondents' judgments placed less weight on attributes related to discriminatory impact and more on properties like relevance and reliability.
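
To make the setup concrete, here is a minimal sketch of that prediction task, assuming survey responses encoded as Likert-style ratings of the eight properties plus a binary fair/unfair judgment. The property names, the 1-7 rating scale, the data, and the choice of classifier are illustrative assumptions, not the paper's actual materials.

```python
# A minimal, synthetic sketch of the prediction task described above. It is
# not the authors' code or data: property names, the 1-7 rating scale, and
# all responses below are placeholder assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

PROPERTIES = [
    "relevance", "reliability", "privacy", "volitionality",
    "causes_outcome", "causes_vicious_cycle",
    "causes_disparity", "caused_by_sensitive_group",
]

rng = np.random.default_rng(0)
# 576 respondents (the paper's sample size), each rating one feature on all
# eight latent properties using a 1-7 scale.
X = rng.integers(1, 8, size=(576, len(PROPERTIES))).astype(float)
# Synthetic fair/unfair labels driven mostly by relevance and reliability,
# loosely echoing the paper's finding that those properties carry most weight.
signal = X[:, 0] + X[:, 1] + rng.normal(0.0, 1.5, size=576)
y = (signal > 8).astype(int)

# A simple linear classifier is one plausible baseline for mapping latent
# property assessments to a predicted fairness judgment.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2%}")
```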

Key Findings

One striking result is that a significant proportion of surveyed individuals consider many of the features used by the COMPAS tool unfair, even though those features do not directly reflect attributes protected under anti-discrimination law, such as race. This points to a broader understanding of fairness that extends past group-based discrimination.

The paper also highlights a crucial lack of consensus among participants on which features are deemed fair, attributed mainly to disagreements over the latent properties of the features. Despite this, the paper finds consensus in the moral reasoning different individuals apply when they assess the latent properties the same way, suggesting common underlying heuristics in fairness judgments.
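
As a hedged sketch on synthetic data (not the paper's analysis), one way to probe this decomposition of disagreement is to compare overall agreement on a feature's fairness with agreement among respondents who rated the latent properties the same way; the two-property, binary-rating encoding below is purely illustrative.

```python
# Synthetic illustration: does disagreement about fairness shrink once
# respondents' latent property assessments are held fixed? Not the paper's
# data or method; two coarse (low/high) properties stand in for the eight.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 576  # the paper's sample size
df = pd.DataFrame({
    "relevance": rng.integers(0, 2, n),
    "reliability": rng.integers(0, 2, n),
})
# Synthetic judgments that depend on the latent ratings plus 10% noise.
df["fair"] = ((df["relevance"] + df["reliability"] >= 1)
              & (rng.random(n) > 0.1)).astype(int)

def agreement(judgments: pd.Series) -> float:
    """Fraction of respondents siding with the majority judgment."""
    return judgments.value_counts(normalize=True).max()

overall = agreement(df["fair"])
# Size-weighted agreement within groups that share the same latent ratings.
# Values well above the overall figure suggest shared moral reasoning once
# the latent property assessments coincide.
group_sizes = df.groupby(["relevance", "reliability"]).size()
within = (df.groupby(["relevance", "reliability"])["fair"]
          .apply(agreement)
          .mul(group_sizes / n)
          .sum())
print(f"overall agreement: {overall:.2f}, within-group: {within:.2f}")
```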

Implications and Speculations

This paper's insights suggest several implications for future work in this domain:

  1. Expanding Fairness Discussions: The findings bolster arguments for a wider array of fairness considerations in algorithmic decision making beyond discrimination alone, such as a feature's reliability and an individual's control over it (volitionality).
  2. Causal Reasoning Challenges: Disagreement over latent properties complicates methodologies that rely on predefined causal assumptions for fairness guarantees, as in the causal reasoning literature. Exploring more objective means of determining causal relationships may mitigate these challenges.
  3. Cultural and Contextual Variations: The recognition of shared heuristics in fairness judgments opens avenues for exploring how these might vary across different cultural or decision-making contexts, contributing to a more nuanced understanding of fairness in global AI systems.
  4. Structural and Ethical Alignment: The framework developed holds potential to better align algorithmic structures with societal ethical standards by explicitly incorporating human perceptions of fairness into algorithm design processes.

As the field progresses, academics and practitioners will need to incorporate such comprehensive frameworks when designing algorithms with societal impact. These results not only highlight existing gaps and challenges but also pave the way for more ethical and widely accepted deployment of intelligent systems in social decision-making processes.

Authors (4)
  1. Nina Grgić-Hlača (13 papers)
  2. Elissa M. Redmiles (24 papers)
  3. Krishna P. Gummadi (68 papers)
  4. Adrian Weller (150 papers)
Citations (215)