
Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI (2005.05906v1)

Published 12 May 2020 in cs.AI

Abstract: This article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness. First, we review the evidential requirements to bring a claim under EU non-discrimination law. Due to the disparate nature of algorithmic and human discrimination, the EU's current requirements are too contextual, reliant on intuition, and open to judicial interpretation to be automated. Second, we show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminates. Humans discriminate due to negative attitudes (e.g. stereotypes, prejudice) and unintentional biases (e.g. organisational practices or internalised stereotypes) which can act as a signal to victims that discrimination has occurred. Finally, we examine how existing work on fairness in machine learning lines up with procedures for assessing cases under EU non-discrimination law. We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement that aligns with the European Court of Justice's "gold standard." Establishing a standard set of statistical evidence for automated discrimination cases can help ensure consistent procedures for assessment, but not judicial interpretation, of cases involving AI and automated systems. Through this proposal for procedural regularity in the identification and assessment of automated discrimination, we clarify how to build considerations of fairness into automated systems as far as possible while still respecting and enabling the contextual approach to judicial interpretation practiced under EU non-discrimination law. N.B. Abridged abstract

Overview of "Why Fairness Cannot Be Automated"

The paper "Why Fairness Cannot Be Automated" by Sandra Wachter, Brent Mittelstadt, and Chris Russell presents a critical examination of the intersection between EU non-discrimination law and algorithmic fairness. The authors identify a fundamental incompatibility between the context-sensitive and ambiguous nature of European legal notions of discrimination and the statistical measures commonly used in AI fairness research.

Key Contributions

The paper makes three distinct contributions. First, it provides a detailed analysis of the evidential requirements for establishing discrimination under EU non-discrimination law. Here, the authors highlight the contextual and interpretive nature of these requirements, which renders them unsuitable for automation. Concepts such as the definition of disadvantaged and comparator groups, the severity and type of harm, and the admissibility of evidence require normative judgments made on a case-by-case basis.

Second, the authors argue that traditional legal remedies are inadequate for dealing with automated discrimination. Human discrimination provides intuitive signals that can trigger legal claims, whereas equivalent mechanisms do not exist in algorithmic systems. The abstract nature of algorithmic decisions, coupled with their complexity, poses challenges for detection and redress through existing legal frameworks.

Finally, the paper proposes "conditional demographic disparity" (CDD) as a baseline statistical measure aligned with the European Court of Justice's 'gold standard' for assessing prima facie discrimination. CDD serves as a consistent procedure for assessing potential discrimination, without dictating judicial interpretation. It enables a standardized approach to generate statistical evidence useful for legal proceedings and prevention strategies in AI systems.
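
To make the measure concrete, here is a minimal sketch of how DD and CDD could be computed with pandas. It assumes the paper's formulation: demographic disparity (DD) is the share of the protected group among those receiving the negative outcome minus its share among those receiving the positive outcome, and CDD averages the per-stratum DD values with weights proportional to stratum size. The column names and example data are hypothetical.

```python
import math

import pandas as pd


def demographic_disparity(df, group_col, outcome_col, protected_value,
                          negative_outcome=0):
    """DD: share of the protected group among the rejected minus its
    share among the accepted. Positive values mean the protected group
    is over-represented among negative outcomes."""
    rejected = df[df[outcome_col] == negative_outcome]
    accepted = df[df[outcome_col] != negative_outcome]
    if len(rejected) == 0 or len(accepted) == 0:
        return float("nan")  # no comparison possible in this subset
    p_rejected = (rejected[group_col] == protected_value).mean()
    p_accepted = (accepted[group_col] == protected_value).mean()
    return p_rejected - p_accepted


def conditional_demographic_disparity(df, group_col, outcome_col,
                                      protected_value, strata_col,
                                      negative_outcome=0):
    """CDD: per-stratum DD, averaged with weights proportional to
    stratum size. Strata containing only one outcome are skipped."""
    total = len(df)
    cdd = 0.0
    for _, stratum in df.groupby(strata_col):
        dd = demographic_disparity(stratum, group_col, outcome_col,
                                   protected_value, negative_outcome)
        if not math.isnan(dd):
            cdd += (len(stratum) / total) * dd
    return cdd


# Hypothetical loan decisions: outcome 1 = approved, 0 = rejected;
# "region" stands in for a legitimate conditioning attribute.
df = pd.DataFrame({
    "gender":  ["f", "f", "m", "m", "f", "m", "f", "m"],
    "outcome": [0, 1, 1, 1, 0, 0, 1, 1],
    "region":  ["north"] * 4 + ["south"] * 4,
})
print(conditional_demographic_disparity(df, "gender", "outcome", "f", "region"))
```

Conditioning on a legitimate attribute matters because an aggregate disparity can shrink, vanish, or even reverse within strata (Simpson's paradox), which is why the per-stratum view aligns better with how courts weigh prima facie evidence.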

Implications and Future Directions

The paper acknowledges the rapidly growing influence of algorithms in decision-making and the evolving challenges that automated discrimination presents to legal systems. It suggests that fair treatment within automated systems necessitates not only technical solutions but also an understanding of the legal and social dimensions of fairness.

While the contextual flexibility of existing EU non-discrimination law is advantageous, it also makes outcomes dependent on subjective, case-by-case judgment. The authors argue that this flexibility should be respected, and that CDD can support, rather than replace, judicial interpretation by generating statistical evidence systematically.

The recommendation to adopt CDD is particularly important for aligning AI system design with legal principles, ensuring that biased outcomes are detected and assessed in a way that honors the law's contextual approach. The paper also emphasizes the need for dialogue between the technical and legal communities to build fair and equitable systems.

As the field of AI continues to grow, further interdisciplinary research is required to refine tools like CDD and establish reliable frameworks that integrate legal standards and computational methodologies. This collaboration will be crucial in addressing the ethical and operational challenges posed by automated systems and ensuring that technological advancement does not undermine fundamental legal protections and rights.

Authors (3)
  1. Sandra Wachter (7 papers)
  2. Brent Mittelstadt (14 papers)
  3. Chris Russell (56 papers)
Citations (253)