Overview of "Why Fairness Cannot Be Automated"
The paper "Why Fairness Cannot Be Automated" by Sandra Wachter, Brent Mittelstadt, and Chris Russell presents a critical examination of the intersection between EU non-discrimination law and algorithmic fairness. The authors identify a fundamental incompatibility between the context-sensitive and ambiguous nature of European legal notions of discrimination and the statistical measures commonly used in AI fairness research.
Key Contributions
The paper makes three distinct contributions. First, it analyzes the evidential requirements for establishing discrimination under EU non-discrimination law, highlighting their contextual and interpretive nature, which makes them ill-suited to automation. Notions such as the definitions of disadvantaged and comparator groups, the severity and type of harm, and the admissibility of evidence all require normative judgments made on a case-by-case basis.
Second, the authors argue that traditional legal remedies are ill-equipped to deal with automated discrimination. Human discrimination typically produces intuitive signals, such as a remark or a pattern of differential treatment, that can prompt victims to bring a claim; algorithmic systems offer no equivalent cues. The abstract and opaque nature of algorithmic decisions, coupled with their complexity, makes discrimination difficult to detect and redress through existing legal frameworks.
Finally, the paper proposes "conditional demographic disparity" (CDD) as a baseline statistical measure aligned with the European Court of Justice's 'gold standard' for assessing prima facie discrimination. CDD measures the disparity in outcomes between protected groups after conditioning on attributes that legitimately explain differences in outcomes (for example, qualifications). It offers a consistent procedure for generating statistical evidence useful in legal proceedings and in prevention strategies for AI systems, without dictating how courts should interpret that evidence.
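To make the measure concrete, the sketch below computes CDD in the sense described above: per-stratum demographic disparity (the protected group's share of rejections minus its share of acceptances), averaged across strata of a legitimate explanatory attribute, weighted by stratum size. This is a minimal illustration assuming binary decisions and a binary protected attribute; the function names, the toy data, and the handling of one-sided strata are ours, not the paper's.

```python
from collections import defaultdict

def demographic_disparity(outcomes, protected):
    """Demographic disparity (DD) within one stratum.

    outcomes: list of bools, True = positive decision (e.g. loan granted)
    protected: list of bools, True = member of the protected group

    DD = (protected share of rejections) - (protected share of acceptances).
    A positive value means the protected group is over-represented among
    rejections relative to acceptances.
    """
    accepted = [p for o, p in zip(outcomes, protected) if o]
    rejected = [p for o, p in zip(outcomes, protected) if not o]
    if not accepted or not rejected:
        return 0.0  # assumption: treat one-sided strata as contributing no disparity
    return sum(rejected) / len(rejected) - sum(accepted) / len(accepted)

def conditional_demographic_disparity(outcomes, protected, strata):
    """CDD: per-stratum DD values averaged with weights proportional
    to stratum size, where strata are defined by a legitimate
    explanatory attribute (e.g. qualification level)."""
    groups = defaultdict(list)
    for o, p, s in zip(outcomes, protected, strata):
        groups[s].append((o, p))
    n = len(outcomes)
    return sum(
        len(rows) / n * demographic_disparity([o for o, _ in rows],
                                              [p for _, p in rows])
        for rows in groups.values()
    )

# Hypothetical example: loan decisions, conditioned on qualification band.
outcomes  = [True, False, True, False, True, False]
protected = [False, True, False, True, True, False]
strata    = ["high", "high", "high", "low", "low", "low"]
print(conditional_demographic_disparity(outcomes, protected, strata))
```

A positive CDD indicates that the protected group remains over-represented among rejections even after accounting for the conditioning attribute, which is the kind of prima facie statistical evidence the authors have in mind.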
Implications and Future Directions
The paper acknowledges the rapidly growing influence of algorithms on decision-making and the evolving challenges that automated discrimination poses to legal systems. It argues that ensuring fair treatment in automated systems requires not only technical solutions but also an understanding of the legal and social dimensions of fairness.
Existing EU non-discrimination law derives much of its strength from contextual agility, but that same agility means outcomes turn on case-by-case judicial judgment. The authors argue that this flexibility should be preserved rather than automated away; CDD supports judicial interpretation by supplying systematic evidence without prescribing conclusions.
The recommendation to adopt CDD matters for aligning AI system design with legal principles: it allows biased outcomes to be detected and assessed in a way that honors the law's contextual approach. The paper also stresses the need for dialogue between the technical and legal communities to build fair and equitable systems.
As AI continues to expand into consequential decision-making, further interdisciplinary research will be needed to refine tools like CDD and to establish reliable frameworks that integrate legal standards with computational methods. Such collaboration will be crucial for addressing the ethical and operational challenges posed by automated systems and for ensuring that technological advancement does not erode fundamental legal protections and rights.