How Should AI Decisions Be Explained? Requirements from European Law
The paper examines the intersection of European law and the explainability of AI systems, with a focus on eXplainable Artificial Intelligence (XAI). The authors survey the legal landscape of the European Union (EU), with particular attention to German law, to derive requirements that XAI methods must meet to comply with existing and emerging legal standards. The analysis draws on prominent legal frameworks, notably the General Data Protection Regulation (GDPR) and the upcoming AI Act, to determine how AI decisions should be explained in order to satisfy legal requirements in several domains, including fiduciary duties, data protection, and product liability.
Key Contributions and Findings
The authors introduce an extended taxonomy of XAI properties, detailing their applicability within various areas of the law:
- Fiduciary Decisions: Where ML informs corporate decision-making, fiduciaries must exercise appropriate diligence even when their decisions rely on AI predictions. The paper underscores the challenge posed by the non-interpretability of black-box models and argues that XAI methods must support Correctness, Completeness, and Consistency, along with properties that enable informed plausibility checks of AI decisions (a minimal fidelity check is sketched after this list), in order to mitigate potential fiduciary liability.
- GDPR and Right to Explanation: Article 22(3) GDPR implicitly obliges data controllers to furnish meaningful information about automated decision-making. XAI must therefore enable the end-users of AI systems to object to decisions and seek recourse. Requirements include Correctness and Covariate Complexity, together with mechanisms for Counterability (see the counterfactual sketch after this list), so that explanations are comprehensive and actionable within the framework of data protection rights.
- Product Safety and Liability: In this area, XAI is expected to assist in identifying defects in ML models that bear on product safety and liability. The emphasis is on detecting model errors and their potential causes, which requires a thorough understanding of Correctness. Global explanation methods or inherently interpretable models are advised to ensure safety and compliance with product liability standards.
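To make the Correctness requirement concrete, the following minimal sketch shows one common way to quantify it: fitting a global surrogate (a shallow decision tree) to a black-box model's predictions and measuring their agreement (fidelity) on held-out data. The dataset, model choices, and scikit-learn usage are illustrative assumptions on my part and are not taken from the paper.

```python
# Illustrative sketch: measuring the "Correctness" (fidelity) of a global
# surrogate explanation against a black-box model. Dataset and models are
# placeholders chosen for the example, not the paper's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Opaque "black-box" model whose decisions need a plausibility check.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global surrogate: a shallow tree trained to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the interpretable surrogate agrees with the black box
# on held-out data. A low value means the explanation misrepresents the model.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity: {fidelity:.2f}")
```

Whether a given fidelity level suffices for an informed plausibility check is itself a legal judgement and would have to be settled per domain.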
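Counterability is often operationalized through counterfactual explanations: a statement of what would have to change for the decision to come out differently, giving the affected person a concrete basis for objection or recourse. The brute-force, single-feature search below is a simplified illustration under assumed data and model choices, not the paper's method.

```python
# Illustrative sketch of "Counterability": find the smallest single-feature
# change that flips an automated decision. Model and data are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def single_feature_counterfactual(model, x, feature_grid):
    """Return (feature_index, new_value) of the smallest single-feature
    change that flips the model's decision for x, or None if none is found."""
    original = model.predict(x.reshape(1, -1))[0]
    best, best_dist = None, np.inf
    for j, values in enumerate(feature_grid):
        for v in values:
            x_cf = x.copy()
            x_cf[j] = v
            if model.predict(x_cf.reshape(1, -1))[0] != original:
                dist = abs(v - x[j])
                if dist < best_dist:
                    best, best_dist = (j, v), dist
    return best

# Candidate values per feature, taken from the observed data range.
grid = [np.linspace(X[:, j].min(), X[:, j].max(), 25) for j in range(X.shape[1])]
print(single_feature_counterfactual(model, X[0], grid))
```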
Implications and Future Directions
The implications of this work are extensive, impacting the development and deployment of XAI methods within legally sensitive contexts. The paper stresses the need for a structured approach to XAI development, one that marries technical capabilities with legal requirements. The authors advocate for interdisciplinary collaboration between legal and technical experts to bridge gaps in understanding and align XAI capabilities with complex legal frameworks.
The future of AI in sensitive domains hinges on precise definitions of explainability that satisfy multifaceted legal requirements. Progress in AI explainability will likely be driven by a closer synthesis of technical, legal, and human-factors perspectives. The paper positions itself at the forefront of this dialogue, offering foundational insights that could inform both AI policy and technology development within the EU and possibly beyond.
In summary, while XAI holds promise for meeting legal obligations, current methods fall short of fulfilling all legal demands, particularly regarding fidelity, confidence measures, and comprehensive transparency across the spectrum of possible explanations. Further research is needed to close this gap and to advance both the theoretical and practical underpinnings of the legal compliance of AI systems.