How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law (2404.12762v2)

Published 19 Apr 2024 in cs.AI and cs.CY

Abstract: This paper investigates the relationship between law and eXplainable Artificial Intelligence (XAI). While there is much discussion about the AI Act, for which the trilogue of the European Parliament, Council and Commission recently concluded, other areas of law seem underexplored. This paper focuses on European (and in part German) law, although with international concepts and regulations such as fiduciary plausibility checks, the General Data Protection Regulation (GDPR), and product safety and liability. Based on XAI-taxonomies, requirements for XAI-methods are derived from each of the legal bases, resulting in the conclusion that each legal basis requires different XAI properties and that the current state of the art does not fulfill these to full satisfaction, especially regarding the correctness (sometimes called fidelity) and confidence estimates of XAI-methods. Published in the Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society https://doi.org/10.1609/aies.v7i1.31648 .

How Should AI Decisions Be Explained? Requirements from European Law

The paper examines the intersection of European law and the explainability of AI systems, focusing on eXplainable Artificial Intelligence (XAI). The authors survey the legal landscape within the European Union (EU), with particular attention to German law, to derive requirements that XAI methods must meet in order to comply with existing and emerging legal standards. The examination covers prominent legal frameworks such as the General Data Protection Regulation (GDPR) and the upcoming AI Act, and asks how AI decisions should be explained to satisfy legal requirements in several domains, including fiduciary duties, data protection, and product liability.

Key Contributions and Findings

The authors introduce an extended taxonomy of XAI properties, detailing their applicability within various areas of the law:

  • Fiduciary Decisions: Where ML models inform corporate decision-making, fiduciaries must still exercise appropriate diligence, even when they rely on AI predictions. The paper underscores the challenge posed by the non-interpretability of black-box models and argues that XAI methods must support Correctness, Completeness, and Consistency, so that decision-makers can perform informed plausibility checks of AI recommendations and thereby mitigate potential fiduciary liability (a minimal illustration of checking Correctness follows this list).
  • GDPR and Right to Explanation: Article 22(3) GDPR implicitly obliges data controllers to furnish meaningful information about automated decision-making. Explanations must therefore enable the individuals affected by such decisions to contest them and seek recourse. Derived requirements include Correctness and Covariate Complexity, along with mechanisms for Counterability, ensuring that explanations are both comprehensible and actionable under data protection rights.
  • Product Safety and Liability: In this area, XAI is expected to help identify defects in ML models that bear on product safety and liability. The emphasis lies on detecting model errors and their potential causes, which again presupposes Correctness of the explanations. Global explanations or inherently interpretable models are advised here to ensure safety and compliance with product liability standards.
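
To make the recurring property of Correctness (fidelity) more tangible, the following is a minimal sketch and not the paper's method: it fits a LIME-style local linear surrogate around a single prediction of a black-box classifier and reports how faithfully the surrogate reproduces the black-box output in that neighborhood. The dataset, the models, the perturbation scale, and the use of R² as the fidelity score are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): estimating the Correctness/fidelity
# of a LIME-style local surrogate explanation against a black-box model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Black-box model whose individual decisions are to be explained
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def local_surrogate_fidelity(x, n_perturb=1000, scale=0.3):
    """Fit a linear surrogate around instance x and report how closely it
    reproduces the black-box probabilities in that neighborhood (R^2)."""
    # Sample the local neighborhood of x via Gaussian perturbation (illustrative choice)
    Z = x + rng.normal(0.0, scale, size=(n_perturb, x.shape[0]))
    p_black_box = black_box.predict_proba(Z)[:, 1]

    surrogate = Ridge(alpha=1.0).fit(Z, p_black_box)
    fidelity = surrogate.score(Z, p_black_box)  # R^2: 1.0 = perfect local agreement
    return surrogate.coef_, fidelity

weights, fidelity = local_surrogate_fidelity(X[0])
print("surrogate feature weights:", np.round(weights, 3))
print(f"local fidelity (R^2): {fidelity:.3f}")
```

A low fidelity score would signal that the surrogate explanation cannot support the kind of informed plausibility check that the fiduciary-duty analysis demands.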

Implications and Future Directions

The implications of this work are extensive, impacting the development and deployment of XAI methods within legally sensitive contexts. The paper stresses the need for a structured approach to XAI development, one that marries technical capabilities with legal requirements. The authors advocate for interdisciplinary collaboration between legal and technical experts to bridge gaps in understanding and align XAI capabilities with complex legal frameworks.

The future of AI in sensitive domains hinges on precise definitions of explainability that satisfy multifaceted legal requirements. A tighter synthesis of technical, legal, and human-factors perspectives will likely shape the further evolution of AI explainability. This paper positions itself at the forefront of that dialogue, offering foundational insights that could inform both AI policy and technology development within the EU and possibly beyond.

In summary, while XAI offers potential benefits for meeting legal obligations, the current state of the art does not satisfy all legal demands, particularly regarding fidelity, confidence measures, and comprehensive transparency across the spectrum of possible explanations. Further research is needed to close this capability gap and to advance both the theoretical and practical underpinnings of legally compliant AI systems.
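
To illustrate what a confidence measure for an explanation could look like in practice, the sketch below, which is illustrative rather than drawn from the paper, repeats a permutation-importance computation and reports a mean and standard deviation per feature, making the uncertainty of the explanation itself visible alongside the attribution values.

```python
# Illustrative sketch (not from the paper): attaching a simple uncertainty
# estimate to a global feature-importance explanation by repeating the
# permutation and reporting mean +/- standard deviation per feature.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=6, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_train, y_train)

# 30 repetitions yield a spread for each importance value, i.e. a crude
# confidence estimate for the explanation itself.
result = permutation_importance(model, X_test, y_test, n_repeats=30, random_state=1)

for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Attributions whose spread is large relative to their mean would then be flagged as too unstable to carry legal weight on their own.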

Authors (5)
  1. Benjamin Fresz (5 papers)
  2. Elena Dubovitskaya (1 paper)
  3. Danilo Brajovic (5 papers)
  4. Marco Huber (25 papers)
  5. Christian Horz (1 paper)