Bias Detection and Mitigation in Training Data for Argumentation-Based Legal AI
Investigate the presence of biases in training data used to develop argumentation-based explainable AI models for legal decision-making, and derive methods to remove or mitigate such biases to improve precision and applicability in judicial contexts.
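As a starting point, bias in training data can be quantified with a group-fairness metric and mitigated with an instance-weighting scheme before model training. The sketch below is a minimal, self-contained illustration, not part of the cited paper: it computes the statistical parity difference between two groups and derives Kamiran–Calders reweighing weights `w(a, y) = P(a)·P(y) / P(a, y)`. The record keys (`"g"` for a sensitive attribute, `"y"` for the case outcome) are hypothetical placeholders for whatever annotations a legal dataset actually carries.

```python
from collections import Counter

def statistical_parity_difference(records, group_key, label_key, favorable):
    """P(favorable | group=1) - P(favorable | group=0); 0 means parity."""
    rates = {}
    for g in (0, 1):
        group = [r for r in records if r[group_key] == g]
        rates[g] = sum(r[label_key] == favorable for r in group) / len(group)
    return rates[1] - rates[0]

def reweighing_weights(records, group_key, label_key):
    """Kamiran-Calders reweighing: weight each record by
    P(group) * P(label) / P(group, label), so that the weighted joint
    distribution makes group and label statistically independent."""
    n = len(records)
    count_a = Counter(r[group_key] for r in records)
    count_y = Counter(r[label_key] for r in records)
    count_ay = Counter((r[group_key], r[label_key]) for r in records)
    return [
        (count_a[r[group_key]] * count_y[r[label_key]])
        / (n * count_ay[(r[group_key], r[label_key])])
        for r in records
    ]
```

On a toy dataset where group 1 receives favorable outcomes 60% of the time and group 0 only 30%, the metric reports a 0.3 disparity; training on the reweighed instances (e.g. via a classifier's `sample_weight`) equalizes the weighted favorable-outcome rates across groups.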
References
This paper presents a first analysis of XAI in the legal field, so further exploration remains open, such as examining the presence of biases in training data and possible ways to remove or mitigate them, so that argumentation-based models would be more precise and applicable to judicial decision-making.
— Argumentation-Based Explainability for Legal AI: Comparative and Regulatory Perspectives
(2510.11079 - Prajescu et al., 13 Oct 2025) in Section 6 (Conclusions and Future Works)