Identify decision factors behind embedding-based Legal Judgment Prediction models
Identify and explain the factors that drive the predictions of embedding-based Legal Judgment Prediction models, and provide interpretable rationales that help mitigate ethical issues such as gender bias in judicial decision support.
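As one possible starting point (not prescribed by the referenced paper), a gradient-times-input saliency pass over a fine-tuned transformer judgment classifier can surface which input tokens most strongly influenced a predicted outcome. The sketch below is illustrative only: the model name, number of labels, and example fact description are placeholders, and any fine-tuned Legal Judgment Prediction classifier could be substituted.

```python
# Illustrative sketch only: gradient-times-input saliency for an
# embedding-based judgment classifier. MODEL_NAME, num_labels, and the
# example text are placeholders, not artifacts of the cited paper.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # placeholder; substitute a fine-tuned LJP model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def token_attributions(fact_text: str):
    """Rank input tokens by how strongly they pushed the predicted judgment."""
    enc = tokenizer(fact_text, return_tensors="pt", truncation=True)
    # Detach the embedding lookup so it becomes a leaf tensor whose gradient
    # can be read back after the backward pass.
    embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
    embeds.requires_grad_(True)
    logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
    pred = logits.argmax(dim=-1).item()
    # Backpropagate the predicted class logit to the input embeddings.
    logits[0, pred].backward()
    # Gradient x input, summed over the hidden dimension, as a per-token score.
    scores = (embeds.grad * embeds).sum(dim=-1).squeeze(0)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return sorted(zip(tokens, scores.tolist()), key=lambda ts: -abs(ts[1]))

# Inspect the most influential tokens for one hypothetical case description.
for tok, score in token_attributions("The defendant was found near the scene ...")[:10]:
    print(f"{tok:>12s}  {score:+.4f}")
```

Gradient-times-input is only one of several attribution choices (attention inspection, integrated gradients, LIME, or SHAP are common alternatives); for bias auditing, the same per-token scores could be compared across gendered terms or demographically perturbed inputs.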
References
Interpretability. If we want to apply methods to real legal systems, we must understand how they make predictions. However, existing embedding-based methods work as black boxes. Which factors affect their predictions remains unknown, and this may introduce unfairness and ethical issues such as gender bias into legal systems.
— Zhong et al. (2020), "How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence" (arXiv:2004.12158), Section 4.1 (Legal Judgment Prediction), Experiments and Analysis, Interpretability