Identify decision factors behind embedding-based Legal Judgment Prediction models

Identify and explain the factors that influence predictions made by embedding-based models for Legal Judgment Prediction, providing interpretable rationales that help mitigate ethical issues such as gender bias in judicial decision support.

Background

Legal Judgment Prediction is a core LegalAI task where deep learning models (e.g., TextCNN, DPCNN, LSTM, BERT) have achieved promising performance, yet they operate as black boxes. For deployment in real legal systems, understanding how these models make decisions is essential to ensure fairness and trust.

The authors emphasize that the specific decision factors driving predictions of embedding-based methods are currently unknown, raising concerns about potential unfairness (e.g., gender bias). They advocate incorporating legal symbols and domain knowledge to improve interpretability.
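To make the idea of "decision factors" concrete, below is a minimal sketch of one common probing technique: occlusion-based attribution, which scores each input word by how much the predicted probability of a given charge drops when that word is removed. This is not the interpretability approach proposed by Zhong et al.; it is a generic, model-agnostic probe, and the model name (bert-base-uncased), the label id, and the example fact description are placeholders, assuming a classifier fine-tuned on legal fact descriptions.

# Sketch: occlusion-based attribution for an embedding-based judgment classifier.
# The model, label id, and example text are hypothetical placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # placeholder; a real LJP model would be fine-tuned on case facts
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def predicted_prob(text: str, label_id: int) -> float:
    """Probability the model assigns to `label_id` (e.g., a charge) for the fact description."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, label_id].item()

def occlusion_attributions(text: str, label_id: int):
    """Score each word by the drop in predicted probability when that word is removed.
    Words with large drops are the factors the model relies on for this prediction."""
    words = text.split()
    base = predicted_prob(text, label_id)
    scores = []
    for i in range(len(words)):
        occluded = " ".join(words[:i] + words[i + 1:])
        scores.append((words[i], base - predicted_prob(occluded, label_id)))
    return sorted(scores, key=lambda ws: ws[1], reverse=True)

fact = "The defendant stole a wallet from the victim; she then fled the market."
for word, delta in occlusion_attributions(fact, label_id=1)[:5]:
    print(f"{word:>12s}  drop in probability = {delta:+.4f}")

A word whose removal causes a large probability drop is a factor the model depends on; if gendered tokens such as "she" rank highly for a charge prediction, that is a direct signal of the kind of bias the authors warn about, and a case for adding legal symbols and domain knowledge as more trustworthy evidence.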

References

Interpretability. If we want to apply methods to real legal systems, we must understand how they make predictions. However, existing embedding-based methods work as a black box. What factors affected their predictions remain unknown, and this may introduce unfairness and ethical issues like gender bias to the legal systems.

How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence (2004.12158 - Zhong et al., 2020) in Section 4.1 — Legal Judgment Prediction, Experiments and Analysis (Interpretability)