
Automatic Judgment Prediction via Legal Reading Comprehension (1809.06537v1)

Published 18 Sep 2018 in cs.AI and cs.CL

Abstract: Automatic judgment prediction aims to predict the judicial results based on case materials. It has been studied for several decades mainly by lawyers and judges, considered as a novel and prospective application of artificial intelligence techniques in the legal field. Most existing methods follow the text classification framework, which fails to model the complex interactions among complementary case materials. To address this issue, we formalize the task as Legal Reading Comprehension according to the legal scenario. Following the working protocol of human judges, LRC predicts the final judgment results based on three types of information, including fact description, plaintiffs' pleas, and law articles. Moreover, we propose a novel LRC model, AutoJudge, which captures the complex semantic interactions among facts, pleas, and laws. In experiments, we construct a real-world civil case dataset for LRC. Experimental results on this dataset demonstrate that our model achieves significant improvement over state-of-the-art models. We will publish all source codes and datasets of this work on github.com for further research.

The paper "Automatic Judgment Prediction via Legal Reading Comprehension" focuses on enhancing the accuracy of automated judgment prediction systems in civil law cases by introducing a framework known as Legal Reading Comprehension (LRC). This research moves beyond traditional text classification models by formalizing the task as a reading comprehension problem that leverages the complex semantic interactions between case materials and legal statutes.

Key Contributions:

  1. Framing the Problem as Legal Reading Comprehension (LRC): The authors reformulate judgment prediction as a legal reading comprehension task rather than plain text classification. This framing models the judicial decision-making process by considering the interactions among three types of information: fact descriptions, plaintiffs' pleas, and relevant law articles (a minimal data sketch follows this list).
  2. Introduction of AutoJudge Model: A novel model named AutoJudge is proposed to instantiate the LRC framework. This model utilizes pair-wise mutual attention mechanisms to capture and process the semantic interactions between the heterogeneous inputs of fact descriptions, pleas, and law articles. The reading comprehension design is inspired by question answering systems, adopting a similar attention-based approach to enhance interpretability and accuracy.
  3. Dataset Construction: The authors collect and preprocess a dataset of 100,000 real-world civil cases from the Supreme People's Court of China's online database. The dataset covers diverse pleas and judicial decisions, with a focus on divorce proceedings, which involve multiple independent issues such as the granting of divorce and child custody.
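
To make the LRC formulation concrete, the sketch below shows how a single training instance could be organized. The field names and example values are illustrative assumptions; the paper's released data format may differ.

```python
# A minimal sketch of one LRC training example (illustrative field names).
from dataclasses import dataclass
from typing import List

@dataclass
class LRCExample:
    fact_description: str    # narrative of the case facts
    plea: str                # one plaintiff's plea, judged independently
    law_articles: List[str]  # candidate law articles associated with the case
    label: int               # 1 if the court supports the plea, 0 otherwise

example = LRCExample(
    fact_description="The plaintiff and defendant married in 2010; ...",
    plea="The plaintiff requests custody of the child.",
    law_articles=["Marriage Law, Article 36: ..."],
    label=1,
)
```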

Methodological Insights:

  • Text Encoding and Pair-Wise Attentive Reader:

The model first encodes each input text with bidirectional GRUs, producing representations for the fact description, pleas, and law articles. A pair-wise attentive reader module then applies mutual attention to extract, from the facts, the information relevant to each plea or law article, enriching the learned representations with these interactions.
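
The sketch below illustrates this encoding and pair-wise mutual attention step in PyTorch. The shared encoder, dimensions, and dot-product attention form are assumptions that follow common reading-comprehension practice, not the authors' released implementation.

```python
# Sketch of bidirectional GRU encoding plus pair-wise mutual attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairwiseAttentiveEncoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 128, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # A single shared bidirectional GRU; separate encoders per input type
        # would be an equally plausible choice.
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)

    def encode(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) -> (batch, seq_len, 2 * hidden)
        out, _ = self.encoder(self.embed(tokens))
        return out

    def mutual_attention(self, fact: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # fact:  (batch, n, d) encoded fact description
        # query: (batch, m, d) encoded plea or law article
        scores = torch.bmm(query, fact.transpose(1, 2))   # (batch, m, n)
        weights = F.softmax(scores, dim=-1)               # attend over fact tokens
        attended = torch.bmm(weights, fact)               # (batch, m, d)
        # Fuse each query token with the fact evidence it attends to.
        return torch.cat([query, attended], dim=-1)       # (batch, m, 2d)

    def forward(self, fact_tokens, plea_tokens):
        fact = self.encode(fact_tokens)
        plea = self.encode(plea_tokens)
        return self.mutual_attention(fact, plea)
```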

  • Output Layer:

A convolutional neural network (CNN) layer processes the concatenated outputs of the attentive reader, capturing local structure in the fused sequence. Its pooled features summarize the fact-plea-law interactions and drive the final prediction of whether each plea should be supported.
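
A hedged sketch of such a CNN output layer is shown below; the kernel sizes, filter counts, and binary support/reject output are illustrative choices rather than the paper's exact configuration.

```python
# Sketch of a CNN output layer over the fused attentive-reader sequence.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNOutputLayer(nn.Module):
    def __init__(self, in_dim: int, num_filters: int = 100, kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(in_dim, num_filters, k) for k in kernel_sizes
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), 2)

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        # fused: (batch, seq_len, in_dim) output of the attentive reader
        x = fused.transpose(1, 2)                                   # (batch, in_dim, seq_len)
        pooled = [F.relu(conv(x)).max(dim=-1).values for conv in self.convs]
        features = torch.cat(pooled, dim=-1)                        # (batch, filters * kernels)
        return self.classifier(features)                            # logits over {reject, support}
```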

Experimental Results:

The proposed AutoJudge model outperforms a range of state-of-the-art baselines, including neural text classification models and reading comprehension models adapted to legal texts. Gains are reported in precision, recall, F1-score, and accuracy, highlighting the effectiveness of integrating law articles through the attention mechanism.
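
For reference, these metrics can be computed from model predictions with standard tooling; the snippet below is a generic evaluation sketch with made-up labels, not the paper's evaluation script.

```python
# Generic computation of the reported metrics from predictions.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0]   # gold labels: 1 = plea supported, 0 = rejected
y_pred = [1, 0, 0, 1, 0]   # model predictions

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"P={precision:.3f} R={recall:.3f} F1={f1:.3f} "
      f"Acc={accuracy_score(y_true, y_pred):.3f}")
```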

Ablation Studies and Observations:

  • Importance of Reading Mechanisms:

Removing the reading mechanism leads to a substantial drop in performance, underscoring its importance; the mutual attention is what allows law articles to be integrated effectively.

  • Role of Law Articles:

Including law articles is beneficial, but their selection and preprocessing are crucial: experiments show that using ground-truth articles, or filtering candidate articles with unsupervised methods, improves model performance.

  • Data Preprocessing:

Preprocessing strategies such as name replacement and law-article filtering further enhance the model's capacity to generalize and predict judicial outcomes.
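
As one plausible instantiation of the unsupervised law-article filtering mentioned above, the sketch below ranks candidate articles by TF-IDF cosine similarity to the fact description and keeps the top-k. This is an assumption about how such filtering could work, not the paper's exact procedure.

```python
# Hypothetical law-article filter: keep the articles most similar to the facts.
from typing import List
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_articles(fact: str, articles: List[str], top_k: int = 5) -> List[str]:
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([fact] + articles)        # row 0 is the fact
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()   # one score per article
    top = scores.argsort()[::-1][:top_k]                        # indices of best matches
    return [articles[i] for i in top]
```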

In conclusion, the paper not only proposes a novel methodological framework but also provides empirical evidence supporting the effectiveness of Legal Reading Comprehension models over traditional approaches, paving the way for more nuanced applications in the automation of legal judgments. Future work may expand toward handling more complex judgment forms and exploring additional civil case scenarios.

Authors (4)
  1. Shangbang Long (13 papers)
  2. Cunchao Tu (11 papers)
  3. Zhiyuan Liu (433 papers)
  4. Maosong Sun (337 papers)
Citations (82)