Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration (2310.09241v1)

Published 13 Oct 2023 in cs.CL

Abstract: Legal Judgment Prediction (LJP) has become an increasingly crucial task in Legal AI, i.e., predicting the judgment of a case from its fact description. Precedents are previous legal cases with similar facts, which serve as the basis for judging subsequent cases in many national legal systems. Thus, it is worthwhile to explore the utilization of precedents in LJP. Recent advances in deep learning have enabled a variety of techniques to be applied to the LJP task. These can be broken down into two categories: LLMs and domain-specific models. LLMs are capable of interpreting and generating complex natural language, while domain models are efficient at learning task-specific information. In this paper, we propose the precedent-enhanced LJP framework (PLJP), a system that leverages the strengths of both LLMs and domain models in the context of precedents. Specifically, the domain models are designed to provide candidate labels and find the proper precedents efficiently, and the LLM makes the final prediction through in-context comprehension of those precedents. Experiments on a real-world dataset demonstrate the effectiveness of PLJP. Moreover, our work shows a promising direction for LLM and domain-model collaboration that can be generalized to other vertical domains.
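The collaboration the abstract describes can be sketched as a three-step pipeline: a domain model proposes candidate labels, a retriever finds precedents matching those candidates, and an LLM prompt places the precedents in context for the final prediction. The sketch below is a minimal illustration under stated assumptions; all function names (`predict_candidates`, `retrieve_precedents`, `build_prompt`) and the toy keyword scorer are hypothetical stand-ins, not the authors' actual implementation.

```python
# Hypothetical sketch of the PLJP pipeline: domain model -> precedent
# retrieval -> in-context LLM prompt. Not the paper's real code.

def predict_candidates(fact: str, top_k: int = 3) -> list[str]:
    # Domain-model step: score each charge label and keep the top-k.
    # A toy keyword heuristic replaces a trained classifier here.
    scores = {
        "theft": fact.count("stole") + fact.count("theft"),
        "fraud": fact.count("deceived") + fact.count("fraud"),
        "assault": fact.count("struck") + fact.count("assault"),
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def retrieve_precedents(candidates: list[str], case_db: list[dict],
                        n: int = 2) -> list[dict]:
    # Domain-model step: fetch prior cases whose label is among the
    # candidates; a real system would also rank by fact similarity.
    hits = [c for c in case_db if c["label"] in candidates]
    return hits[:n]

def build_prompt(fact: str, candidates: list[str],
                 precedents: list[dict]) -> str:
    # LLM step: place precedents in-context and ask for a final label.
    lines = ["Candidate charges: " + ", ".join(candidates), "Precedents:"]
    for p in precedents:
        lines.append(f"- Facts: {p['fact']} -> Judgment: {p['label']}")
    lines.append(f"Current case facts: {fact}")
    lines.append("Predict the judgment, choosing one candidate charge.")
    return "\n".join(lines)
```

In this sketch the cheap domain model narrows the label space and the precedent pool, so the (expensive) LLM call only has to adjudicate among a few grounded options, which mirrors the division of labor the abstract proposes.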

Authors (9)
  1. Yiquan Wu (15 papers)
  2. Siying Zhou (1 paper)
  3. Yifei Liu (43 papers)
  4. Weiming Lu (54 papers)
  5. Xiaozhong Liu (71 papers)
  6. Yating Zhang (21 papers)
  7. Changlong Sun (37 papers)
  8. Fei Wu (317 papers)
  9. Kun Kuang (114 papers)
Citations (23)