
Context-Aware Legal Citation Recommendation using Deep Learning (2106.10776v1)

Published 20 Jun 2021 in cs.IR and cs.CL

Abstract: Lawyers and judges spend a large amount of time researching the proper legal authority to cite while drafting decisions. In this paper, we develop a citation recommendation tool that can help improve efficiency in the process of opinion drafting. We train four types of machine learning models, including a citation-list based method (collaborative filtering) and three context-based methods (text similarity, BiLSTM and RoBERTa classifiers). Our experiments show that leveraging local textual context improves recommendation, and that deep neural models achieve decent performance. We show that non-deep text-based methods benefit from access to structured case metadata, but deep models only benefit from such access when predicting from context of insufficient length. We also find that, even after extensive training, RoBERTa does not outperform a recurrent neural model, despite its benefits of pretraining. Our behavior analysis of the RoBERTa model further shows that predictive performance is stable across time and citation classes.
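Among the context-based methods the abstract names, the text-similarity baseline is the simplest to illustrate: represent each candidate citation by the text of contexts it has appeared in, vectorize with TF-IDF, and recommend the citation whose context is most similar to the query context. The sketch below is a minimal, stdlib-only illustration of that general idea, not the paper's implementation; the `recommend` function, the toy case names, and the tokenization are all hypothetical choices made for the example.

```python
import math
from collections import Counter

def _cosine(u, v):
    """Cosine similarity between two sparse vectors given as term->weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(contexts, query, k=1):
    """Rank citations by TF-IDF cosine similarity between their known
    contexts and a query context.

    contexts: list of (citation_id, context_text) pairs
    query:    the drafting context to find a citation for
    Returns the top-k citation ids.
    """
    docs = [text.lower().split() for _, text in contexts]
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))

    def vec(doc):
        # Smoothed TF-IDF: tf * (log((1+n)/(1+df)) + 1); unseen terms get idf
        # from the smoothing alone, so queries with new words still work.
        tf = Counter(doc)
        return {t: (c / len(doc)) * (math.log((1 + n) / (1 + df[t])) + 1)
                for t, c in tf.items()}

    qv = vec(query.lower().split())
    scored = sorted(((_cosine(qv, vec(d)), cid)
                     for d, (cid, _) in zip(docs, contexts)), reverse=True)
    return [cid for _, cid in scored[:k]]
```

A query context sharing several terms with one citation's known contexts will rank that citation first; contexts with no lexical overlap score zero, which is exactly the weakness of non-deep methods that the paper's structured-metadata experiments address.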

Authors (7)
  1. Zihan Huang
  2. Charles Low
  3. Mengqiu Teng
  4. Hongyi Zhang
  5. Daniel E. Ho
  6. Mark S. Krass
  7. Matthias Grabmair
Citations (36)
