Legal Prompt Engineering for Multilingual Legal Judgement Prediction (2212.02199v1)

Published 5 Dec 2022 in cs.CL and cs.AI

Abstract: Legal Prompt Engineering (LPE) or Legal Prompting is a process to guide and assist an LLM with performing a natural legal language processing (NLLP) skill. Our goal is to use LPE with LLMs over long legal documents for the Legal Judgement Prediction (LJP) task. We investigate the performance of zero-shot LPE for given facts in case-texts from the European Court of Human Rights (in English) and the Federal Supreme Court of Switzerland (in German, French and Italian). Our results show that zero-shot LPE is better compared to the baselines, but it still falls short compared to current state-of-the-art supervised approaches. Nevertheless, the results are important, since there was 1) no explicit domain-specific data used - so we show that the transfer to the legal domain is possible for general-purpose LLMs, and 2) the LLMs were directly applied without any further training or fine-tuning - which in turn saves immensely in terms of additional computational costs.

Legal Prompt Engineering for Multilingual Legal Judgement Prediction

Introduction

The paper explores the use of Legal Prompt Engineering (LPE) to direct LLMs for the task of Legal Judgment Prediction (LJP) across multilingual contexts. The researchers examine the effectiveness of zero-shot LPE in processing and analyzing long legal documents from distinct legal systems and languages, specifically English, German, French, and Italian. This research is particularly pertinent for evaluating whether general-purpose LLMs transfer to the legal domain without extensive domain-specific data or the additional computational costs of further training or fine-tuning.

Approach and Methodology

The authors adopt a manual and iterative approach to legal prompt engineering, aiming to transform the LJP task into a naturally phrased question template that guides the LLM towards generating relevant legal outcomes from input case texts. This process entails several refinement steps to optimize the prompt structure and maximize model performance under zero-shot conditions—where the model relies on its pre-existing knowledge without additional examples or fine-tuning.
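To make this concrete, below is a minimal sketch of what such a zero-shot question template might look like for the ECHR setting; the exact wording and the `build_prompt` helper are illustrative assumptions, not the authors' final template.

```python
# Illustrative zero-shot prompt template for binary legal judgement prediction.
# The phrasing and answer options below are assumptions for demonstration;
# the paper's final template was reached through iterative manual refinement.

def build_prompt(case_facts: str) -> str:
    """Wrap the facts of a case in a naturally phrased yes/no question."""
    return (
        "Facts of the case:\n"
        f"{case_facts}\n\n"
        "Question: Based on these facts, was there a violation of the "
        "European Convention on Human Rights?\n"
        "A, Yes\n"
        "B, No\n"
        "Answer:"
    )


print(build_prompt("The applicant was held in pre-trial detention for several years ..."))
```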

The paper employs three different LLMs: mGPT, GPT-J-6B, and GPT-NeoX-20B, evaluating their performance across two datasets: the European Court of Human Rights (ECHR) and the Federal Supreme Court of Switzerland (FSCS). These datasets present binary legal judgment outcomes (e.g., violation/no violation or approval/dismissal) in four languages, serving as a testbed for the LPE approach.
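As a rough illustration of how one of these checkpoints can be applied zero-shot, the snippet below loads GPT-J-6B through Hugging Face Transformers and generates a short answer for a single prompted case; the model identifier, decoding settings, and the toy example prompt are assumptions for demonstration, not the paper's exact pipeline.

```python
# Minimal sketch: zero-shot answer generation with a general-purpose causal LLM.
# Assumes the Hugging Face Transformers library and the public GPT-J-6B
# checkpoint; the paper additionally evaluates mGPT and GPT-NeoX-20B.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"  # assumed checkpoint identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()

# A toy prompt in the style sketched above; real case facts are much longer.
prompt = (
    "Facts of the case:\n"
    "The applicant was held in pre-trial detention for several years ...\n\n"
    "Question: Based on these facts, was there a violation of the "
    "European Convention on Human Rights?\nA, Yes\nB, No\nAnswer:"
)

inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=2048)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=5,                 # only the short A/B answer is needed
        do_sample=False,                  # greedy decoding for a deterministic prediction
        pad_token_id=tokenizer.eos_token_id,
    )

# Keep only the newly generated tokens that follow the prompt.
answer = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)
```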

Key Findings

The results of the paper reveal several insights:

  • Zero-shot LPE enables LLMs to outperform simple baselines consistently, highlighting the potential of prompt engineering in adapting LLMs for domain-specific tasks without the necessity for further training.
  • The performance, while superior to baseline models, does not yet match that of current supervised approaches. This gap underscores both the promise and the limitations of applying zero-shot learning in the legal domain.
  • The iterative process of prompt refinement plays a crucial role in enhancing model performance. The final prompt structure includes a clear demarcation between the document input and the question, along with the specified answer options "A, Yes" and "B, No," which significantly improves the quality of the model's output (an illustrative answer-mapping sketch follows this list).
  • Language and dataset-specific performance variations suggest that further optimization and customization of prompt templates may be required to maximize the effectiveness of LPE across different legal contexts and languages.
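To illustrate how such short generations can be turned into the binary labels needed for evaluation, here is a small sketch of answer mapping and macro-F1 scoring; the mapping heuristics, the toy answers, and the use of scikit-learn are assumptions for illustration, not the authors' evaluation code.

```python
# Sketch: map free-text model answers to binary labels and score them.
# The mapping heuristics and toy data below are illustrative assumptions.
from sklearn.metrics import f1_score

def extract_label(generated: str) -> int:
    """Map a short model answer to 1 (violation/approval) or 0 (no violation/dismissal)."""
    text = generated.strip().lower()
    if text.startswith("a") or "yes" in text:
        return 1
    if text.startswith("b") or "no" in text:
        return 0
    return 0  # fall back to the negative class when the answer is unclear

# Hypothetical predictions and gold labels for a handful of cases.
generated_answers = ["A, Yes", "B, No", "Yes", "B", "no violation"]
gold_labels = [1, 0, 1, 0, 0]

predictions = [extract_label(a) for a in generated_answers]
print("macro-F1:", f1_score(gold_labels, predictions, average="macro"))
```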

Implications and Future Work

The research introduces a viable framework for leveraging LLMs in legal judgment prediction tasks across multiple languages, emphasizing the potential to reduce computational costs and reliance on extensive domain-specific datasets. The findings also suggest significant avenues for future work, including:

  • Close collaboration with legal professionals to refine and optimize prompt templates, potentially improving the models' understanding and output relevance.
  • Exploration of continuous prompt engineering techniques and their application in legal NLP tasks, offering a more nuanced approach to model guidance and output interpretation.
  • Examination of larger, more sophisticated LLMs to assess the scalability and generalizability of the LPE approach within and beyond the legal domain.

In conclusion, this research contributes to the growing field of natural legal language processing by demonstrating the feasibility of applying zero-shot legal prompt engineering in multilingual LJP tasks. By highlighting both achievements and challenges, the paper paves the way for further innovation in the integration of LLMs into legal analytics and decision-making processes.

Authors (3)
  1. Dietrich Trautmann
  2. Alina Petrova
  3. Frank Schilder
Citations (66)