LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models (2308.11462v1)

Published 20 Aug 2023 in cs.CL, cs.AI, and cs.CY

Abstract: The advent of LLMs and their adoption by the legal community has given rise to the question: what types of legal reasoning can LLMs perform? To enable greater study of this question, we present LegalBench: a collaboratively constructed legal reasoning benchmark consisting of 162 tasks covering six different types of legal reasoning. LegalBench was built through an interdisciplinary process, in which we collected tasks designed and hand-crafted by legal professionals. Because these subject matter experts took a leading role in construction, tasks either measure legal reasoning capabilities that are practically useful, or measure reasoning skills that lawyers find interesting. To enable cross-disciplinary conversations about LLMs in the law, we additionally show how popular legal frameworks for describing legal reasoning -- which distinguish between its many forms -- correspond to LegalBench tasks, thus giving lawyers and LLM developers a common vocabulary. This paper describes LegalBench, presents an empirical evaluation of 20 open-source and commercial LLMs, and illustrates the types of research explorations LegalBench enables.

LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in LLMs

The paper "LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in LLMs" undertakes a critical exploration into the capabilities of LLMs within the context of legal reasoning. The advent of LLMs has prompted significant curiosity about their applicability and performance in various specialized domains, and this paper targets the legal domain critically.

The authors introduce a benchmark called LegalBench, designed to systematically evaluate the legal reasoning capabilities of LLMs. The benchmark comprises 162 tasks relevant to legal reasoning, assembled through a collaborative effort involving legal professionals. This interdisciplinary approach ensures that the tasks are not only technically feasible but also practically useful to legal practitioners or of genuine interest from a legal reasoning standpoint.

The benchmark covers six distinct types of legal reasoning. The classification of these types aligns with existing legal frameworks widely recognized in the legal community. Such alignment not only enhances the relevance of the benchmark but also fosters a shared understanding between legal scholars and LLM developers about the diverse forms of legal reasoning that the models are expected to address. By providing this shared vocabulary, LegalBench paves the way for more profound interdisciplinary dialogues and collaborations concerning LLMs and their application to law.

In its empirical evaluation, the paper reports an analysis of 20 different LLMs, spanning open-source and commercial variants, assessed across the LegalBench tasks. This comprehensive evaluation provides a vital reference point for comparing the capabilities of different models under a unified set of criteria specific to legal reasoning.
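To make the evaluation setup concrete, the sketch below shows how a single LegalBench-style classification task might be scored by exact-match accuracy. It is a minimal illustration only: the TSV layout, the column names (`text`, `answer`), the prompt template, and the `query_model` stub are assumptions for exposition, not the authors' actual harness.

```python
# Minimal sketch of scoring one LegalBench-style binary classification task.
# File layout, column names, and the model call are illustrative assumptions.
import csv


def query_model(prompt: str) -> str:
    """Placeholder for a call to whichever LLM is being evaluated."""
    raise NotImplementedError


def evaluate_task(tsv_path: str, prompt_template: str) -> float:
    """Return exact-match accuracy over a tab-separated task file."""
    correct = total = 0
    with open(tsv_path, newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            prompt = prompt_template.format(text=row["text"])
            prediction = query_model(prompt).strip().lower()
            correct += int(prediction == row["answer"].strip().lower())
            total += 1
    return correct / total if total else 0.0
```

In practice, each of the 20 models would be plugged into `query_model` in turn, so that every model faces identical prompts and is scored by the same exact-match criterion across tasks.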

The paper refrains from making overly dramatic assertions about the capabilities of these models, instead presenting data that guide objective analysis. The performance results illuminate how some models excel in specific facets of legal reasoning, whereas others lag, underscoring the varying levels of sophistication required to tackle different legal reasoning tasks.

The implications of this work extend into both theoretical and practical domains. Theoretically, LegalBench serves as a foundational tool for further academic research exploring the limitations and potential of LLMs in legal contexts. Practically, the results outlined in the paper can inform legal professionals and AI developers about the suitability and limitations of current LLM capabilities in handling complex legal reasoning tasks.

Future developments in this arena could involve expanding the benchmark tasks to cover more nuanced aspects of legal reasoning, thus providing richer datasets for model training and evaluation. Additionally, leveraging these insights could drive innovations in developing more capable LLMs that better understand and simulate the intricate reasoning processes inherent in legal practice.

In conclusion, the creation and deployment of LegalBench mark a substantive contribution to both the legal and AI sectors by providing a structured framework to assess the legal reasoning capabilities of LLMs. This paper represents a crucial intersection between AI technology and legal scholarship, fostering advancement through informed interdisciplinary collaboration.

Authors (40)
  1. Neel Guha (23 papers)
  2. Julian Nyarko (11 papers)
  3. Daniel E. Ho (45 papers)
  4. Christopher Ré (194 papers)
  5. Adam Chilton (1 paper)
  6. Aditya Narayana (1 paper)
  7. Alex Chohlas-Wood (4 papers)
  8. Austin Peters (2 papers)
  9. Brandon Waldon (1 paper)
  10. Daniel N. Rockmore (16 papers)
  11. Diego Zambrano (1 paper)
  12. Dmitry Talisman (1 paper)
  13. Enam Hoque (1 paper)
  14. Faiz Surani (4 papers)
  15. Frank Fagan (3 papers)
  16. Galit Sarfaty (1 paper)
  17. Gregory M. Dickinson (13 papers)
  18. Haggai Porat (1 paper)
  19. Jason Hegland (1 paper)
  20. Jessica Wu (2 papers)
Citations (101)