MultiLegalPile: A 689GB Multilingual Legal Corpus (2306.02069v3)

Published 3 Jun 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Large, high-quality datasets are crucial for training LLMs. However, so far, there are few datasets available for specialized critical domains such as law, and the available ones are often only for the English language. We curate and release MultiLegalPile, a 689GB corpus in 24 languages from 17 jurisdictions. The MultiLegalPile corpus, which includes diverse legal data sources with varying licenses, allows for pretraining NLP models under fair use, with more permissive licenses for the Eurlex Resources and Legal mC4 subsets. We pretrain two RoBERTa models and one Longformer multilingually, and 24 monolingual models on each of the language-specific subsets and evaluate them on LEXTREME. Additionally, we evaluate the English and multilingual models on LexGLUE. Our multilingual models set a new SotA on LEXTREME and our English models on LexGLUE. We release the dataset, the trained models, and all of the code under the most open possible licenses.
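Since the dataset is released openly, a subset can in principle be streamed programmatically. Below is a minimal sketch using the Hugging Face `datasets` library; the hub identifier (`joelniklaus/Multi_Legal_Pile`), the configuration name (`en_caselaw`), and the `text` field are assumptions not confirmed by the abstract, so consult the actual release for the exact names.

```python
# Minimal sketch (not from the paper): streaming one MultiLegalPile subset with the
# Hugging Face `datasets` library. Hub identifier, config name, and field name are
# assumptions; check the authors' release for the exact values.
from itertools import islice

from datasets import load_dataset

# Stream a single language/source subset instead of downloading the full 689GB corpus.
subset = load_dataset(
    "joelniklaus/Multi_Legal_Pile",  # assumed Hugging Face Hub identifier
    "en_caselaw",                    # assumed config of the form <language>_<source>
    split="train",
    streaming=True,                  # iterate without materializing the data on disk
)

# Peek at the first few documents.
for example in islice(subset, 3):
    print(example.get("text", "")[:200])
```

Streaming keeps the footprint small, which matters for a corpus of this size; a full pretraining run would instead shard and tokenize the subsets offline.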

Authors (5)
  1. Joel Niklaus (21 papers)
  2. Veton Matoshi (3 papers)
  3. Matthias Stürmer (13 papers)
  4. Ilias Chalkidis (40 papers)
  5. Daniel E. Ho (45 papers)
Citations (29)