
JuriBERT: A Masked-Language Model Adaptation for French Legal Text (2110.01485v2)

Published 4 Oct 2021 in cs.CL

Abstract: Language models have proven to be very useful when adapted to specific domains. Nonetheless, little research has been done on the adaptation of domain-specific BERT models in the French language. In this paper, we focus on creating a language model adapted to French legal text with the goal of helping law professionals. We conclude that some specific tasks do not benefit from generic language models pre-trained on large amounts of data. We explore the use of smaller architectures in domain-specific sub-languages and their benefits for French legal text. We prove that domain-specific pre-trained models can perform better than their equivalent generalised ones in the legal domain. Finally, we release JuriBERT, a new set of BERT models adapted to the French legal domain.
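The masked-language pretraining objective that JuriBERT adapts to legal text can be sketched as follows. This is a minimal, self-contained illustration of BERT's standard 80/10/10 masking rule, not the authors' actual pipeline; the toy French legal vocabulary and token lists are invented for the example.

```python
import random

MASK = "[MASK]"
# Invented toy vocabulary for the random-replacement branch.
VOCAB = ["le", "droit", "cour", "juge", "loi", "contrat"]

def mask_tokens(tokens, mask_prob=0.15, rng=None):
    """BERT-style masking: ~15% of positions become prediction targets;
    of those, 80% are replaced by [MASK], 10% by a random vocabulary
    token, and 10% are left unchanged."""
    rng = rng or random.Random(0)  # seeded for a reproducible sketch
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)  # the model must recover the original token
            r = rng.random()
            if r < 0.8:
                inputs.append(MASK)
            elif r < 0.9:
                inputs.append(rng.choice(VOCAB))
            else:
                inputs.append(tok)
        else:
            inputs.append(tok)
            labels.append(None)  # ignored by the masked-LM loss
    return inputs, labels
```

In the real setting, the masked inputs and labels feed a BERT encoder trained from scratch (or continued) on French legal corpora; the paper's point is that even smaller such architectures can beat general-purpose models on legal tasks.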

Authors (5)
  1. Stella Douka (2 papers)
  2. Hadi Abdine (12 papers)
  3. Michalis Vazirgiannis (116 papers)
  4. Rajaa El Hamdani (5 papers)
  5. David Restrepo Amariles (7 papers)
Citations (29)