JuriBERT: A Masked-Language Model Adaptation for French Legal Text (2110.01485v2)
Abstract: Language models have proven to be very useful when adapted to specific domains. Nonetheless, little research has been done on the adaptation of domain-specific BERT models in the French language. In this paper, we focus on creating a language model adapted to French legal text with the goal of helping law professionals. We conclude that some specific tasks do not benefit from generic language models pre-trained on large amounts of data. We explore the use of smaller architectures in domain-specific sub-languages and their benefits for French legal text. We prove that domain-specific pre-trained models can perform better than their equivalent generalised ones in the legal domain. Finally, we release JuriBERT, a new set of BERT models adapted to the French legal domain.
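Since the released models follow the standard BERT masked-language-model interface, they can be queried with the usual fill-mask workflow. The sketch below is a minimal, hypothetical example using the Hugging Face transformers library; the checkpoint path and the French example sentence are illustrative assumptions, not an official model identifier or content from the paper.

```python
# Minimal sketch: fill-mask inference with a BERT-style masked-language model.
# Assumption: the released JuriBERT weights have been downloaded locally to
# "path/to/juribert-base" (placeholder path, not an official hub ID).
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_path = "path/to/juribert-base"  # placeholder for the released checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForMaskedLM.from_pretrained(model_path)

# Ask the model to predict the masked token in a French legal sentence.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
sentence = f"La cour de {tokenizer.mask_token} a rejeté le pourvoi."
for pred in fill_mask(sentence):
    print(pred["token_str"], round(pred["score"], 3))
```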
- Stella Douka
- Hadi Abdine
- Michalis Vazirgiannis
- Rajaa El Hamdani
- David Restrepo Amariles