
German BERT Model for Legal Named Entity Recognition (2303.05388v1)

Published 7 Mar 2023 in cs.CL and cs.LG

Abstract: The use of BERT, one of the most popular language models, has led to improvements in many NLP tasks. One such task is Named Entity Recognition (NER), i.e., the automatic identification of named entities such as locations, persons, and organizations in a given text. It is also an important base step for many NLP tasks such as information extraction and argumentation mining. Although there is much research on NER using BERT and other popular language models, it has not been explored in detail for Legal NLP or Legal Tech, which apply NLP techniques such as sentence similarity or NER specifically to legal data. Only a handful of NER models are based on BERT language models, and none of them is aimed at legal documents in German. In this paper, we fine-tune a popular BERT model trained on German data (German BERT) on a Legal Entity Recognition (LER) dataset. To make sure our model is not overfitting, we perform stratified 10-fold cross-validation. The results we achieve by fine-tuning German BERT on the LER dataset outperform the BiLSTM-CRF+ model used by the authors of the same LER dataset. Finally, we make the model openly available via HuggingFace.
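
Because the fine-tuned model is released on HuggingFace, it can be loaded with the standard transformers token-classification pipeline. The sketch below is illustrative only: the repository ID, label names, and example sentence are assumptions, since the abstract does not state the exact Hub entry.

```python
# Minimal sketch: load a German legal-NER model from the HuggingFace Hub
# and run it on a short legal sentence.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Assumed Hub repository ID -- substitute the name published by the authors.
MODEL_ID = "harshildarji/gbert-legal-ner"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForTokenClassification.from_pretrained(MODEL_ID)

# aggregation_strategy="simple" merges sub-word pieces into whole entity spans.
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

text = "Der Bundesgerichtshof hat am 12. Januar 2021 in Karlsruhe entschieden."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```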

Authors (3)
  1. Harshil Darji (4 papers)
  2. Jelena Mitrović (16 papers)
  3. Michael Granitzer (46 papers)
Citations (10)