
Towards Robust Named Entity Recognition for Historic German

Published 18 Jun 2019 in cs.CL (arXiv:1906.07592v1)

Abstract: Recent advances in language modeling using deep neural networks have shown that these models learn representations that vary with network depth, from morphology to semantic relationships such as co-reference. We apply pre-trained language models to low-resource named entity recognition for Historic German. We show in a series of experiments that character-based pre-trained language models do not run into trouble when faced with low-resource datasets. Our pre-trained character-based language models improve upon classical CRF-based methods and previous work on Bi-LSTMs, boosting F1 score by up to 6%. Our pre-trained language and NER models are publicly available at https://github.com/stefan-it/historic-ner .
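The F1 gains reported above are typically measured at the entity (span) level rather than per token. As a minimal, self-contained sketch of that metric (not the paper's actual evaluation script), the following computes span-level F1 from BIO-tagged sequences; the tag scheme and helper names are illustrative assumptions:

```python
# Sketch of span-level (entity-level) F1 over BIO tag sequences.
# Illustrative only; not the evaluation code used in the paper.

def spans(tags):
    """Extract (start, end, type) entity spans from a BIO tag sequence."""
    out, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel flushes the last span
        # Close the current span on O, on a new B-, or on an I- of another type.
        if tag == "O" or tag.startswith("B-") or (
            tag.startswith("I-") and etype != tag[2:]
        ):
            if start is not None:
                out.append((start, i, etype))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is None:
            start, etype = i, tag[2:]  # tolerate I- without a preceding B-
    return set(out)

def span_f1(gold_tags, pred_tags):
    """Harmonic mean of span-level precision and recall."""
    gold, pred = spans(gold_tags), spans(pred_tags)
    tp = len(gold & pred)
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = ["B-PER", "I-PER", "O", "B-LOC"]
pred = ["B-PER", "I-PER", "O", "O"]
print(round(span_f1(gold, pred), 2))  # → 0.67 (1 of 2 gold spans found)
```

A prediction only counts as correct when both span boundaries and entity type match exactly, which is why span-level F1 is stricter than token accuracy.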

Citations (22)
