Towards Robust Named Entity Recognition for Historic German (1906.07592v1)
Published 18 Jun 2019 in cs.CL
Abstract: Recent advances in language modeling using deep neural networks have shown that these models learn representations that vary with network depth, from morphology to semantic relationships such as co-reference. We apply pre-trained language models to low-resource named entity recognition (NER) for Historic German. We show in a series of experiments that character-based pre-trained language models remain robust when faced with low-resource datasets. Our pre-trained character-based language models improve upon classical CRF-based methods and previous Bi-LSTM work, boosting F1 score by up to 6%. Our pre-trained language and NER models are publicly available at https://github.com/stefan-it/historic-ner .
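A minimal sketch of how such character-based pre-trained language models can feed a downstream NER tagger, using the Flair library (on which the released models are based). The data folder, file names, and the historic embedding identifiers ("de-historic-ha-forward/backward") are assumptions for illustration; consult the repository above for the exact checkpoint names and training configuration.

```python
# Sketch only: identifiers below are assumed, check the repo for exact names.
# Uses the Flair 0.4-era API, matching the paper's timeframe.
from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import FlairEmbeddings, StackedEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Load a CoNLL-style column corpus (one token and NER tag per line).
columns = {0: "text", 1: "ner"}
corpus: Corpus = ColumnCorpus(
    "data/historic_german",   # hypothetical data folder
    columns,
    train_file="train.txt",
    dev_file="dev.txt",
    test_file="test.txt",
)

# Stack forward and backward character-based pre-trained language
# models as contextual string embeddings.
embeddings = StackedEmbeddings([
    FlairEmbeddings("de-historic-ha-forward"),   # assumed checkpoint name
    FlairEmbeddings("de-historic-ha-backward"),  # assumed checkpoint name
])

# BiLSTM-CRF sequence tagger on top of the stacked embeddings.
tag_dictionary = corpus.make_tag_dictionary(tag_type="ner")
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=tag_dictionary,
    tag_type="ner",
    use_crf=True,
)

# Train; F1 is evaluated on the held-out test split.
trainer = ModelTrainer(tagger, corpus)
trainer.train("models/historic-ner", max_epochs=100)
```

Because the language models operate on characters rather than a fixed word vocabulary, they sidestep out-of-vocabulary problems caused by historic spelling variation and OCR noise, which is what makes this setup viable in the low-resource setting the abstract describes.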