Neural Correction Model for Open-Domain Named Entity Recognition (1909.06058v2)

Published 13 Sep 2019 in cs.CL, cs.IR, and cs.LG

Abstract: Named Entity Recognition (NER) plays an important role in a wide range of natural language processing tasks, such as relation extraction and question answering. However, previous studies on NER are limited to particular genres and rely on either small manually-annotated datasets or large but low-quality ones. Meanwhile, previous datasets for open-domain NER, built using distant supervision, suffer from low precision, low recall, and a low ratio of annotated tokens (RAT). In this work, to address the low precision and recall, we first use DBpedia as the source of distant supervision to annotate abstracts from Wikipedia, and we design a neural correction model, trained on the human-annotated NER dataset DocRED, to correct false entity labels. In this way, we build a large, high-quality dataset called AnchorNER and train various models on it. To address the low RAT of previous datasets, we introduce a multi-task learning method that exploits context information. We evaluate our methods on five NER datasets; the experimental results show that models trained with AnchorNER and our multi-task learning method achieve state-of-the-art performance in the open-domain setting.
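The core mechanism here is a correction model: a token classifier that sees both the text and the noisy distant-supervision tag for each token, and is trained on gold annotations (DocRED in the paper) to decide when to keep the distant tag and when to overwrite it. The following PyTorch sketch illustrates one plausible reading of that idea; the architecture (BiLSTM encoder), tag set size, and all hyperparameter names are illustrative assumptions, not the paper's published configuration.

```python
# Hypothetical sketch of the label-correction idea, NOT the paper's exact model.
# A token classifier consumes word embeddings plus the (noisy) distant tag,
# and is trained on human-annotated data to emit corrected BIO tags.
import torch
import torch.nn as nn

NUM_TAGS = 9  # assumption: BIO tags over 4 entity types plus "O"

class CorrectionModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, tag_emb_dim=16, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # Embed the distant (possibly wrong) tag so the model can learn
        # when to trust it and when to overwrite it.
        self.tag_emb = nn.Embedding(NUM_TAGS, tag_emb_dim)
        self.encoder = nn.LSTM(emb_dim + tag_emb_dim, hidden,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, NUM_TAGS)

    def forward(self, word_ids, distant_tag_ids):
        x = torch.cat([self.word_emb(word_ids),
                       self.tag_emb(distant_tag_ids)], dim=-1)
        h, _ = self.encoder(x)
        return self.classifier(h)  # per-token logits over corrected tags

# Training-step skeleton on gold-labelled data (toy random batch shown).
model = CorrectionModel(vocab_size=30000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

words = torch.randint(0, 30000, (8, 40))       # 8 sentences x 40 tokens
distant = torch.randint(0, NUM_TAGS, (8, 40))  # noisy distant-supervision tags
gold = torch.randint(0, NUM_TAGS, (8, 40))     # human-annotated tags

logits = model(words, distant)
loss = loss_fn(logits.view(-1, NUM_TAGS), gold.view(-1))
loss.backward()
opt.step()
```

Under this reading, building AnchorNER would amount to running the trained corrector over the DBpedia-derived distant labels for Wikipedia abstracts and keeping its predicted tags as the corrected labels.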

Authors (6)
  1. Mengdi Zhu (3 papers)
  2. Zheye Deng (12 papers)
  3. Wenhan Xiong (47 papers)
  4. Mo Yu (117 papers)
  5. Ming Zhang (313 papers)
  6. William Yang Wang (254 papers)
Citations (6)
