A Unified MRC Framework for Named Entity Recognition (1910.11476v7)

Published 25 Oct 2019 in cs.CL

Abstract: The task of named entity recognition (NER) is normally divided into nested NER and flat NER depending on whether named entities are nested or not. Models are usually developed separately for the two tasks, since sequence labeling models, the most widely used backbone for flat NER, can only assign a single label to a particular token, which is unsuitable for nested NER where a token may be assigned several labels. In this paper, we propose a unified framework that is capable of handling both flat and nested NER tasks. Instead of treating NER as a sequence labeling problem, we propose to formulate it as a machine reading comprehension (MRC) task. For example, extracting entities with the PER label is formalized as extracting answer spans to the question "which person is mentioned in the text?". This formulation naturally tackles the entity overlapping issue in nested NER: extracting two overlapping entities of different categories requires answering two independent questions. Additionally, since the query encodes informative prior knowledge, this strategy facilitates entity extraction, leading to better performance on not only nested NER but also flat NER. We conduct experiments on both nested and flat NER datasets. Experimental results demonstrate the effectiveness of the proposed formulation. We achieve a substantial performance boost over current SOTA models on nested NER datasets, i.e., +1.28, +2.55, +5.44, and +6.37 respectively on ACE04, ACE05, GENIA, and KBP17, along with SOTA results on flat NER datasets, i.e., +0.24, +1.95, +0.21, and +1.49 respectively on English CoNLL 2003, English OntoNotes 5.0, Chinese MSRA, and Chinese OntoNotes 4.0.

A Unified MRC Framework for Named Entity Recognition

The paper presents a novel approach to address Named Entity Recognition (NER) by reformulating the task as a Machine Reading Comprehension (MRC) problem. This method effectively integrates flat and nested NER tasks, providing a unified framework that leverages question answering techniques to handle overlapping entities.

Problem Formulation

Traditional NER work splits the task into flat and nested variants. Flat NER is typically tackled with sequence labeling, where each token receives a single tag. This approach breaks down for nested NER, where overlapping entities mean a token may belong to several entities at once. The paper circumvents this limitation by recasting NER as an MRC task: detecting entities of a given type becomes answering a type-specific question over the input text, as sketched below.
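To make the reformulation concrete, the following Python sketch shows how per-type questions turn ordinary NER annotations into MRC-style (query, context, answer-spans) examples. The entity types, query wording, and helper names here are illustrative assumptions, not the paper's exact data format.

```python
# Minimal sketch: converting NER annotations into MRC-style examples.
# One example per entity type; overlapping entities of different types
# simply land in different examples, so nesting poses no problem.
from dataclasses import dataclass
from typing import Dict, List, Tuple

QUERIES = {  # illustrative type-to-question mapping
    "PER": "Which person is mentioned in the text?",
    "ORG": "Which organization is mentioned in the text?",
    "LOC": "Which location is mentioned in the text?",
}

@dataclass
class MRCExample:
    query: str                     # natural-language question for one entity type
    context: str                   # the original sentence
    spans: List[Tuple[int, int]]   # gold (start, end) character offsets, end exclusive

def build_examples(sentence: str,
                   annotations: Dict[str, List[Tuple[int, int]]]) -> List[MRCExample]:
    """Build one (query, context, spans) example per entity type."""
    return [
        MRCExample(query=QUERIES[etype],
                   context=sentence,
                   spans=annotations.get(etype, []))
        for etype in QUERIES
    ]

examples = build_examples(
    "Barack Obama visited Berlin.",
    {"PER": [(0, 12)], "LOC": [(21, 27)]},
)
```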

Methodology

The proposed framework uses BERT as its backbone, chosen for the strong contextual representations it acquires during pre-training. Each entity type is paired with a predefined natural-language question; the question and the input sentence are concatenated, encoded by BERT, and answer spans are extracted by predicting start and end token positions, with a start-end matching step allowing multiple spans per question. Because overlapping entities of different categories are retrieved by answering separate questions independently, the nested NER challenge is addressed directly.
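The sketch below shows one plausible reading of this setup in PyTorch with Hugging Face transformers: BERT encodes the concatenated query and context, and three lightweight heads score span starts, span ends, and start-end pairs. Layer names, sizes, and the exact matching formulation are assumptions for illustration, not the authors' released implementation.

```python
# Hedged sketch of MRC-style span extraction on top of BERT:
# per-token start/end classifiers plus a start-end pair matching score.
import torch
import torch.nn as nn
from transformers import BertModel

class MRCSpanHead(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.start_clf = nn.Linear(hidden, 1)      # P(token is a span start)
        self.end_clf = nn.Linear(hidden, 1)        # P(token is a span end)
        self.match_clf = nn.Linear(2 * hidden, 1)  # P(start i pairs with end j)

    def forward(self, input_ids, attention_mask, token_type_ids):
        # Encode "[CLS] query [SEP] context [SEP]" with BERT.
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask,
                         token_type_ids=token_type_ids).last_hidden_state
        start_logits = self.start_clf(h).squeeze(-1)  # (batch, seq_len)
        end_logits = self.end_clf(h).squeeze(-1)      # (batch, seq_len)

        # Pairwise matching: concatenate every candidate start representation
        # with every candidate end representation and score the pair.
        seq_len = h.size(1)
        h_start = h.unsqueeze(2).expand(-1, -1, seq_len, -1)
        h_end = h.unsqueeze(1).expand(-1, seq_len, -1, -1)
        match_logits = self.match_clf(
            torch.cat([h_start, h_end], dim=-1)).squeeze(-1)  # (batch, seq_len, seq_len)
        return start_logits, end_logits, match_logits
```

Treating starts and ends as independent per-token binary decisions, rather than a single softmax over positions, is what allows one question to yield several answer spans.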

To generate queries, the authors draw on the datasets' annotation guidelines, which carry richer semantic information than label indices or keyword-only templates. This lets the model embed more informative prior knowledge in each query, reducing ambiguities often encountered in entity classification.
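As a concrete illustration of the distinction, the snippet below contrasts three query styles for an ORG entity type; the guideline-style wording is paraphrased in the spirit of ACE-style annotation guidelines, not quoted from the paper.

```python
# Illustrative query styles for the ORG entity type (wording is assumed, not quoted).
queries_for_org = {
    "keyword": "organization",                                   # bare label keyword
    "rule_template": "Which organization is mentioned in the text?",  # rigid template
    "annotation_guideline": ("Find organizations in the text, including companies, "
                             "agencies, and institutions."),     # guideline-style query
}
```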

Experimental Outcomes

The experiments span both nested and flat NER datasets and deliver consistent gains. On the nested datasets ACE04, ACE05, GENIA, and KBP17, the framework improves F1 over prior state-of-the-art (SOTA) models by +1.28, +2.55, +5.44, and +6.37 respectively. On the flat datasets English CoNLL 2003, English OntoNotes 5.0, Chinese MSRA, and Chinese OntoNotes 4.0, it reaches new SOTA results with F1 gains of +0.24, +1.95, +0.21, and +1.49 respectively.

These gains are attributed to the MRC formulation's ability to encode semantic prior knowledge in the queries, which aids disambiguation and reduces the amount of training data required. The framework's robustness is further demonstrated in zero-shot transfer to unseen entity categories, where it outperforms a BERT-tagger baseline by leveraging its generalized question-answering formulation.

Implications and Future Directions

The paper carries several theoretical and practical implications for NER. Casting entity recognition as MRC not only resolves nested-entity complexities but also improves generalization to unseen categories. The success of annotation-guideline-based queries opens avenues for further enriching question templates, potentially improving sample efficiency in resource-constrained settings.

Future developments could see this framework extend into domains requiring more intricate span recognition, such as relation extraction or nuanced event detection. Additionally, exploring the scalability of the framework against diverse languages and domains could further solidify its utility in AI-driven textual analysis.

In conclusion, this unified framework marks a meaningful step forward in addressing complex NER challenges, offering a more adaptable and general methodology through the lens of machine reading comprehension.

Authors (6)
  1. Xiaoya Li (42 papers)
  2. Jingrong Feng (1 paper)
  3. Yuxian Meng (37 papers)
  4. Qinghong Han (11 papers)
  5. Fei Wu (317 papers)
  6. Jiwei Li (137 papers)
Citations (591)