LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention (2010.01057v1)

Published 2 Oct 2020 in cs.CL and cs.LG

Abstract: Entity representations are useful in natural language tasks involving entities. In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed model treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question answering). Our source code and pretrained representations are available at https://github.com/studio-ousia/luke.

LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention

This paper presents LUKE, a model that produces deep contextualized representations of both words and entities for entity-related tasks. Building on the bidirectional transformer, LUKE extends the standard self-attention mechanism to address the limitations of existing contextualized word representations (CWRs) in handling entities.

Key Innovations

LUKE treats both words and entities as independent tokens and outputs contextualized representations for each, making the model suitable for a variety of entity-related tasks. It introduces an entity-aware self-attention mechanism that conditions the computation of attention scores on the type of each token (word or entity), which can improve how relationships between words and entities are modeled.
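
A minimal, single-head sketch of this entity-aware attention score computation is shown below. It assumes a flat sequence of word and entity tokens with a boolean mask marking entity positions; the four query projections (word-to-word, word-to-entity, entity-to-word, entity-to-entity) and the shared key/value projections follow the description above, while multi-head splitting, attention masking, and dropout are omitted for brevity. This is an illustrative sketch, not the released implementation.

```python
import torch
import torch.nn as nn


class EntityAwareSelfAttention(nn.Module):
    """Illustrative single-head sketch of LUKE-style entity-aware self-attention.

    Four query projections are selected per (query, key) pair according to the
    token types (word or entity); key and value projections are shared.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.q_w2w = nn.Linear(hidden_size, hidden_size)
        self.q_w2e = nn.Linear(hidden_size, hidden_size)
        self.q_e2w = nn.Linear(hidden_size, hidden_size)
        self.q_e2e = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)
        self.value = nn.Linear(hidden_size, hidden_size)
        self.scale = hidden_size ** 0.5

    def forward(self, hidden: torch.Tensor, is_entity: torch.Tensor) -> torch.Tensor:
        # hidden: (seq_len, hidden_size); is_entity: (seq_len,) boolean mask
        k = self.key(hidden)    # shared keys
        v = self.value(hidden)  # shared values

        # One raw score matrix per query projection: s[i, j] = q(x_i) . k(x_j)
        s_w2w = self.q_w2w(hidden) @ k.T
        s_w2e = self.q_w2e(hidden) @ k.T
        s_e2w = self.q_e2w(hidden) @ k.T
        s_e2e = self.q_e2e(hidden) @ k.T

        # Select the score for each (i, j) pair based on the two token types.
        qi_ent = is_entity[:, None]  # is the attending token i an entity?
        kj_ent = is_entity[None, :]  # is the attended token j an entity?
        scores = torch.where(
            qi_ent & kj_ent, s_e2e,
            torch.where(qi_ent & ~kj_ent, s_e2w,
                        torch.where(~qi_ent & kj_ent, s_w2e, s_w2w)))

        attn = torch.softmax(scores / self.scale, dim=-1)
        return attn @ v
```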

A notable advancement of LUKE over traditional CWRs like BERT and RoBERTa lies in its pretraining strategy. It extends the masked language model (MLM) objective to a large corpus enriched with entity annotations from Wikipedia: in addition to masked words, randomly masked entities must be predicted. This enables LUKE to build more nuanced representations of both words and entities.
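
As a rough illustration of this objective, the sketch below combines a standard word MLM loss with an analogous masked-entity prediction loss over an entity vocabulary. The encoder that produces the word and entity hidden states, the vocabulary sizes, and the label convention (-100 at unmasked positions) are assumptions made for the example, not the released implementation.

```python
import torch.nn as nn


class MaskedWordAndEntityLoss(nn.Module):
    """Sketch of a joint pretraining loss: predict masked words and masked entities."""

    def __init__(self, hidden_size: int, word_vocab_size: int, entity_vocab_size: int):
        super().__init__()
        self.word_head = nn.Linear(hidden_size, word_vocab_size)
        self.entity_head = nn.Linear(hidden_size, entity_vocab_size)
        # -100 marks positions that were not masked and should be ignored.
        self.ce = nn.CrossEntropyLoss(ignore_index=-100)

    def forward(self, word_hidden, entity_hidden, word_labels, entity_labels):
        # word_hidden: (batch, num_words, hidden); entity_hidden: (batch, num_entities, hidden)
        # *_labels hold the original ids at masked positions and -100 elsewhere.
        word_loss = self.ce(self.word_head(word_hidden).flatten(0, 1), word_labels.flatten())
        entity_loss = self.ce(self.entity_head(entity_hidden).flatten(0, 1), entity_labels.flatten())
        # The total objective sums the masked-word and masked-entity prediction losses.
        return word_loss + entity_loss
```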

Empirical Performance

LUKE achieves state-of-the-art performance across five benchmark datasets, covering diverse entity-related tasks:

  1. Entity Typing: On the Open Entity dataset, LUKE achieves an F1 score of 78.2, outperforming previous models.
  2. Relation Classification: It achieves a 72.7 F1 score on the TACRED dataset, illustrating its efficacy in discerning relationships between entities.
  3. Named Entity Recognition (NER): The model reaches a 94.3 F1 score on the CoNLL-2003 dataset, setting a new benchmark in the field.
  4. Cloze-style Question Answering: LUKE performs exceptionally well on the ReCoRD dataset, reaching 90.6 EM and 91.2 F1.
  5. Extractive Question Answering: On the SQuAD 1.1 dataset, it achieves scores of 90.2 EM and 95.4 F1.

These results indicate the model's strong capacity for various tasks that require sophisticated understanding and representation of entities.
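
For readers who want to experiment with these representations, the pretrained checkpoints are also exposed through the Hugging Face transformers integration (a later addition, not described in the paper itself). Below is a minimal sketch, assuming the studio-ousia/luke-base checkpoint and the LukeTokenizer/LukeModel classes; entity mentions are supplied as character spans.

```python
import torch
from transformers import LukeTokenizer, LukeModel

tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
model = LukeModel.from_pretrained("studio-ousia/luke-base")

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7), (17, 28)]  # character spans of "Beyoncé" and "Los Angeles"

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

word_repr = outputs.last_hidden_state            # contextualized word tokens
entity_repr = outputs.entity_last_hidden_state   # contextualized entity tokens
```

Task-specific heads (e.g., for entity typing or relation classification) are then trained on top of the entity representations.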

Implications and Future Directions

LUKE's advancements reflect a critical step toward better integrating entity representations into pretrained language models, with broad implications for tasks that require entity reasoning and relationship understanding. Its architecture and pretraining methodology are well suited to applications where entities are central, such as information extraction, knowledge-based reasoning, and complex QA systems.

Potential future developments include further optimization of the entity-aware self-attention mechanism and integration of LUKE into domain-specific contexts, such as biomedical or legal text processing. Applying LUKE to zero-shot or few-shot entity-centric tasks could also be investigated, leveraging its robust pretraining approach.

In conclusion, LUKE provides a significant and practical advance in entity representation for NLP, offering a potent tool for researchers and practitioners working on entity-aware tasks and methods.

Authors (5)
  1. Ikuya Yamada (22 papers)
  2. Akari Asai (35 papers)
  3. Hiroyuki Shindo (21 papers)
  4. Hideaki Takeda (14 papers)
  5. Yuji Matsumoto (52 papers)
Citations (630)