SPOT: Knowledge-Enhanced Language Representations for Information Extraction (2208.09625v2)

Published 20 Aug 2022 in cs.CL and cs.AI

Abstract: Knowledge-enhanced pre-trained models for language representation have been shown to be more effective in knowledge base construction tasks (i.e., relation extraction) than language models such as BERT. These knowledge-enhanced language models incorporate knowledge into pre-training to generate representations of entities or relationships. However, existing methods typically represent each entity with a separate embedding. As a result, these methods struggle to represent out-of-vocabulary entities; a large number of parameters, on top of the underlying token model (i.e., the transformer), must be used; and the number of entities that can be handled is limited in practice due to memory constraints. Moreover, existing models still struggle to represent entities and relationships simultaneously. To address these problems, we propose a new pre-trained model that learns representations of entities and relationships from token spans and span pairs in the text, respectively. By encoding spans efficiently with span modules, our model can represent both entities and their relationships while requiring fewer parameters than existing models. We pre-train our model with the knowledge graph extracted from Wikipedia and test it on a broad range of supervised and unsupervised information extraction tasks. Results show that our model learns better representations of both entities and relationships than baselines; in supervised settings, fine-tuning our model consistently outperforms RoBERTa and achieves competitive results on information extraction tasks.
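
The abstract's core mechanism (entity representations pooled from token spans, relation representations from span pairs) can be sketched as follows. This is a minimal, hypothetical illustration assuming a transformer encoder supplies contextual token states; the mean-pooling and linear projections here are illustrative choices, not the paper's exact span modules.

```python
import torch
import torch.nn as nn

class SpanPairEncoder(nn.Module):
    """Toy span / span-pair encoder: entities are pooled from token spans,
    relations from pairs of spans. Hypothetical sketch, not SPOT's exact
    span modules."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        # Project a pooled span into an entity embedding.
        self.entity_proj = nn.Linear(hidden_size, hidden_size)
        # Project a concatenated (head, tail) span pair into a relation embedding.
        self.relation_proj = nn.Linear(2 * hidden_size, hidden_size)

    def encode_span(self, token_states: torch.Tensor, start: int, end: int) -> torch.Tensor:
        # Mean-pool contextual token states over the half-open span [start, end).
        pooled = token_states[start:end].mean(dim=0)
        return self.entity_proj(pooled)

    def encode_pair(self, token_states: torch.Tensor,
                    head_span: tuple, tail_span: tuple) -> torch.Tensor:
        # A relation representation is built from the two entity span encodings.
        head = self.encode_span(token_states, *head_span)
        tail = self.encode_span(token_states, *tail_span)
        return self.relation_proj(torch.cat([head, tail], dim=-1))

# token_states would normally come from a transformer encoder (e.g. RoBERTa);
# random states stand in here so the sketch is self-contained.
token_states = torch.randn(16, 768)       # (seq_len, hidden_size)
encoder = SpanPairEncoder()
entity_vec = encoder.encode_span(token_states, 2, 5)               # one entity
relation_vec = encoder.encode_pair(token_states, (2, 5), (9, 12))  # one relation
```

Because spans reuse the token model's contextual states rather than a per-entity embedding table, a model built this way can represent out-of-vocabulary entities with a fixed parameter budget, which is the efficiency argument the abstract makes.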

Authors (7)
  1. Jiacheng Li (54 papers)
  2. Yannis Katsis (13 papers)
  3. Tyler Baldwin (4 papers)
  4. Ho-Cheol Kim (5 papers)
  5. Andrew Bartko (2 papers)
  6. Julian McAuley (238 papers)
  7. Chun-Nan Hsu (11 papers)
Citations (11)