
Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot Relational Triple Extraction (2010.16059v1)

Published 30 Oct 2020 in cs.CL, cs.AI, cs.DB, cs.IR, and cs.LG

Abstract: Current supervised relational triple extraction approaches require huge amounts of labeled data and thus suffer from poor performance in few-shot settings. However, humans can grasp new knowledge from only a few instances. To this end, we take the first step toward studying few-shot relational triple extraction, which has not been well understood. Unlike previous single-task few-shot problems, relational triple extraction is more challenging because the entities and relations have implicit correlations. In this paper, we propose a novel multi-prototype embedding network model to jointly extract the components of relational triples, namely, entity pairs and their corresponding relations. Specifically, we design a hybrid prototypical learning mechanism that bridges text and knowledge with respect to both entities and relations, thereby injecting the implicit correlations between entities and relations. Additionally, we propose a prototype-aware regularization to learn more representative prototypes. Experimental results demonstrate that the proposed method improves the performance of few-shot triple extraction.
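The core idea of prototypical learning mentioned in the abstract can be illustrated with a minimal sketch: each class (relation or entity type) is represented by a prototype, the mean of its support embeddings, and queries are classified by nearest prototype. The `prototype_separation_penalty` below is a hypothetical illustration of what a "prototype-aware regularization" might look like (penalizing pairwise prototype similarity); it is not the paper's exact loss, and all function names here are our own.

```python
import numpy as np

def compute_prototypes(support_emb, labels):
    """Prototype per class = mean of that class's support embeddings.

    support_emb: (N, d) array of embeddings; labels: (N,) class ids.
    Returns (classes, prototypes) with prototypes of shape (K, d).
    """
    classes = np.unique(labels)
    protos = np.stack([support_emb[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query_emb, classes, protos):
    """Assign each query to the class of its nearest (Euclidean) prototype."""
    dists = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

def prototype_separation_penalty(protos):
    """Hypothetical regularizer: mean pairwise cosine similarity between
    distinct prototypes. Minimizing it pushes prototypes apart, giving
    more discriminative (representative) class representations."""
    k = len(protos)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sim = p @ p.T
    return (sim.sum() - np.trace(sim)) / (k * (k - 1))
```

In the few-shot triple-extraction setting, separate prototype sets would be maintained for head entities, tail entities, and relations, with the hybrid mechanism tying text-derived and knowledge-derived embeddings together.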

Authors (6)
  1. Haiyang Yu (109 papers)
  2. Ningyu Zhang (148 papers)
  3. Shumin Deng (65 papers)
  4. Hongbin Ye (16 papers)
  5. Wei Zhang (1489 papers)
  6. Huajun Chen (198 papers)
Citations (45)