Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot Relational Triple Extraction (2010.16059v1)
Abstract: Current supervised relational triple extraction approaches require huge amounts of labeled data and thus suffer from poor performance in few-shot settings. However, people can grasp new knowledge from only a few instances. To this end, we take the first step toward studying few-shot relational triple extraction, which has not been well understood. Unlike previous single-task few-shot problems, relational triple extraction is more challenging because entities and relations have implicit correlations. In this paper, we propose a novel multi-prototype embedding network model to jointly extract the components of relational triples, namely, entity pairs and their corresponding relations. Specifically, we design a hybrid prototypical learning mechanism that bridges text and knowledge concerning both entities and relations, thereby injecting the implicit correlations between entities and relations. Additionally, we propose a prototype-aware regularization to learn more representative prototypes. Experimental results demonstrate that the proposed method improves the performance of few-shot relational triple extraction.
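
The abstract builds on prototypical learning, where each class is represented by a prototype vector and queries are classified by distance to those prototypes. Below is a minimal sketch of that underlying idea in PyTorch, not the paper's multi-prototype embedding network; the episode sizes, the random embeddings standing in for an encoder, and all variable names are assumptions for illustration.

```python
# Minimal sketch of prototypical learning for a few-shot episode.
# NOT the paper's multi-prototype embedding network; sizes and the
# random "encoder output" below are hypothetical placeholders.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

n_way, k_shot, n_query, embed_dim = 5, 3, 2, 64  # assumed episode sizes

# Pretend these embeddings come from a sentence/entity encoder.
support = torch.randn(n_way, k_shot, embed_dim)   # [N, K, D] support set
query = torch.randn(n_way * n_query, embed_dim)   # [N*Q, D] query set
query_labels = torch.arange(n_way).repeat_interleave(n_query)

# One prototype per relation class: the mean of its support embeddings.
prototypes = support.mean(dim=1)                  # [N, D]

# Classify each query by negative squared Euclidean distance to prototypes.
dists = torch.cdist(query, prototypes) ** 2       # [N*Q, N]
logits = -dists
loss = F.cross_entropy(logits, query_labels)
acc = (logits.argmax(dim=1) == query_labels).float().mean()

print(f"episode loss: {loss.item():.4f}, accuracy: {acc.item():.2f}")
```

The paper extends this single-prototype scheme with multiple prototypes that bridge textual and knowledge-graph information for both entities and relations, plus a prototype-aware regularizer; the sketch only shows the basic prototype-and-distance classification step.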
- Haiyang Yu (109 papers)
- Ningyu Zhang (148 papers)
- Shumin Deng (65 papers)
- Hongbin Ye (16 papers)
- Wei Zhang (1489 papers)
- Huajun Chen (198 papers)