Learning Relation-Specific Representations for Few-shot Knowledge Graph Completion (2203.11639v2)

Published 22 Mar 2022 in cs.CL and cs.AI

Abstract: Recent years have witnessed increasing interest in few-shot knowledge graph completion (FKGC), which aims to infer unseen query triples for a few-shot relation using a few reference triples about the relation. The primary focus of existing FKGC methods lies in learning relation representations that can reflect the common information shared by the query and reference triples. To this end, these methods learn entity-pair representations from the direct neighbors of head and tail entities, and then aggregate the representations of reference entity pairs. However, the entity-pair representations learned only from direct neighbors may have low expressiveness when the involved entities have sparse direct neighbors or share a common local neighborhood with other entities. Moreover, merely modeling the semantic information of head and tail entities is insufficient to accurately infer their relational information, especially when they have multiple relations. To address these issues, we propose a Relation-Specific Context Learning (RSCL) framework, which exploits graph contexts of triples to learn global and local relation-specific representations for few-shot relations. Specifically, we first extract graph contexts for each triple, which can provide long-term entity-relation dependencies. To encode the extracted graph contexts, we then present a hierarchical attention network to capture contextualized information of triples and highlight valuable local neighborhood information of entities. Finally, we design a hybrid attention aggregator to evaluate the likelihood of the query triples at the global and local levels. Experimental results on two public datasets demonstrate that RSCL outperforms state-of-the-art FKGC methods.
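The core FKGC matching step described in the abstract, aggregating a few reference entity-pair representations with attention and scoring a query triple against the result, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual RSCL architecture: the vector dimensions, the dot-product attention, and the cosine-similarity scoring function are all assumptions made for clarity.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_references(query, references):
    """Attention-weight the K reference entity-pair vectors by their
    similarity to the query, then pool them into one relation vector.
    references: (K, d) array; query: (d,) array."""
    scores = references @ query      # (K,) dot-product attention scores
    weights = softmax(scores)        # normalize scores to a distribution
    return weights @ references      # (d,) weighted sum of references

def triple_likelihood(query, references):
    """Score the query triple as the cosine similarity between its
    vector and the aggregated relation representation."""
    rel = aggregate_references(query, references)
    return float(query @ rel / (np.linalg.norm(query) * np.linalg.norm(rel)))

# Toy few-shot setting: K=3 reference entity-pair vectors of dimension 8,
# and a query vector close to the first reference.
rng = np.random.default_rng(0)
refs = rng.normal(size=(3, 8))
q = refs[0] + 0.1 * rng.normal(size=8)
print(round(triple_likelihood(q, refs), 3))
```

RSCL itself replaces the flat vectors above with representations encoded from graph contexts via a hierarchical attention network, and scores queries at both global and local levels, but the aggregate-then-match pattern is the same.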

Authors (4)
  1. Yuling Li (5 papers)
  2. Kui Yu (35 papers)
  3. Yuhong Zhang (27 papers)
  4. Xindong Wu (49 papers)
Citations (3)