Few-Shot Knowledge Graph Completion (1911.11298v1)

Published 26 Nov 2019 in cs.CL, cs.AI, and cs.LG

Abstract: Knowledge graphs (KGs) serve as useful resources for various natural language processing applications. Previous KG completion approaches require a large number of training instances (i.e., head-tail entity pairs) for every relation. The real case is that for most of the relations, very few entity pairs are available. Existing work of one-shot learning limits method generalizability for few-shot scenarios and does not fully use the supervisory information; however, few-shot KG completion has not been well studied yet. In this work, we propose a novel few-shot relation learning model (FSRL) that aims at discovering facts of new relations with few-shot references. FSRL can effectively capture knowledge from heterogeneous graph structure, aggregate representations of few-shot references, and match similar entity pairs of reference set for every relation. Extensive experiments on two public datasets demonstrate that FSRL outperforms the state-of-the-art.

Overview of Few-Shot Knowledge Graph Completion

Knowledge graphs (KGs) are increasingly important resources in NLP, representing relations between entities as graph edges between nodes. However, the incompleteness of KGs necessitates new methods for automated KG completion, particularly for relations that feature few entity pairs due to the long-tail distribution of real-world data. This paper presents a distinctive approach called Few-Shot Relation Learning (FSRL), which addresses these challenges by developing an innovative model capable of inferring facts about unseen relations using limited reference data.

The paper evaluates FSRL against existing techniques for KG completion, such as RESCAL, TransE, DistMult, and ComplEx, along with state-of-the-art neighbor encoder models. The primary contribution is the introduction of a robust method that combines heterogeneous neighbor encoding with few-shot learning principles for enhanced KG completion. The model constructs entity embeddings using a relation-aware heterogeneous neighbor encoder, which employs an attention mechanism to differentiate the significance of various relational neighbors. This encoder enables precise characterizations of entities within KGs by considering disparities in neighbor influence based on relational specificity.
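The attention-weighted neighbor aggregation described above can be sketched as follows. This is a simplified illustration, not the paper's exact parameterization: the shapes, the concatenation of relation and neighbor embeddings, and the attention vector `u` are assumptions for the sketch.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def encode_entity(entity_emb, neighbors, rel_embs, u):
    """Relation-aware neighbor encoding (simplified sketch).

    neighbors: list of (rel_id, neighbor_emb) pairs for one entity.
    rel_embs:  dict mapping rel_id -> relation embedding.
    u:         attention parameter vector (hypothetical name).
    Each neighbor is scored by an attention over its (relation, neighbor)
    features, so neighbors reached via different relations can contribute
    with different weights.
    """
    if not neighbors:
        return entity_emb
    feats = np.stack([np.concatenate([rel_embs[r], n]) for r, n in neighbors])
    scores = feats @ u                                  # one score per neighbor
    alpha = softmax(scores)                             # normalize over neighbors
    agg = alpha @ np.stack([n for _, n in neighbors])   # weighted sum of neighbors
    return np.tanh(agg)                                 # nonlinear output embedding
```

The key design point is that the attention scores depend on the relation embedding as well as the neighbor embedding, so the same neighbor entity can matter more or less depending on how it is connected.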

FSRL aggregates reference entity pairs through a recurrent autoencoder network that models the interactions among these instances, bolstering the expressiveness of the reference set. The matching network then employs a recurrent mechanism to gauge the similarity between query pairs and the aggregated reference set. This methodological choice enables the model to effectively rank candidate entities for unknown relation-based queries, leveraging few-shot learning paradigms within KGs.
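The aggregate-then-match flow can be illustrated with a minimal sketch. Two simplifications are assumed here: mean pooling stands in for the paper's recurrent autoencoder aggregator, and cosine similarity stands in for its recurrent matching network; `pair_fn` is a hypothetical function producing a pair embedding from head and tail embeddings.

```python
import numpy as np

def aggregate_references(ref_pairs):
    """Pool the few-shot reference pair embeddings into one vector.
    (Mean pooling here; the paper uses a recurrent autoencoder.)"""
    return np.mean(ref_pairs, axis=0)

def rank_candidates(head_emb, candidate_embs, ref_agg, pair_fn):
    """Score each candidate tail by similarity of its (head, tail) pair
    embedding to the aggregated reference, then rank descending."""
    scores = []
    for tail_emb in candidate_embs:
        p = pair_fn(head_emb, tail_emb)
        denom = np.linalg.norm(p) * np.linalg.norm(ref_agg) + 1e-9
        scores.append(float(p @ ref_agg) / denom)       # cosine similarity
    order = np.argsort(scores)[::-1]                     # best candidate first
    return order, scores
```

In the actual model, the matching step is itself recurrent, refining the query representation over several steps before the final similarity is computed; the sketch above keeps only the single-step ranking logic.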

The experimental evaluation shows that FSRL consistently surpasses the baseline approaches on both datasets, achieving higher Hits@k and Mean Reciprocal Rank (MRR). Notably, FSRL remains accurate when predicting facts for relations with only a handful of reference entity pairs, and the ablation studies confirm that each component of the model contributes to this performance.
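Both reported metrics are standard link-prediction measures computed from the rank of the true entity among the candidates. Given the 1-based rank of the correct tail entity for each query, they can be computed as:

```python
def mrr_and_hits(ranks, k=10):
    """Compute Mean Reciprocal Rank and Hits@k.

    ranks: iterable of 1-based ranks of the true entity per query.
    MRR averages 1/rank; Hits@k is the fraction of queries whose
    true entity appears in the top k.
    """
    ranks = list(ranks)
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = sum(r <= k for r in ranks) / len(ranks)
    return mrr, hits
```

For example, ranks of 1, 2, and 10 give an MRR of (1 + 0.5 + 0.1) / 3 ≈ 0.533 and Hits@10 of 1.0.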

Looking ahead, the authors suggest incorporating model-agnostic meta-learning frameworks or leveraging contextual information, such as entity attributes or descriptive texts, to improve entity embedding quality. Such extensions could further strengthen the model in practical KG completion settings.

Overall, the innovative design and empirical success of FSRL underscore its significance in advancing the few-shot learning paradigm for knowledge graph completion, offering a scalable solution to the pervasive issue of KG incompleteness in the domain of NLP.

Authors (6)
  1. Chuxu Zhang (51 papers)
  2. Huaxiu Yao (103 papers)
  3. Chao Huang (244 papers)
  4. Meng Jiang (126 papers)
  5. Zhenhui Li (34 papers)
  6. Nitesh V. Chawla (111 papers)
Citations (180)