Meta Relational Learning for Few-Shot Link Prediction in Knowledge Graphs (1909.01515v1)

Published 4 Sep 2019 in cs.CL and stat.ML

Abstract: Link prediction is an important way to complete knowledge graphs (KGs), while embedding-based methods, effective for link prediction in KGs, perform poorly on relations that only have a few associative triples. In this work, we propose a Meta Relational Learning (MetaR) framework to do the common but challenging few-shot link prediction in KGs, namely predicting new triples about a relation by only observing a few associative triples. We solve few-shot link prediction by focusing on transferring relation-specific meta information to make model learn the most important knowledge and learn faster, corresponding to relation meta and gradient meta respectively in MetaR. Empirically, our model achieves state-of-the-art results on few-shot link prediction KG benchmarks.

Authors (5)
  1. Mingyang Chen (45 papers)
  2. Wen Zhang (170 papers)
  3. Wei Zhang (1489 papers)
  4. Qiang Chen (98 papers)
  5. Huajun Chen (198 papers)
Citations (172)

Summary

Meta Relational Learning for Few-Shot Link Prediction in Knowledge Graphs

The paper introduces a novel framework, Meta Relational Learning (MetaR), designed to address the few-shot link prediction problem in knowledge graphs (KGs). Traditional knowledge graph embedding methods typically rely on a substantial number of examples for effective training. However, many relations within KGs, especially in large datasets like Wikidata, remain under-represented, posing a significant challenge for link prediction methods that depend on abundant data. The MetaR framework innovatively circumvents this limitation by employing a meta-learning approach that leverages relation-specific meta information to achieve efficient learning from limited relational examples.

Methodology Overview

MetaR focuses on transferring relation-specific meta information to boost the model's learning capability. Two types of meta information are central to this scheme: relation meta and gradient meta. The framework's architecture comprises a relation-meta learner and an embedding learner.

  • Relation-Meta Learner: This component extracts relation meta information from the entity pairs in the support set, serving as a condensed representation of the common knowledge associated with a specific relation.
  • Embedding Learner: This component uses the learned relation meta to score the triples in the query set. The gradient meta, the gradient of the support-set loss with respect to the relation meta, provides a rapid one-step update mechanism, thereby expediting learning.

The framework refines the relation meta by applying the gradient meta computed on the support set before scoring the query set, ensuring that it captures the most pertinent features for accurate predictions.
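As a loose sketch (not the authors' code), the interaction between the relation-meta learner, a TransE-style embedding learner, and the gradient meta can be illustrated in a few lines. The relation-meta learner, an MLP over entity-pair embeddings in the paper, is simplified here to an average of tail-minus-head differences:

```python
import numpy as np

def score(h, r, t):
    # Embedding-learner score, TransE-style: ||h + r - t|| (lower is better).
    return np.linalg.norm(h + r - t)

def relation_meta(support_pairs):
    # Relation-meta learner, simplified: average translation t - h over the
    # few support pairs (the paper uses a learned MLP over pair embeddings).
    return np.mean([t - h for h, t in support_pairs], axis=0)

def gradient_meta_update(r_meta, support_pairs, lr=0.1):
    # Gradient meta: one gradient step of the squared support-set loss
    # w.r.t. the relation meta, using d/dr ||h + r - t||^2 = 2 (h + r - t).
    grad = np.mean([2 * (h + r_meta - t) for h, t in support_pairs], axis=0)
    return r_meta - lr * grad
```

The updated relation meta is then used to score the query triples; this rapid per-task update is applied at test time, while the learner's own parameters are trained across tasks.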

Empirical Evaluation

The MetaR framework was evaluated on two few-shot link prediction datasets, NELL-One and Wiki-One, both curated to represent real-world few-shot scenarios with varying entity sparsity. The empirical results demonstrated that MetaR achieved state-of-the-art performance on key metrics including Mean Reciprocal Rank (MRR) and Hits@N (where N is 1, 5, or 10), notably surpassing the previous benchmark established by the GMatching method, which relies heavily on background knowledge graphs.
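For reference, both reported metrics can be computed from the 1-based rank of the correct entity among all candidate completions for each query; a minimal sketch, not tied to either dataset:

```python
def mrr_and_hits(ranks, n=10):
    """Mean Reciprocal Rank and Hits@N from 1-based ranks of correct entities."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)          # average of 1/rank
    hits = sum(1 for r in ranks if r <= n) / len(ranks)     # fraction ranked in top N
    return mrr, hits
```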

Key Findings

  1. Effectiveness of Meta Information: The ablation studies confirmed that both relation meta and gradient meta are crucial to MetaR's performance, with relation meta contributing slightly more than gradient meta.
  2. Robustness Against Entity Sparsity: MetaR's independence from background knowledge graphs underscores its robustness, particularly in conditions where background graphs are unavailable or incomplete.
  3. Impact of Training Sample Size: Performance varied with the number of training tasks and the sparsity of entities in the datasets, highlighting the influence of dataset characteristics on model efficacy.

Implications and Future Directions

The proposed MetaR framework represents a significant advancement in few-shot learning paradigms, particularly within the field of knowledge graphs. It offers a scalable solution for extending the reach of KGs to underrepresented relations without requiring extensive pre-existing datasets. From a theoretical perspective, the utilization of relation-specific meta learning could open new research directions exploring its applications to various domain-specific knowledge extraction tasks.

Looking forward, extending MetaR to multi-hop reasoning and integrating it with graph neural networks are promising avenues for improved inference over sparse relational data. Additionally, dynamic meta-learning strategies that adaptively adjust meta parameters based on task complexity could further improve performance across diverse, real-time scenarios.