
One-Shot Relational Learning for Knowledge Graphs (1808.09040v1)

Published 27 Aug 2018 in cs.CL

Abstract: Knowledge graphs (KGs) are the key components of various natural language processing applications. To further expand KGs' coverage, previous studies on knowledge graph completion usually require a large number of training instances for each relation. However, we observe that long-tail relations are actually more common in KGs and those newly added relations often do not have many known triples for training. In this work, we aim at predicting new facts under a challenging setting where only one training instance is available. We propose a one-shot relational learning framework, which utilizes the knowledge extracted by embedding models and learns a matching metric by considering both the learned embeddings and one-hop graph structures. Empirically, our model yields considerable performance improvements over existing embedding models, and also eliminates the need of re-training the embedding models when dealing with newly added relations.

Authors (5)
  1. Wenhan Xiong (47 papers)
  2. Mo Yu (117 papers)
  3. Shiyu Chang (120 papers)
  4. Xiaoxiao Guo (38 papers)
  5. William Yang Wang (254 papers)
Citations (198)

Summary

One-Shot Relational Learning for Knowledge Graphs: Summary and Implications

The paper "One-Shot Relational Learning for Knowledge Graphs" presents an approach to knowledge graph (KG) completion designed for one-shot learning scenarios. Traditional KG completion methods typically require extensive training data to predict missing facts, and they fall short on sparsely represented long-tail relations. The authors address this challenge with a framework that predicts new facts from minimal data, specifically a single training instance per relation.

Summary of Methodology

The authors introduce a one-shot relational learning framework that leverages entity embeddings alongside local graph structures to predict new relations. Central to the model is a similarity matching metric learned through a permutation-invariant network architecture, paired with a recurrent neural network (RNN) that enables multi-step matching between entity pairs. The proposed model's architecture comprises two integral components:

  1. Neighbor Encoder: This module enhances entity representations by aggregating information from one-hop neighbors using a mean-pooling strategy, thereby incorporating local structural information of the graph into the entity embeddings.
  2. Matching Processor: An RNN-based matching processor processes encoded entity representations to compute similarity scores between the reference triple and candidate triples.
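The two components above can be sketched as follows. This is a simplified illustration under stated assumptions, not the authors' exact architecture: the paper uses learned linear layers and an LSTM-based matching processor, whereas here the neighbor encoder is a bare mean-pool plus `tanh`, and the multi-step matching is an iterative projection toward the support pair; function names and dimensions are hypothetical.

```python
import numpy as np

def neighbor_encoder(entity_emb, neighbor_embs):
    """Mean-pool one-hop neighbor embeddings and fold them into the
    entity's own embedding (stand-in for the paper's learned encoder)."""
    pooled = neighbor_embs.mean(axis=0)
    return np.tanh(entity_emb + pooled)

def encode_pair(head, tail, head_neighbors, tail_neighbors):
    """Represent an (head, tail) entity pair as the concatenation of the
    two neighbor-encoded entity vectors."""
    return np.concatenate([neighbor_encoder(head, head_neighbors),
                           neighbor_encoder(tail, tail_neighbors)])

def multi_step_match(query, support, steps=3):
    """Refine the query representation toward the support pair over several
    steps, then score by cosine similarity (stand-in for the LSTM matcher)."""
    h = query.copy()
    for _ in range(steps):
        proj = support * (h @ support) / (support @ support + 1e-8)
        h = np.tanh(h + proj)
    return float(h @ support / (np.linalg.norm(h) * np.linalg.norm(support) + 1e-8))
```

In a one-shot evaluation, each candidate tail entity would be paired with the query head, encoded with `encode_pair`, scored against the single reference triple's encoding via `multi_step_match`, and ranked by score.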

The effectiveness of this model is demonstrated on two newly constructed datasets derived from NELL and Wikidata, showcasing consistent improvements over baseline embedding models in a challenging one-shot setting.

Key Results

The empirical evaluation spans multiple experiments, including comparisons with several established KG embedding models such as RESCAL and TransE. The proposed method consistently outperforms these benchmarks in terms of mean reciprocal rank (MRR) and Hits@K metrics across both datasets. Notably, the authors report the model's robustness to unseen relations without requiring retraining, a significant advantage over conventional models that necessitate retraining to address new relations.
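For reference, both metrics follow directly from the (1-based) rank that the correct entity receives among all candidates for each test query; the helper below is a generic sketch, not the paper's evaluation code.

```python
def mrr_and_hits(ranks, ks=(1, 5, 10)):
    """Compute mean reciprocal rank and Hits@K from a list of 1-based
    ranks of the correct answer for each test query."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = {k: sum(r <= k for r in ranks) / len(ranks) for k in ks}
    return mrr, hits
```

For example, ranks of [1, 2, 10] over three queries give an MRR of (1 + 1/2 + 1/10) / 3 ≈ 0.533, with Hits@1 = 1/3 and Hits@10 = 1.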

Implications and Future Directions

The research offers both practical and theoretical implications. Practically, the ability of the model to perform well on newly added relations with minimal data makes it an excellent candidate for dynamically evolving real-world KGs. The model's adaptability could greatly reduce human effort in KG maintenance, especially in scenarios where new information is frequently integrated.

Theoretically, this paper introduces a unique facet of few-shot learning into the domain of relational data, paving the way for more adaptive and resilient KG completion methodologies. Future research could explore extending this framework to accommodate few-shot settings with more than one example, potentially leveraging attention mechanisms to aggregate multiple support examples. Furthermore, integrating external unstructured data, such as text descriptions, could enhance the model’s usability in open-world settings with unseen entities.
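The suggested few-shot extension could, for instance, pool several support examples with softmax attention over their similarities to the query. The sketch below is hypothetical, one of many possible aggregation schemes, and not from the paper.

```python
import numpy as np

def attention_aggregate(query, supports):
    """Score a query embedding against K support embeddings and combine
    the per-support similarities with softmax attention weights."""
    sims = supports @ query                # raw similarity to each support
    w = np.exp(sims - sims.max())
    w /= w.sum()                           # softmax attention weights
    return float(w @ sims)                 # attention-weighted matching score
```

Supports that better match the query receive higher weight, so a single noisy support example has less influence than under uniform averaging.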

In conclusion, this paper provides a robust solution to one of the pressing challenges in KG completion: handling sparse, long-tail relations effectively. By leveraging local graph structures and efficient metric learning, it sets a new precedent for minimal-data learning in KGs, promising improvements in both scalability and adaptability of relational learning models.