
An original model for multi-target learning of logical rules for knowledge graph reasoning (2112.06189v2)

Published 12 Dec 2021 in cs.AI and cs.LG

Abstract: Large-scale knowledge graphs provide structured representations of human knowledge. However, since it is impossible to collect all knowledge, knowledge graphs are usually incomplete. Reasoning over existing facts paves the way to discovering missing ones. In this paper, we study the problem of learning logical rules for reasoning on knowledge graphs to complete missing factual triplets. Learning logical rules equips a model with strong interpretability as well as the ability to generalize to similar tasks. We propose a model that makes full use of the training data and also handles multi-target scenarios. In addition, given the deficiencies of existing ways to evaluate model performance and the quality of mined rules, we further propose two novel indicators to address this problem. Experimental results empirically demonstrate that our model outperforms state-of-the-art methods on five benchmark datasets. The results also confirm the effectiveness of the indicators.
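To make the setting concrete, here is a minimal sketch (not the paper's actual algorithm) of how a mined chain rule such as born_in(X,Y) ∧ located_in(Y,Z) ⇒ nationality(X,Z) can be applied to a toy knowledge graph to infer a missing triplet; the relation names and helper function are illustrative assumptions:

```python
from collections import defaultdict

def apply_chain_rule(triples, body, head):
    """Infer head-relation triples by composing the body relations in order.

    triples: set of (subject, relation, object) facts
    body:    list of relation names forming the rule body, e.g. ["born_in", "located_in"]
    head:    relation name implied by the rule, e.g. "nationality"
    """
    # Index triples by relation for fast traversal: rel -> {subject: {objects}}
    index = defaultdict(lambda: defaultdict(set))
    for s, r, o in triples:
        index[r][s].add(o)

    inferred = set()
    # Start from every subject entity and follow the body relations in sequence.
    for s in {t[0] for t in triples}:
        frontier = {s}
        for rel in body:
            frontier = {o for node in frontier for o in index[rel][node]}
        for o in frontier:
            if (s, head, o) not in triples:  # keep only genuinely new facts
                inferred.add((s, head, o))
    return inferred

kg = {
    ("alice", "born_in", "paris"),
    ("paris", "located_in", "france"),
}
new_facts = apply_chain_rule(kg, body=["born_in", "located_in"], head="nationality")
# new_facts == {("alice", "nationality", "france")}
```

The appeal of rule-based completion, as the abstract notes, is that each inferred fact comes with a human-readable explanation (the rule that produced it), unlike embedding-only approaches.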

Authors (5)
  1. Yuliang Wei (6 papers)
  2. Haotian Li (72 papers)
  3. Guodong Xin (3 papers)
  4. Yao Wang (331 papers)
  5. Bailing Wang (8 papers)
