
Iteratively Learning Embeddings and Rules for Knowledge Graph Reasoning (1903.08948v1)

Published 21 Mar 2019 in cs.AI and cs.CL

Abstract: Reasoning is essential for the development of large knowledge graphs, especially for completion, which aims to infer new triples based on existing ones. Both rules and embeddings can be used for knowledge graph reasoning and they have their own advantages and difficulties. Rule-based reasoning is accurate and explainable but rule learning with searching over the graph always suffers from efficiency due to huge search space. Embedding-based reasoning is more scalable and efficient as the reasoning is conducted via computation between embeddings, but it has difficulty learning good representations for sparse entities because a good embedding relies heavily on data richness. Based on this observation, in this paper we explore how embedding and rule learning can be combined together and complement each other's difficulties with their advantages. We propose a novel framework IterE iteratively learning embeddings and rules, in which rules are learned from embeddings with proper pruning strategy and embeddings are learned from existing triples and new triples inferred by rules. Evaluations on embedding qualities of IterE show that rules help improve the quality of sparse entity embeddings and their link prediction results. We also evaluate the efficiency of rule learning and quality of rules from IterE compared with AMIE+, showing that IterE is capable of generating high quality rules more efficiently. Experiments show that iteratively learning embeddings and rules benefit each other during learning and prediction.

Iteratively Learning Embeddings and Rules for Knowledge Graph Reasoning

The paper "Iteratively Learning Embeddings and Rules for Knowledge Graph Reasoning" presents a novel framework named IterE, which addresses challenges in knowledge graph reasoning by integrating the strengths of embedding-based and rule-based reasoning. Knowledge graphs (KGs) store vast amounts of factual data as triples, and reasoning over them is needed to infer new knowledge and maintain consistency. Two primary reasoning methodologies serve this purpose: embeddings, which map entities and relations into vector spaces for efficient computation, and rules, which provide precise, explainable logical deductions.

Overview of IterE Framework

IterE leverages the respective advantages of embeddings and rules through an iterative process that enhances the reasoning capabilities of KGs. It operates in three core stages: embedding learning, axiom induction, and axiom injection. During embedding learning, IterE trains on inferred triples alongside existing ones, enriching the representations of sparse entities and addressing the data-sparsity problem inherent to embedding-based reasoning. In turn, axiom induction exploits the vector-space properties of the embedding model to generate and score candidate axioms, efficiently pruning the extensive search space typical of rule learning.
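The three-stage loop can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: the function bodies are hypothetical stand-ins, and the toy entities, relations, and axiom are invented for demonstration.

```python
# Hypothetical sketch of IterE's three-stage iteration; the helper
# bodies are illustrative stand-ins, not the paper's actual code.

def learn_embeddings(triples):
    # Stand-in: IterE trains entity/relation embeddings under a
    # linear-map assumption on the current set of triples.
    return {"trained_on": len(triples)}

def induce_axioms(embeddings):
    # Stand-in: IterE scores and prunes candidate OWL 2 axioms in
    # vector space; here we return one fixed toy axiom.
    return [("subPropertyOf", "bornIn", "livesIn")]

def inject_axioms(axioms, triples):
    # Apply each axiom to derive new triples not yet in the graph.
    inferred = set()
    for kind, r1, r2 in axioms:
        if kind == "subPropertyOf":
            inferred |= {(h, r2, t) for (h, r, t) in triples if r == r1}
    return inferred - triples

triples = {("alice", "bornIn", "paris"), ("bob", "worksIn", "lyon")}
for _ in range(3):  # a few IterE iterations
    emb = learn_embeddings(triples)
    axioms = induce_axioms(emb)
    triples |= inject_axioms(axioms, triples)

print(sorted(triples))
```

The key design point is the cycle itself: rule-derived triples feed the next round of embedding learning, and the refreshed embeddings guide the next round of axiom induction.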

The linear map assumption for embeddings allows IterE to perform axiom induction directly in the vector space, instantiating rules from axiom templates defined in the OWL 2 Web Ontology Language. The axioms chosen pertain specifically to object properties, enabling both the deduction of new triples and the preservation of semantic consistency from a KG perspective.
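To illustrate why the linear map assumption makes induction cheap, consider a chain axiom r1(x,y) ∧ r2(y,z) → r3(x,z). If each relation r is represented as a matrix M_r with t ≈ h·M_r, the axiom corresponds to M_r1·M_r2 ≈ M_r3 and can be scored by matrix similarity alone, without traversing the graph. The scoring function below is a hedged simplification (an inverse-distance similarity), not the paper's exact formula:

```python
import numpy as np

# Sketch: under the linear-map assumption, a chain axiom
# r1(x,y) AND r2(y,z) -> r3(x,z) holds when M_r1 @ M_r2 ~= M_r3.
rng = np.random.default_rng(0)
d = 4
M = {"r1": rng.normal(size=(d, d)), "r2": rng.normal(size=(d, d))}
M["r3"] = M["r1"] @ M["r2"]          # r3 truly composes r1 and r2
M["r4"] = rng.normal(size=(d, d))    # an unrelated relation

def axiom_score(r1, r2, r3):
    """Similarity of M_r1 @ M_r2 to M_r3 (1.0 = perfect match);
    an illustrative metric, not the paper's scoring function."""
    diff = M[r1] @ M[r2] - M[r3]
    return 1.0 / (1.0 + np.linalg.norm(diff))

print(axiom_score("r1", "r2", "r3"))  # high: axiom plausible
print(axiom_score("r1", "r2", "r4"))  # low: axiom implausible
```

High-scoring candidates are kept as axioms; the rest are pruned, which is how the vector space trims the rule-search space.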

Experimental Evaluation and Numerical Results

Experiments with IterE on datasets such as WN18-sparse and FB15k-237-sparse demonstrate its capability to improve link prediction performance and generate quality rules. On link prediction tasks, IterE achieves higher Mean Reciprocal Rank (MRR) scores and Hits@10 percentages than baselines such as TransE, ComplEx, and DistMult, evidencing its efficacy in mitigating the sparsity that hampers traditional embedding methods and thereby refining sparse entity representations. Notably, IterE also learns rules more efficiently than the baseline AMIE+, reducing computational time significantly while producing a greater yield of high-quality rules.
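For reference, the two reported metrics are straightforward to compute from the rank assigned to each correct entity in a prediction: MRR is the mean of reciprocal ranks, and Hits@10 is the fraction of correct entities ranked in the top ten. The rank values below are toy data for illustration:

```python
# Standard link-prediction metrics computed from per-query ranks.

def mrr(ranks):
    """Mean Reciprocal Rank: average of 1/rank over all queries."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at(ranks, k=10):
    """Hits@k: fraction of queries whose correct entity ranks <= k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 3, 12, 2, 50]       # toy ranks of the correct entities
print(round(mrr(ranks), 3))     # -> 0.387
print(hits_at(ranks, 10))       # -> 0.6
```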

Implications and Future Prospects

The paper presents a compelling argument for integrating deductive and inductive reasoning within KG ecosystems. With IterE, there is potential to markedly advance tasks like link prediction and KG completion, where data richness is heterogeneous. The iterative approach provides an innovative pathway for learning that scales efficiently with increasing KG complexity. Moreover, potential future applications may include greater integration of diverse axiom types and exploring broader definitions beyond OWL 2 for richer rule structures, enhancing graph-based AI systems.

As AI continues to expand into complex data environments, frameworks like IterE indicate promising directions, encouraging closer synergy between established reasoning mechanisms. The methodology opens avenues for more robust KG operations where optimal data representation coexists with logical consistency. Future research could focus on the adaptability of such models to various KGs and the automation of hyperparameter tuning processes to further increase practical scalability and efficiency.

Authors (8)
  1. Wen Zhang
  2. Bibek Paudel
  3. Liang Wang
  4. Jiaoyan Chen
  5. Hai Zhu
  6. Wei Zhang
  7. Abraham Bernstein
  8. Huajun Chen
Citations (190)