ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning (2309.01538v3)
Abstract: Logical rules are essential for uncovering the logical connections between relations, which can improve reasoning performance and provide interpretable results on knowledge graphs (KGs). Although many efforts have been made to mine meaningful logical rules over KGs, existing methods suffer from computationally intensive searches over the rule space and a lack of scalability to large-scale KGs. Moreover, they often ignore the semantics of relations, which are crucial for uncovering logical connections. Recently, large language models (LLMs) have shown impressive performance in natural language processing and various other applications, owing to their emergent abilities and generalizability. In this paper, we propose ChatRule, a novel framework that unleashes the power of LLMs for mining logical rules over knowledge graphs. Specifically, the framework begins with an LLM-based rule generator that leverages both the semantic and structural information of KGs to prompt LLMs to generate logical rules. To refine the generated rules, a rule ranking module estimates rule quality by incorporating facts from existing KGs. Finally, the ranked rules are used to conduct reasoning over KGs. ChatRule is evaluated on four large-scale KGs with respect to different rule quality metrics and downstream tasks, demonstrating the effectiveness and scalability of our method.
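Since the paper's code is not reproduced here, the following is a minimal Python sketch of the rule-ranking idea the abstract describes: candidate chain rules (of the kind an LLM might propose) are scored by support and confidence against the facts of a KG. The toy triples, relation names, and the `score_rule` helper are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

# Toy KG: (subject, relation, object) triples. All entity and
# relation names are illustrative, not from the paper's datasets.
TRIPLES = [
    ("alice", "mother_of",  "bob"),
    ("alice", "mother_of",  "carol"),
    ("bob",   "brother_of", "carol"),
    ("dave",  "mother_of",  "erin"),
    ("erin",  "brother_of", "frank"),  # no matching head fact -> lowers confidence
]

# Index the KG by relation for fast chain walking.
by_rel = defaultdict(list)
for s, r, o in TRIPLES:
    by_rel[r].append((s, o))

def body_groundings(body, by_rel):
    """All (x, y) entity pairs connected by the chain of body relations."""
    pairs = set(by_rel[body[0]])
    for rel in body[1:]:
        nexts = defaultdict(set)
        for s, o in by_rel[rel]:
            nexts[s].add(o)
        pairs = {(x, z) for x, y in pairs for z in nexts.get(y, ())}
    return pairs

def score_rule(head, body, by_rel):
    """Support and confidence of head(x, y) <- body_1(x, .) ... body_n(., y)."""
    grounded = body_groundings(body, by_rel)
    head_facts = set(by_rel[head])
    support = len(grounded & head_facts)           # groundings that are KG facts
    confidence = support / len(grounded) if grounded else 0.0
    return support, confidence

# A candidate rule an LLM might generate from sampled KG paths:
#   mother_of(x, y) <- mother_of(x, z) AND brother_of(z, y)
print(score_rule("mother_of", ["mother_of", "brother_of"], by_rel))
# -> (1, 0.5): one supporting head fact out of two body groundings
```

Ranking candidate rules by such measures before applying them for reasoning mirrors the refinement step the abstract describes, though the paper's actual quality metrics may differ from this simplified support/confidence pair.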
- Linhao Luo
- Jiaxin Ju
- Bo Xiong
- Yuan-Fang Li
- Gholamreza Haffari
- Shirui Pan