Inductive Learning of Logical Theories with LLMs: A Complexity-graded Analysis (2408.16779v1)

Published 15 Aug 2024 in cs.CL, cs.AI, and cs.LO

Abstract: This work presents a novel systematic methodology for analysing the capabilities and limitations of LLMs on logic theory induction, with feedback from a formal inference engine. The analysis is complexity-graded w.r.t. rule dependency structure, allowing the quantification of specific inference challenges on LLM performance. Integrating LLMs with formal methods is a promising frontier in Natural Language Processing, as an important avenue for improving model inference control and explainability. In particular, inductive learning over complex sets of facts and rules poses unique challenges for current autoregressive models, as they lack explicit symbolic grounding. While they can be complemented by formal systems, the properties delivered by LLMs regarding inductive learning are not well understood and quantified. Empirical results indicate that the largest LLMs can achieve competitive results against a SOTA Inductive Logic Programming (ILP) system baseline, but also that tracking long predicate relationship chains is a more difficult obstacle for LLMs than theory complexity.
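To make the verification step concrete: in the setting the abstract describes, a candidate theory induced by an LLM is checked against labelled examples by a formal inference engine. The following is a minimal, hypothetical sketch of that step, using a naive Datalog-style forward-chaining evaluator; all predicates, rules, and function names here are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: verify an LLM-induced theory (a set of rules)
# against labelled facts with a naive forward-chaining inference engine.
# Atoms are tuples: ("predicate", arg1, arg2, ...); uppercase args are variables.
from itertools import product

def forward_chain(facts, rules, max_iters=10):
    """Derive all facts entailed by Datalog-style rules (no negation)."""
    derived = set(facts)
    for _ in range(max_iters):
        new = set()
        for head, body in rules:
            # naive join: try all bindings of rule variables over known constants
            consts = {c for f in derived for c in f[1:]}
            vars_ = sorted({t for atom in body for t in atom[1:] if t.isupper()})
            for binding in product(consts, repeat=len(vars_)):
                env = dict(zip(vars_, binding))
                ground = lambda atom: (atom[0],) + tuple(env.get(t, t) for t in atom[1:])
                if all(ground(a) in derived for a in body):
                    new.add(ground(head))
        if new <= derived:  # fixpoint reached
            break
        derived |= new
    return derived

# Illustrative induced rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
facts = {("parent", "ann", "bob"), ("parent", "bob", "cara")}
rules = [(("grandparent", "X", "Z"),
          [("parent", "X", "Y"), ("parent", "Y", "Z")])]

entailed = forward_chain(facts, rules)
# The engine's verdict on positive/negative examples gives the feedback signal.
assert ("grandparent", "ann", "cara") in entailed
```

In the methodology the abstract outlines, the engine's accept/reject verdicts on examples like these would be fed back to the LLM to refine its candidate rules; longer predicate chains (e.g. great-grandparent) correspond to the deeper rule dependency structures the paper grades complexity by.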

Authors (3)
  1. João Pedro Gandarela (2 papers)
  2. Danilo S. Carvalho (23 papers)
  3. André Freitas (156 papers)
Citations (1)
