
PrOntoQA Benchmark

Updated 30 December 2025
  • PrOntoQA is a synthetic question-answering benchmark that evaluates large language models using formal, multi-hop reasoning tests derived from a first-order logic world model.
  • It generates controlled synthetic tasks by sampling ontologies, constructing logical proofs, and converting them into natural language to assess chain-of-thought accuracy.
  • Recent advances incorporate ATP-augmented neuro-symbolic architectures and error correction modules, significantly improving proof accuracy and reducing semantic errors.

PrOntoQA is a synthetic question-answering benchmark designed to rigorously evaluate the compositional and multi-hop reasoning capabilities of LLMs in structured, formally defined environments. Each PrOntoQA example is generated from a synthetic world model represented in first-order logic, which allows model-generated chains of thought (CoTs) to be systematically parsed into symbolic proofs for granular error analysis. The benchmark provides controlled synthetic tasks ("steamroller problems") with varying depth, ontology type, and distractor sentences to differentiate genuine logical inference from heuristic shortcuts or memorization effects (Saparov et al., 2022; McGinness et al., 2024).

1. Formal Construction and Logical Foundations

PrOntoQA instances are constructed via a formal world model with a first-order signature $\Sigma = (T, P, F)$, where $T$ is a finite set of object constants, $P$ a set of unary predicate and negated-property symbols, and $F = \emptyset$ in the core benchmark (i.e., no non-constant function symbols). Atomic formulas take the form $p(t)$ with $p \in P$, $t \in T$. The ontology consists of axioms of the forms:

  • $\forall x\ (p(x) \to q(x))$ ("Every $p$ is a $q$")
  • $\forall x\ (p(x) \to \neg r(x))$ ("Every $p$ is not $r$")
  • $p(a)$ ("$a$ is a $p$")

Deduction is performed by chained applications of a restricted proof calculus, the "Hop" rule (universal instantiation followed by modus ponens):

$$\frac{\forall x\, (p(x) \to q(x)) \qquad p(c)}{q(c)}\ \textsc{Hop}$$

This design allows every CoT step produced by an LLM to be parsed into a sequence of formal proof steps, enabling symbolic tracking and error categorization.
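Because the calculus contains only the Hop rule over unary atoms, its deductive closure can be computed mechanically. The following is a minimal illustrative sketch (not the benchmark's own code) of forward chaining with Hop; the predicate and constant names are invented for the example.

```python
# Minimal sketch of the Hop rule: universal instantiation + modus ponens.
# Rules are pairs (p, q) meaning "forall x (p(x) -> q(x))"; a predicate
# prefixed with "~" stands for a negated property. Illustrative only.

def hop_closure(rules, facts):
    """Repeatedly apply Hop until no new atomic facts can be derived."""
    derived = set(facts)                      # facts are (predicate, constant) pairs
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            for pred, const in list(derived):
                if pred == p and (q, const) not in derived:
                    derived.add((q, const))   # q(const) by Hop
                    changed = True
    return derived

# "Every cat is a carnivore. Every carnivore is not herbivorous. Fae is a cat."
rules = [("cat", "carnivore"), ("carnivore", "~herbivorous")]
facts = {("cat", "Fae")}
closure = hop_closure(rules, facts)
print(("~herbivorous", "Fae") in closure)     # True: Fae is not herbivorous
```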

2. Benchmark Generation Pipeline

Construction of PrOntoQA test cases proceeds through several controlled steps:

  • Ontology Sampling: Random “trees” (depth $3 \leq D \leq 10$) are sampled, with nodes labeled conceptually (fictional or real-world). Each tree edge $\mathit{parent} \to \mathit{child}$ yields an axiom $\forall x\ (\mathit{child}(x) \to \mathit{parent}(x))$. Individual nodes may have “negative properties” for additional axioms $\forall x\ (\mathit{node}(x) \to \neg \mathit{prop}(x))$.
  • Proof Construction: Leaf concepts and constants are selected. The benchmark emits a $k$-step proof by walking up the ontology tree, chaining applications of Hop, optionally producing negated-property targets.
  • Natural-Language Instantiation: Logical facts and rules are converted to simple English templates ("Every cat is a carnivore.", "True or false: Fae is a carnivore?"), enabling direct LLM interfacing. Distractor sentences are inserted to eliminate string-matching shortcuts. A simplified sketch of this pipeline follows the list.
  • Controlled Experimental Conditions: Parameters varied include proof depth ($k \in \{1,3,5\}$), traversal order (preorder vs. postorder), and ontology type (fictional, “true” real-world, “false” real-world). For each setting, hundreds of test cases are generated per model variant (Saparov et al., 2022).
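The following is a simplified, illustrative sketch of such a generation pipeline, assuming a purely positive linear chain of fictional concepts; the official PrOntoQA scripts additionally support richer trees, negated properties, and distractors.

```python
import random

# Illustrative sketch only (not the official generator): sample a linear
# ontology chain, pick a proof depth k, and render facts, rules, and the
# query with simple English templates.

CONCEPTS = ["wumpus", "vumpus", "tumpus", "numpus", "dumpus", "rompus"]

def generate_example(k=3, entity="Fae"):
    chain = random.sample(CONCEPTS, k + 1)            # child -> ... -> root
    rules = list(zip(chain[:-1], chain[1:]))          # forall x (child(x) -> parent(x))
    context = [f"Every {c} is a {p}." for c, p in rules]
    random.shuffle(context)                           # avoid trivially ordered premises
    context.append(f"{entity} is a {chain[0]}.")
    question = f"True or false: {entity} is a {chain[-1]}?"
    gold_proof = [f"{entity} is a {c}." for c in chain[1:]]
    return " ".join(context), question, gold_proof

context, question, gold = generate_example(k=3)
print(context)
print(question)   # the answer is always "true" for this simplified positive chain
```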

3. Chain-of-Thought Parsing and Symbolic Proof Analysis

Each LLM-generated CoT sentence is parsed by a recursive-descent grammar into a formal logic formula $\varphi_i$. A proof state $S$ accumulates all derived formulas. The validity of each CoT step is assessed against the known ontology:

  • Strictly-valid atomic: matches a canonical Hop step in the gold proof.
  • Strictly-valid non-atomic: derives a correct formula by chaining $\geq 2$ Hops ("skip step").
  • Broadly-valid: derivable only via additional rules (e.g., transitivity).
  • Misleading/Invalid: incorrect conclusions not present in the gold proof or unreachable under the restricted calculus.

Correctness is defined as deriving the target formula $\varphi_{\mathrm{target}} \in S$ after all CoT steps are processed.

Proof-step Taxonomy

  • Strictly-valid atomic correct: matches a gold Hop step.
  • Strictly-valid atomic misleading: a valid Hop, but not on the gold path.
  • Strictly-valid non-atomic correct: chains $\geq 2$ Hops and arrives at a gold step.
  • Strictly-valid non-atomic misleading: a valid skip step with the wrong conclusion.
  • Broadly-valid correct: needs transitivity in addition to Hop to reach the gold proof.
  • Broadly-valid misleading: off the gold path, but broadly valid.
  • Invalid: no proof under the restricted or extended calculus.
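A hedged sketch of how parsed steps could be labeled against this taxonomy is shown below; it reuses the hop_closure helper and the (predicate, constant) encoding from the earlier sketch, treats steps as already parsed, and omits the broadly-valid category for brevity.

```python
# Hedged sketch of step-level labeling against the taxonomy above. Steps are
# assumed to be already parsed into (predicate, constant) pairs; hop_closure
# is the helper from the earlier sketch. Broadly-valid labeling (which needs
# extra rules such as transitivity) is omitted.

def one_hop(rules, state):
    """Atoms derivable from `state` by exactly one Hop application."""
    return {(q, c) for p, q in rules for pred, c in state if pred == p}

def classify_step(step, state, rules, gold_steps):
    atomic = one_hop(rules, state)
    reachable = hop_closure(rules, state)        # multi-Hop ("skip") derivations
    on_gold = step in gold_steps
    if step in atomic:
        return "strict-atomic-" + ("correct" if on_gold else "misleading")
    if step in reachable:
        return "strict-non-atomic-" + ("correct" if on_gold else "misleading")
    return "invalid"
```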

4. Evaluation Metrics and Error Analysis

PrOntoQA uses a set of fine-grained metrics for both step- and proof-level evaluation:

  • Step categorization: Each CoT step is labeled according to the taxonomy above.
  • Proof-level accuracy: Strict, skip, broad, and valid accuracy rates are computed by including increasingly permissive step types (a small aggregation sketch follows this list).
  • Proof-planning failure metric: At ontology branch points, this checks whether the LLM selects the gold-path premise; the first non-canonical error type and its depth are recorded.
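The aggregation itself is straightforward; the sketch below illustrates one way to compute proof-level accuracy at increasing permissiveness from per-step labels. The label names and sets are assumptions of this sketch, not the paper's exact definitions.

```python
# Hedged sketch of proof-level accuracy at increasingly permissive levels,
# mirroring the strict / skip / broad hierarchy above; the most permissive
# "valid" level would additionally admit misleading-but-valid steps.

PERMISSIVENESS = {
    "strict": {"strict-atomic-correct"},
    "skip":   {"strict-atomic-correct", "strict-non-atomic-correct"},
    "broad":  {"strict-atomic-correct", "strict-non-atomic-correct",
               "broad-correct"},
}

def proof_accuracy(labeled_proofs, level="strict"):
    """labeled_proofs: list of (step_labels, derives_target) pairs."""
    allowed = PERMISSIVENESS[level]
    ok = sum(derives_target and all(lbl in allowed for lbl in labels)
             for labels, derives_target in labeled_proofs)
    return ok / len(labeled_proofs)
```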

Empirical findings include:

  • Larger LLMs (e.g., text-davinci-002, $\sim$175B parameters) achieve high valid proof accuracy ($>90\%$) on “true” ontologies, even at five hops. Performance drops on “fictional” and “false” ontologies (to $50\%$–$65\%$ for five hops).
  • Most errors in wrong proofs are “misleading atomic” steps taking incorrect branches; recovery likelihood decays exponentially with consecutive errors.
  • Proof-ordering (postorder vs. preorder) impacts accuracy at higher depths (Saparov et al., 2022).

5. Advances: Neuro-Symbolic Reasoning and Error Correction

Recent work explores ATP-augmented LLM architectures for PrOntoQA (McGinness et al., 2024). The framework operates in three phases (a minimal orchestration sketch follows the list):

  1. LLM Front End: Translates NL problem to Prolog-style definite clauses.
  2. Symbolic Solver Back End: Uses ATPs (e.g., Beagle) or logic engines (e.g., Fusemate) for deductive inference.
  3. SEDAC Module (editor's term: Semantic Error Detection And Correction): Inspects LLM outputs for syntactic and semantic errors and auto-corrects them using shallow rewrites or deep ATP-guided correction.
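Below is a minimal orchestration sketch of these three phases. call_llm and run_solver are hypothetical placeholders (the actual interfaces of Beagle and Fusemate are not reproduced here), and the SEDAC stage is reduced to shallow syntactic repairs.

```python
import re

# Hedged sketch of the three-phase pipeline. `call_llm` and `run_solver`
# stand in for an LLM API and an ATP/logic engine; deep, ATP-guided
# semantic correction is not sketched.

def translate_to_clauses(problem_text, call_llm):
    """Phase 1: LLM front end turns the NL problem into Prolog-style clauses."""
    prompt = ("Translate each sentence into a Prolog definite clause, "
              "one clause per line:\n" + problem_text)
    return call_llm(prompt).splitlines()

def sedac(clauses):
    """Phase 3 (simplified): shallow syntactic checks and rewrites only."""
    cleaned = []
    for line in clauses:
        line = line.strip().strip("`")                 # drop chat/markdown leakage
        if not line or not re.match(r"^[a-z]\w*\(", line):
            continue                                   # discard NL leakage
        if not line.endswith("."):
            line += "."                                # add missing clause terminator
        cleaned.append(line)
    return cleaned

def solve(problem_text, query, call_llm, run_solver):
    clauses = sedac(translate_to_clauses(problem_text, call_llm))
    return run_solver(clauses, query)                  # Phase 2: symbolic deduction
```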

Error analysis reveals:

  • Syntactic errors: Symbol errors, punctuation, NL leakage, extraneous communication markers.
  • Semantic errors: Shallow (naming, quantifier omission) vs. deep (implication direction, negation).
  • Deep semantic errors dominate, especially "flip" (reversed implication direction) and "negation" mistakes; a simple flip check is sketched below.
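As an illustration of why deep errors require the source sentence for detection, the following hedged sketch checks a single rule clause for the "flip" error; the sentence template and clause syntax are assumptions of the example.

```python
import re

# Illustrative "flip" check: for a template sentence "Every P is a Q.", the
# expected clause is q(X) :- p(X); a flipped clause reverses head and body.

def check_flip(sentence, clause):
    m = re.match(r"Every (\w+) is an? (\w+)\.", sentence)
    c = re.match(r"(\w+)\(X\) :- (\w+)\(X\)\.", clause)
    if not (m and c):
        return None                       # not a rule sentence / unparsable clause
    p, q = m.group(1), m.group(2)
    head, body = c.group(1), c.group(2)
    if (head, body) == (q, p):
        return "ok"
    if (head, body) == (p, q):
        return "flip"                     # implication direction reversed
    return "mismatch"

print(check_flip("Every cat is a carnivore.", "cat(X) :- carnivore(X)."))  # flip
```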

Integration of the full SEDAC module with an ATP yields near-perfect performance for GPT-3.5/4 (accuracy $0.98$–$0.995$), with a substantial reduction in semantic errors (table below).

Prompt Strategy             GPT-3       GPT-4         Gemini-Pro
Normal                      0.48±0.06   0.83±0.12     0.47±0.04
Chain-of-Thought + 1-shot   0.65±0.15   0.94±0.04     0.74±0.12
Fusemate + full SEDAC       0.98±0.01   0.995±0.005   0.96±0.04

6. Usage Guidelines and Limitations

PrOntoQA is distributed with open-source generation scripts enabling arbitrary sampling of ontologies, depth, branching, predicate arity, and distractor density. The restricted proof calculus (Hop) supports robust parsing and error analysis, but the benchmark:

  • Only covers chained Hop (modus ponens) proofs; multi-premise reasoning and existential quantifiers are out of scope.
  • Uses syntactically simple NL, minimizing parsing overhead.
  • Real-world reasoning demands richer knowledge integration beyond current Hop-centric structure.

Benchmarking new LLMs involves generating test worlds, prompting with CoT or program translation styles, parsing outputs, and computing valid proof accuracy.
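A hedged sketch of this outer evaluation loop is given below; it measures only answer-level accuracy with a deliberately naive answer extractor, whereas valid proof accuracy would additionally require the CoT parsing and step classification sketched earlier. call_llm is a placeholder for the model API under test.

```python
# Hedged sketch of the outer benchmarking loop. With the simplified
# positive-chain generator from the earlier sketch, the gold answer is
# always "true"; answer extraction takes the last "true"/"false" mention.

def answer_accuracy(call_llm, n_examples=100, k=5):
    correct = 0
    for _ in range(n_examples):
        context, question, _gold_proof = generate_example(k=k)
        prompt = f"{context}\n{question} Think step by step."
        reply = call_llm(prompt).lower()
        correct += reply.rfind("true") > reply.rfind("false")
    return correct / n_examples
```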

Open challenges include enhancing proof-planning strategies (beam search, backtracking), extending to richer ontology domains (nested quantifiers, arithmetic), and comparing LLM versus human failure profiles on toy domains (Saparov et al., 2022, McGinness et al., 2024).

7. Research Impact and Future Directions

Findings from PrOntoQA suggest that LLMs are capable "greedy reasoners": they execute correct deduction steps but struggle with systematic proof planning in branching contexts. ATP-augmented neuro-symbolic architectures substantially close the reasoning gap and provide symbolic proof artifacts. As semantic errors outnumber syntactic ones, future prompt engineering and error-correction frameworks should target meaning preservation over syntactic fidelity.

Extensions of PrOntoQA's methodology are plausible in other reasoning benchmarks, provided the semantic content can be reliably formalized or mapped to trusted knowledge bases. Further research is directed toward richer logical domains, efficient local LLM translation, and scalable deployment strategies in practical AI reasoning systems (McGinness et al., 2024).
