
Laws of Reasoning (LoRe): Foundations in Physics & AI

Updated 22 December 2025
  • LoRe is a framework that formalizes fundamental inference principles in both physical logic and computational reasoning, integrating axiomatic logic with quantum measure theory and LRM behaviors.
  • It distinguishes between classical Boolean, quantum multiplicative logics, and normative reasoning laws in LRMs, ensuring deductive soundness while addressing interference and complexity.
  • LoRe employs measurable proxies like monotonicity and compositionality, facilitating targeted fine-tuning and empirical evaluation to optimize reasoning models in complex inference systems.

The Laws of Reasoning (LoRe) constitute a rigorous set of fundamental principles designed to formalize and constrain the permissible structures of inference, whether within physical reality, as in Sorkin's "physical logic" framework, or in the operational reasoning of large reasoning models (LRMs). LoRe connects axiomatic logic, the structure of physical and computational inference, and empirical measurement of reasoning patterns. Two distinct but conceptually related lines of work have developed: one in quantum foundations, focused on how physical laws restrict logical inference, and one in machine reasoning, targeting the normative behavior of algorithmic reasoners. Both reduce to specifying explicit "laws" that govern when reasoning counts as valid and how it relates to underlying complexity, compositionality, and deductive soundness.

1. Formalization of Laws: Physical Logic and Model Reasoning

In the foundational setting of physical logic (Clements et al., 2012), LoRe is operationalized through axioms that connect events (subsets of histories) with affirmation/denial maps called co-events. The context is the event algebra $A = 2^\Omega$, the Boolean algebra of all subsets of the fine-grained spacetime histories. A co-event $\phi: A \rightarrow \mathbb{F}_2$ assigns "affirmed" (1) or "denied" (0) to each event. Classical physical logic is enforced by:

  • LoRe I: World is affirmed (unitality): $\phi(\Omega) = 1$
  • LoRe II: Modus Ponens: $\phi(A) = 1$ and $\phi(A \rightarrow B) = 1$ imply $\phi(B) = 1$, where $A \rightarrow B = \Omega + A + AB$
  • LoRe III: Complementarity (Law of Excluded Middle): $\phi(A) = 0 \Rightarrow \phi(\lnot A) = 1$
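The three classical laws can be checked mechanically on a toy event algebra. The sketch below is illustrative (the history labels and dict representation of a co-event are assumptions, not notation from the paper); it uses the fact that in the $\mathbb{F}_2$ event ring, addition is symmetric difference and multiplication is intersection:

```python
from itertools import chain, combinations

# Toy history space Omega and its event algebra 2^Omega.
omega = frozenset({"h1", "h2", "h3"})
events = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(omega), r) for r in range(len(omega) + 1))]

def implies(a, b):
    # In the F2 event ring, + is symmetric difference and * is
    # intersection, so A -> B = Omega + A + A*B.
    return omega ^ a ^ (a & b)

def satisfies_classical_lore(phi):
    if phi[omega] != 1:                          # LoRe I: unitality
        return False
    for a in events:
        if phi[a] == 0 and phi[omega - a] != 1:  # LoRe III: complementarity
            return False
        for b in events:                         # LoRe II: modus ponens
            if phi[a] == 1 and phi[implies(a, b)] == 1 and phi[b] != 1:
                return False
    return True

# The classical co-event induced by a single "real" history affirms
# exactly the events containing it, and obeys all three laws.
classical = {a: int("h2" in a) for a in events}

# A co-event affirming only Omega denies both A and its complement for
# every proper nonempty A, violating complementarity (LoRe III).
degenerate = {a: int(a == omega) for a in events}
```

Running `satisfies_classical_lore` on the two examples shows the first passing and the second failing on LoRe III.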

This collection is both necessary and sufficient for $\phi$ to be a Boolean algebra homomorphism. In quantum measure theory, classical LoRe breaks down due to interference effects, prompting replacement of complementarity (III) with:

  • LoRe IV: Finest-Grainedness: among all preclusive, multiplicative co-events, choose $\phi$ minimal with respect to the order $\psi \preceq \phi \iff (\phi(A) = 1 \Rightarrow \psi(A) = 1 \text{ for all } A)$

For computational reasoners, LoRe describes expected behavioral laws that an ideal LRM should follow (Zhang et al., 19 Dec 2025):

  • Compute Law: expected compute scales linearly in the complexity $\kappa(x)$ of the shortest valid solution: $C_\theta(x) = \alpha_\theta \kappa(x) + o(\kappa(x))$
  • Accuracy Law: expected accuracy decays exponentially with complexity: $A_\theta(x) = \exp(-\lambda_\theta \kappa(x))$
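The two laws suggest a direct empirical test: fit the linear slope $\alpha_\theta$ and decay rate $\lambda_\theta$ from per-question measurements. A minimal sketch on synthetic data (the complexities and constants below are illustrative, not values from the paper):

```python
import numpy as np

# Hypothetical per-question complexities and measurements; the slope 3.0
# and rate 0.25 are made-up ground truth for the synthetic data.
kappa = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
compute = 3.0 * kappa + np.array([0.1, -0.2, 0.15, 0.0, -0.1])  # noisy linear
accuracy = np.exp(-0.25 * kappa)                                 # exponential decay

# Compute Law: C(x) ~ alpha * kappa(x); least-squares fit through the origin.
alpha = float(kappa @ compute / (kappa @ kappa))

# Accuracy Law: A(x) = exp(-lambda * kappa(x)); linearize via the log.
lam = float(-(kappa @ np.log(accuracy)) / (kappa @ kappa))
```

The recovered `alpha` and `lam` approximate the generating constants, which is the pattern a law-compliant model's measurements should exhibit.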

2. Axiomatic Structure and Logical Consequences

The operative heart of LoRe is in how the axioms structure permissible inference:

  • In classical logic, LoRe I–III guarantee Booleanity, allowing standard deduction and affirming the law of excluded middle. The equivalence theorem demonstrates that these three axioms enforce precisely the homomorphic structure required for all classical deduction rules.
  • In quantum measure theory, LoRe III is typically inconsistent, as quantum interference can render all fine-grained histories null (zero measure), whereas LoRe I, II, and IV give rise to a "multiplicative," coarse-grained logic that upholds Modus Ponens but may violate complementarity.
  • For LRMs, the "laws" serve as empirical hypotheses: compute should increase with complexity, and accuracy should decrease, allowing quantitative measurement and optimization of reasoning.

A plausible implication is that axiomatic consistency (in either physical or computational systems) must be carefully tailored to the context—classical LoRe cannot be naively imported into the quantum or deep learning regimes without modification.

3. Tractable Proxies: Monotonicity and Compositionality

Direct computation of question complexity $\kappa(x)$ is intractable. The LoRe framework (Zhang et al., 19 Dec 2025) thus introduces measurable proxies:

  • Monotonicity: if $\kappa(x_1) \leq \kappa(x_2)$, then $C_\theta(x_1) \leq C_\theta(x_2)$ and $A_\theta(x_1) \geq A_\theta(x_2)$. Operationalized in LoRe-Mono, this is empirically verified via high Spearman correlation ($\rho \rightarrow +1$ for compute, $\rho \rightarrow -1$ for accuracy) between systematically varied question complexity and observed model outputs.
  • Compositionality: for independent questions $x_1, x_2$, it requires $C_\theta(x_1 \oplus x_2) \approx C_\theta(x_1) + C_\theta(x_2)$ and $A_\theta(x_1 \oplus x_2) = A_\theta(x_1) A_\theta(x_2)$. LoRe-Compo measures deviations using normalized mean absolute deviation (nMAD).
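Both proxies reduce to simple statistics over paired measurements. A minimal sketch, assuming nMAD is the mean relative deviation between the composed prediction and the observed composite value (the paper's exact normalization may differ):

```python
import numpy as np

def spearman(x, y):
    # Spearman's rho: Pearson correlation of the rank vectors
    # (double argsort yields ranks for distinct values).
    rx, ry = np.argsort(np.argsort(x)), np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

def nmad(pred, target):
    # Normalized mean absolute deviation: scale-free compositionality gap.
    pred, target = np.asarray(pred), np.asarray(target)
    return float(np.mean(np.abs(pred - target) / np.abs(target)))

# Hypothetical per-triplet compute for x1, x2, and the composite x1 (+) x2.
c1 = np.array([120.0, 200.0, 90.0])
c2 = np.array([150.0, 180.0, 110.0])
c12 = np.array([300.0, 420.0, 230.0])   # observed composite compute
gap = nmad(c1 + c2, c12)                # 0 would mean perfect additivity
```

A monotone compute trend gives `spearman` near $+1$, while `gap` plays the role of $\mathrm{nMAD}_{C_\theta}$ for the toy triplets.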

Most LRMs satisfy monotonicity ($\rho \gtrsim 0.95$ for compute, $\rho \lesssim -0.9$ for accuracy) but show substantial compositionality deficits ($\mathrm{nMAD}_{C_\theta} \approx 0.3$–$0.5$, $\mathrm{nMAD}_{\log A_\theta} \gtrsim 1.0$) (Zhang et al., 19 Dec 2025).

4. Experimental Methodology and LoRe-Bench

To systematically investigate LoRe compliance, LoRe-Bench is proposed with the following design (Zhang et al., 19 Dec 2025):

Property          Empirical Test  Measurement
Monotonicity      LoRe-Mono       Spearman's $\rho$
Compositionality  LoRe-Compo      nMAD (compute, accuracy)

  • LoRe-Mono: for math, science, language, and code, 10 seed templates each yield 30 variants of increasing complexity; models are evaluated on their reasoning-compute and accuracy trends.
  • LoRe-Compo: 250 triplets $(x_1, x_2, x_1 \oplus x_2)$ drawn from disjoint MATH500 topics assess compositionality via nMAD.

This benchmarking reveals that length-control heuristics (e.g., Thinkless, AdaptThink) do not suffice to induce compositionality.

5. Enforcement Through Fine-Tuning and Empirical Outcomes

A supervised fine-tuning approach, SFT-Compo, is introduced to enforce compute-law compositionality. The procedure (Zhang et al., 19 Dec 2025):

  • Constructs composite and independent question triples,
  • Samples multiple chain-of-thought (CoT) solutions,
  • Selects, among all triples with correct answers, the triple minimizing $|\ell(r_1) + \ell(r_2) - \ell(r_{12})|$,
  • Fine-tunes on these examples with standard cross-entropy.
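The selection step can be sketched as a brute-force search over sampled CoT candidates. This is a sketch only: the `(text, is_correct)` candidate format is an assumption, and whitespace token count stands in for the paper's length measure $\ell$:

```python
from itertools import product

def cot_len(text):
    # Whitespace token count as a crude stand-in for l(r).
    return len(text.split())

def select_compo_triple(cands1, cands2, cands12):
    """Among all-correct triples (r1, r2, r12), pick the one minimizing
    |l(r1) + l(r2) - l(r12)|; returns None if no all-correct triple exists."""
    best, best_gap = None, float("inf")
    for (r1, ok1), (r2, ok2), (r12, ok12) in product(cands1, cands2, cands12):
        if not (ok1 and ok2 and ok12):
            continue  # only triples where every answer is correct qualify
        gap = abs(cot_len(r1) + cot_len(r2) - cot_len(r12))
        if gap < best_gap:
            best, best_gap = (r1, r2, r12), gap
    return best

# Toy candidates: boolean flags mark whether the final answer was correct.
c1 = [("a b c", True), ("a b", False)]
c2 = [("d e", True)]
c12 = [("a b c d e", True), ("a b c d e f g", True)]
triple = select_compo_triple(c1, c2, c12)
```

On the toy data the 5-token composite is chosen, since its length exactly matches the sum of the two component solutions.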

No architectural changes are needed beyond causal-LM training. Key findings include:

  • SFT-Compo reduces $\mathrm{nMAD}_{C_\theta}$ by 22–40% (e.g., 0.528 → 0.314 on a 1.5B-parameter model),
  • Achieves average Pass@1 improvements of 4–5% across benchmarks,
  • Outperforms control fine-tuning without compositional selection.

A synergistic effect is observed: enforcing compute compositionality also improves monotonicity ($\rho(C_\theta, \text{index})$ rising from 0.875 to 0.977 on the 1.5B model) and dramatically enhances accuracy compositionality ($\mathrm{nMAD}_{\log A_\theta}$ drops by 71% on 1.5B and 35% on 7B).

6. Comparative Perspective: Classical, Quantum, and Model Reasoning

LoRe unifies perspectives across domains:

  • Classical Physics: LoRe I–III enforce a one-history, Boolean logic with full deductive closure and robust handling of null sets.
  • Quantum Measure Theory: Replacement of complementarity with finest-grainedness reflects interference-driven effects, leading to a “multiplicative” logic and coarse-grained realities.
  • Large Reasoning Models: The compute and accuracy laws formalize ideal computational reasoning behaviors, allowing precise diagnosis and remedy of suboptimal model habits.

An important observation is that while classical LoRe axioms are overly rigid in quantum or LRM regimes, variants adapted to the context yield tractable, empirically relevant guidance.

7. Significance and Outlook

The LoRe frameworks provide both a principled foundation for the logic of inference (in physics and computation) and actionable tools for diagnosing and optimizing reasoning models. In LRMs, LoRe laws and benchmarks diagnose monotonicity/compositionality gaps and enable targeted fine-tuning, yielding measurable gains in both compliance and general reasoning. In physical logic, LoRe axioms reveal how foundational structure must shift from Boolean to multiplicative character in response to quantum phenomena.

A plausible implication is that, in both physical and computational systems, the design and enforcement of reasoning laws are indispensable for aligning practical inference with normative principles, with the specific content of those laws determined by the underlying structure—classical, quantum, or algorithmic—of the system in question (Clements et al., 2012, Zhang et al., 19 Dec 2025).
