Logic-Parametric Framework for Neuro-Symbolic NLI
- The paper introduces a logic-parametric framework that treats logical formalisms as tunable parameters to enhance neuro-symbolic NLI.
- The framework integrates deep neural models and symbolic logic engines with dynamic logic selection to improve robustness and systematic generalization.
- Empirical results show that adaptive logic routing raises mixed-task NLI accuracy from 65.1% (pure LLM) to 92.1% while providing clearer, domain-specific explanations.
A logic-parametric framework for neuro-symbolic natural language inference (NLI) is a system in which logical formalisms—such as first-order, modal, deontic, or natural logics—are treated as tunable parameters, not fixed background structures. This approach leverages both deep neural architectures and symbolic logic engines, allowing the choice of logic to be dynamically adjusted depending on the inference context, domain, or reasoning requirements. Unlike static neuro-symbolic models, logic-parametric frameworks support modularity, domain adaptability, and explicit control over the formal properties (e.g., soundness, explainability) of NLI pipelines. Recent work demonstrates that making the underlying logic a first-class, controllable component improves robustness, systematic generalization, interpretability, and domain-specific accuracy in NLI tasks (Farjami et al., 9 Jan 2026, Xu et al., 8 Oct 2025, Allen et al., 13 Jul 2025, Feng et al., 2022).
1. Logical Formalism as a Tunable Parameter
Most neuro-symbolic NLI frameworks commit to a single logical system, e.g., classical first-order logic (FOL) or natural logic. In logic-parametric architectures, by contrast, the logical system is an explicit input to the pipeline and can be varied at runtime. The LogiKEy methodology exemplifies this: several logics, including FOL, modal KD, and dyadic deontic logics, are encoded as swappable modules within Isabelle/HOL via shallow semantic embeddings (Farjami et al., 9 Jan 2026). This allows for comparative evaluation of logic-external ("axioms on FOL") versus logic-internal ("built-in normative axioms") strategies. The logic parameter controls:
- The syntax and type system for autoformalization (translation of NL input into logical formulae).
- The operational semantics and proof calculus available to the symbolic back-end.
- The set of structural and domain assumptions implicitly active during inference.
This setup enables pipelines to dynamically optimize for domain-specific challenges such as normative reasoning (ethics, bioethics) or commonsense entailment, where the logical formalisms governing permissions, obligations, or defaults are inherently distinct.
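To make the "logic as parameter" idea concrete, here is a minimal sketch in Python. All names (`LogicModule`, `prompt_header`, `prover_theory`, the `.thy` file names) are hypothetical illustrations, not the actual LogiKEy API: the point is only that everything logic-specific is bundled into one swappable value, so the rest of the pipeline can stay logic-agnostic.

```python
# Hypothetical sketch: the logic is a first-class parameter bundling
# everything logic-specific (autoformalization instructions, prover theory),
# so the surrounding pipeline never branches on the choice of logic.
from dataclasses import dataclass

@dataclass(frozen=True)
class LogicModule:
    name: str           # e.g. "FOL", "KD", "DDL"
    prompt_header: str  # logic-sensitive autoformalization instructions
    prover_theory: str  # symbolic back-end theory/embedding to load

FOL = LogicModule("FOL", "Translate to first-order logic.", "FOL.thy")
KD = LogicModule("KD", "Translate using modal operators Box/Dia.", "ModalKD.thy")

def build_prompt(logic: LogicModule, premise: str, hypothesis: str) -> str:
    """One template for every logic; only the header varies."""
    return f"{logic.prompt_header}\nPremise: {premise}\nHypothesis: {hypothesis}"
```

Swapping `FOL` for `KD` changes the autoformalization target and the loaded prover theory without touching the calling code, which is exactly the modularity the logic-external/logic-internal comparison relies on.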
2. Neuro-Symbolic Architecture and Workflow
Logic-parametric frameworks typically instantiate a hybrid loop alternating between LLMs and symbolic logic engines:
- Autoformalization: The LLM parses and translates a premise–hypothesis pair (and possibly an explanation) into a set of logical formulae Φ_L, parameterized by the selected logic L. This translation is logic-sensitive, producing, for example, FOL axioms, modal operators, or deontic statements (Farjami et al., 9 Jan 2026, Xu et al., 8 Oct 2025).
- Verification: A theorem prover or symbolic engine, loaded with the relevant module, checks the consistency and type-correctness of the formulae and attempts a proof Φ_L ⊢ H, where H encodes the hypothesis (Farjami et al., 9 Jan 2026).
- Explanation and Feedback: If proof fails, the symbolic engine provides failed steps (e.g., missing bridge rules) as feedback to the LLM, which modifies the explanation or the formalization—a process referred to as explanation refinement.
- Iterative Repair: The loop continues up to a fixed iteration limit, supporting dynamic refinement and increasing the likelihood of verifiable proof-obligation satisfaction in task-relevant logics.
- Logic-Dependent Control: Specific architectural aspects, from the autoformalization prompt to proof strategies, are isolated into logic-dependent modules, allowing the rest of the pipeline (LLM prompt, refinement logic) to remain logic-agnostic.
This modularity supports rapid swapping of the embedded logic, enabling systematic empirical comparisons and compositional reasoning.
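The loop above can be sketched as plain control flow. This is a hedged toy rendering, not the actual implementation: `formalize` stands in for the LLM call (autoformalization plus explanation refinement) and `prove` for the theorem-prover call; both names are invented for illustration.

```python
# Sketch of the LLM <-> prover repair loop: formalize, verify, and on
# failure feed the prover's diagnostics back to the LLM, up to a bound.
from typing import Callable, Optional, Tuple, List

def nli_loop(premise: str, hypothesis: str,
             formalize: Callable[[str, str, Optional[str]], List[str]],
             prove: Callable[[List[str]], Tuple[bool, str]],
             max_iters: int = 3) -> bool:
    feedback = None
    for _ in range(max_iters):
        formulae = formalize(premise, hypothesis, feedback)  # autoformalization
        ok, feedback = prove(formulae)                       # verification
        if ok:
            return True  # proof obligation discharged
        # otherwise: prover feedback (e.g. a missing bridge rule)
        # drives the next round of explanation refinement
    return False  # iteration limit reached without a verified proof
```

Note that only `formalize` and `prove` depend on the selected logic; the loop itself is logic-agnostic, which is what makes the logic swappable.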
3. Parametric Logics in Previous Neuro-Symbolic NLI Models
Earlier neuro-symbolic NLI frameworks provided groundwork for this approach:
- The natural logic-based framework of (Feng et al., 2022) models local inference relations as elements of a fixed set of semantic relations, manipulated by monotonicity-aware projection functions and join operators. These logic rules, parameterized by context, are intrinsic to the model, but the logic itself is fixed.
- NeuralLog (Chen et al., 2021) combines monotonicity-based symbolic reasoning with neural paraphrase and phrase alignment, formalizing the NLI problem as search across modular reasoning actions; however, while inference rules are parameterized at the module level, logic selection is not first-class.
- Recent adaptive neuro-symbolic frameworks, such as those in (Xu et al., 8 Oct 2025), explicitly regard the logic fragment (e.g., LP, FOL, SMT) as a parameter for both autoformalization and downstream solver dispatch, enabling dynamic selection per instance.
The table below summarizes core differentiators among selected frameworks:
| Framework | Logic Parameterization | Proof Search | Explanation Refinement |
|---|---|---|---|
| (Farjami et al., 9 Jan 2026) | Dynamic via logic modules | Isabelle/HOL | TP→LLM feedback loop |
| (Xu et al., 8 Oct 2025) | Discrete (LP, FOL, etc.) | Specialized solvers | Multi-paradigm ensemble |
| (Feng et al., 2022) | Strict context projections | RL-guided sampling | Introspective revision |
| (Chen et al., 2021) | Fixed natural logic, modular | Beam search | Path scoring/contradiction |
4. Theoretical Guarantees and Empirical Findings
Logic-parametric frameworks have enabled new analytical and experimental insights:
- Soundness and Completeness: LLM-grounded interpretations in paraconsistent logics (e.g., Angell’s AC) yield sound and complete inference when the LLM’s atomic evaluations are cached, preserving logical semantics within neuro-symbolic pipelines (Allen et al., 13 Jul 2025).
- Proof Efficiency and Explanation Quality: Embedding normative patterns internally (via logic modules such as KD or DDLE) reduces refinement (repair) depth and increases valid NLI explanation rates, particularly in ethical and modal domains (Farjami et al., 9 Jan 2026). On the BENR dataset, KD achieved 77.7% valid explanations with low computational cost, outperforming FOL in bioethical reasoning.
- Domain Sensitivity: First-order logics are more stable for commonsense NLI, whereas modal/deontic logics provide explanations and coverage unattainable by FOL in domains characterized by obligations or permissions, such as bioethics.
- Adaptivity and Scalability: Dynamic routing to formal solvers based on a logic classifier increases mixed-task NLI accuracy from 65.1% (pure LLM) to 92.1% (full adaptive pipeline), with ablation revealing a dramatic drop to 29.0% if logic/solver assignment is randomized (Xu et al., 8 Oct 2025).
- Limitations: Non-classical logics embedded in HOL can suffer increased syntactic error rates, tying expressivity to system robustness (Farjami et al., 9 Jan 2026). Solver-based systems may experience bottlenecks in autoformalization and compositional complexity.
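The caching condition behind the soundness result above can be illustrated with a small sketch. The idea (under the stated assumption that each atomic query is evaluated once and memoized) is that the LLM then induces a fixed valuation, so repeated symbolic inference over the same atoms cannot become inconsistent. The `flaky_judge` below is a stand-in for a potentially non-deterministic LLM call; all names are illustrative.

```python
# Memoizing atomic LLM evaluations: each atom is judged at most once,
# so the induced valuation is a fixed function of the atom.
from functools import lru_cache

def make_cached_oracle(llm_judge):
    @lru_cache(maxsize=None)
    def atom_value(atom: str) -> bool:
        return llm_judge(atom)  # underlying judge called once per atom
    return atom_value

calls = []
def flaky_judge(atom):
    calls.append(atom)          # record every real (non-cached) call
    return len(atom) % 2 == 0   # arbitrary toy judgment

oracle = make_cached_oracle(flaky_judge)
oracle("rain")
oracle("rain")  # second query answered from the cache
```

After both queries, `flaky_judge` has run exactly once, so even a judge that could change its answer between calls yields a stable valuation for the symbolic engine.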
5. Paradigmatic Implementation Strategies
Representative logic-parametric architectures implement the following strategies:
- Logic Modules: Each logic is realized as a plug-in, swapping syntax, connectives, and proof rules while leaving the outer LLM–TP interface unchanged (Farjami et al., 9 Jan 2026).
- Autoformalization Interfaces: Prompt templates and semantic parsers are parameterized by logic, ensuring correct translation from NL to logic script regardless of end logic (Xu et al., 8 Oct 2025).
- Fine-Grained Explanation Refinement: Failed proof explanations are mapped to domain-specific failures (e.g., missing modal detachment in KD) with logic-internal approaches yielding cleaner refinement paths.
- Dynamic Routing: An LLM-based classifier or scoring function determines the most appropriate logic/solver per inference instance, integrating logics such as LP, FOL, CSP, or SMT as needed (Xu et al., 8 Oct 2025).
- Training: End-to-end or staged training can include logic-parametric objectives, e.g., minimizing loss functions L(θ, λ), where θ parameterizes the LLM and λ controls solver-specific settings (Xu et al., 8 Oct 2025), although many frameworks rely on pipeline composition with fine-tuning localized to logic-dependent modules.
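A dynamic routing step like the one described above can be sketched with a keyword heuristic. This is a deliberately crude stand-in for the LLM-based classifier or scoring function in (Xu et al., 8 Oct 2025); the function name, cue words, and logic labels are all illustrative assumptions.

```python
# Toy logic router: pick a logic module per instance from surface cues.
# A real system would use an LLM-based classifier or learned scorer.
def route_logic(premise: str, hypothesis: str) -> str:
    text = f"{premise} {hypothesis}".lower()
    if any(w in text for w in ("must", "obliged", "permitted", "ought")):
        return "KD"   # deontic/modal cues -> modal logic module
    if any(w in text for w in ("all ", "some ", "every ", "no one")):
        return "FOL"  # quantifier cues -> first-order logic
    return "LP"       # default: propositional fragment
```

The returned label would then select the logic module and solver dispatch for that instance, which is the mechanism the randomized-assignment ablation disables.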
6. Impact, Challenges, and Future Directions
Logic-parametric frameworks drive increases in interpretability, robustness, and domain adaptability. By decoupling logic selection from static model structure, they provide:
- Modularity: Logic-specific encoding and proof strategies are encapsulated, supporting rapid adaptation to new reasoning regimes.
- Domain-Driven Robustness: Domains with complex normative or default reasoning patterns benefit when the logic can be specialized (KD, DDL, etc.), an effect observable in empirical success rates and refinement depth (Farjami et al., 9 Jan 2026).
- Generalization and Adaptivity: Adaptive pipelines can support heterogeneous NLI, outperforming single-logic or single-strategy baselines on mixed benchmarks by >25 points (Xu et al., 8 Oct 2025).
Challenges remain, particularly in the areas of:
- Syntactic Robustness: Non-classical logic embeddings increase rates of parsing or formalization error in the symbolic backend.
- Autoformalization Quality: Translating arbitrary NL into domain-appropriate, logically correct formalizations remains a bottleneck.
- Computational Cost: Increased expressivity may entail slower inference or higher rates of failed proofs, especially for small or mid-sized LLMs.
- Scalability: As the diversity of logic modules and solver interfaces increases, maintaining seamless integration and collective learning is nontrivial.
Anticipated future directions include extension to quantified modal logics, integration of probabilistic or non-monotonic modules, and dynamic meta-reasoning for logic selection. This body of work underscores that flexible, logic-parametric architectures are poised to become standard practice for interpretable, robust, and domain-aware neuro-symbolic NLI (Farjami et al., 9 Jan 2026, Xu et al., 8 Oct 2025, Feng et al., 2022, Allen et al., 13 Jul 2025).