Zero-Shot Fact Verification via Natural Logic and Large Language Models (2410.03341v1)

Published 4 Oct 2024 in cs.CL

Abstract: The recent development of fact verification systems with natural logic has enhanced their explainability by aligning claims with evidence through set-theoretic operators, providing faithful justifications. Despite these advancements, such systems often rely on a large amount of training data annotated with natural logic. To address this issue, we propose a zero-shot method that utilizes the generalization capabilities of instruction-tuned LLMs. To comprehensively assess the zero-shot capabilities of our method and other fact verification systems, we evaluate all models on both artificial and real-world claims, including multilingual datasets. We also compare our method against other fact verification systems in two setups. First, in the zero-shot generalization setup, we demonstrate that our approach outperforms other systems that were not specifically trained on natural logic data, achieving an average accuracy improvement of 8.96 points over the best-performing baseline. Second, in the zero-shot transfer setup, we show that current systems trained on natural logic data do not generalize well to other domains, and our method outperforms these systems across all datasets with real-world claims.

Summary

  • The paper introduces Zero-NatVer, a system that leverages instruction-tuned LLMs to perform fact verification without relying on annotated training data.
  • It employs a structured pipeline including chunking, alignment, NatOp assignment, and DFA-based proof execution to generate explainable verdicts.
  • Evaluations show an average accuracy improvement of 8.96 points over baselines and demonstrate strong zero-shot transfer across diverse datasets.

Zero-Shot Fact Verification via Natural Logic and LLMs

The paper presents a novel approach to fact verification (FV) that leverages the zero-shot capabilities of instruction-tuned LLMs, implemented in a method named Zero-NatVer. The approach is grounded in natural logic (NatLog), which enhances explainability by providing faithful justifications for the verification process.

Major Contributions and Findings

Zero-NatVer removes the dependence of traditional NatLog-based FV systems on large amounts of training data annotated with natural logic: rather than learning from such annotations, it relies on the generalization ability of instruction-tuned LLMs, offering a data-efficient alternative for fact verification.
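
For background, natural logic reasons over a small inventory of set-theoretic relations between aligned claim and evidence phrases, following MacCartney and Manning. The exact operator set Zero-NatVer adopts may be a variant of this inventory, but the standard seven relations, with canonical examples, are:

  • Equivalence (≡): couch ≡ sofa
  • Forward entailment (⊑): crow ⊑ bird
  • Reverse entailment (⊒): European ⊒ French
  • Negation (¬): human ¬ nonhuman (mutually exclusive and exhaustive)
  • Alternation (|): cat | dog (mutually exclusive, not exhaustive)
  • Cover (‿): animal ‿ nonhuman (exhaustive, not mutually exclusive)
  • Independence (#): hungry # hippo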

In evaluations, the proposed method was compared with existing fact verification systems in two setups: zero-shot generalization and zero-shot transfer. The results reveal several significant findings:

  • Zero-Shot Generalization: Zero-NatVer demonstrated superior accuracy, with an average improvement of 8.96 points over the best-performing baseline when evaluated on English datasets. Additionally, when compared to traditional direct-QA methods using the same backbone models, Zero-NatVer showed competitive performance, underscoring its effectiveness in generating explainable results while maintaining accuracy.
  • Zero-Shot Transfer: Tested across various datasets, Zero-NatVer outperformed other NatLog-based systems that relied on FEVER-like data for training. This suggests that the method can effectively generalize to different domains without requiring retraining on domain-specific data.

Methodology

The authors implement Zero-NatVer in a structured pipeline comprising four steps:

  1. Chunking: The claims are segmented into smaller chunks that can be independently verified. This segmentation is performed using constrained LLM decoding to prevent hallucination (see the first sketch after this list).
  2. Alignment: The claim chunks are aligned with corresponding evidence. This process includes generating alignment explanations to transfer global information for subsequent inference stages.
  3. NatOp Assignment: Built on a question-answering ensemble framework, Zero-NatVer assigns natural logic operators (NatOps) to aligned pairs. Calibration issues are mitigated using a weighted ensemble of 10 question prompts per candidate NatOp.
  4. Proof Execution: Proofs are executed on a deterministic finite automaton (DFA), producing the claim's final verdict (the second sketch after this list covers steps 3 and 4).
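
To make step 1 concrete, the first sketch emulates the kind of faithfulness constraint that constrained decoding enforces: the chunker may only insert boundaries into the claim, never rewrite it. This is a minimal illustration, not the paper's implementation; the prompt wording and the `llm` callable are hypothetical placeholders.

    import re
    from typing import Callable, List

    def segment_claim(claim: str, llm: Callable[[str], str]) -> List[str]:
        """Ask an LLM to insert '|' chunk boundaries, then verify the output
        is a faithful copy of the input claim (no hallucinated text)."""
        prompt = (
            "Split the claim into independently verifiable chunks by "
            "inserting ' | ' between them. Do not add, remove, or reorder "
            f"any words.\nClaim: {claim}"
        )
        marked = llm(prompt)
        chunks = [c.strip() for c in marked.split("|") if c.strip()]
        # Constrained decoding guarantees faithfulness by construction;
        # here we emulate it with a post-hoc rejection test.
        normalize = lambda s: re.sub(r"\s+", " ", s).strip()
        if normalize(" ".join(chunks)) != normalize(claim):
            raise ValueError("Chunker altered the claim; rejecting output.")
        return chunks

The second sketch covers steps 3 and 4. The weighted question ensemble is schematic (the NatOp inventory, prompt wording, and weights below are assumptions, not the paper's), and the DFA transition table follows the NaturalLI-style automaton of Angeli and Manning that this line of work builds on; the paper's exact table may differ.

    from typing import Callable, Dict, List, Tuple

    NATOPS = ["EQUIV", "FWD_ENT", "REV_ENT", "NEGATION", "ALTERNATION", "INDEP"]

    def assign_natop(chunk: str, evidence: str,
                     p_yes: Callable[[str], float],
                     prompts: Dict[str, List[str]],
                     weights: Dict[str, float]) -> str:
        """Weighted ensemble of yes/no question prompts per candidate NatOp;
        `p_yes` returns the model's probability of answering 'yes'."""
        scores = {}
        for op in NATOPS:
            qs = [q.format(c=chunk, e=evidence) for q in prompts[op]]
            scores[op] = weights[op] * sum(p_yes(q) for q in qs) / len(qs)
        return max(scores, key=scores.get)

    # Verdict states: S = supported, R = refuted, N = not enough info.
    # Any transition not listed falls into the absorbing N state.
    TRANSITIONS: Dict[Tuple[str, str], str] = {
        ("S", "EQUIV"): "S", ("S", "FWD_ENT"): "S",
        ("S", "NEGATION"): "R", ("S", "ALTERNATION"): "R",
        ("R", "EQUIV"): "R", ("R", "REV_ENT"): "R",
        ("R", "NEGATION"): "S",
    }

    def execute_proof(natops: List[str]) -> str:
        """Run the assigned NatOps through the DFA to obtain a verdict."""
        state = "S"
        for op in natops:
            state = TRANSITIONS.get((state, op), "N")
            if state == "N":
                break
        return {"S": "SUPPORTED", "R": "REFUTED", "N": "NOT ENOUGH INFO"}[state]

    # "A Swede won" vs. "A Scandinavian won": [FWD_ENT, EQUIV] stays in S.
    assert execute_proof(["FWD_ENT", "EQUIV"]) == "SUPPORTED"
    assert execute_proof(["EQUIV", "NEGATION"]) == "REFUTED"

Treating the N state as absorbing reflects the usual design choice in such automata: once the evidence is insufficient for one chunk, later chunks cannot restore support.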

Evaluation and Results

The methodology is evaluated on a diverse set of datasets containing both real-world and artificial claims. On English datasets, Zero-NatVer outperformed its NatLog baselines and was competitive with non-NatLog direct-answering models. Zero-NatVer displayed strong performance across multilingual datasets without needing language-specific modifications, demonstrating the potential for broad applicability.

Implications and Future Directions

Zero-NatVer addresses the scalability concerns associated with data-dependent natural logic systems, enabling broader application in domains where labeled training data is scarce or costly to produce. The results demonstrate the feasibility of accurate zero-shot fact-checking with LLMs and offer insight into combining neural and symbolic reasoning for explainability.

Future investigations could focus on further refining NatOp assignments, exploring more sophisticated techniques to resolve multi-NatOp conflicts, and enhancing the expressivity of natural logic to capture more complex reasoning phenomena. Moreover, adaptations to specific domains or languages could be explored to further extend the reach and applicability of zero-shot fact verification systems.
