
FEVER Fact Verification Task

Updated 26 September 2025
  • FEVER Fact Verification Task is a large-scale benchmark for verifying claims against Wikipedia by categorizing them as Supported, Refuted, or NotEnoughInfo.
  • Its multi-stage methodology, including document retrieval, sentence selection, and recognizing textual entailment, drives advances in evidence-based fact checking.
  • The rigorous annotation protocols and baseline metrics emphasize challenges in multi-hop reasoning and evidence alignment, spurring innovations in explainable AI.

The FEVER Fact Verification Task is a large-scale, evidence-based benchmark designed to evaluate systems on their ability to verify the veracity of claims against Wikipedia. Featuring 185,445 claims spanning Supported, Refuted, and NotEnoughInfo categories, FEVER has catalyzed advances in document and sentence retrieval, natural language inference, and explainable fact checking by imposing challenging multi-class classification and evidence-selection requirements. Its annotation rigor and explicit incorporation of evidence provenance have made it a central resource for both pipeline and end-to-end fact verification research.

1. Dataset Construction and Structure

FEVER is constructed by extracting sentences from Wikipedia's introductory sections and systematically mutating them through paraphrasing, negation, substitution, or altered specificity to generate artificial but plausible claims. Each of the 185,445 claims is then manually verified and labeled with one of three classes:

  • Supported: There exists evidence in Wikipedia that the claim is true.
  • Refuted: There exists evidence that the claim is false.
  • NotEnoughInfo (NEI): Wikipedia does not provide enough information to verify the claim.

For Supported and Refuted labels, annotators explicitly record the minimal set of sentences that constitutes sufficient evidence; 16.82% of evidential annotations require multiple sentences and 12.15% require evidence from multiple pages. Splits are provided for training, development, and testing, together with a reserved test set; the smaller splits are balanced across classes.
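
For orientation, the sketch below shows how such split statistics could be recomputed from a FEVER-style JSONL file. The field names ("label", "evidence") and the [annotation_id, evidence_id, page, sentence_id] layout are assumptions modeled on the public release and should be checked against the copy of the data actually in use.

```python
import json
from collections import Counter

def split_stats(path):
    """Label distribution plus the share of verifiable claims whose smallest
    gold evidence set spans multiple sentences or multiple pages.
    Field names and layout are assumptions; adjust them to the actual release."""
    labels, multi_sent, multi_page, verifiable = Counter(), 0, 0, 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            claim = json.loads(line)
            labels[claim["label"]] += 1
            if claim["label"] == "NOT ENOUGH INFO":
                continue  # NEI claims carry no usable evidence annotation
            verifiable += 1
            # Assumed layout: "evidence" is a list of alternative evidence sets,
            # each a list of [annotation_id, evidence_id, page, sentence_id].
            smallest = min(claim["evidence"], key=len)
            if len(smallest) > 1:
                multi_sent += 1
            if len({item[2] for item in smallest}) > 1:
                multi_page += 1
    return labels, multi_sent / verifiable, multi_page / verifiable
```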

2. Annotation Methodology and Quality Assurance

The annotation process is divided into two primary stages:

  1. Claim Generation: Annotators produce both direct and "mutated" claims from extracted Wikipedia sentences.
  2. Claim Verification: Independent annotators review claims (without access to the original sentence) using the entire introductory section and, via a term dictionary, the pages it links to. They assign a class label and, where applicable, select the exact supporting or refuting evidence.

Inter-annotator agreement is quantified using Fleiss’s kappa ($\kappa = 0.6841$), indicating substantial consistency. Precision and recall for evidence retrieval are 95.42% and 72.36%, respectively, as determined by “super-annotator” assessments. The kappa statistic is computed as

$$\kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}$$

where $\bar{P}$ is the mean observed agreement and $\bar{P}_e$ is the mean expected agreement by chance.
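
To make the statistic concrete, here is a small sketch that computes Fleiss’s kappa from a ratings matrix; the function and the toy data are purely illustrative and not part of the FEVER tooling.

```python
import numpy as np

def fleiss_kappa(ratings: np.ndarray) -> float:
    """Fleiss' kappa for a matrix of shape (items, categories), where
    ratings[i, j] counts how many annotators placed item i in category j.
    Assumes every item was rated by the same number of annotators."""
    n_items, _ = ratings.shape
    n_raters = ratings[0].sum()
    # Per-category proportion of all assignments (used for chance agreement).
    p_j = ratings.sum(axis=0) / (n_items * n_raters)
    # Per-item observed agreement.
    P_i = (np.square(ratings).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 4 claims, 3 labels (Supported, Refuted, NEI), 5 annotators each.
toy = np.array([[5, 0, 0], [3, 2, 0], [1, 1, 3], [0, 4, 1]])
print(round(fleiss_kappa(toy), 4))
```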

3. Baseline Pipeline Architecture

To characterize FEVER's challenge and establish a baseline, a multi-stage pipeline is implemented, comprising:

  • Document Retrieval: TF-IDF-based similarity matching (with binning over unigrams/bigrams) retrieves the top-k Wikipedia documents, leveraging both standard and DrQA-modified methods.
  • Sentence Selection: Within candidate documents, sentences are ranked according to TF-IDF similarity with the claim; selection is controlled by a cutoff parameter l. A sketch of both stages is given below.
  • Recognizing Textual Entailment (RTE): Two candidate models are used:
    • Multi-Layer Perceptron (MLP) using term frequency and cosine similarity features.
    • Decomposable Attention (DA) Network, which directly compares claim and concatenated evidence.

For NEI-labeled instances, pseudo-evidence is sampled (either nearest article or random sentences), enabling the model to learn to abstain when evidence is unrelated.
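
The document retrieval and sentence selection stages can be approximated with off-the-shelf TF-IDF utilities. The sketch below uses scikit-learn rather than the original DrQA components; the documents structure, parameter names, and scoring are illustrative assumptions, not the baseline's exact implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

def retrieve_evidence(claim, documents, k=5, l=5):
    """Rank Wikipedia pages by TF-IDF similarity to the claim, then rank the
    sentences of the top-k pages and keep the top-l as candidate evidence.
    `documents` maps page titles to lists of sentences from the intro section."""
    titles = list(documents)
    vec = TfidfVectorizer(ngram_range=(1, 2))  # unigrams + bigrams
    doc_matrix = vec.fit_transform(" ".join(documents[t]) for t in titles)
    claim_vec = vec.transform([claim])

    # Stage 1: document retrieval.
    doc_scores = linear_kernel(claim_vec, doc_matrix).ravel()
    top_pages = [titles[i] for i in doc_scores.argsort()[::-1][:k]]

    # Stage 2: sentence selection within the retrieved pages.
    candidates = [(t, i, s) for t in top_pages for i, s in enumerate(documents[t])]
    sent_scores = linear_kernel(
        claim_vec, vec.transform(s for _, _, s in candidates)
    ).ravel()
    return [candidates[i] for i in sent_scores.argsort()[::-1][:l]]
```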

4. Evaluation Metrics and Baseline Results

Task evaluation is framed as a three-way classification (Supported, Refuted, NEI) with strict evidence requirements for positive cases: the retrieved evidence must fully cover at least one annotated evidence set. Two accuracy settings are reported:

  • Evidence-required accuracy (label correct and the required evidence retrieved): 31.87%.
  • Label-only accuracy (evidence ignored): 50.91%.

The roughly 19-point performance drop underscores the difficulty of evidence retrieval. Oracle ablations (injecting ground-truth document/sentence retrieval) identify document and sentence selection as the principal bottlenecks.
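
A minimal sketch of the stricter setting is given below; the function name, label strings, and the (page, sentence_id) evidence representation are illustrative assumptions, and the official scorer should be consulted for the exact rules.

```python
def strict_correct(pred_label, pred_evidence, gold_label, gold_evidence_sets):
    """Evidence-required correctness sketch: the label must match and, for
    Supported/Refuted claims, the predicted evidence must fully cover at least
    one annotated evidence set. Evidence items are (page, sentence_id) pairs;
    label strings depend on the data release and are illustrative here."""
    if pred_label != gold_label:
        return False
    if gold_label == "NotEnoughInfo":
        return True  # no evidence requirement for NEI claims
    predicted = set(pred_evidence)
    return any(set(gold) <= predicted for gold in gold_evidence_sets)

# The prediction covers the single-sentence gold set, so it counts as correct.
print(strict_correct("Supported",
                     [("Some_Page", 0), ("Some_Page", 3)],
                     "Supported",
                     [[("Some_Page", 0)]]))  # True
```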

5. Modeling Challenges and Research Implications

Key challenges highlighted by FEVER include:

  • Evidence Retrieval: Identifying and assembling minimal sufficient evidence sets from an extensive, heterogeneous corpus (often requiring multi-hop reasoning).
  • Multi-hop Reasoning: 16.82% of claims necessitate aggregating evidence spanning multiple, not necessarily contiguous, sentences; 12.15% demand cross-page evidence aggregation.
  • Evidence-Label Alignment: Achieving high label accuracy is insufficient; systems must link each claim decision with specific, minimal evidence, imposing robust explainability requirements.

These complexities differentiate FEVER from traditional NLI and factoid QA tasks, demanding progress on both scalable IR and explainable inference.

6. Impact, Extensions, and Future Directions

FEVER established a rigorous benchmark and spurred development of new architectures for fact verification, including improved retrieval pipelines, multi-sentence inference models, graph-based aggregation, and modular or end-to-end learning systems. The dataset’s breadth and annotation detail made it a focal point for empirical studies on claim verification, evidence alignment, negative example sampling, and bias diagnostics.

Additionally, FEVER inspired methodologies for constructing new datasets (with cross-domain and cross-lingual variants), research into adversarial and robust model evaluation, and advances in scalable, explainable neural architectures. The requirement for systems to connect their verdicts to explicit, human-readable evidence facilitated application to domains such as automatic fact checking, misinformation mitigation, and explainable AI research.

7. Summary Table: Central Characteristics of FEVER

| Aspect | Details | Notes |
|--------|---------|-------|
| Scale | 185,445 claims | Large-scale, Wikipedia-derived |
| Evidence Annotation | Multi-sentence, cross-page, explicit selection | 16.82% multi-sentence; 12.15% multi-page evidence |
| Classes | Supported, Refuted, NotEnoughInfo | Balanced in dev/test splits |
| Inter-annotator Agreement | Fleiss κ = 0.6841 | Substantial agreement |
| Evidence Precision / Recall | 95.42% / 72.36% (super-annotator evaluations) | Indicates annotation quality |
| Baseline Pipeline Accuracy | 31.87% (evidence required), 50.91% (label only) | DA/RTE-based model |
| Key Bottlenecks | Document and sentence selection | Verified via oracle ablations |
| Motivated Research Directions | Retrieval, multi-hop reasoning, explainability | Applications in AI fact-checking, combating misinformation |

FEVER thus serves as a foundational testbed for research into large-scale, evidence-intensive, and explainable fact verification, emphasizing the joint challenges of scalable, fine-grained retrieval and rigorous inference.
