
FactScore: Factual Evaluation Framework

Updated 8 November 2025
  • FactScore is a fine-grained factuality evaluation framework that decomposes texts into atomic claims and verifies each against trusted references.
  • It employs a decompose-then-verify pipeline, using retrievers and validators to assess claim-level factual accuracy in long-form text and structured data.
  • Its applications span multilingual summarization, clinical reporting, knowledge graph evaluation, and model alignment, backed by robust empirical benchmarks.

FactScore is a fine-grained factuality evaluation framework designed to measure the factual precision of long-form text generation, LLM outputs, and structured knowledge artifacts. By decomposing model outputs into granular atomic facts and verifying each one against a trusted source, FactScore yields interpretable, claim-level measures of factual accuracy. It is widely adopted for English and multilingual assessment, knowledge graph evaluation, clinical report generation, and for calibrating or benchmarking truthfulness in LLMs.

1. Formal Definition and Metric Computation

FactScore operates by decomposing a generated text, summary, knowledge graph, or report into a set of atomic or minimal facts, and then evaluating the proportion supported by a reference source. The canonical computation (Min et al., 2023) is:

$$\text{FactScore} = \frac{\#~\text{Supported atomic facts}}{\#~\text{Total atomic facts (excluding irrelevant facts)}}$$

For a model-generated text $M$ decomposed into a set $A$ of atomic facts, with each $a \in A$ labeled as supported (1) or not supported (0), the score is:

$$\text{FactScore}(M) = \frac{1}{|A|}\sum_{a \in A}\mathbb{1}[a\ \text{is supported}]$$

This structure extends naturally to knowledge graphs, where each triple $\tau$ is checked for contextual support:

$$\text{FActScore}^*(\mathcal{G}) = \frac{1}{|\mathcal{G}|}\sum_{\tau \in \mathcal{G}}\mathbb{I}[\tau \text{ is supported by } \mathcal{C}(\tau)]$$

The "supported" verdict is typically established via human annotation or model-based retrieval and entailment, using resources such as Wikipedia, domain ontologies, or clinical label sets, and, for automation, closed- or open-source LLMs as validators (Lage et al., 8 Jul 2025).

2. Methodology: Claim Decomposition and Verification

FactScore is a prototypical "decompose-then-verify" framework. The evaluation pipeline has four principal stages (2406.19415), sketched in code after the list:

  1. Claim/Fact Extraction: Model output is segmented into atomic, independently verifiable facts. For text, this often involves LLM-assisted decomposition at or below the sentence level; for knowledge graphs, each triple is treated as a fact.
  2. Retriever: For each fact, retrieve relevant evidence from the knowledge base (e.g., Wikipedia, PubMed, ground-truth label sets).
  3. Fact Validation (Scoring): Each atomic fact is individually verified against retrieved evidence, either by human annotators or LLM-based validators, yielding a binary label ("supported", "not supported"). Strict versions may delete unverifiable or subjective content before scoring.
  4. Aggregation: The proportion of supported atomic facts among those extracted is the FactScore.
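
A minimal end-to-end sketch of this loop is shown below. The helper implementations are deliberately naive stand-ins: a real pipeline would replace them with an LLM decomposer, a dense retriever such as GTR, and an LLM entailment validator.

```python
from typing import List, Optional

def decompose(text: str) -> List[str]:
    # Toy stand-in for an LLM decomposer: one "fact" per sentence.
    return [s.strip() for s in text.split(".") if s.strip()]

def retrieve(fact: str, corpus: List[str], k: int = 3) -> List[str]:
    # Toy stand-in for a dense retriever (e.g., GTR): rank passages
    # by word overlap with the fact.
    words = set(fact.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(words & set(p.lower().split())))
    return ranked[:k]

def validate(fact: str, evidence: List[str]) -> Optional[bool]:
    # Toy stand-in for an LLM validator: supported if some passage
    # contains the fact verbatim; a real validator would return None
    # for irrelevant or unverifiable claims.
    return any(fact.lower() in p.lower() for p in evidence)

def factscore_pipeline(generation: str, corpus: List[str]) -> float:
    verdicts = []
    for fact in decompose(generation):               # 1. claim extraction
        evidence = retrieve(fact, corpus)            # 2. retrieval
        verdicts.append(validate(fact, evidence))    # 3. fact validation
    relevant = [v for v in verdicts if v is not None]
    return sum(relevant) / len(relevant)             # 4. aggregation
```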

Automated FactScore pipelines have been optimized for cost and scalability by coupling dense retrievers (e.g., GTR-Large) with LLM prompts for true/false entailment, achieving estimation error under 2% relative to human labels for biography generation (Min et al., 2023, Lage et al., 8 Jul 2025).
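
The validation step of such a pipeline typically reduces to a binary entailment prompt over the retrieved passages. The template below is an illustrative paraphrase of this pattern, not the exact prompt used in the cited papers.

```python
from typing import List

def verification_prompt(fact: str, passages: List[str]) -> str:
    # Illustrative true/false entailment prompt; the wording is an
    # assumption, not the verbatim FactScore prompt.
    context = "\n\n".join(passages)
    return f"{context}\n\nInput: {fact} True or False?\nOutput:"
```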

3. Practical Applications Across Domains

FactScore is deployed as a factuality metric in a range of contexts, including multilingual summarization, clinical report generation, knowledge graph evaluation, and factuality alignment of LLMs.

FactScore is also reimplemented in open-source evaluation frameworks such as OpenFActScore, which supports Hugging Face-compatible LLMs for both atomic fact generation and validation (Lage et al., 8 Jul 2025).

4. Limitations, Vulnerabilities, and Extensions

4.1 Independence and Gaming Risks

FactScore assumes independence among atomic facts, rewarding only the correctness of components. This enables subtle vulnerabilities:

  • Montage Lie and Inter-fact Dependencies: FactScore is blind to narrative manipulations that montage correct facts in misleading order, as in the MontageLie benchmark, yielding AUC-ROC scores below 51%—barely above random for detecting deceptive summaries (Zheng et al., 21 May 2025).
  • Repetition and Triviality (Gaming): FactScore is susceptible to inflation by repeated or domain-trivial claims (e.g., "X is a person"), and can be gamed by models fine-tuned to produce numerous verifiable but uninformative facts (Jiang et al., 4 Jul 2024). The Core module addresses this by combinatorially selecting claims to maximize informativeness and uniqueness, notably dropping adversarial FActScores from 70–85% to 0–40%; a simplified selection sketch follows this list.
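
As a rough illustration of the idea behind Core (not its actual algorithm, which solves a combinatorial optimization), the sketch below greedily keeps claims that clear an informativeness threshold and are not near-duplicates of already-kept claims; `informativeness` is a hypothetical scoring function.

```python
from typing import Callable, List

def select_claims(
    claims: List[str],
    informativeness: Callable[[str], float],
    min_info: float = 0.5,
) -> List[str]:
    """Greedy approximation of Core-style claim selection."""
    kept: List[str] = []
    # Consider the most informative claims first.
    for claim in sorted(claims, key=informativeness, reverse=True):
        if informativeness(claim) < min_info:
            continue  # drop trivial claims such as "X is a person"
        tokens = set(claim.lower().split())
        near_duplicate = any(
            len(tokens & set(k.lower().split())) / max(len(tokens), 1) > 0.8
            for k in kept
        )
        if not near_duplicate:  # drop repeated claims
            kept.append(claim)
    return kept
```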

4.2 Sensitivity to Decomposition

FactScore values are sensitive to the decomposition method. Different LLM-based or rule-based decomposers yield variable subclaim sets, affecting both atomicity and the final score (Wanner et al., 18 Mar 2024). Decomposer quality can be measured objectively via DecompScore, which checks subclaim coherence with the original sentence and favors highly atomic, faithful decompositions; Russellian-neo-Davidsonian (R-ND) inspired LLM decomposers offer high coverage and atomicity. The toy example below illustrates the sensitivity.
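
As a toy illustration, assume the only error in a generation is a wrong birth year. A coarse decomposition buries the error inside a compound claim, while an atomic decomposition isolates it, so the two yield different scores for the same text.

```python
from typing import List

def score(verdicts: List[bool]) -> float:
    return sum(verdicts) / len(verdicts)

# Text: "Einstein, born in Ulm in 1878, developed general relativity."
# (The year is wrong: Einstein was born in 1879.)

# Coarse decomposition: the wrong year sinks a compound subclaim.
coarse = [False, True]        # [born in Ulm in 1878, developed GR]
# Atomic decomposition: the error is isolated in one subclaim.
atomic = [True, False, True]  # [born in Ulm, born in 1878, developed GR]

print(score(coarse))  # 0.5
print(score(atomic))  # 0.67 (rounded)
```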

4.3 Domain and Multilingual Adaptation

In low-resource or specialized domains, FactScore accuracy is bottlenecked by the quality of the reference source and retriever: Wikipedia sparsity or weak retrieval undermines the validity of the metric, particularly in multilingual settings. Expanding the knowledge base to include Internet search results or LLM-generated augmentation partially mitigates this (2406.19415, Shafayat et al., 28 Feb 2024). For clinical applications, strict label-level entailment is favored (Chen et al., 23 Sep 2025).

4.4 Discourse Structure and Dialogue

FactScore, in its canonical form, treats each response utterance in isolation and lumps all unverifiable statements together as errors. Extensions for conversational or sequential settings, such as VISTA, track dynamic conversational context, categorize types of unverifiability (subjective, out-of-scope, contradicted, abstention), and yield more human-aligned, transparent assessments (Lewis et al., 30 Oct 2025).

5. Empirical Impact and Quantitative Benchmarks

FactScore is a principal metric for factuality benchmarking and alignment; representative quantitative results include:

Model/Context                  | FactScore (%) / result           | Reference
ChatGPT (biography generation) | 58                               | (Min et al., 2023)
Mask-DPO (Llama3.1-8B)         | 25.56–39.39                      | (Gu et al., 4 Mar 2025)
PFME on Alpaca 13B             | +16.2 pp (to 65.7)               | (Deng et al., 29 Jun 2024)
GraphMERT KG extraction        | 69.8 (vs. 40.2 for LLM baseline) | (Belova et al., 10 Oct 2025)
DSCC-HS (BioGEN)               | 46.50                            | (Zheng, 17 Sep 2025)
OraPO (CheXpert+)              | 0.341 F1, 0.832 recall           | (Chen et al., 23 Sep 2025)
Multilingual LLMs              | highest for English              | (Shafayat et al., 28 Feb 2024, 2406.19415)

FactScore improvements often track factuality alignment directly: Mask-DPO outperforms much larger base models, and PFME- and FenCE-based methods yield absolute gains over baselines and prior SOTA.

6. Open Tooling, Reproducibility, and Best Practices

FactScore is disseminated as an open-source package (pip install factscore), with auxiliary resources supporting custom decomposition, annotation, and evaluation (Min et al., 2023, Lage et al., 8 Jul 2025). OpenFActScore enables fully open evaluation pipelines, with >0.99 Pearson correlation to commercial benchmarks.
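
For orientation, a minimal usage sketch of the Python package follows. The class and method names reflect the project's README, but versions differ; treat the exact interface as an assumption and check the repository before relying on it.

```python
# Hypothetical usage sketch of the open-source factscore package;
# exact class/method names and arguments may differ across versions.
from factscore.factscorer import FactScorer

fs = FactScorer(openai_key="api.key")  # path to an OpenAI key file
result = fs.get_score(
    topics=["Albert Einstein"],
    generations=["Albert Einstein was a physicist born in Ulm in 1879."],
)
print(result["score"])  # fraction of supported atomic facts
```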

Proper deployment of FactScore-based evaluation requires:

  • High-quality, atomic decomposition (preferably with validated, domain-appropriate decomposers).
  • Robust reference sources, with awareness of domain/resource coverage limitations.
  • Modular construction, allowing for subclaim selection (e.g., Core (Jiang et al., 4 Jul 2024)) and discourse/sequence-aware extensions.
  • Careful interpretation in adversarial, multilingual, or generative settings.

FactScore's modularity and extensibility make it an anchor point for future work in factual precision, claim-level evaluation, and robust truthfulness prediction, with active research into coverage, informativeness, cross-lingual reliability, and discourse-aware extensions (2406.19415, Zheng et al., 21 May 2025, Lewis et al., 30 Oct 2025).
