FactScore: Factual Evaluation Framework

Updated 8 November 2025
  • FactScore is a fine-grained factuality evaluation framework that decomposes texts into atomic claims and verifies each against trusted references.
  • It employs a decompose-then-verify pipeline, using retrievers and validators to assess claim-level factual accuracy in long-form text and structured outputs.
  • Its applications span multilingual summarization, clinical reporting, knowledge graph evaluation, and model alignment, supported by empirical benchmarks in each domain.

FactScore is a fine-grained factuality evaluation framework designed to measure the factual precision of long-form text generation, LLM outputs, and structured knowledge artifacts. By decomposing model outputs into granular atomic facts and verifying each one against a trusted source, FactScore yields interpretable, claim-level measures of factual accuracy. It is widely adopted for English and multilingual assessment, knowledge graph evaluation, clinical report generation, and for calibrating or benchmarking truthfulness in LLMs.

1. Formal Definition and Metric Computation

FactScore operates by decomposing a generated text, summary, knowledge graph, or report into a set of atomic or minimal facts, and then evaluating the proportion supported by a reference source. The canonical computation, as introduced by Min et al. (Min et al., 2023), is:

$$\text{FactScore} = \frac{\#~\text{Supported atomic facts}}{\#~\text{Total atomic facts (excluding irrelevant facts)}}$$

For a model-generated text $M$ decomposed into a set $A$ of atomic facts, with each $a \in A$ labeled as supported (1) or not supported (0), the score is:

$$\text{FactScore}(M) = \frac{1}{|A|}\sum_{a \in A}\mathbb{1}[a\ \text{is supported}]$$

This structure extends naturally to knowledge graphs, where each triple $\tau$ is checked for contextual support:

$$\text{FActScore}^*(\mathcal{G}) = \frac{1}{|\mathcal{G}|}\sum_{\tau \in \mathcal{G}}\mathbb{1}[\tau\ \text{is supported by}\ \mathcal{C}(\tau)]$$

The "supported" verdict is typically established via human annotation or model-based retrieval and entailment, using resources such as Wikipedia, domain ontologies, or clinical label sets, and, for automation, closed- or open-source LLMs as validators (Lage et al., 8 Jul 2025).

2. Methodology: Claim Decomposition and Verification

FactScore is a prototypical "decompose-then-verify" framework. The evaluation pipeline has four principal stages (2406.19415):

  1. Claim/Fact Extraction: Model output is segmented into atomic, independently verifiable facts. For text, this often involves LLM-assisted decomposition at or below the sentence level; for knowledge graphs, each triple is treated as a fact.
  2. Retriever: For each fact, retrieve relevant evidence from the knowledge base (e.g., Wikipedia, PubMed, ground-truth label sets).
  3. Fact Validation (Scoring): Each atomic fact is individually verified against retrieved evidence, either by human annotators or LLM-based validators, yielding a binary label ("supported", "not supported"). Strict versions may delete unverifiable or subjective content before scoring.
  4. Aggregation: The proportion of supported atomic facts among those extracted is the FactScore.
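The four stages compose into a single pipeline. The sketch below uses a toy sentence splitter, a keyword-overlap retriever, and a substring-based "validator" purely for illustration; real pipelines substitute an LLM decomposer, a dense retriever (e.g., GTR-Large), and an LLM or NLI validator, and none of the names here come from the reference implementation:

```python
# Illustrative decompose-then-verify pipeline with toy stand-ins for each stage.
import re


def decompose(text: str) -> list[str]:
    """Stage 1: split the output into (roughly) atomic claims."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def retrieve(claim: str, knowledge_base: list[str]) -> list[str]:
    """Stage 2: fetch candidate evidence passages (toy keyword overlap)."""
    terms = set(claim.lower().split())
    return sorted(knowledge_base,
                  key=lambda p: -len(terms & set(p.lower().split())))[:2]


def validate(claim: str, evidence: list[str]) -> bool:
    """Stage 3: binary supported / not-supported verdict (toy string check)."""
    return any(claim.lower() in passage.lower() for passage in evidence)


def factscore(text: str, knowledge_base: list[str]) -> float:
    """Stage 4: aggregate the proportion of supported claims."""
    claims = decompose(text)
    verdicts = [validate(c, retrieve(c, knowledge_base)) for c in claims]
    return sum(verdicts) / len(verdicts) if verdicts else 0.0


kb = ["Marie Curie won the Nobel Prize in Physics in 1903.",
      "Marie Curie was born in Warsaw."]
generation = ("Marie Curie was born in Warsaw. "
              "Marie Curie won the Nobel Prize in Chemistry in 1903.")
print(factscore(generation, kb))  # 0.5: the second claim finds no support
```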

Automated FactScore pipelines have been optimized for cost and scalability by coupling dense retrievers (e.g., GTR-Large) with LLM prompts for true/false entailment, achieving an error rate of under 2% relative to human annotation on biography generation (Min et al., 2023, Lage et al., 8 Jul 2025).
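As a hedged sketch of how such per-fact validation is typically prompted, the snippet below assembles retrieved evidence followed by a True/False question; the exact wording and the `ask_llm` client are assumptions for illustration, not the prompts from Min et al. (2023):

```python
def build_validation_prompt(fact: str, passages: list[str]) -> str:
    """Assemble retrieved evidence followed by a True/False question."""
    context = "\n\n".join(passages)
    return f"{context}\n\nInput: {fact} True or False?\nOutput:"


def is_supported(fact: str, passages: list[str], ask_llm) -> bool:
    """`ask_llm` is any callable that sends a prompt to an LLM and returns text."""
    answer = ask_llm(build_validation_prompt(fact, passages))
    return answer.strip().lower().startswith("true")


# Stand-in client for demonstration; a real pipeline would call an actual LLM.
mock_llm = lambda prompt: "True"
print(is_supported("Marie Curie was born in Warsaw.",
                   ["Marie Curie was born in Warsaw in 1867."], mock_llm))
```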

3. Practical Applications Across Domains

FactScore is deployed as a factuality metric in a range of contexts:

  • Biography and long-form generation: Benchmarks such as FActScore-Bio (Yaldiz et al., 10 Jul 2025), BioGEN (Zheng, 17 Sep 2025), and the core Min et al. dataset (Min et al., 2023) decompose biographies or comprehensive answers into claims.
  • Summarization: Adaptations extract (subject, relation, object) triples from summaries and compare them to the source documents for factual consistency (Tong et al., 26 Sep 2024).
  • Multilingual Assessment: Pipelines translate generated facts across languages, standardize support checks against a consistent reference (English Wikipedia), and highlight the interplay of resource coverage and LLM proficiency (Shafayat et al., 28 Feb 2024, 2406.19415).
  • Knowledge Graph Construction: FactScore evaluates the factual precision of KG triples extracted from text; each triple is checked for contextual support, ontological correctness, and, optionally, general domain truth (Belova et al., 10 Oct 2025).
  • Clinical Report Generation: FactScore (denoted FactS) is used as a dense, sentence-level reward in RL/RLHF for medical text, comparing each atomic clinical fact to structured gold-standard labels and computing an F-measure over fact-level entailments (Chen et al., 23 Sep 2025); a minimal sketch of this F-measure follows the list.
  • Model Alignment and Editing: Used both as a metric and as a reward for hallucination detection frameworks (e.g., PFME (Deng et al., 29 Jun 2024)) or alignment/fine-tuning methods (e.g., Mask-DPO (Gu et al., 4 Mar 2025), FenCE (Xie et al., 24 Oct 2024), T3 (Tong et al., 26 Sep 2024), DSCC-HS (Zheng, 17 Sep 2025)).
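The fact-level F-measure referenced in the clinical bullet above can be computed over sets of atomic findings. The label names below are hypothetical and the set comparison is a simplification; real systems entail facts from free text against structured labelers (e.g., CheXpert-style label sets):

```python
def fact_f1(predicted: set[str], gold: set[str]) -> float:
    """F-measure between predicted atomic findings and gold-standard labels."""
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)


predicted = {"cardiomegaly", "pleural effusion"}            # facts entailed by the generated report
gold = {"cardiomegaly", "pleural effusion", "atelectasis"}  # gold-standard label set
print(round(fact_f1(predicted, gold), 3))  # 0.8
```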

FactScore is also a target for open-source evaluation frameworks such as OpenFActScore, supporting Hugging Face-compatible LLMs for both atomic fact generation and validation (Lage et al., 8 Jul 2025).

4. Limitations, Vulnerabilities, and Extensions

4.1 Independence and Gaming Risks

FactScore assumes independence among atomic facts, rewarding only the correctness of components. This enables subtle vulnerabilities:

  • Montage Lie and Inter-fact Dependencies: FactScore is blind to narrative manipulations that arrange correct facts in a misleading order, as in the MontageLie benchmark, where it yields AUC-ROC scores below 51%, barely above chance for detecting deceptive summaries (Zheng et al., 21 May 2025).
  • Repetition and Triviality (Gaming): FactScore is susceptible to inflation by repeated or domain-trivial claims (e.g., "X is a person"), and can be gamed by models fine-tuned to produce numerous verifiable but uninformative facts (Jiang et al., 4 Jul 2024). The Core module addresses this through combinatorial subclaim selection that maximizes informativeness and uniqueness among claims, notably dropping adversarial FActScores from 70–85% to 0–40%.
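The snippet below is an illustrative approximation of this idea, not the actual Core algorithm: claims are filtered for triviality and near-duplication before scoring, so repeated or generic facts cannot inflate the score. The informativeness filter and similarity threshold are assumptions for the sketch.

```python
# Illustrative claim selection prior to scoring (not the Core algorithm itself).
def select_claims(claims: list[str],
                  trivial: set[str],
                  sim_threshold: float = 0.8) -> list[str]:
    def jaccard(a: str, b: str) -> float:
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

    selected: list[str] = []
    for claim in claims:
        if claim.lower() in trivial:          # drop domain-trivial claims
            continue
        if any(jaccard(claim, s) >= sim_threshold for s in selected):
            continue                          # drop near-duplicate claims
        selected.append(claim)
    return selected


claims = ["Ada Lovelace is a person.",
          "Ada Lovelace wrote the first published algorithm.",
          "Ada Lovelace wrote the first published algorithm."]
print(select_claims(claims, trivial={"ada lovelace is a person."}))
# ['Ada Lovelace wrote the first published algorithm.']
```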

4.2 Sensitivity to Decomposition

FactScore values are sensitive to the decomposition method. Different LLM-based or rule-based decomposers yield variable subclaim sets, impacting both atomicity and total score (Wanner et al., 18 Mar 2024). Objective decomposer quality can be measured via DecompScore, which checks subclaim coherence with the original sentence and favors highly atomic, faithful decompositions. Russellian-neo-Davidsonian (R-ND) inspired LLM decomposers offer high coverage and atomicity.
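A hedged sketch of a DecompScore-style check is shown below: count the subclaims that are entailed by the originating sentence, rewarding decompositions that are both faithful and highly atomic. The `entails` callable stands in for an NLI model or LLM judge; its name, signature, and the toy string check are assumptions for this sketch.

```python
from typing import Callable


def decomp_score(sentence: str,
                 subclaims: list[str],
                 entails: Callable[[str, str], bool]) -> int:
    """Count subclaims supported by the originating sentence."""
    return sum(1 for claim in subclaims if entails(sentence, claim))


# Toy stand-in for the entailment judge; a real NLI model or LLM judge would
# also accept the second, paraphrased subclaim below.
naive_entails = lambda premise, claim: claim.lower().rstrip(".") in premise.lower()
print(decomp_score("Ada Lovelace was an English mathematician.",
                   ["Ada Lovelace was an English mathematician",
                    "Ada Lovelace was a mathematician"],
                   naive_entails))  # 1 with this naive string check
```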

4.3 Domain and Multilingual Adaptation

In low-resource or specialized domains, FactScore accuracy is bottlenecked by reference-source coverage and retriever quality: sparse Wikipedia coverage or weak retrieval undermines the validity of the metric, particularly in multilingual settings. Expanding the knowledge base to include Internet search results or LLM-generated augmentation partially mitigates this (2406.19415, Shafayat et al., 28 Feb 2024). For clinical applications, strict label-level entailment is favored (Chen et al., 23 Sep 2025).

4.4 Discourse Structure and Dialogue

FactScore, in its canonical form, treats response utterances in isolation and lumps all unverifiable statements as errors. Extensions for conversational or sequential settings—such as VISTA—track dynamic conversational context, categorize types of unverifiability (subjective, out-of-scope, contradicted, abstention), and yield more human-aligned, transparent assessments (Lewis et al., 30 Oct 2025).

5. Empirical Impact and Quantitative Benchmarks

FactScore is a principal metric for factuality benchmarking and alignment; representative empirical results include:

  • ChatGPT (biography generation): FactScore 58% (Min et al., 2023)
  • Mask-DPO (Llama3.1-8B): FactScore 25.56–39.39% (Gu et al., 4 Mar 2025)
  • PFME on Alpaca 13B: improvement of 16.2 percentage points, to 65.7% (Deng et al., 29 Jun 2024)
  • GraphMERT KG extraction: 69.8% (vs. 40.2% for an LLM baseline) (Belova et al., 10 Oct 2025)
  • DSCC-HS on BioGEN: 46.50% (Zheng, 17 Sep 2025)
  • OraPO on CheXpert+: fact-level F1 0.341, recall 0.832 (Chen et al., 23 Sep 2025)
  • Multilingual LLMs: highest FactScore on English generations (Shafayat et al., 28 Feb 2024; 2406.19415)

FactScore improvements track gains from factuality alignment: Mask-DPO outperforms much larger base models, and PFME- and FenCE-based methods yield absolute gains over baselines and prior state of the art.

6. Open Tooling, Reproducibility, and Best Practices

FactScore is disseminated as an open-source package (pip install factscore), with auxiliary resources supporting custom decomposition, annotation, and evaluation (Min et al., 2023, Lage et al., 8 Jul 2025). OpenFActScore enables fully open evaluation pipelines, achieving a Pearson correlation above 0.99 with commercial-model baselines.
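The following usage sketch follows the README of the public FActScore reference implementation; exact arguments and returned fields may differ across versions, and the default configuration requires an OpenAI API key file for retrieval-plus-ChatGPT validation.

```python
# Usage sketch based on the reference implementation (pip install factscore).
from factscore.factscorer import FactScorer

fs = FactScorer(openai_key="api.key")

topics = ["Marie Curie"]                                     # entities or prompts
generations = ["Marie Curie was a physicist and chemist who ..."]  # model outputs

out = fs.get_score(topics, generations, gamma=10)
print(out["score"])                   # FactScore with length penalty
print(out["num_facts_per_response"])  # average number of atomic facts per response
```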

Proper deployment of FactScore-based evaluation requires:

  • High-quality, atomic decomposition (preferably with validated, domain-appropriate decomposers).
  • Robust reference sources, with awareness of domain/resource coverage limitations.
  • Modular construction, allowing for subclaim selection (e.g., Core (Jiang et al., 4 Jul 2024)) and discourse/sequence-aware extensions.
  • Careful interpretation in adversarial, multilingual, or generative settings.

FactScore's modularity and extensibility make it an anchor point for future work in factual precision, claim-level evaluation, and robust truthfulness prediction, with active research into coverage, informativeness, cross-lingual reliability, and discourse-aware extensions (2406.19415, Zheng et al., 21 May 2025, Lewis et al., 30 Oct 2025).
