
FActScore: Metric for Factual Precision in LLMs

Updated 8 August 2025
  • FActScore is a metric that quantifies factual precision in LLM outputs by decomposing text into atomic facts and verifying each against reliable sources.
  • It utilizes a four-step pipeline—atomic fact generation, evidence retrieval, fact validation, and score computation—to ensure fine-grained, accurate evaluations.
  • The method achieves high agreement with human evaluators, supports multilingual assessments, and is robust against adversarial manipulations in generated content.

FActScore is a fine-grained metric for assessing the factual precision of long-form text generated by LLMs. It introduces an evaluation paradigm that decomposes generated outputs into atomic factual units and computes the proportion that is verifiably supported by external knowledge sources. Its conceptual and technical evolution has led to broad adoption for evaluating factuality in both research and practical LLM deployments.

1. Metric Definition and Conceptual Framework

FActScore is defined as the ratio between the number of atomic facts (“supported claims”) in a generated output that are validated against a reliable reference (e.g., Wikipedia) and the total number of atomic facts identified:

$$\mathrm{FActScore} = \frac{n_s}{N} \times 100\%$$

where $n_s$ denotes the count of supported atomic facts and $N$ is the total number of extracted atomic facts.

Atomic facts are short, context-independent factual statements derived from the generated text using sentence splitting and LLM-driven prompts. Rather than issuing binary judgments at the document or sentence level, FActScore isolates factuality granularly, enabling nuanced evaluation of outputs that may contain both supported and unsupported information (Min et al., 2023).
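The score computation itself is straightforward once validation labels are available. The following sketch is purely illustrative; the label strings and function name are assumptions, not part of the released toolkit:

```python
from typing import List

def factscore(labels: List[str]) -> float:
    """Percentage of atomic facts labeled "Supported".

    `labels` holds one validation label per extracted atomic fact,
    e.g. ["Supported", "Not-supported", "Supported"].
    """
    if not labels:
        return 0.0  # convention for outputs with no extracted facts may vary
    n_supported = sum(1 for label in labels if label == "Supported")
    return 100.0 * n_supported / len(labels)

# Example: 3 of 4 atomic facts are verified against the reference source
print(factscore(["Supported", "Supported", "Not-supported", "Supported"]))  # 75.0
```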

2. Decompose-Then-Verify Pipeline

The FActScore pipeline involves four major steps:

  1. Atomic Fact Generation: Each generated passage is split into sentences and further decomposed into atomic facts using an LLM (e.g., InstructGPT, ChatGPT, or an open-source alternative). The decomposition is guided by instruction templates to ensure that resulting facts are independent and minimal.
  2. Evidence Retrieval: A dense retriever model (e.g., a GTR-based passage retriever) extracts relevant knowledge snippets from an external source (primarily Wikipedia) for the target entity.
  3. Atomic Fact Validation: Each atomic fact is paired with retrieved evidence and classified, by a human expert or an LLM, as “Supported,” “Not-supported,” or “Irrelevant.” Automated variants use masked language model probabilities with a threshold (e.g., 0.3).
  4. Score Computation: The FActScore is computed as the percentage of atomic facts labeled “Supported.”

This modular approach can be instantiated with human or automated evaluators. Notably, the automated pipeline achieves less than 2% error relative to human annotation (Min et al., 2023).
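A schematic of the decompose-then-verify loop is sketched below; the callables `decompose`, `retrieve`, and `validate` stand in for whichever LLM, dense retriever, and verifier a particular instantiation uses, and are assumptions rather than the reference implementation:

```python
from typing import Callable, List

def factscore_pipeline(
    generation: str,
    decompose: Callable[[str], List[str]],       # Step 1: LLM-driven atomic fact generation
    retrieve: Callable[[str], List[str]],        # Step 2: dense retrieval from the knowledge source
    validate: Callable[[str, List[str]], bool],  # Step 3: True if the fact is supported by the evidence
) -> float:
    """Run the four-step FActScore pipeline on a single generated passage."""
    atomic_facts = decompose(generation)
    if not atomic_facts:
        return 0.0
    supported = sum(
        1 for fact in atomic_facts if validate(fact, retrieve(fact))
    )
    return 100.0 * supported / len(atomic_facts)  # Step 4: score computation
```

Because each stage is a swappable component, the same skeleton accommodates human annotators or automated LLM verifiers in the validation slot.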

3. Human and Automated Evaluation

Human annotation remains the calibration gold standard, involving qualified fact-checkers who label each atomic fact after consulting referenced evidence. Reported inter-annotator agreement rates were 96% (InstructGPT), 90% (ChatGPT), and 88% (PerplexityAI) on biography tasks, establishing high reliability of the fine-grained annotation protocol (Min et al., 2023).

Automated fact validation leverages retrieval-augmented LLMs for both atomic fact generation and verification, reporting micro-level precision, recall, and F1 over unsupported facts:

$$\mathrm{Precision} = \frac{|\text{Predicted Unsupported} \cap \text{Gold Unsupported}|}{|\text{Predicted Unsupported}|}$$

$$\mathrm{Recall} = \frac{|\text{Predicted Unsupported} \cap \text{Gold Unsupported}|}{|\text{Gold Unsupported}|}$$

$$\mathrm{F1}_{\mathrm{micro}} = \frac{2 \cdot P \cdot R}{P + R}$$

Such automated systems enable scalable evaluation over thousands of generations, with high correlation (Pearson $> 0.99$) between open-source implementations and proprietary benchmarks (Lage et al., 8 Jul 2025).
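These error-detection metrics can be computed directly from predicted and gold label sets. In the sketch below, facts are referenced by their index in the extracted atomic-fact list; the function name and indexing scheme are illustrative assumptions:

```python
from typing import Set, Tuple

def unsupported_error_metrics(
    pred_unsupported: Set[int], gold_unsupported: Set[int]
) -> Tuple[float, float, float]:
    """Micro-level precision, recall, and F1 for flagging unsupported atomic facts."""
    tp = len(pred_unsupported & gold_unsupported)
    precision = tp / len(pred_unsupported) if pred_unsupported else 0.0
    recall = tp / len(gold_unsupported) if gold_unsupported else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: the evaluator flags facts 1, 4, 7; annotators flag facts 1, 4, 9.
p, r, f1 = unsupported_error_metrics({1, 4, 7}, {1, 4, 9})  # 0.67, 0.67, 0.67
```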

4. Multilingual Extensions and Knowledge Source Limitations

While the standard pipeline evaluates English text against English Wikipedia, recent works extend FActScore to multilingual scenarios. Generated content in various languages is translated into English for fact extraction and verification, standardizing the reference knowledge source (Shafayat et al., 28 Feb 2024; Chataigner et al., 23 Oct 2024). This approach controls for inconsistencies in non-English Wikipedia coverage and helps mitigate cascading translation errors.
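One way to realize this translate-then-verify strategy is to wrap an existing English pipeline behind a translation step; `translate_to_english` and `english_factscore` below are placeholders for whichever MT system and scorer are used, not components named in the cited works:

```python
from typing import Callable

def multilingual_factscore(
    generation: str,
    source_lang: str,
    translate_to_english: Callable[[str], str],
    english_factscore: Callable[[str], float],
) -> float:
    """Score non-English output against the English knowledge source
    by translating it before decomposition and verification."""
    text_en = generation if source_lang == "en" else translate_to_english(generation)
    return english_factscore(text_en)
```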

However, for medium and low-resource languages, limited Wikipedia coverage and retrieval performance can introduce underestimation or overestimation biases in FActScore (2406.19415). Mitigation strategies include:

  • Expanding the number of retrieved passages.
  • Using an Internet-wide search (e.g., Google API) as the knowledge corpus.
  • Augmenting the knowledge source with LLM-generated context (e.g., prompting GPT-4 for clarifications).

Even with such enhancements, true factuality assessment remains bounded by knowledge coverage, motivating ongoing work in dataset expansion and retrieval augmentation.

5. Robustness, Decomposition, and Manipulation

The accuracy of FActScore is sensitive to the method of atomic fact decomposition. Different strategies—semantic parses, LLM prompt variants, Russellian/Neo-Davidsonian breakdowns—produce variable sets of atomic facts, influencing score reliability (Wanner et al., 18 Mar 2024). Decomposition quality is independently measured via DecompScore, which quantifies atomicity and information coverage of claim splits.

The system is also vulnerable to adversarial manipulation; repeated trivial or tautological facts can artificially inflate a naive FActScore. Plug-and-play filtering modules such as Core apply combinatorial subclaim selection and informativeness weighting to suppress repetition and reward only unique, informative facts (Jiang et al., 4 Jul 2024).
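The effect of such filtering can be illustrated with a far simpler repetition guard than Core's combinatorial selection; the sketch below merely drops exact-duplicate subclaims before scoring and should not be read as the Core algorithm itself:

```python
from typing import List

def deduplicate_facts(atomic_facts: List[str]) -> List[str]:
    """Drop repeated atomic facts so trivially restated claims cannot inflate the score.

    Core goes further, selecting a non-redundant subset of subclaims and
    weighting them by informativeness.
    """
    seen = set()
    unique = []
    for fact in atomic_facts:
        key = fact.strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(fact)
    return unique
```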

6. Comparative Performance and Applications

Extensive evaluations reveal that commercial models such as GPT-4 and ChatGPT outperform public LLMs (e.g., Alpaca, Vicuna) on FActScore, though absolute values remain modest (ChatGPT ≈ 58%). Modular approaches for hallucination detection and editing (e.g., PFME) improve FActScore by up to 16.2 percentage points on specific benchmarks (Deng et al., 29 Jun 2024). Iterative fine-tuning frameworks (e.g., Mask-DPO) that exploit sentence-level factuality masks also yield substantial factuality gains (e.g., from 49.19% to 77.53%) (Gu et al., 4 Mar 2025), and new critique-based evaluation/fine-tuning protocols using FenCE show increases of 14–16% over baseline (Xie et al., 24 Oct 2024).

Open-source variants such as OpenFActScore (Lage et al., 8 Jul 2025) democratize access, supporting any Hugging Face–compatible model and reporting BERTScore-F1 and error-rate metrics that closely reproduce the original FActScore benchmarks.

Applications of FActScore span benchmarking the factual precision of commercial and open-source LLMs, detecting and editing hallucinations in long-form generations, supervising factuality-oriented fine-tuning, and evaluating multilingual generation.

7. Open Challenges and Future Directions

FActScore’s decomposition-centric, fact-level approach cannot detect manipulations that reorder or montage otherwise true statements into deceptive narratives. Benchmark studies (MontageLie) demonstrate that both fine-grained and coarse-grained evaluators—including FActScore—can be defeated in such cases, with AUC-ROC values falling below 65% (Zheng et al., 21 May 2025). Extensions such as DoveScore jointly verify factual accuracy and event-order consistency, yielding superior robustness.

Other emerging frontiers include:

  • Integration of Core-type filtering to prevent gaming of factual-precision scores across domains (Jiang et al., 4 Jul 2024).
  • Modular open-source pipelines for atomic fact handling and validation (Lage et al., 8 Jul 2025).
  • Multilingual and cross-modal factual evaluation, accounting for cultural and knowledge coverage variances (2406.19415; Chataigner et al., 23 Oct 2024).
  • RL-based online alignment frameworks incorporating factual precision, detail level, and answer relevance as reward signals, with empirically validated improvements in factuality and informativeness (Chen et al., 7 Aug 2025).

In summary, FActScore has established itself as a pivotal metric for evaluating and advancing factual accuracy in long-form LLM outputs. Its ongoing evolution, including advances in decomposition quality, multilingual adaptation, adversarial robustness, and compositional evaluation, continues to drive progress in the factual alignment and deployment of trustworthy LLMs.