
Are Large Language Models Reliable Judges? A Study on the Factuality Evaluation Capabilities of LLMs (2311.00681v1)

Published 1 Nov 2023 in cs.CL

Abstract: In recent years, LLMs have gained immense attention due to their notable emergent capabilities, surpassing those seen in earlier language models. A particularly intriguing application of LLMs is their role as evaluators for texts produced by various generative models. In this study, we delve into the potential of LLMs as reliable assessors of factual consistency in summaries generated by text-generation models. Initially, we introduce an innovative approach for factuality assessment using LLMs. This entails employing a single LLM for the entirety of the question-answering-based factuality scoring process. Following this, we examine the efficacy of various LLMs in direct factuality scoring, benchmarking them against traditional measures and human annotations. Contrary to initial expectations, our results indicate a lack of significant correlations between factuality metrics and human evaluations, specifically for GPT-4 and PaLM-2. Notable correlations were only observed with GPT-3.5 across two factuality subcategories. These consistent findings across various factual error categories suggest a fundamental limitation in the current LLMs' capability to accurately gauge factuality.
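The single-LLM, question-answering-based factuality scoring the abstract describes can be sketched roughly as follows. This is a hedged illustration, not the paper's implementation: the `llm` callable stands in for any chat-completion API, and the prompts and the token-overlap scoring rule are assumptions introduced for the example.

```python
# Sketch of a single-LLM, QA-based factuality scorer: one model generates
# questions from the summary, answers them from both the summary and the
# source, and agreement between the answer pairs approximates factual
# consistency. Prompts and the overlap metric are illustrative only.

def token_f1(a, b):
    # Simple token-overlap F1 between two answer strings.
    ta, tb = a.lower().split(), b.lower().split()
    common = sum(min(ta.count(t), tb.count(t)) for t in set(ta))
    if not ta or not tb or common == 0:
        return 0.0
    p, r = common / len(ta), common / len(tb)
    return 2 * p * r / (p + r)


def qa_factuality_score(source, summary, llm, n_questions=3):
    # 1. The same LLM generates questions answerable from the summary.
    questions = llm(
        f"Write {n_questions} questions answerable from this text:\n{summary}"
    ).splitlines()[:n_questions]

    scores = []
    for q in questions:
        # 2. It answers each question using only the summary...
        a_summary = llm(f"Answer from the text only.\nText: {summary}\nQ: {q}")
        # 3. ...and again using only the source document.
        a_source = llm(f"Answer from the text only.\nText: {source}\nQ: {q}")
        # 4. Agreement between the two answers proxies factual consistency.
        scores.append(token_f1(a_summary, a_source))
    return sum(scores) / len(scores) if scores else 0.0
```

A summary fully supported by its source should yield matching answer pairs and a score near 1.0; hallucinated content produces divergent answers and a lower score.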

Authors (4)
  1. Xue-Yong Fu (11 papers)
  2. Md Tahmid Rahman Laskar (30 papers)
  3. Cheng Chen (262 papers)
  4. Shashi Bhushan TN (9 papers)
Citations (11)