
Decoding AI Judgment: How LLMs Assess News Credibility and Bias (2502.04426v1)

Published 6 Feb 2025 in cs.CL, cs.AI, and cs.CY

Abstract: LLMs are increasingly used to assess news credibility, yet little is known about how they make these judgments. While prior research has examined political bias in LLM outputs or their potential for automated fact-checking, their internal evaluation processes remain largely unexamined. Understanding how LLMs assess credibility provides insights into AI behavior and how credibility is structured and applied in large-scale LLMs. This study benchmarks the reliability and political classifications of state-of-the-art LLMs - Gemini 1.5 Flash (Google), GPT-4o mini (OpenAI), and LLaMA 3.1 (Meta) - against structured, expert-driven rating systems such as NewsGuard and Media Bias Fact Check. Beyond assessing classification performance, we analyze the linguistic markers that shape LLM decisions, identifying which words and concepts drive their evaluations. We uncover patterns in how LLMs associate credibility with specific linguistic features by examining keyword frequency, contextual determinants, and rank distributions. Beyond static classification, we introduce a framework in which LLMs refine their credibility assessments by retrieving external information, querying other models, and adapting their responses. This allows us to investigate whether their assessments reflect structured reasoning or rely primarily on prior learned associations.

Authors (5)
  1. Edoardo Loru (6 papers)
  2. Jacopo Nudo (3 papers)
  3. Niccolò Di Marco (13 papers)
  4. Matteo Cinelli (47 papers)
  5. Walter Quattrociocchi (78 papers)

Summary

Overview of "Decoding AI Judgment: How LLMs Assess News Credibility and Bias"

The paper "Decoding AI Judgment: How LLMs Assess News Credibility and Bias" provides a comprehensive examination of the methods employed by LLMs to evaluate the reliability and bias of news content. This paper is motivated by the growing reliance on LLMs for credibility assessments, an area that remains largely unexplored in terms of their internal evaluative mechanisms.

Methodological Approach

The paper benchmarks three state-of-the-art LLMs—Google's Gemini 1.5 Flash, OpenAI's GPT-4o mini, and Meta's LLaMA 3.1—against credibility ratings from established agencies like NewsGuard and Media Bias Fact Check (MBFC). These ratings serve as gold standards due to their structured, expert-driven methodology. The research explores how these LLMs classify 2,302 news outlets along the reliability spectrum and identifies linguistic markers that drive their assessments. The paper goes beyond mere classification by introducing a novel framework where LLMs refine their assessments through interaction with external sources and other models.
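The paper does not reproduce its prompts or scoring code. As a rough illustration only, the sketch below shows how such a benchmark might be wired up in Python: a hypothetical `classify_outlet` call stands in for prompting one of the models, and toy labels stand in for the NewsGuard/MBFC ratings; none of these names come from the paper.

```python
from collections import Counter

# Hypothetical LLM call: in practice this would prompt Gemini 1.5 Flash,
# GPT-4o mini, or LLaMA 3.1 to rate an outlet as "reliable" / "unreliable".
def classify_outlet(llm_client, outlet_name: str) -> str:
    prompt = (
        f"Classify the news outlet '{outlet_name}' as 'reliable' or "
        f"'unreliable' and answer with a single word."
    )
    return llm_client.complete(prompt).strip().lower()  # placeholder client

def agreement_rate(llm_labels: dict[str, str], expert_labels: dict[str, str]) -> float:
    """Fraction of outlets where the LLM label matches the expert label."""
    shared = expert_labels.keys() & llm_labels.keys()
    matches = sum(llm_labels[o] == expert_labels[o] for o in shared)
    return matches / len(shared)

# Toy stand-ins for expert ratings and model outputs.
expert = {"outlet_a": "reliable", "outlet_b": "unreliable", "outlet_c": "reliable"}
model  = {"outlet_a": "reliable", "outlet_b": "unreliable", "outlet_c": "unreliable"}

print(f"agreement: {agreement_rate(model, expert):.0%}")    # 67%
print(Counter((expert[o], model[o]) for o in expert))       # confusion counts
```

The same agreement computation, run per reliability class and per political-leaning bucket, would surface the kind of asymmetries reported in the findings below.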

Key Findings

  1. Classification Accuracy: The LLMs demonstrate high accuracy in identifying "unreliable" news sources, with agreement rates ranging from 85% to 97% compared to human benchmarks. However, there is notable variability in classifying "reliable" sources, particularly by GPT-4o mini, which misclassifies 33% of reliable sources as unreliable.
  2. Political Orientation and Bias: The paper uncovers a systematic bias in LLM reliability assessments: right-leaning news outlets are more frequently classified as unreliable, while center and left-leaning outlets are more often rated reliable than the expert benchmarks indicate.
  3. Keyword Analysis: Analysis of rank-frequency distributions reveals that reliable and unreliable classifications are associated with distinct linguistic markers. Reliable sources are linked to terms indicating neutrality and factual reporting, whereas unreliable ones correlate with words suggestive of bias or sensationalism (a rough sketch of this kind of analysis follows the list).
  4. Agentic Workflow: The paper establishes an agentic framework that equips LLMs with tools to actively seek information and refine their judgments. This setup allows the researchers to investigate whether LLMs rely on structured reasoning or predominantly on past associations.
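The keyword analysis is described only at a high level. A minimal sketch of a rank-frequency comparison between the rationales a model gives for "reliable" versus "unreliable" outlets might look like the following, assuming the rationale texts have already been collected into two lists (the toy snippets here are illustrative, not data from the paper).

```python
import re
from collections import Counter

def rank_frequency(texts: list[str]) -> list[tuple[str, int]]:
    """Token counts sorted by descending frequency (rank 1 = most frequent)."""
    tokens = []
    for text in texts:
        tokens.extend(re.findall(r"[a-z']+", text.lower()))
    return Counter(tokens).most_common()

# Toy rationale snippets standing in for the models' explanations.
reliable_texts   = ["factual reporting with transparent sourcing and neutral tone"]
unreliable_texts = ["sensationalist headlines, partisan framing, unverified claims"]

for label, texts in [("reliable", reliable_texts), ("unreliable", unreliable_texts)]:
    top = rank_frequency(texts)[:5]
    print(label, [(word, count) for word, count in top])
```

Comparing the resulting ranked vocabularies across the two classes is one way to identify which words and concepts drive each verdict, as the paper does with its keyword and contextual analyses.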

Implications for AI Development

The paper contributes significantly to understanding how LLMs carry out evaluation tasks, highlighting areas where these models approximate human judgment and where they diverge. The identification of systematic biases poses critical questions about the influence of LLMs' training data and the potential replication of human biases. Moreover, the exploration of an agentic workflow suggests that LLMs have the potential for improved, context-aware decision-making strategies, opening avenues for developing autonomous evaluative agents.
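The agentic framework itself is described only conceptually. The outline below is a speculative sketch of such a refinement loop, in which an initial judgment is revisited after retrieving external evidence and consulting a second model; the `llm`, `search`, and `peer` callables are placeholders, not the paper's implementation.

```python
from typing import Callable

def refine_assessment(
    outlet: str,
    llm: Callable[[str], str],      # primary model: prompt -> answer
    search: Callable[[str], str],   # retrieval tool: query -> evidence text
    peer: Callable[[str], str],     # second model consulted for an opinion
    rounds: int = 2,
) -> str:
    """Iteratively revise a credibility judgment with retrieved evidence."""
    verdict = llm(f"Is '{outlet}' a reliable news source? Answer briefly.")
    for _ in range(rounds):
        evidence = search(f"credibility of {outlet}")
        second_opinion = peer(f"Assess the credibility of {outlet}.")
        verdict = llm(
            f"Initial judgment: {verdict}\n"
            f"Retrieved evidence: {evidence}\n"
            f"Another model says: {second_opinion}\n"
            f"Revise your judgment of '{outlet}' if warranted."
        )
    return verdict
```

Contrasting the verdicts before and after such a loop is one way to probe whether a model's assessment reflects structured reasoning over new evidence or mostly restates prior learned associations.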

Future Directions

There is a pressing need for refined methodologies that can pinpoint biases in LLMs' decision-making processes, particularly concerning political content. Future research should also aim to integrate human-AI collaborative frameworks that leverage the complementary strengths of humans and LLMs. Such collaborations could enhance the robustness and reliability of news credibility assessments in dynamically evolving information environments.

In conclusion, this paper sheds light on the intricate mechanisms of LLM-based news credibility assessments, revealing both the promise and the limitations of current models in emulating human evaluative processes.
