
LLM-Generated Fake News Induces Truth Decay in News Ecosystem: A Case Study on Neural News Recommendation (2504.20013v2)

Published 28 Apr 2025 in cs.CL, cs.CY, and cs.IR

Abstract: Online fake news moderation now faces a new challenge brought by the malicious use of LLMs in fake news production. Though existing works have shown LLM-generated fake news is hard to detect from an individual aspect, it remains underexplored how its large-scale release will impact the news ecosystem. In this study, we develop a simulation pipeline and a dataset with ~56k generated news of diverse types to investigate the effects of LLM-generated fake news within neural news recommendation systems. Our findings expose a truth decay phenomenon, where real news is gradually losing its advantageous position in news ranking against fake news as LLM-generated news is involved in news recommendation. We further provide an explanation about why truth decay occurs from a familiarity perspective and show the positive correlation between perplexity and news ranking. Finally, we discuss the threats of LLM-generated fake news and provide possible countermeasures. We urge stakeholders to address this emerging challenge to preserve the integrity of news ecosystems.


Summary

The paper "LLM-Generated Fake News Induces Truth Decay in News Ecosystem: A Case Study on Neural News Recommendation" presents a comprehensive investigation into the implications of fake news generated by LLMs on neural news recommendation systems. The research meticulously explores the phenomenon termed "Truth Decay," where the preferential ranking of real news gradually deteriorates in comparison to fake news as LLM-generated content infiltrates various components of recommendation systems.

Key Findings and Methodology

The paper uses a large-scale dataset of approximately 56k LLM-generated news items spanning diverse types and scenarios, alongside human-written news. The authors construct this dataset by generating news with varying degrees of LLM involvement, categorized into levels such as paraphrasing, rewriting, and conditional creation. Two LLMs, gpt-4o-mini and Llama-3.1, are used for generation. With this dataset, the authors run simulation experiments to evaluate the impact of LLM-generated news when it is introduced at different stages: as candidate items, in user interaction history, and eventually within training data.
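
To make the generation setup concrete, the following is a minimal, hedged sketch of producing news variants at different degrees of LLM involvement with the OpenAI chat API. The level names mirror those described above, but the prompt templates and function names are illustrative assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch (not the paper's actual pipeline): generating news
# variants at different degrees of LLM involvement via the OpenAI API.
# The prompt templates below are assumptions for exposition only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

LEVEL_PROMPTS = {
    # Lower involvement: lightly paraphrase an existing human-written article.
    "paraphrase": "Paraphrase the following news article, preserving all facts:\n\n{article}",
    # Medium involvement: rewrite the article as a fresh report on the same event.
    "rewrite": "Rewrite the following news article as a new report on the same event:\n\n{article}",
    # Higher involvement: create a new article conditioned only on key facts.
    "conditional_creation": "Write a news article based only on these key facts:\n\n{facts}",
}

def generate_variant(level: str, **fields) -> str:
    """Generate one news variant at the requested involvement level."""
    prompt = LEVEL_PROMPTS[level].format(**fields)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # one of the two generators named in the paper
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content

# Example (placeholder input text):
# variant = generate_variant("paraphrase", article=original_article_text)
```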

The paper evaluates two neural news recommendation models, LSTUR and NRMS, using metrics such as Mean Reciprocal Rank (MRR) and normalized Discounted Cumulative Gain (nDCG). A notable observation is that the ranking advantage real news originally holds over fake news diminishes, and the effect worsens as LLM-generated content becomes more involved. The authors characterize this trend as Truth Decay, which becomes especially evident once generated news penetrates user histories and training data at scale.
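
For reference, below is a minimal, generic implementation of the two ranking metrics mentioned above; it is not the paper's evaluation code, and the example relevance labels are invented for illustration.

```python
# Minimal sketch of MRR and nDCG@k for a single ranked candidate list,
# where 1 marks the relevant (e.g., clicked) item and 0 marks the rest.
import math
from typing import Sequence

def reciprocal_rank(relevances: Sequence[int]) -> float:
    """Reciprocal rank of the first relevant item (MRR averages this over impressions)."""
    for rank, rel in enumerate(relevances, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def dcg(relevances: Sequence[int], k: int) -> float:
    return sum(rel / math.log2(i + 1) for i, rel in enumerate(relevances[:k], start=1))

def ndcg(relevances: Sequence[int], k: int) -> float:
    """Normalized Discounted Cumulative Gain at cutoff k."""
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0

# Example: the relevant item sits at rank 3 in a list of five candidates.
ranking = [0, 0, 1, 0, 0]
print(reciprocal_rank(ranking))  # 0.333...
print(ndcg(ranking, k=5))        # 0.5
```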

Analysis and Implications

One of the salient contributions of this paper is the identification of perplexity, which measures how predictable, and thus how familiar, a text is to a language model, as a key factor behind the observed Truth Decay. LLM-generated fake news exhibits lower perplexity than human-written fake news, indicating closer alignment with the model's intrinsic biases and preferences. This insight suggests that current recommendation systems may need re-calibration to mitigate unintended biases favoring machine-generated content.
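
As a hedged illustration of the measure itself, the sketch below computes perplexity with an off-the-shelf causal language model (GPT-2 is used here purely for convenience; the paper's choice of scoring model may differ). Lower perplexity means the text is more predictable to the model.

```python
# Compute perplexity of a text with a pretrained causal LM (illustrative only).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing the inputs as labels yields the average next-token cross-entropy.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))

# Under the paper's finding, one would expect, roughly:
# perplexity(llm_generated_fake_article) < perplexity(human_written_fake_article)
```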

The implications of these findings are multifaceted. Practically, they point to an urgent need for stakeholders to implement robust mechanisms that preserve news ecosystem integrity, for example through stronger safety measures in LLM use and heightened awareness and resistance strategies within recommendation systems. Theoretically, they prompt a reconsideration of current recommendation frameworks and draw attention to vulnerabilities inherent in systems that ingest dynamic AI-generated text.

Future Directions

The paper outlines several avenues for future exploration, including the examination of fully autonomous LLM news generation scenarios (Level 5 automation) and the integration of credibility assessments into recommendation models. Furthermore, monitoring real-world LLM-assisted news creation campaigns could provide empirical insight into evolving threats posed by AI-driven misinformation.

Ultimately, the paper calls for a multi-disciplinary effort to address the challenges posed by LLM-generated news, urging collaboration across computational, ethical, and regulatory domains to safeguard the fidelity and reliability of information ecosystems susceptible to such technology-mediated manipulation.
