MPCG: Multi-Round Persona-Conditioned Generation for Modeling the Evolution of Misinformation with LLMs (2509.16564v1)

Published 20 Sep 2025 in cs.CL and cs.SI

Abstract: Misinformation evolves as it spreads, shifting in language, framing, and moral emphasis to adapt to new audiences. However, current misinformation detection approaches implicitly assume that misinformation is static. We introduce MPCG, a multi-round, persona-conditioned framework that simulates how claims are iteratively reinterpreted by agents with distinct ideological perspectives. Our approach uses an uncensored LLM to generate persona-specific claims across multiple rounds, conditioning each generation on outputs from the previous round, enabling the study of misinformation evolution. We evaluate the generated claims through human and LLM-based annotations, cognitive effort metrics (readability, perplexity), emotion evocation metrics (sentiment analysis, morality), clustering, feasibility, and downstream classification. Results show strong agreement between human and GPT-4o-mini annotations, with higher divergence in fluency judgments. Generated claims require greater cognitive effort than the original claims and consistently reflect persona-aligned emotional and moral framing. Clustering and cosine similarity analyses confirm semantic drift across rounds while preserving topical coherence. Feasibility results show a 77% feasibility rate, confirming suitability for downstream tasks. Classification results reveal that commonly used misinformation detectors experience macro-F1 performance drops of up to 49.7%. The code is available at https://github.com/bcjr1997/MPCG

Summary

  • The paper presents MPCG, a framework that simulates the dynamic evolution of misinformation through multi-round, persona-conditioned claim generation.
  • Its evaluation shows strong agreement between human annotators and GPT-4o-mini on the quality of generated claims, with the largest divergence in fluency judgments.
  • Experiments reveal that ideological conditioning leads to semantic drift in claims, significantly reducing classifier reliability.

MPCG: Multi-Round Persona-Conditioned Generation for Modeling Misinformation Evolution with LLMs

The paper introduces MPCG, a framework designed to simulate the dynamic evolution of misinformation through multi-round, persona-conditioned claim generation. The method models how misinformation adapts across different ideological perspectives using an uncensored LLM. The paper evaluates the framework's effectiveness through extensive experiments, highlighting its potential to stress-test current misinformation detection approaches.

Framework Overview

Multi-Round Persona-Conditioned Generation

MPCG iteratively generates claims through distinct ideological personas, such as Democrat, Republican, and Moderate, to model the stylistic and semantic evolution of misinformation. The process involves three main components (a minimal sketch of the generation loop follows the list):

  1. Dataset Curation: Utilizes articles from PolitiFact to gather misinformation sources and corresponding fact-checking evidence.
  2. Claim Generation: Employs an LLM to produce persona-aligned claims across three rounds of generation, each conditioned on the outputs from previous rounds and the original claim.
  3. Claim Labeling: Assigns veracity labels (True, Half-True, False) to generated claims for evaluation in downstream tasks.
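The sketch below shows how such a multi-round loop could be wired up against an OpenAI-compatible chat endpoint. The persona descriptions, prompt wording, model identifier, and helper names are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
# Minimal sketch of a multi-round, persona-conditioned generation loop.
# Prompts, persona descriptions, and the model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible endpoint serving the chosen LLM

PERSONAS = {
    "Democrat": "You are a left-leaning partisan who reframes claims to fit progressive values.",
    "Republican": "You are a right-leaning partisan who reframes claims to fit conservative values.",
    "Moderate": "You are a political moderate who reframes claims in cautious, neutral terms.",
}

def generate_claim(persona_desc: str, previous_claim: str, original_claim: str) -> str:
    """Ask the model to reinterpret a claim from one persona's perspective."""
    response = client.chat.completions.create(
        model="uncensored-llm",  # placeholder model identifier
        messages=[
            {"role": "system", "content": persona_desc},
            {"role": "user", "content": (
                f"Original claim: {original_claim}\n"
                f"Claim from the previous round: {previous_claim}\n"
                "Rewrite this claim as you would share it with your audience."
            )},
        ],
    )
    return response.choices[0].message.content.strip()

def run_mpcg(original_claim: str, rounds: int = 3) -> dict:
    """Each round conditions on the previous round's outputs and the original claim."""
    history = {0: {p: original_claim for p in PERSONAS}}
    for r in range(1, rounds + 1):
        history[r] = {
            persona: generate_claim(desc, history[r - 1][persona], original_claim)
            for persona, desc in PERSONAS.items()
        }
    return history
```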

The framework's architecture and the interactions among these components are illustrated in Figure 1.

Figure 1: Overview of the MPCG Framework

Persona and Role Curation

MPCG defines personas based on political typologies, capturing distinct ideological perspectives within the American political spectrum. These roles are embedded into the claim generation process to simulate how misinformation might be framed differently by various ideological agents. This role-playing aspect is crucial for reproducing belief-driven claim adaptations.
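One lightweight way to encode such typology-derived roles is as structured persona records that are rendered into system prompts at generation time. The field names and the example group below are assumptions for illustration, not the paper's exact persona definitions.

```python
# Illustrative sketch of turning a political-typology entry into a role-playing
# system prompt. Field names and the example persona are assumptions.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str              # typology group label
    leaning: str           # Democrat / Republican / Moderate
    core_values: list[str]
    audience: str

    def to_system_prompt(self) -> str:
        values = ", ".join(self.core_values)
        return (
            f"You are a {self.leaning} voter of the '{self.name}' type. "
            f"You care most about {values} and you write for {self.audience}. "
            "Stay in character: reinterpret any claim through this worldview."
        )

example = Persona(
    name="Progressive Left",
    leaning="Democrat",
    core_values=["social justice", "government intervention"],
    audience="younger, highly engaged online readers",
)
print(example.to_system_prompt())
```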

Experimental Evaluation

The paper outlines a comprehensive evaluation of MPCG, focusing on several dimensions such as human and model agreement, cognitive effort, emotional and moral framings, and classifier robustness.

Human and Model Evaluation

Utilizing both human annotators and automated assessments via GPT-4o-mini, the paper evaluates the quality of generated misinformation claims based on role-playing consistency, content relevance, fluency, and factuality. Notably, GPT-4o-mini showed strong alignment with human raters in most dimensions, albeit with higher divergence in fluency judgments (Figure 2).

Figure 2: Role-Playing Consistency scores between human annotators and GPT-4o-mini
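One standard way to quantify this kind of human-model agreement on ordinal rating scales is a chance-corrected statistic such as weighted Cohen's kappa; the statistic actually used in the paper may differ, and the ratings below are hypothetical.

```python
# Hypothetical 1-5 ratings of the same claims by a human annotator and by
# GPT-4o-mini; quadratic-weighted Cohen's kappa rewards near-misses on an
# ordinal scale. Values are illustrative, not from the paper.
from sklearn.metrics import cohen_kappa_score

human_ratings = [4, 5, 3, 4, 2, 5, 4, 4]
model_ratings = [4, 5, 3, 3, 2, 5, 4, 4]

kappa = cohen_kappa_score(human_ratings, model_ratings, weights="quadratic")
print(f"Quadratic-weighted Cohen's kappa: {kappa:.3f}")
```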

Claim Characteristics

The analysis of cognitive and emotional metrics reveals that generated claims exhibit higher syntactic complexity and persona-aligned moral framing compared to the original claims. Sentiment analysis shows that the Democrat and Republican personas tend to produce claims with more negative sentiment, while the Moderate persona produces more neutral ones (Figure 3).

Figure 3: Semantic clusters of Round 1 generated claims (Circle), Round 2 generated claims (Plus), Round 3 generated claims (Square), and the original claims (Triangle). Each color represents a group of claims associated with the same original PolitiFact URL.
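The sketch below illustrates two of the measurements described above, readability as a proxy for cognitive effort and embedding cosine similarity as a proxy for semantic drift, using off-the-shelf libraries. The embedding model choice and the example texts are assumptions; the paper's exact metrics and pipeline may differ.

```python
# Illustrative readability and semantic-drift measurements for one claim and
# its round-by-round rewrites. All texts here are placeholders.
import textstat
from sentence_transformers import SentenceTransformer, util

original = "Original PolitiFact-checked claim text ..."
rewrites = [
    "Round-1 persona rewrite ...",
    "Round-2 persona rewrite ...",
    "Round-3 persona rewrite ...",
]

# Readability: lower Flesch Reading Ease suggests greater cognitive effort.
for text in [original] + rewrites:
    print(f"{textstat.flesch_reading_ease(text):6.1f}  {text[:40]}")

# Semantic drift: cosine similarity to the original should stay high enough to
# preserve the topic while decreasing as framing and wording shift each round.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
emb_orig = encoder.encode(original, convert_to_tensor=True)
emb_rewrites = encoder.encode(rewrites, convert_to_tensor=True)
for i, sim in enumerate(util.cos_sim(emb_orig, emb_rewrites).squeeze(0), start=1):
    print(f"Round {i} vs original: {float(sim):.3f}")
```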

Classification Robustness

The paper demonstrates that commonly used misinformation classifiers suffer significant performance degradation when exposed to transformed claim variants from MPCG. The semantic drift introduced by persona conditioning leads to substantial drops in macro-F1 scores, highlighting vulnerabilities in current detection systems.
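A minimal sketch of this kind of robustness check, assuming a simple TF-IDF plus logistic-regression detector and placeholder texts: fit on original-style claims, then compare macro-F1 on original versus persona-evolved variants. The actual detectors, data splits, and label sets in the paper differ.

```python
# Toy robustness check: the drop in macro-F1 between original and evolved
# claims is the quantity of interest. All texts and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

train_texts = [
    "placeholder original claim 1", "placeholder original claim 2",
    "placeholder original claim 3", "placeholder original claim 4",
]
train_labels = ["False", "True", "Half-True", "False"]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
detector.fit(train_texts, train_labels)

def macro_f1(texts, labels):
    return f1_score(labels, detector.predict(texts), average="macro")

orig_texts = ["placeholder original claim 5", "placeholder original claim 6"]
evolved_texts = ["persona-evolved rewrite of claim 5", "persona-evolved rewrite of claim 6"]
labels = ["False", "True"]  # veracity labels carried over from the original claims

drop = macro_f1(orig_texts, labels) - macro_f1(evolved_texts, labels)
print(f"Macro-F1 drop on evolved claims: {drop:.3f}")
```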

Conclusion and Future Directions

The paper concludes that MPCG effectively models misinformation evolution, providing a robust framework for testing the limits of existing Automated Fact Checking systems. Future work could explore multilingual and multimodal extensions to broaden the applicability of the framework. Additionally, developing standardized metrics for generation quality evaluation could reduce reliance on subjective assessments.

In summary, MPCG offers a novel approach for simulating the ideological adaptation of misinformation, emphasizing the need for more resilient detection methods to address the dynamic nature of misinformation evolution.
