
Belief in False Information: Insights & Mechanisms

Updated 18 January 2026
  • Belief in false information is defined as assigning perceived veracity to untrue content, including misinformation, disinformation, and fake news.
  • Research employs quantitative methods like signal detection theory and physiological readouts to measure cognitive and belief biases.
  • Mitigation strategies combine fact-check labeling, accuracy nudges, and network interventions to reduce the spread of false information.

Belief in false information is a core concern in sociotechnical systems, intersecting the domains of psychology, computational modeling, network science, and human-centered security. The following sections synthesize current research delineating definitions, theoretical frameworks, causal mechanisms, demographic and cognitive predictors, behavioral and physiological evidence, and strategies for mitigation, with an emphasis on precise constructs and quantitative results from the contemporary literature.

1. Definitions, Constructs, and Measurement

False information (FI) encompasses untrue content—whether misinformation, disinformation, or fake news—that, when decoded by recipients, does not correspond to verifiable facts. Misinformation is defined as unintentionally misleading content, disinformation is intentionally deceptive, and fake news denotes pseudo-journalistic disinformation. Belief in FI is operationally defined as an individual's or system's assignment of sufficient perceived veracity to such content to surpass a behavioral or cognitive acceptance threshold (Walke et al., 11 Jan 2026).

Studied measurement strategies include:

  • Direct Belief Scoring: Participants judge a set of false statements, typically assigning truth labels or Likert-based believability ratings (Singh et al., 2021, Nightingale et al., 2020).
  • Signal Detection Theory (SDT) Indices: Truth sensitivity, measured by d' = Z(H) − Z(FA), quantifies the ability to discriminate truth from falsehood. Belief bias, measured via the decision criterion c = −½[Z(H) + Z(FA)], quantifies an inclination to accept attitude-congruent content regardless of truth (Nahon et al., 2024).
  • Physiological Readout: Electrodermal activity (EDA) and photoplethysmography (PPG) features, measured via wearables, can reveal the involuntary correlates of belief and repetition at the single-trial level, with machine-learning models achieving up to 67.8% accuracy in classifying belief states (Nguyen et al., 22 May 2025).
  • Aggregate Models: Bayesian frameworks formalize belief updating as a posterior mean ζ_i(y), integrating prior (bias, uncertainty) and source credibility; public-health and infodemiology surveys generalize to large, multi-country samples (Khajehnejad et al., 2018, Singh et al., 2021).
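The SDT indices above follow directly from hit and false-alarm rates. A minimal sketch (the function name and the example rates are illustrative, not drawn from the cited studies):

```python
from statistics import NormalDist

def sdt_indices(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d', c) from hit and false-alarm rates.

    d' = Z(H) - Z(FA)       -> truth sensitivity
    c  = -1/2 [Z(H) + Z(FA)] -> belief bias (response criterion)
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Example: 80% hits, 30% false alarms
d_prime, criterion = sdt_indices(0.8, 0.3)  # d' ≈ 1.37, c ≈ -0.16
```

A negative criterion indicates a liberal bias toward accepting statements as true; the two indices separate discrimination ability from that bias, which is why the interventions discussed later can move one without the other.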

2. Cognitive, Psychological, and Social Determinants

Belief in FI is modulated by a spectrum of demographic, personality, cognitive, and affective factors:

  • Demographics: Lower educational attainment consistently predicts higher FI belief; evidence for age and gender effects is mixed but suggests increased susceptibility among younger individuals and variable gender differences across contexts (Walke et al., 11 Jan 2026, Nightingale et al., 2020).
  • Personality Traits: High extraversion and neuroticism, low agreeableness and conscientiousness, and strong conspiracy mentality correlate with increased FI belief. Analytical thinking, particularly measured by the Cognitive Reflection Test (CRT), is the single most robust negative predictor (standardized β ≈ −0.30, p < 0.001) (Walke et al., 11 Jan 2026).
  • Affective and Cognitive Heuristics: Emotion-driven processing, negative affect, and repeated exposure (illusory truth effect) increase susceptibility. Confirmation bias and belief bias (accepting congruent information regardless of truth) are substantial drivers, often outweighing deficits in truth sensitivity per se (Nahon et al., 2024, Walke et al., 11 Jan 2026, Sampson et al., 2024).
  • Motivated Reasoning: Both laboratory and field experiments show that motivated reasoning—where individuals process information in a directionally congruent manner (e.g., politically aligned)—enhances belief in FI, especially when external incentives or social rewards are present (Thaler, 2021, Nahon et al., 2024).

3. Social Network and Transmission Dynamics

The spread and persistence of belief in false information are governed by social, structural, and informational dynamics:

  • Peer Pressure and Polarization: High peer pressure (λ_i → ∞) in tightly-knit, hierarchically-structured networks leads to suppression of open disagreement and tacit acceptance of FI, even by privately skeptical agents. Structural polarization amplifies propagation incentives, increasing vulnerability to widespread misinformation (Liu, 9 Oct 2025).
  • Confirmation Filtering and Selective Exposure: Users preferentially consume content congruent with their beliefs, leading to homophilic echo chambers and increased polarization (Bessi et al., 2014).
  • Competing Contagion Models: Coupled opinion-dynamics models with cognitive-bias-dependent infectivity replicate empirical phenomena such as the illusory truth effect, the rebound of dying beliefs, and belief-overturning by minority-seeded FI, especially under external recruitment (e.g., bot-driven amplification) (Sampson et al., 2024).
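The repetition-dependent infectivity behind the illusory-truth effect can be illustrated with a toy agent-based sketch. All parameters here (base acceptance probability, per-exposure boost) are hypothetical and not taken from the cited contagion models:

```python
import random

def simulate_exposure(n_agents: int = 1000, n_rounds: int = 20,
                      base_p: float = 0.05, boost: float = 0.03,
                      seed: int = 0) -> float:
    """Toy contagion in which each repeated exposure raises an agent's
    probability of accepting a false claim (a crude stand-in for the
    illusory-truth effect). Returns the final fraction of believers."""
    rng = random.Random(seed)
    exposures = [0] * n_agents
    believers = [False] * n_agents
    for _ in range(n_rounds):
        for i in range(n_agents):
            if believers[i]:
                continue  # acceptance is absorbing in this sketch
            exposures[i] += 1
            p = min(1.0, base_p + boost * (exposures[i] - 1))
            if rng.random() < p:
                believers[i] = True
    return sum(believers) / n_agents
```

Setting `boost = 0.0` recovers a memoryless contagion; any positive boost makes repetition itself persuasive, so the believing fraction grows faster than exposure count alone would predict.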

4. Media, Algorithmic, and AI Influences

Modern media and algorithmic systems reshape belief formation at scale:

  • Virality Moderators: Posts annotated as highly believable are over 2× more likely to be reshared, while perceived harmfulness reduces viral spread by ~40% (Drolsbach et al., 2023). This signals that low-harm, high-believability FI is especially likely to dominate public discourse.
  • AI-Driven Misinformation: Deceptive AI-generated explanations can amplify belief in FI more than classification alone (marginal increase of 0.29 Likert points, Cohen's d ≈ 0.27), and are robust to individual differences in cognitive reflection and baseline trust in AI. Logically invalid explanations reduce—but do not eliminate—the persuasive effect, underscoring the need for logic-based resilience education (Danry et al., 2024).
  • Continual Pre-training Poisoning: LLMs exposed to repeated, confidently stated falsehoods display persistent representational drift away from facts, with even 10% poisoning rates flipping model beliefs in 20–30% of test cases (Churina et al., 29 Oct 2025).

5. Behavioral and Physiological Correlates

Recent research employs machine-learning classifiers on biosignals, demonstrating that belief in FI manifests in involuntary physiological correlates:

  • Electrodermal Activity (EDA): SCR amplitude and rise-time increase on exposure to false or surprising claims, and repeated exposure decreases phasic response, mirroring the illusory-truth effect.
  • Heart Rate Variability (HRV): Reduced HRV (lower SDNN and RMSSD) reflects greater cognitive effort and uncertainty during FI evaluation.
  • Classifier Performance: KNN and LightGBM classifiers on EDA and PPG signals reach up to 67.8% accuracy for belief discrimination. These findings propose real-time, minimally intrusive belief detection as a feasible augmentation to text-based misinformation systems (Nguyen et al., 22 May 2025).
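A nearest-neighbour classifier of the kind cited can be sketched from scratch over toy biosignal feature vectors (e.g., SCR amplitude and RMSSD). The feature values and labels below are purely illustrative, not data from the cited study:

```python
import math

def knn_predict(train: list[tuple[float, ...]], labels: list[str],
                x: tuple[float, ...], k: int = 3) -> str:
    """Classify feature vector x by majority vote among its k nearest
    training points under Euclidean distance."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], x))[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Illustrative features: (normalized SCR amplitude, normalized RMSSD)
train = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
labels = ["disbelief", "disbelief", "belief", "belief"]
knn_predict(train, labels, (5.0, 6.0))  # -> "belief"
```

In practice the cited work extracts many EDA/PPG features per trial and tunes k (or uses gradient boosting); the point of the sketch is only that per-trial physiological features can feed a standard classifier.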

6. Interventions: Mitigation and Resilience

Multiple strategies have been quantitatively shown to reduce belief in false information:

  • Accurate Labeling: Fact-check and warning labels reduce belief by up to 30%, but introduce an implied-truth effect for untagged items (Δ accuracy rating ≈ +6%) (Walke et al., 11 Jan 2026).
  • Accuracy Nudges: Prompting users to consider accuracy before sharing reduces propagation intentions by 20–25% (Walke et al., 11 Jan 2026).
  • Digital Literacy Training: Media literacy interventions (e.g., the L2D "Learn-to-Discern" campaign) yield a 16-percentage-point improvement in the ability to detect false news (Cohen's d ≈ –0.25), even in emerging digital markets (Thomas et al., 2021).
  • Elaboration and Humility Interventions: Encouraging deliberate, reflective processing increases truth sensitivity (d'), but has minimal or no effect on underlying belief bias. Interventions fostering intellectual humility (attenuating overconfidence in personal beliefs) are theoretically expected to reduce belief bias more effectively than fact-checking or merely slowing processing (Nahon et al., 2024).
  • Algorithmic Moderation: Platforms are advised to prioritize early review of low-harm, high-believability FI, and to integrate user believability signals into content moderation workflows (Drolsbach et al., 2023, Singh et al., 2021).
  • Network Rewiring: Reducing structural polarization and peer-pressure is pivotal for curbing cascade propagation of FI in closely knit social networks (Liu, 9 Oct 2025, Sampson et al., 2024).

7. Theoretical and Formal Approaches

Beyond empirical psychology, formal frameworks elucidate false-belief generation:

  • Typical Model Theory: Selecting beliefs based on typical models (majority-vote over consistent worlds) minimizes the expected number of false beliefs an agent holds given incomplete knowledge (Lozinskii, 2011).
  • Bayesian Optimal Design: Belief formation is precisely described by Bayesian posterior integration, with susceptibility to FI as a quantifiable function of prior bias, hesitation, and source credibility—enabling rigorous analysis of containment strategies (Khajehnejad et al., 2018).
  • Hybrid Modal Logic: Cognitive “false-belief tasks” (e.g., Smarties, Sally-Anne) can be structurally analyzed using hybrid modal logic, revealing that perspective-shifts and inertia principles formalize the persistence of false beliefs without assuming full logical omniscience (Brauner, 2013).
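The Bayesian account can be illustrated with the simplest case: a single binary claim and one source whose credibility is the probability it asserts the truth. This two-point likelihood model is a deliberate simplification, not the full specification of the cited framework:

```python
def posterior_belief(prior: float, credibility: float) -> float:
    """Posterior probability that a claim is true after a source asserts it.

    Assumed likelihoods: P(assert | true) = credibility,
                         P(assert | false) = 1 - credibility.
    """
    num = credibility * prior
    den = num + (1 - credibility) * (1 - prior)
    return num / den

# A skeptical prior (0.2) meets a fairly credible source (0.9):
posterior_belief(0.2, 0.9)  # ≈ 0.69
```

The example makes the susceptibility claim concrete: a single assertion by a credible-seeming source can move a skeptical prior past 0.5, which is why source credibility enters the containment analysis as a first-class parameter.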

This synthesis demonstrates that belief in false information is multi-factorial, strongly shaped by belief bias, cognitive and network dynamics, media environment, and the nature of informational and algorithmic interventions. The most effective mitigation strategies combine technical corrections (labeling, nudging, literate interface design) with structural and cognitive recalibration (reduced peer pressure, humility training, logic instruction), aiming to harden both individual and systemic epistemic security (Walke et al., 11 Jan 2026, Nahon et al., 2024, Danry et al., 2024, Liu, 9 Oct 2025, Thomas et al., 2021, Drolsbach et al., 2023, Churina et al., 29 Oct 2025).
