Generative Propaganda: AI in Digital Influence
- Generative propaganda is the use of AI to generate political messaging, ranging from overt soft fakes to covert deepfakes.
- A core taxonomy classifies uses along two axes: obvious versus hidden AI involvement, and promotional versus derogatory intent.
- Field studies in Taiwan and India show that its speed and multimodal efficiency challenge traditional detection and regulatory frameworks.
Generative propaganda is the use of generative AI to shape public opinion, encompassing a spectrum of techniques, both promotional and derogatory, that are often but not always deceptive. The phenomenon extends beyond deepfakes (traditionally associated with image, audio, or video manipulation) to a broader array of AI-enabled strategies for influencing, mobilizing, and distorting narratives in digital societies (Daepp et al., 23 Sep 2025). Qualitative fieldwork in environments with high information contestation (Taiwan and India) shows that generative propaganda profoundly alters both threat models and defensive strategies because of its efficiency, its multimodal character, and the evolving interplay between overt persuasion and covert manipulation.
1. Conceptual Scope and Taxonomy
Generative propaganda refers to any application of generative AI to influence public opinion by producing, translating, perturbing, or proliferating political messaging in digital environments. The field is not restricted to representational deepfakes (media content that mimics real-world referents) but includes a variety of “soft fakes,” “deep roasts,” “auth fakes,” and even AI-perturbed repetitive messaging (e.g., AIPasta) (Daepp et al., 23 Sep 2025).
The core taxonomy divides uses along two axes: (1) Obvious vs. Hidden, i.e., whether the AI generation of content is clearly signaled or concealed; and (2) Promotional vs. Derogatory, i.e., whether the message elevates or attacks a subject. Table I in (Daepp et al., 23 Sep 2025) uses these axes to define four types:
| | Obvious | Hidden |
|---|---|---|
| Promotional | Soft fakes | Auth fakes |
| Derogatory | Deep roasts | Deepfakes |
Obvious uses may contain explicit disclaimers, watermarks, or visual cues signaling the content is AI-generated. Hidden uses aim for plausibility and concealment. The classification extends to non-representational forms such as “AIPasta” (AI-generated message perturbation for evasion), “Precision Propaganda” (AI-optimized microtargeting by audience segment), and “AI Slop” (low-quality mass content).
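The two-axis structure maps naturally onto a small data model. The following Python sketch is an illustrative encoding only (the enum names, the category labels used as dictionary keys, and the PropagandaItem class are assumptions introduced here, not artifacts from the paper): each item is tagged by disclosure and valence, and the pair determines which of the four categories it falls into.

```python
from dataclasses import dataclass
from enum import Enum


class Disclosure(Enum):
    OBVIOUS = "obvious"   # AI role is signaled (disclaimer, watermark, visual cue)
    HIDDEN = "hidden"     # AI role is concealed


class Valence(Enum):
    PROMOTIONAL = "promotional"  # elevates the subject
    DEROGATORY = "derogatory"    # attacks the subject


# The four representational categories from the two-axis taxonomy.
CATEGORY = {
    (Disclosure.OBVIOUS, Valence.PROMOTIONAL): "soft fake",
    (Disclosure.HIDDEN, Valence.PROMOTIONAL): "auth fake",
    (Disclosure.OBVIOUS, Valence.DEROGATORY): "deep roast",
    (Disclosure.HIDDEN, Valence.DEROGATORY): "deepfake",
}


@dataclass
class PropagandaItem:
    description: str
    disclosure: Disclosure
    valence: Valence

    @property
    def category(self) -> str:
        return CATEGORY[(self.disclosure, self.valence)]


if __name__ == "__main__":
    item = PropagandaItem(
        description="AI-dubbed campaign speech, AI role not disclosed",
        disclosure=Disclosure.HIDDEN,
        valence=Valence.PROMOTIONAL,
    )
    print(item.category)  # -> "auth fake"
```

Non-representational forms such as AIPasta, Precision Propaganda, or AI Slop fall outside this four-cell grid and would require additional labels in a fuller model.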
2. Real-World Applications and Strategies
Case studies in India and Taiwan document the deployment of generative propaganda across multiple communicative modalities:
- Soft fakes: AI-generated voice or image content used to produce humorous, laudatory, or absurdist portrayals of candidates, with overt disclaimers or stylistic cues (e.g., intentionally cartoonish images).
- Auth fakes: Official campaign-endorsed content (e.g., AI-dubbed multilingual speech) where the AI role is hidden but authorized.
- Deep roasts: Open use of face-swap filters or synthetic audio for satire and ridicule, overt in their artificiality.
- Deepfakes: Covert manipulations designed to mislead about candidate behavior or stoke controversy; though less prevalent, these attract the greatest countermeasure attention from defenders.
Additionally, AIPasta is used to perturb and diversify coordinated posts, inhibiting detection by both human fact-checkers and platform algorithms. “Precision Propaganda” enables the tailored delivery of political messages—by caste, occupation, region, or culture—at scale, leveraging AI’s speed and translation capabilities. AI Slop refers to low-effort or comedic content that, while not always persuasive, further muddies the information environment.
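To make the AIPasta idea concrete, the sketch below mimics the perturbation step with a toy phrase-substitution table instead of a language model (the substitution table, example message, and function names are illustrative assumptions): a single seed message is expanded into many lexically distinct variants that all carry the same claim, which is the property that frustrates exact-match and near-duplicate detection.

```python
import random

# Toy stand-in for an LLM paraphraser: a per-phrase substitution table.
# In practice AIPasta would use a generative model to produce the variants.
SUBSTITUTIONS = {
    "the election": ["the vote", "the ballot", "the polls"],
    "was rigged": ["was fixed", "was manipulated", "was stolen"],
    "share this": ["pass this on", "spread the word", "tell everyone"],
}


def perturb(seed: str, rng: random.Random) -> str:
    """Return one lexically perturbed variant of the seed message."""
    out = seed
    for phrase, alternatives in SUBSTITUTIONS.items():
        if phrase in out:
            out = out.replace(phrase, rng.choice(alternatives + [phrase]))
    return out


def generate_variants(seed: str, n: int, rng_seed: int = 0) -> list[str]:
    """Produce n perturbed variants of the same underlying claim."""
    rng = random.Random(rng_seed)
    return [perturb(seed, rng) for _ in range(n)]


if __name__ == "__main__":
    seed_msg = "They say the election was rigged. share this before it is removed."
    for variant in generate_variants(seed_msg, 5):
        print(variant)
```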
3. Impact on Public Opinion and Information Environments
The key insight from (Daepp et al., 23 Sep 2025) is that the impact vector of generative propaganda is not confined to deception. Particularly in contexts like India, political creators pursue persuasion and narrative setting, sometimes making AI’s role in the content creation process transparent to mitigate legal and reputational risks. In Taiwan, however, defenders identified deception as one tactic in a broader landscape of strategic narrative distortion—in particular, the crowding out of legitimate discursive space with high volumes of AI-generated “AIPasta.”
The technical efficacy of generative propaganda arises from efficiency gains: rapid cross-lingual content generation, syntactic and stylistic variation to evade detection, and seamless transition between text, image, and video modalities. These capabilities diminish the reliability of traditional authenticity signals (e.g., telltale translation errors or unidiomatic phrasing), complicating both platform-based and audience-driven defenses.
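A simplified illustration of why such variation erodes platform defenses: classic copypasta detection groups posts by near-duplicate similarity, and perturbed variants fall below typical grouping thresholds. The threshold value, example messages, and use of word-level Jaccard similarity below are assumptions for the sketch, not a description of any platform's actual pipeline.

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two messages."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    if not set_a and not set_b:
        return 1.0
    return len(set_a & set_b) / len(set_a | set_b)


# Illustrative threshold: near-duplicate detectors typically group texts only
# when similarity is high (the exact value here is an assumption).
NEAR_DUPLICATE_THRESHOLD = 0.8

seed = "they say the election was rigged share this before it is removed"
variants = [
    "they say the vote was stolen spread the word before it is removed",
    "they say the polls were manipulated pass this on before it vanishes",
]

for v in variants:
    sim = jaccard(seed, v)
    grouped = sim >= NEAR_DUPLICATE_THRESHOLD
    print(f"sim={sim:.2f} grouped_as_copypasta={grouped}")
```

Even though every variant repeats the same claim, none would be grouped with the seed under the illustrative threshold, so the campaign no longer reads as a coordinated copypasta run to either algorithms or human reviewers.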
4. Defenses, Threat Models, and Regulatory Considerations
Interview data from the field (Daepp et al., 23 Sep 2025) reveal that defenders (fact-checkers, journalists, civic technologists) are often disproportionately focused on deceptive deepfakes. This “deepfake-centric” paradigm may inadvertently leave systems vulnerable to less covert, but equally influential, forms of generative propaganda. The threat model must be revised along several dimensions:
- Obvious, Promotional AI Use: Because reputational and legal risks deter actors with persistent identities, many politically motivated creators choose overt and/or sanctioned AI uses (e.g., watermarked promotional content).
- Hidden, Derogatory AI Use: Anonymous actors, including troll groups and adversarial campaigns, continue to produce covert deepfakes and strategic “AI slop” for destabilization and suppression of opposition narratives.
- Efficiency, Multimodality, and Detectability Evasion: Tools like AIPasta pose unique challenges; even when content is nonsensical, its variation undermines both pattern-based machine detection and manual inspection.
Policy and technical recommendations stress a shift from exclusive reliance on deepfake detection and watermarking to broader, cross-platform approaches. Effective strategies include:
- Mandating visible or audible watermarks for AI-generated content, particularly in electoral contexts; this intervention strengthens transparency but can be circumvented by sophisticated operators (a minimal disclosure-check sketch follows this list).
- Developing scalable, multilingual media literacy and fact-checking resources, supported by both regulatory bodies and platforms.
- Improving user verification and reputation tracking to enforce accountability among actors who repeatedly use AI in suspected manipulative campaigns.
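A minimal sketch of the enforcement side of a disclosure mandate, assuming a hypothetical ai_disclosure metadata field attached to uploads (the field name, data classes, and review rule are assumptions, not a real platform API or provenance standard such as signed content credentials): items lacking any disclosure are simply queued for review.

```python
from dataclasses import dataclass, field


@dataclass
class UploadedContent:
    content_id: str
    # Hypothetical metadata dictionary; "ai_disclosure" is an assumed field
    # name for illustration, not part of any real platform or standard.
    metadata: dict = field(default_factory=dict)


def needs_review(item: UploadedContent) -> bool:
    """Flag content that carries no AI-use disclosure in its metadata.

    A real system would also verify a cryptographic provenance signature
    rather than trusting a self-declared flag.
    """
    return "ai_disclosure" not in item.metadata


items = [
    UploadedContent("vid-001", {"ai_disclosure": "synthetic voice, campaign-authorized"}),
    UploadedContent("vid-002", {}),  # no disclosure -> flagged
]

for it in items:
    print(it.content_id, "review" if needs_review(it) else "ok")
```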
A critical insight is that legal and reputational considerations are potent internal constraints for actors with established public identities, complementing external technical interventions.
5. Technical Details and Framework Presentation
(Daepp et al., 23 Sep 2025) offers frameworks for classifying and analyzing generative propaganda, including tables that distinguish usage categories and threat-actor motives (Tables I–II). These representations allow systematic comparison of observed practices, constraints, and motivations across nation-states, campaigns, influencer groups, and content farms. Notably, the paper finds that:
- Deepfake detection methods must be paired with behavioral pattern analyses to address coordinated campaigns exploiting AI efficiency gains (a sketch of one such analysis follows this list).
- A taxonomy that considers both the intentionality (persuasion vs. deception) and disclosure (obvious vs. hidden) of AI use enables more targeted regulatory and technical countermeasures.
- Cross-context studies indicate that efficiency—rather than pure deceptive capability—is the primary driver of AI adoption for both overt and covert influence campaigns in current digital environments.
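As a hedged sketch of what a behavioral pattern analysis might look like when text matching fails, the example below ignores message wording entirely and flags topics on which many distinct accounts post within a short burst window; the topic labels, window size, and account threshold are illustrative assumptions, not parameters from the paper.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# (account_id, topic_label, timestamp) tuples; in practice the topic label
# would come from a classifier or an embedding-clustering step.
posts = [
    ("acct_a", "candidate_x_scandal", datetime(2025, 9, 23, 10, 0)),
    ("acct_b", "candidate_x_scandal", datetime(2025, 9, 23, 10, 2)),
    ("acct_c", "candidate_x_scandal", datetime(2025, 9, 23, 10, 3)),
    ("acct_d", "local_weather",       datetime(2025, 9, 23, 10, 4)),
]

WINDOW = timedelta(minutes=10)   # illustrative burst window
MIN_ACCOUNTS = 3                 # illustrative coordination threshold


def coordinated_topics(posts):
    """Return topics on which many distinct accounts post within one burst window."""
    by_topic = defaultdict(list)
    for account, topic, ts in posts:
        by_topic[topic].append((ts, account))

    flagged = []
    for topic, entries in by_topic.items():
        entries.sort()
        for i, (start, _) in enumerate(entries):
            accounts = {a for t, a in entries[i:] if t - start <= WINDOW}
            if len(accounts) >= MIN_ACCOUNTS:
                flagged.append(topic)
                break
    return flagged


print(coordinated_topics(posts))  # -> ['candidate_x_scandal']
```

Because the signal here is posting behavior rather than wording, it remains usable even when AIPasta-style variation defeats near-duplicate matching.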
6. Social, Legal, and Policy Implications
The evolving landscape of generative propaganda requires a recalibration of both research and intervention focus. While hidden, derogatory manipulations (“deepfakes”) remain a significant concern, the more prevalent use case, as observed in the Indian electoral context, is persuasion via obvious, self-declared AI use. Social constraints, such as fear of reputational harm and legal liability, shape these practices. Bolstering these internal defenses, via reputation mechanisms, user accountability, and normative literacy campaigns, can therefore complement technological measures.
At the policy level, there is a clear need for comprehensive threat models that differentiate adversarial from promotional AI uses, and for scalable detection frameworks that operate across modalities and platforms.
7. Conclusion
Generative propaganda is defined by its use of AI to intervene across the spectrum of digital public opinion formation—even when the AI’s involvement is openly signaled. Practical applications in Taiwan and India reveal strategies including overt campaign promotion via “soft fakes” and “auth fakes,” deliberate narrative distortion using “deep roasts,” coordinated derailment with “AI slop,” and evasion-oriented perturbation (AIPasta). Impact is driven as much by efficiency and the crowding out of legitimate discourse as by deception per se. Defenses must evolve to address the full taxonomy of uses, emphasizing the reinforcement of legal, social, and reputational constraints alongside technical detection and regulatory enforcement. The domain continues to demand research that clearly differentiates among intent, modality, and visibility of AI involvement so as to inform both intervention and theory robustly (Daepp et al., 23 Sep 2025).