Authoritarian Recursions: How Fiction, History, and AI Reinforce Control in Education, Warfare, and Discourse (2504.09030v3)
Abstract: This article develops the concept of authoritarian recursion to theorize how AI systems consolidate institutional control across education, military operations, and digital discourse. Rather than treating these domains in isolation, it identifies a shared recursive architecture in which algorithmic systems mediate judgment, obscure accountability, and reshape the conditions of moral and epistemic agency. Grounded in critical discourse analysis and sociotechnical ethics, the article synthesizes historical precedent, cultural narrative, and contemporary deployments to examine how intelligent systems normalize hierarchy under the guise of efficiency and neutrality. Case studies include automated proctoring in education, autonomous targeting in warfare, and algorithmic curation on social platforms. Cultural imaginaries such as Orwell's Nineteen Eighty-Four, The Terminator's Skynet, and Black Mirror are treated as heuristic devices that illuminate the public anxieties and design assumptions embedded in technological systems. The analysis integrates frameworks from the Fairness, Accountability, and Transparency (FAccT) paradigm, relational ethics, and data justice theory to explore the normative implications of predictive infrastructures. It argues that recursive control operates through moral outsourcing, behavioral normalization, and epistemic closure. By reframing AI not as a neutral tool but as a communicative and institutional infrastructure, the article highlights the need for ethical orientations that prioritize democratic refusal, epistemic plurality, and responsible design in the governance of intelligent systems.