
'AI' and Computer Science: Contradictions Emerge between Ideologies

Published 31 Mar 2026 in cs.HC (arXiv:2603.29746v1)

Abstract: We develop a conceptualization of ideology, in which a system of ideas represents social, economic, and political relationships. We use ideology as a lens for understanding and critiquing intersecting social, economic, and political aspects of how 'AI' technologies are being developed. We observe ideological shifts. We question that the present tangling of corporate and university objectives is beneficial to labor, particularly computer science students, and the general public. Corporations and computer science have a history of marketing the ideology of computing as empowerment. However, with intensification of the production of 'AI', contradictions emerge. We ask, "Who is being empowered?"

Authors (1)

Summary

  • The paper reveals that longstanding computer science empowerment ideals clash with emerging corporate-dominated AI narratives.
  • It applies Hall and Althusser’s frameworks to analyze how universities, corporations, and labor markets shape ideological shifts.
  • The study warns that rebranding LLMs as 'AI' obscures epistemic limitations and reinforces industry influence over research agendas.

Ideological Contradictions in AI and Computer Science

Ideology as Socio-Technical Infrastructure

The paper "'AI' and Computer Science: Contradictions Emerge between Ideologies" (2603.29746) advances an ideological analysis of current transformations in the intersection of AI development, the labor market, and computer science education. It deploys Hall's and Althusser’s sociological frameworks to situate AI technologies within a complex interplay of economic, political, and institutional positionalities involving corporations, universities, labor, and the public. The analysis foregrounds how implicit ideologies—once centered around empowerment—are being destabilized by new contradictions resulting from AI’s ascendancy.

The author distinguishes between economic base and ideological superstructures, emphasizing that ideologies are constituted and reproduced not just by direct relations of production but also by the activities of universities, corporations, and the proximate actors (students, researchers). The consequence is the entanglement of value systems, shaping how research directions are chosen, how labor is conceptualized and absorbed, and how technological artifacts (such as LLMs) are marketed and mythologized.

Shifting Ideologies of Computing

The historical thread traced in the paper notes a transition from the ideology of computing as empowerment—championed both in academic circles and popular media by figures such as Ted Nelson and Apple’s 1984 branding—to the current moment, where the centrality of computing is leveraged to rationalize both technological optimism and increased control by corporate actors. The commodification of “AI” as a signifier, rather than a technical term, becomes a site for obfuscating critical assessments and for consolidating power.

A key empirical claim is the inversion of labor market prospects: while computer science has been the most popular undergraduate major, the current phase—saturated with AI-driven automation and consolidation—marks a relative contraction in opportunities for CS graduates compared to the general labor market (2603.29746). This material shift contradicts the past narrative that positioned computing education as a reliable source of socioeconomic mobility.

Collapsing Boundaries: Research, Product, and Ideological Agency

The boundary collapse between academic research and product engineering, driven by the computational resource requirements and capital intensity of state-of-the-art LLMs, is identified as structurally privileging large corporations. This structure increasingly subordinates university research agendas to the logic and public relations priorities of dominant industry actors. The paper references Burrell and Metcalf's critique of the consolidation of research questions and epistemologies under the regime of scale, further noting the loss of agency for academic computer science in defining its own intellectual trajectories (2603.29746).

The AGI (Artificial General Intelligence) discourse is contextualized not as a neutral technical ambition but as an ideological project with ethical, social, and political implications. The paper draws on Gebru and Torres's argument that the fixation on AGI is structurally linked to eugenicist and exclusionary imaginaries, further highlighting the risk that the human-centered purpose and societal accountability of computer science are subordinated to speculative technological utopias.

AI as Boundary Object and Ideological Sleight of Hand

LLMs are framed as boundary objects—entities that serve divergent meanings and strategic functions for various stakeholders (corporations, the research community, labor, the public). Importantly, the rhetoric that positions LLMs as “AI” is indicted as a deliberate ideological maneuver; it mystifies the computational and statistical foundations, exaggerates capabilities, and distracts from both epistemic limitations (e.g., hallucinations, lack of self-awareness) and unintended harmful impacts (e.g., resource consumption, labor displacement).

The analysis critiques both industry marketing (e.g., OpenAI’s CEO analogizing LLMs to “PhD-level experts”) and institutional complicity in propagating a narrative of empowerment that serves to discipline the public and academia into compliance and further technical adoption. The author’s practice of bracketing “AI” in quotation marks signals a demand for epistemological rigor and resistance to corporate myth-making.

Implications for Research, Labor, and Epistemology

The paper advances the claim that the AI/AGI-centric ideology is structurally deleterious to students, early-career researchers, and the epistemological autonomy of computer science as a discipline. Notably, even as "AI" dominates high-citation venues in HCI (e.g., CHI), the types of critical, qualitative work that characterized earlier periods are increasingly marginalized. The well-known case of Google firing Gebru and Mitchell over the "Stochastic Parrots" paper (Bender et al., 2021) is presented as material evidence of the lengths to which corporate actors will go to suppress critique and maintain ideological hegemony.

A core implication is that the conditions for “questioning the march of AI progress” are being actively foreclosed by the intersection of economic incentives, ideology, and institutional gatekeeping. This constrains the ability of computer science departments to serve the public good or advocate for the interests of their students against the imperatives of capital and automation.

Conclusion

The paper identifies a structural contradiction between the historic ideology of computer science as a site of empowerment and the emergent reality structured by AI production, labor displacement, and ideological capture by corporate entities. The rebranding of LLMs as "AI" and the embrace of AGI serve corporate consolidation while undermining the interests of students, public understanding, and epistemic critique.

The author’s analysis implies that absent a reinvigoration of reflexive, critical, and politically engaged scholarship, university computer science will continue to be positioned as a vassal to industry, unable to shape the social and broader human impacts of its artifacts. Future theoretical and practical developments may hinge on reclaiming agency—whether through new forms of public scholarship, disciplinary self-critique, or the foregrounding of alternative ideologies that prioritize human flourishing and societal benefit.
