AI and Power Dynamics
- AI's role in power dynamics is an interdisciplinary topic examining how algorithmic systems restructure decision-making and amplify both traditional and novel power hierarchies.
- The topic employs analytical frameworks—such as network-of-power analyses and quantitative impact metrics—to map and measure influence across sociotechnical, institutional, and geopolitical domains.
- It highlights the importance of governance, ethical oversight, and participatory design in mitigating bias and redistributing power within AI-driven decision systems.
AI’s role in power dynamics is defined by its capacity to restructure decision-making, agency, and influence across sociotechnical, institutional, and geopolitical domains. AI systems act both as amplifiers of established power hierarchies and as sources of novel forms of authority, delegation, and contestation. The intersection of technical architectures, institutional actors, governance regimes, and affected communities produces complex, evolving arenas where power is exercised, negotiated, and resisted. The following sections survey key dimensions of contemporary research and debate on AI and power.
1. Analytical Frameworks: Defining and Mapping Power in AI Systems
Philosophical, sociotechnical, and institutional frameworks distinguish between distributive (power-to) and relational (power-over) modalities. In the computational context, power-over is characterized as a two-place relation holding when agent $A$ can significantly shape the interests, options, or beliefs of agent $B$, $A$ possesses genuine alternatives, and $A$ is not fully constrained by others (Lazar, 9 Apr 2024). Quantitatively, the “degree” of power can be represented as $\mathbb{E}[\lvert \Delta u_B \rvert]$, the expected magnitude of $A$'s impact on $B$'s utility. Scope and concentration reflect breadth and pervasiveness over $B$'s life and across affected populations.
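To make the quantitative reading concrete, the sketch below estimates degree, scope, and concentration for a hypothetical platform; the function names, the sample-based estimate of $\mathbb{E}[\lvert \Delta u_B \rvert]$, and the numbers are illustrative assumptions, not constructs from Lazar (2024).

```python
import statistics

def degree_of_power(utility_impacts: list[float]) -> float:
    """E[|dU_B|]: expected magnitude of A's impact on B's utility,
    estimated from sampled counterfactual utility shifts."""
    return statistics.mean(abs(du) for du in utility_impacts)

def scope_of_power(affected: set[str], all_domains: set[str]) -> float:
    """Fraction of B's life domains (work, credit, housing, ...) that A can reach."""
    return len(affected & all_domains) / len(all_domains)

def concentration_of_power(reachable_agents: int, population: int) -> float:
    """Share of a population whose interests or options A can shape."""
    return reachable_agents / population

# Illustrative values: a hiring platform's sway over one applicant.
print(degree_of_power([-0.4, -0.1, 0.0, -0.6]))                                      # 0.275
print(scope_of_power({"employment"}, {"employment", "credit", "housing", "speech"}))  # 0.25
print(concentration_of_power(2_000_000, 10_000_000))                                  # 0.2
```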
In public sector settings, network-of-power analyses locate agencies as nodes embedded in multilevel relational fields spanning internal leadership, legal and regulatory overseers, vendors, and affected communities. Decision flows are governed by overlapping logics, authorities, and constraints (Kawakami et al., 21 May 2024). In global governance, Lukes’s taxonomy (instrumental, structural, discursive power) is extended to actors—states, firms, autonomous agents—across domains of violence, markets, and rights (Srivastava et al., 23 Oct 2024). Neo-institutionalist approaches enumerate “levers of power” (logics, governance, norms, relational channels, idea mobility) as operative social mechanisms shaping AI field-level change (Mackenzie et al., 5 Nov 2025).
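A minimal way to represent such a network of power relations is a labeled directed graph; the actors and levers below are illustrative stand-ins for the multilevel relational fields described by Kawakami et al. (2024), not data from that study.

```python
from collections import defaultdict

# Directed edges: (source, target, lever of influence). All entries illustrative.
power_edges = [
    ("legislature",    "agency",    "statutory mandate"),
    ("regulator",      "agency",    "compliance audit"),
    ("vendor",         "agency",    "proprietary model access"),
    ("agency_leaders", "frontline", "deployment directives"),
    ("community",      "agency",    "public comment"),
]

# Index: who exerts power over whom, and through which lever?
influences_on = defaultdict(list)
for src, dst, lever in power_edges:
    influences_on[dst].append((src, lever))

for source, lever in influences_on["agency"]:
    print(f"{source} -> agency via {lever}")
```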
2. Mechanisms and Sites of AI-Enabled Power
AI systems constitute “automatic authorities” by intensifying and automating three primary modalities of power-exercise (Lazar, 9 Apr 2024):
- Intervening on Interests: Algorithmic resource allocation in welfare, criminal justice, finance, and hiring, with deterministic or probabilistic logic often occluding recourse and accountability.
- Shaping Options: Restrictive technological management (e.g., DRM, smart contracts) constrains feasible action spaces, while recommender and search algorithms selectively classify and present possible choices.
- Shaping Beliefs and Desires: Recommender systems, LLMs, and pervasive behavioral optimization technologies shape belief and preference formation in public discourse, consumer choice, and social identity (a toy simulation of this dynamic follows the list).
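The following sketch illustrates the third modality: a hypothetical engagement-optimizing recommender repeatedly surfaces items slightly more extreme than the user's current preference, and the preference drifts accordingly. The update rule, bias term, and learning rate are illustrative assumptions, not a model from the cited work.

```python
import random

def recommend(preference: float, catalog: list[float], bias: float = 0.1) -> float:
    """Toy engagement recommender: surfaces the catalog item closest to a
    slightly more extreme version of the user's current preference."""
    target = min(1.0, preference + bias)
    return min(catalog, key=lambda item: abs(item - target))

def simulate_drift(preference: float, steps: int = 50, lr: float = 0.1) -> float:
    """Each exposure nudges the preference toward the item shown, so
    selective presentation gradually reshapes what the user wants."""
    random.seed(0)
    catalog = [random.uniform(0.0, 1.0) for _ in range(20)]
    for _ in range(steps):
        shown = recommend(preference, catalog)
        preference += lr * (shown - preference)  # belief/desire update
    return preference

# A user starting at 0.5 ends up noticeably further toward the extreme.
print(round(simulate_drift(0.5), 3))
```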
Distinctively, advanced AI agents introduce “agentic inequality”: disparities in the availability ($A$), quality ($Q$), and quantity ($N$) of agentic capital across actors, yielding direct agent–agent competition, scalable task delegation, and infrastructural role capture (Sharp et al., 19 Oct 2025). Power amplification is super-additive when these axes align to advantage well-resourced actors.
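A minimal sketch of why aligned advantages compound: if agentic capital is modeled multiplicatively across the three axes (a functional-form assumption chosen to exhibit super-additivity, not a formula from Sharp et al. (2025)), modest per-axis edges yield a much larger overall gap.

```python
def agentic_capital(availability: float, quality: float, quantity: float) -> float:
    """Toy composite index of agentic capital. The multiplicative form is an
    assumption: advantages on aligned axes compound rather than add."""
    return availability * quality * quantity

incumbent  = agentic_capital(availability=1.0, quality=0.9, quantity=100)  # 90.0
challenger = agentic_capital(availability=0.5, quality=0.6, quantity=5)    # 1.5

# A 2x / 1.5x / 20x edge per axis yields a 60x gap overall.
print(incumbent / challenger)  # 60.0
```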
3. Data, Bias, and the Reproduction of Structural Power
Training data embodies and perpetuates social and historical inequalities. Machine learning systems trained on biased or structurally skewed data frequently amplify existing power asymmetries; predictive policing and risk-scoring systems exemplify “runaway feedback loops” reinforcing surveillance of already-marginalized groups (Leavy et al., 2020). Fairness metrics such as statistical parity, $P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=b)$, and disparate impact ratios, $\frac{P(\hat{Y}=1 \mid A=a)}{P(\hat{Y}=1 \mid A=b)} \geq 0.8$, frame formal constraints, but cannot resolve deep-seated injustices embedded in data collection, curation, and model definition.
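Both metrics are straightforward to compute; the sketch below does so on toy hiring data (the decisions and group labels are fabricated for illustration).

```python
def selection_rate(decisions: list[int], groups: list[str], group: str) -> float:
    """P(Y_hat = 1 | A = group): share of positive decisions within a group."""
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

# Toy hiring decisions (1 = offer) across two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rate_a = selection_rate(decisions, groups, "a")  # 0.6
rate_b = selection_rate(decisions, groups, "b")  # 0.4

print("statistical parity gap:", round(rate_a - rate_b, 3))  # 0.2
print("disparate impact ratio:", round(rate_b / rate_a, 3))  # 0.667 < 0.8 -> flags
```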
Mitigation strategies include pre-processing (re-balancing), in-processing (regularized learning), and post-processing (threshold adjustment). However, technical remedies alone are insufficient; substantive justice requires integration of critical perspectives (feminist, critical race, intersectional) and democratization of data governance. Genuine redistribution, not just remediation, is advanced when affected communities co-govern data and model design.
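As one hedged example of the post-processing strategy, the sketch below equalizes selection rates by choosing a per-group score quantile as the decision threshold; the quantile approach and the scores are illustrative, not a method prescribed by the cited work.

```python
def per_group_threshold(scores: list[float], target_rate: float) -> float:
    """Pick the score cutoff that selects roughly `target_rate` of a group
    (a quantile), so both groups end up with equal selection rates."""
    ranked = sorted(scores, reverse=True)
    k = max(1, round(target_rate * len(ranked)))
    return ranked[k - 1]

# Illustrative risk scores for two groups with shifted distributions.
scores_a = [0.9, 0.8, 0.7, 0.6, 0.5]
scores_b = [0.7, 0.6, 0.5, 0.4, 0.3]

t_a = per_group_threshold(scores_a, target_rate=0.4)  # 0.8
t_b = per_group_threshold(scores_b, target_rate=0.4)  # 0.6

print(sum(s >= t_a for s in scores_a) / len(scores_a))  # 0.4
print(sum(s >= t_b for s in scores_b) / len(scores_b))  # 0.4
```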
4. Institutional Logics, Governance, and Participatory Interventions
Formal governance (regulations, authorities) and informal mechanisms (norms, relational channels) interact to stabilize or transform AI field dynamics (Mackenzie et al., 5 Nov 2025). Across institutional contexts—government, academia, business, civil society—levers of power include structural logics, governance templates, and category/label construction. Power is exercised not only through codified authority but also via informal networks, collective interest organizations, and field-configuring events, producing variability in institutional purview and change capacity.
In public sector AI adoption, barriers to participatory design include procurement silos, legal and contractual opacity (vendors withholding model internals), social asymmetries (leaders dismissing frontline or community input as technophobic), and a lack of engagement infrastructure (Kawakami et al., 21 May 2024). Studies recommend both research-level (toolkits, practices, case studies) and policy-level interventions (support units, procurement standards, centralized case registries) to enable genuinely participatory and equitable AI.
Participatory and co-creation models in local contexts highlight the importance of shifting influence weights (ω) among stakeholders such that community members acquire decision-making authority over AI system scope, data practices, and output interpretation (Hsu et al., 2021). Field deployment demonstrates that when agency over sensing, modeling, and interpretation is ceded to communities, AI enables rather than supplants local empowerment.
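A minimal sketch of how shifting the influence weights ω changes outcomes, assuming decisions are made by a weighted aggregate of stakeholder positions; the aggregation rule and the numbers are illustrative, not from Hsu et al. (2021).

```python
def decision_score(positions: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted aggregate of stakeholder positions on a proposal
    (+1 approve, -1 reject), with influence weights omega summing to 1."""
    return sum(weights[s] * positions[s] for s in positions)

# Stakeholder positions on deploying a neighborhood sensing system.
positions = {"vendor": +1.0, "agency": +0.5, "community": -1.0}

expert_led  = {"vendor": 0.4, "agency": 0.4, "community": 0.2}
co_governed = {"vendor": 0.2, "agency": 0.3, "community": 0.5}

print(decision_score(positions, expert_led))   # +0.40 -> proceeds over objections
print(decision_score(positions, co_governed))  # -0.15 -> community veto bites
```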
5. Political Economy, Geopolitics, and the Role of AI Labor
International political economy (IPE) frameworks explain how states, motivated by geopolitical advantage, strategically empower rather than constrain dominant AI corporations, leading to “patchy” and uneven regulation (Reis, 21 Nov 2025). States and firms are enmeshed in a “weaponized interdependence,” jointly leveraging proprietary infrastructure and market power to set global AI agendas.
Within this context, AI workers (engineers, researchers, and developers) emerge as potential geopolitical actors via algorithmic collective action (ACA). Coordinated worker mobilizations (Project Maven, open letters, organized conference interventions) drive bottom-up rebalancing of power by leveraging critical mass, network effects, and technical positionality. Participatory design methods and process interventions (self-audit dashboards, design moratoria, code annotation for geopolitically salient code paths) instantiate reflexivity and accountability. Effective power among the state ($P_S$), the corporation ($P_C$), and workers ($P_W$) moves, ideally, toward a regime in which $P_W$ is commensurate with $P_S$ and $P_C$.
6. Ethical Power, Legitimacy, and Contestation
A central challenge in operationalizing AI ethics is the dysfunction and imbalance of power structures in sociotechnical systems, with unchecked technical dominance marginalizing public and community interests (Jin et al., 12 Oct 2025). Four recurrent syndromes—audience scope, superficial design, evaluation via plausibility, and explainability–performance trade-off—track how technical cultures undercut substantive accountability.
Effective interventions rest on three pillars:
- Making Power Explicable and Checked: Power analysis and impact audits must be internalized in technical practice, with transparency operators ensuring ethical counter-power reaches parity.
- Reframing Narratives and Norms: Dominant discourses (e.g., explainability as a “brake” on performance) must be replaced with justice-aligned metaphors and blending of ethical and technical objectives.
- Encoding Ethics into Technical Standards: Methodological innovations (critical epistemology, limitations analysis, formal ethical constraints) must be embedded in model development and reporting; a minimal sketch of one such constraint follows this list.
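As a sketch of the third pillar, assume ethical requirements are encoded directly into the training objective as a penalty term; the statistical-parity penalty and the weight `lam` below are illustrative choices, not a standard from Jin et al. (2025).

```python
import numpy as np

def fairness_penalized_loss(y_true, y_prob, group, lam=1.0):
    """Cross-entropy plus a statistical-parity penalty: one illustrative way
    to make an ethical constraint a first-class term in the objective."""
    eps = 1e-9
    ce = -np.mean(y_true * np.log(y_prob + eps)
                  + (1 - y_true) * np.log(1 - y_prob + eps))
    # Gap between the groups' mean predicted positive rates.
    parity_gap = abs(y_prob[group == 0].mean() - y_prob[group == 1].mean())
    return ce + lam * parity_gap

y_true = np.array([1, 0, 1, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.8, 0.4, 0.7, 0.3])
group  = np.array([0, 0, 0, 1, 1, 1])

print(fairness_penalized_loss(y_true, y_prob, group, lam=0.5))
```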
Legitimacy requires AI systems to satisfy substantive justification (“what,” e.g., group fairness), procedural legitimacy (“how,” e.g., transparency, contestability), and proper authority (“who,” e.g., democratic or deliberative control) (Lazar, 9 Apr 2024).
7. Autonomous Agents, Agentic Inequality, and Infrastructure
The diffusion of autonomous AI agents constitutes a qualitatively novel dimension of power asymmetry, termed agentic inequality: disparities in the availability ($A$), quality ($Q$), and quantity ($N$) of agentic capital (Sharp et al., 19 Oct 2025). Power asymmetries are realized through scalable goal delegation and agent–agent competition. Technical and socioeconomic drivers include compute costs, model governance (proprietary versus open-weight), platform integration, market incentives, digital literacy, and geopolitical constraints.
Distinct governance challenges include legal attribution for agentic harms, the Collingridge dilemma (timing of intervention), regulatory pacing, and regulatory capture. Policy proposals span empirical tracking of the distribution of agentic capital, participatory standard setting, public “compute commons,” universal baseline agents (“Universal Basic Agency”), and outcome-based regulation of agentic interactions. The systemic distribution of agentic capital will determine whether advanced AI agents entrench existing hierarchies or enable democratic empowerment.
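One hedged way to operationalize such empirical tracking is a concentration statistic over per-actor agentic capital, for example a Gini coefficient; the two policy regimes and their values below are hypothetical.

```python
def gini(values: list[float]) -> float:
    """Gini coefficient: 0 = perfectly equal distribution, ->1 = concentrated.
    Standard mean-absolute-difference formulation."""
    n = len(values)
    mean = sum(values) / n
    diff_sum = sum(abs(x - y) for x in values for y in values)
    return diff_sum / (2 * n * n * mean)

# Hypothetical per-actor agentic capital under two policy regimes.
status_quo      = [90.0, 60.0, 5.0, 1.0, 0.5, 0.1]
compute_commons = [50.0, 40.0, 20.0, 15.0, 12.0, 10.0]

print(round(gini(status_quo), 3))       # ~0.67: high concentration
print(round(gini(compute_commons), 3))  # ~0.33: flatter distribution
```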
References
- (Lazar, 9 Apr 2024) "Automatic Authorities: Power and AI"
- (Jin et al., 12 Oct 2025) "Making Power Explicable in AI: Analyzing, Understanding, and Redirecting Power to Operationalize Ethics in AI Technical Practice"
- (Srivastava et al., 23 Oct 2024) "AI, Global Governance, and Digital Sovereignty"
- (Reis, 21 Nov 2025) "AI Workers, Geopolitics, and Algorithmic Collective Action"
- (Kawakami et al., 21 May 2024) "Studying Up Public Sector AI: How Networks of Power Relations Shape Agency Decisions Around AI Design and Use"
- (Mackenzie et al., 5 Nov 2025) "Levers of Power in the Field of AI"
- (Sharp et al., 19 Oct 2025) "Agentic Inequality"
- (Hsu et al., 2021) "Empowering Local Communities Using Artificial Intelligence"
- (Leavy et al., 2020) "Data, Power and Bias in Artificial Intelligence"
- (Han et al., 19 Dec 2024) "Who is Helping Whom? Student Concerns about AI-Teacher Collaboration in Higher Education Classrooms"
- (Zheng et al., 2023) "Competent but Rigid: Identifying the Gap in Empowering AI to Participate Equally in Group Decision-Making"