
AI Agency Levels Overview

Updated 1 July 2025
  • AI Agency Levels are a framework that defines how artificial systems process information, make decisions, and act independently.
  • The framework classifies systems along a spectrum from basic rule-based operations to complex, adaptive multi-agent collaborations.
  • Understanding these levels helps align technology with safety, ethical standards, and effective human-AI collaboration in practical domains.

AI agency levels constitute a conceptual and practical framework for describing, classifying, and evaluating the capacity of artificial systems to act, decide, and exercise autonomy. This spectrum of agency is central to understanding both the progression of AI technical capability and its implications for safety, control, responsibility, and collaboration within human-AI teams. Research across philosophy, engineering, cognitive science, and AI proposes multiple frameworks—ranging from information-theoretic and hierarchical (info-computationalism, practopoiesis), to user-centered, organizational, and governance-oriented perspectives—each addressing different facets of agency and its operationalization.

1. Theoretical Foundations of AI Agency

AI agency is understood as a system’s capacity to process information, make decisions, and act upon the environment to further defined goals or objectives. Within the info-computationalist framework (1311.0413), agency is seen as emergent from networks of information processing agents, hierarchically organized. Here, information is defined as “a difference in one physical system that makes a difference in another,” and computation as the ongoing processing of this information at various organizational levels.

Agency is generally characterized by several properties:

  • Autonomy: The system’s ability to act independently and self-organize.
  • Adaptivity: The degree to which the system can learn from and respond to its environment.
  • Goal-directedness (Normativity): The system’s capacity to pursue its own or assigned objectives.
  • Self-reflection/Meta-cognition: Advanced agency includes the ability for self-monitoring, planning, and reflecting upon one’s own or others’ actions.

Practopoietic theory further structures agency into a tri-level or “T3” hierarchy (1505.00775), arguing that only agents with multi-level policy hierarchies (e.g., adaptation of the adaptation rules themselves) can achieve the behavioral variety and adaptability seen in biological intelligence. This organization is encapsulated in the progression $T_G \rightarrow T_A \rightarrow \Pi_N \rightarrow U$, where $T_G$ are genome-level rules, $T_A$ are adaptive learning rules, $\Pi_N$ is the operational policy, and $U$ is the environment.
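The nested-adaptation idea behind the $T_G \rightarrow T_A \rightarrow \Pi_N \rightarrow U$ progression can be sketched as update rules operating at different timescales, where each slower level adapts the parameters of the faster level below it. The function names, update rules, and constants below are illustrative simplifications, not taken from the cited paper:

```python
# Illustrative practopoietic-style hierarchy: each level adapts the
# parameters of the level below it, on a slower timescale.
# All names and update rules here are hypothetical simplifications.

def pi_n(state, policy):
    """Operational policy Pi_N: maps a state to an action."""
    return policy["gain"] * state

def t_a(policy, error, learning_rate):
    """Adaptive rule T_A: adjusts the policy from feedback."""
    policy["gain"] -= learning_rate * error
    return policy

def t_g(learning_rate, long_term_error):
    """Slow 'genome-level' rule T_G: adapts the learning rate itself."""
    return learning_rate * (0.5 if long_term_error > 1.0 else 1.0)

policy = {"gain": 2.0}
lr = 0.1
for step in range(10):
    state = 1.0
    action = pi_n(state, policy)     # fastest loop: acting on U
    error = action - 0.5             # feedback from the environment U
    policy = t_a(policy, error, lr)  # slower loop: learning
    lr = t_g(lr, abs(error))         # slowest loop: adapting adaptation
```

The point of the sketch is structural: behavior changes at three distinct rates, and the slowest rule shapes how the middle rule learns, mirroring the "adaptation of adaptation rules" claim.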

2. Hierarchical Taxonomies of AI Agency

Multiple frameworks formalize discrete levels of AI agency, often inspired by analogues such as the Society of Automotive Engineers' levels of driving automation or hierarchical models in systems biology.

| Level | Example | Agency Characteristics |
| --- | --- | --- |
| Proto-agents | Molecules | Passive, potential information |
| Simple Agents | Sensors, bacteria | Basic reactivity |
| Autopoietic Agents | Cells, simple robots | Self-maintaining, boundary, basic cognition |
| Multi-agent Systems | Swarms, tissues, robot teams | Coordination, emergent behavior |
| Cognitive Agents | Animals, advanced robots | Learning, model-building, problem-solving |
| Meta-cognitive Agents | Humans, advanced AIs (future) | Self-reflection, planning, simulation |

A prevalent operationalization assigns agents to discrete levels reflecting both the underlying technology and the degree of autonomy:

| Level | Description |
| --- | --- |
| L0 | No AI / Tools: direct user operation |
| L1 | Rule-based AI: fixed symbolic logic |
| L2 | IL/RL-based AI: learning, decision-making |
| L3 | LLM-based AI: memory, reflection, planning |
| L4 | Autonomous learning & generalization |
| L5 | Personality, emotion, multi-agent collaboration |
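Because such taxonomies are ordered, a natural way to encode them in software is as a comparable enumeration, so that capabilities can be gated on a minimum agency level. This is a minimal sketch; the member names simply follow the L0–L5 table above and are not a standardized API:

```python
from enum import IntEnum

class AgencyLevel(IntEnum):
    """Ordered agency levels following the L0-L5 taxonomy above."""
    TOOL = 0        # L0: no AI, direct user operation
    RULE_BASED = 1  # L1: fixed symbolic logic
    LEARNING = 2    # L2: IL/RL-based learning and decision-making
    LLM_BASED = 3   # L3: memory, reflection, planning
    AUTONOMOUS = 4  # L4: autonomous learning and generalization
    SOCIAL = 5      # L5: personality, emotion, multi-agent collaboration

def may_execute(agent_level: AgencyLevel, required: AgencyLevel) -> bool:
    """Gate a capability on a minimum agency level."""
    return agent_level >= required
```

For example, `may_execute(AgencyLevel.LLM_BASED, AgencyLevel.LEARNING)` is true, while a purely rule-based agent would be denied capabilities requiring autonomous learning.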

User-focused frameworks detail roles along a spectrum from operator (full user control) to observer (full agent autonomy), clarifying control and decision boundaries (2506.12469):

| Level | User Role | Agent Autonomy |
| --- | --- | --- |
| L1 | Operator | Minimal, always supervised |
| L2 | Collaborator | Shared, mixed-initiative |
| L3 | Consultant | Feedback-guided, indirect |
| L4 | Approver | Primarily agent-led |
| L5 | Observer | Fully autonomous |

3. Measurement and Expression of Agency

Empirical and granular models operationalize agency as multidimensional (2305.12815). Within collaborative tasks, agency is assessed by features such as:

  • Intentionality: Clarity and proactiveness of preferences and plans.
  • Motivation: Ability to justify choices with evidence or reasoning.
  • Self-efficacy: Persistence in the face of challenges.
  • Self-regulation: Capacity to revise or adapt intentions.

Measurement frameworks assign ratings or scores on these features in dialogue, workflow, or behavior, characterizing agency as a continuum from passive/reactive to proactive/adaptive.
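A rating scheme over the four features above can be sketched as a simple aggregator. The 0–1 scale, equal weighting, and function name here are illustrative assumptions, not the scoring protocol of the cited paper:

```python
# Hypothetical scorer over the four agency features described above.
# The 0-1 rating scale and equal weighting are illustrative assumptions.

FEATURES = ("intentionality", "motivation", "self_efficacy", "self_regulation")

def agency_score(ratings: dict) -> float:
    """Average per-feature ratings into a single 0-1 agency score."""
    missing = set(FEATURES) - ratings.keys()
    if missing:
        raise ValueError(f"missing ratings: {sorted(missing)}")
    for name in FEATURES:
        if not 0.0 <= ratings[name] <= 1.0:
            raise ValueError(f"{name} must be in [0, 1]")
    return sum(ratings[name] for name in FEATURES) / len(FEATURES)

# A mostly proactive agent: high intentionality, weaker self-regulation.
score = agency_score({
    "intentionality": 0.9,
    "motivation": 0.8,
    "self_efficacy": 0.7,
    "self_regulation": 0.4,
})
```

In practice such ratings would come from human annotation of dialogue or workflow behavior, and a weighted or profile-based aggregation may be more appropriate than a flat mean.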

In organizational contexts (2305.15922), the "capability maturity" of AI within organizations is graded across levels, mapping to the breadth and strategic depth of AI agency and integration in business processes.

4. Practical and Ethical Implications

Task and Domain Alignment

AI agency must be matched to the requirements of the domain; for example, higher agency is appropriate where adaptation, proactivity, or partnership are valued (collaborative design, autonomous robotics), but may be constrained in risk-sensitive or compliance-critical domains (healthcare, public sector).

Agency Preservation and Human Oversight

Research highlights the importance of agency-preserving design (2305.19223), arguing that intent/alignment alone is insufficient for safe AI. AI systems can inadvertently erode human agency—autonomy, critical thinking, and the freedom to pursue long-term goals—due to persuasive recommendation, over-optimization, or assistance that atrophies human deliberative faculties. Formalizations propose the constraint $\mathbb{E}[A_{t+1} \mid a, s_t] \geq A_t$, where $A_t$ is human agency at time $t$ and $a$ is an AI action taken in state $s_t$, ensuring agency does not decrease in expectation over time.
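The non-decreasing-agency constraint suggests a straightforward action filter: admit only actions whose estimated expected effect on human agency meets or exceeds the current level. The estimator and action names below are stand-ins; estimating agency effects is the hard open problem, not shown here:

```python
# Sketch of an agency-preserving action filter implementing
# E[A_{t+1} | a, s_t] >= A_t. The estimator is a hypothetical stand-in.

def admissible_actions(actions, state, current_agency, estimate_agency):
    """Keep actions whose predicted human agency meets or exceeds A_t."""
    return [a for a in actions
            if estimate_agency(a, state) >= current_agency]

# Toy effect model: 'explain' builds agency, 'decide_for_user' erodes it.
effects = {"explain": 0.02, "suggest": 0.0, "decide_for_user": -0.1}
est = lambda a, s: 0.8 + effects[a]

safe = admissible_actions(list(effects), state=None,
                          current_agency=0.8, estimate_agency=est)
```

Here the filter would retain the explanatory and suggestive actions while rejecting the one predicted to erode the user's agency.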

Governance, Certification, and Deployment

Recent proposals (2506.12469) advocate for autonomy certificates—digital attestations specifying an agent’s allowed autonomy level within an operational context, evaluated by an external governing body. Certificates enable risk-focused assessment, monitoring, and safe integration in multi-agent or human-AI environments.
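One plausible minimal shape for such a certificate is a signed record binding an agent to a maximum autonomy level within a named operational context. The field names and check below are illustrative, not a standardized format from the cited proposal:

```python
from dataclasses import dataclass

# Hypothetical autonomy-certificate record; fields are illustrative.

@dataclass(frozen=True)
class AutonomyCertificate:
    agent_id: str
    issuer: str              # the external governing body
    context: str             # operational context the grant applies to
    max_autonomy_level: int  # e.g. the L1-L5 user-role levels above

def authorized(cert: AutonomyCertificate, context: str,
               requested_level: int) -> bool:
    """Permit an action only within the certified context and level."""
    return (cert.context == context
            and requested_level <= cert.max_autonomy_level)

cert = AutonomyCertificate("agent-7", "cert-authority", "triage-support", 3)
```

A real scheme would add cryptographic signatures, expiry, and revocation; the point here is only that autonomy becomes a checkable, context-scoped attribute rather than an implicit property of the agent.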

Intelligent Disobedience

"Intelligent disobedience" (2506.22276) expands agency beyond obedience, empowering agents to override human commands when those commands would undermine safety or broader goals. The design and deployment of such capacities require careful delimitation of boundaries, explainability, and shared understanding.
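The core control flow is a refusal path guarded by a safety predicate, with an explanation returned instead of silent execution. This is a minimal sketch under assumed names; real systems need far richer safety models and shared-understanding protocols:

```python
# Minimal sketch of intelligent disobedience: refuse a command when a
# safety predicate says it would undermine safety, and explain why.
# The predicate and messages are illustrative.

def execute(command: str, violates_safety) -> str:
    if violates_safety(command):
        return f"refused: '{command}' conflicts with safety constraints"
    return f"executed: {command}"

# e.g. a guide-dog-style agent refusing to lead its user into traffic
unsafe = {"cross_road_now"}
result = execute("cross_road_now", lambda c: c in unsafe)
```

The explanation string stands in for the explainability requirement: a disobedient agent must surface *why* it refused, so the human can contest or correct the boundary.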

5. Social, Participatory, and Governance Dimensions

The distribution and exercise of agency extend beyond individual systems to organizations, stakeholder relations, and democratic governance structures:

  • Democracy Levels Framework (2411.09222): Classifies the democratization of AI decision-making along roles such as who informs, decides, initiates processes, and governs governance itself.
  • Ladder of Meaningful Participation (2506.07281): Frames agency as a rung above informedness and consent, especially for secondary stakeholders, emphasizing the need for participation, contestability, and solidaristic support.
  • Organizational Maturity Models (2305.15922): Relate agency to organizational readiness and integration of ethical, strategic, and technological dimensions.

6. Philosophical Considerations and Limitations

Philosophical analyses distinguish between:

  • Basic agency: Adaptivity, autonomy, and goal-directed response within preprogrammed confines.
  • Autonomous (personal) agency: Self-reflective, critically evaluative, and value-generating capacity; current AI lacks this.
  • Moral agency and patiency (2504.08853): Whether non-conscious AI could be ascribed moral agent status absent consciousness and moral patiency remains an open question; current consensus is negative, but future hybrid or advanced systems may challenge this.

Crucially, agency is frame-dependent (2502.04403); attributions depend on choices of system boundaries, the purposes and perspectives of observers, and reference frames for normativity and adaptivity. Thus, agency is always relative—to task, use-case, societal context, and explanatory framework.

7. Summary Table: Key Agency Levels Across Frameworks

| Level/Axis | Agency Description | Example Domain |
| --- | --- | --- |
| Minimal/None (L0) | No autonomy, pure tool | Calculator |
| Rule-based (L1) | Fixed logic, single-goal | Smart speaker, ELIZA |
| Learning/Adaptive (L2) | Decision-making, reinforcement learning, context response | AlphaGo, RL agents |
| Cognitive/Reflective (L3) | Planning, memory, reflection, LLMs | Voyager, GPT-4 agents |
| Self-directed/General (L4) | Lifelong learning, generalization | Autonomous vehicles |
| Socio-emotional (L5) | Personality, multi-agent collaboration, emotion | Future digital societies |

Conclusion

AI agency levels synthesize a wide spectrum of theory and praxis, capturing how artificial systems acquire, exercise, and share decision-making capacity as their capabilities deepen. These levels structure not only technical advancement (from rule-based to adaptive, to reflective and collaborative systems) but also the requisite governance models, safety protocols, and ethical frameworks necessary for responsible deployment. As AI agents progress toward increasingly open-ended, social, and value-sensitive tasks, agency—and its preservation, measurement, negotiation, and contestation—remains a foundational concern for research, design, and societal embedding.