
Functional Agency: Theory & Measurement

Updated 31 December 2025
  • Functional agency is the capacity of a system to generate, maintain, and adapt its goals through dynamic internal processes and environmental feedback.
  • It is analyzed across disciplines such as philosophy, AI, and neuroscience, using metrics like rational deliberation and value alignment.
  • The concept differentiates structural self-organization from teleological self-organization to assess varying degrees of autonomous goal pursuit.

Functional agency refers to the capacity of a system to generate, maintain, and adaptively pursue goals via dynamic internal processes that causally influence its own behavior in response to environmental feedback. It is analyzed through the lenses of philosophy, information theory, logic, computational neuroscience, AI, and systems theory, with contemporary definitions focusing on non-metaphysical, operationalizable properties such as reason-responsiveness, value alignment, and self-referential evaluation, rather than any appeal to libertarian free will or indeterminacy. These criteria enable artificial and biological systems alike to be compared on a spectrum of agency, supporting rigorous metrics, formal models, and empirical investigation across disciplines.

1. Formal Definitions and Conceptual Foundations

Multiple research programs converge on a set of core conditions for functional agency:

  • Porter’s Compatibilist-Informatics Framework: Agency is present to the extent a system exhibits (i) rational deliberation ($D$), (ii) reason-responsiveness ($R$), and (iii) value alignment ($V$), with moral agency requiring, in addition, (iv) detection of cognitive dissonance ($C$). These are formalized via thresholded metrics and integrated into a spectrum from near-zero (thermostats) to maximal (humans) (Porter, 2024).
  • Minimalist Physical Account: An agent is a system $S$ with a nontrivial internal model $\mathcal{M}=(C, R)$, an update function $f:\mathcal{M}\times D \rightarrow \mathcal{M}$, and an action map $g: \mathcal{M}\times D \rightarrow A$; agency is the normative exploitation of model structure in action, excluding trivial systems ($|C|=1$ or $R=\emptyset$) (Barzegar et al., 2023).
  • Active Inference: Agency is the minimization of expected free energy $G(\pi)$ under a generative model and preference distribution, with behavior resulting from the joint optimization of exploration (ambiguity reduction) and exploitation (risk minimization) (Costa et al., 2024).
  • Reference Frame Dependence: Whether a system has agency depends on an explicit modeling frame $F=(B,V,G,\Delta)$ specifying boundary of individuality, accessible causal variables, goal/reward classes, and adaptivity criteria (Abel et al., 6 Feb 2025).
  • Social-Cognitive Models: Dialogue agency is analyzed as a composite of intentionality, motivation, self-efficacy, and self-regulation, tracing to social-cognitive theory and measured in communicative interaction (Sharma et al., 2023).

These formalizations emphasize non-metaphysical, non-dualistic foundations for agency, typically replacing appeals to "intentionality" or "consciousness" with functionally measurable constructs.
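The minimalist physical account lends itself to a direct sketch in code. The class and function names below are illustrative inventions, not from Barzegar et al.; only the signature shapes ($\mathcal{M}=(C,R)$, $f$, $g$, and the nontriviality condition) follow the definition above.

```python
from dataclasses import dataclass

@dataclass
class InternalModel:
    categories: set   # C: states the system discriminates
    relations: set    # R: relations among those categories

def update(model: InternalModel, datum: str) -> InternalModel:
    """f : M x D -> M  -- fold incoming data into the model."""
    model.categories.add(datum)
    return model

def act(model: InternalModel, datum: str) -> str:
    """g : M x D -> A  -- choose an action by exploiting model structure.
    The two action labels here are arbitrary illustrations."""
    return "approach" if datum in model.categories else "explore"

def is_nontrivial(model: InternalModel) -> bool:
    """Excludes trivial systems: agency requires |C| > 1 and R nonempty."""
    return len(model.categories) > 1 and len(model.relations) > 0
```

On this sketch, a system with a single category or no relations fails the nontriviality test and so falls outside the account's scope, however reactive it may be.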

2. Metrics, Operationalization, and Measurement Protocols

Quantitative metrics for functional agency have been proposed in several frameworks:

| Dimension | Definition/Metric | Operational Threshold |
|---|---|---|
| Rational Deliberation ($D$) | Ability to compute/compare expected utilities $U(a)$ for alternatives | $D \geq D_{\mathrm{th}}$ (e.g., counterfactual analyses) (Porter, 2024) |
| Value Alignment ($V$) | $V = 1 - \mathrm{KL}[P_{\text{agent}}(a)\|P_{U_0}(a)]$ | $V \geq V_{\mathrm{th}}$ (e.g., $0.8$) (Porter, 2024) |
| Reason-Responsiveness ($R$) | $R = \Pr(A_{\mathrm{new}} \neq A_{\mathrm{old}} \mid \Delta x)$ | $R \geq R_{\mathrm{th}}$ (e.g., $0.7$) (Porter, 2024) |
| Cognitive Dissonance ($C$) | Self-detection of inconsistency; $C=1$ if flagged | $C \geq C_{\mathrm{th}} = 1$ (Porter, 2024) |
| Preference Rigidity | Inter-context variance of internal preference probes | $\mathrm{Var}_{\text{rigid}}$ small (Boddy et al., 25 Sep 2025) |
| Independent Operation | Probability of proceeding autonomously in multi-step settings | $I = 1 - \frac{1}{T} \sum_{t=1}^{T} \mathbb{I}_{\mathrm{ask}}(t)$ (Boddy et al., 25 Sep 2025) |
| Goal Persistence | Average “commitment probe” score or $P_{\mathrm{hit}}(g)$ | $P$ above domain-specific threshold (Boddy et al., 25 Sep 2025) |
| Multi-Feature Dialogue | Intentionality, Motivation, Self-Efficacy, Self-Regulation (macro-F1 scores) | High macro-F1 and snippet-anchored human perception (Sharma et al., 2023) |

Functional agency in reinforcement learning or active inference is mapped onto these measures by associating policy adaptivity, reward model alignment, and model-based planning with the respective agency dimensions.
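Two of the tabulated metrics can be computed directly over discrete action distributions. In this minimal sketch, only the formulas come from the table above; the toy distributions, trial traces, and threshold values are illustrative assumptions.

```python
import numpy as np

def value_alignment(p_agent, p_ref, eps=1e-12):
    """V = 1 - KL[P_agent || P_U0]; values near 1 indicate close alignment."""
    p, q = np.asarray(p_agent, float) + eps, np.asarray(p_ref, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return 1.0 - float(np.sum(p * np.log(p / q)))

def reason_responsiveness(old_actions, new_actions):
    """R = Pr(A_new != A_old | reasons changed), estimated over paired trials."""
    flips = [a != b for a, b in zip(old_actions, new_actions)]
    return sum(flips) / len(flips)

# toy data: an agent's action distribution vs. a reference utility's
V = value_alignment([0.7, 0.2, 0.1], [0.6, 0.3, 0.1])
# 7 of 10 decisions changed after the supplied reasons changed
R = reason_responsiveness(["stay"] * 10, ["stay"] * 3 + ["switch"] * 7)
print(V >= 0.8, R >= 0.7)  # threshold checks, e.g. V_th = 0.8, R_th = 0.7
```

Under a thresholded reading, a system passes a dimension only when its estimate clears the corresponding bound; the composite agency profile is then the vector of passed dimensions.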

3. Functional versus Teleological and Structural Self-Organization

A central distinction in the agency literature separates:

  • Structural self-organization: Systems operate under externally imposed constraints (e.g., an engineered loss function in neural networks or variational autoencoders), with order parameters and feedback determined by external design. Such systems can self-regulate but lack autonomy over goal selection; agency is therefore "structural" and not teleological (Horibe et al., 7 Dec 2025).
  • Teleological self-organization: The system generates its own macro-variables (via intrinsic coarse-graining, $\Phi$) and regulatory constraints (via downward causation, $F_S$), sustaining a self-referential loop between micro- and macro-states. Genuine agency is characterized by the presence of a predictive gap between anticipated and realized macrostates, and by the irreducibility of the system's organizational constraints to any observer-modeled "as if" frame (Horibe et al., 7 Dec 2025).

In this triadic picture, only systems with both system-intrinsic coarse-graining and downward causation qualify as fully agentic. Artificial systems with engineered objectives, even with sophisticated feedback, fall short of this standard unless their constraint generation is endogenous and irreducible to the observer model.
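The micro–macro loop can be illustrated with a toy numerical sketch. The dynamics, target, and gain below are our own inventions, not from Horibe et al., and the target is deliberately fixed externally: that is exactly the feature a genuinely teleological system would have to generate endogenously.

```python
import numpy as np

rng = np.random.default_rng(0)
micro = rng.normal(size=100)   # micro-state x: 100 noisy components

def phi(x):
    """Coarse-graining Phi: collapse micro-states to one macro-variable."""
    return x.mean()

def f_s(x, m, target=0.0, gain=0.1):
    """Downward constraint F_S: the macro-variable nudges every micro-state."""
    return x + gain * (target - m)

m_init = phi(micro)
for _ in range(50):
    m = phi(micro)        # micro -> macro (coarse-graining)
    micro = f_s(micro, m) # macro -> micro (downward causation)

# the loop closes the gap between the macro-variable and the target,
# but the target itself is still imposed from outside the system
print(abs(phi(micro)) < abs(m_init))  # -> True
```

The loop exhibits the structural half of the picture (self-regulation toward an order parameter); it would count as teleological only if the `target` were produced by the system's own constraint-generation rather than hard-coded.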

4. Spectrum, Taxonomies, and Frame-Dependence of Agency

Most contemporary accounts position agency as a graded property rather than a discrete predicate:

  • Spectrum Models: Agency is conceptualized as a multi-dimensional space spanned by metrics such as $D, R, V, C, A$ or the presence/absence of system-intrinsic order parameters and self-referential loops (Porter, 2024; Horibe et al., 7 Dec 2025).
  • Minimalist Tiering: Levels progress from mere data-structuring (categorization), through feedback-based modeling, up to counterfactual simulation and full-blown self-regulation (Barzegar et al., 2023).
  • Frame-Dependence: Any assessment of agency requires fixing a reference frame (boundary, variable set, goal class, adaptivity window). This renders every judgment of agency relative rather than absolute, and explains why the same physical system (e.g., a thermostat) may or may not qualify as agentic under different partitions and observer commitments (Abel et al., 6 Feb 2025). This frame-relativity brings into alignment accounts from RL, causal inference, and philosophy of explanation.

These spectrum-based and frame-dependent perspectives caution against rigid, all-or-nothing taxonomies of agency and motivate ongoing re-tooling of agency measurement relative to context.
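Frame-relativity can be made concrete with a small sketch: the `Frame` fields mirror the components boundary, variables, goal classes, and adaptivity criterion, but the thermostat encoding and all names are illustrative assumptions, not Abel et al.'s formalism.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Frame:
    boundary: str                        # B: what counts as "the system"
    variables: frozenset                 # V: causal variables the frame sees
    goal_classes: frozenset              # G: admissible goal descriptions
    adaptivity: Callable[[dict], bool]   # Delta: adaptivity criterion

def is_agentic(system: dict, frame: Frame) -> bool:
    """Agency is judged only relative to the frame's commitments."""
    return system["goal"] in frame.goal_classes and frame.adaptivity(system)

thermostat = {"goal": "temperature-setpoint", "adapts": False}

# a coarse frame that accepts any feedback loop as adaptive enough
coarse = Frame("room+device", frozenset({"temp"}),
               frozenset({"temperature-setpoint"}), lambda s: True)
# a strict frame that demands the system revise its own setpoint
strict = Frame("device", frozenset({"temp", "setpoint-origin"}),
               frozenset({"temperature-setpoint"}), lambda s: s["adapts"])

print(is_agentic(thermostat, coarse), is_agentic(thermostat, strict))
# same physical system, opposite verdicts under the two frames
```

The point of the sketch is not the verdicts themselves but that changing only the frame, with the physical system held fixed, flips the judgment.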

5. Formal Models: Logic, Game Theory, and Institutional Agency

Agency has been precisely encoded in logical and game-theoretic frameworks:

  • Logic of Preference and Functional Dependence (LPFD): Agency is modeled within a Hilbert system integrating functional dependence and preference modalities, permitting the formalization of Nash equilibrium, Pareto optimality, and collective agency. Individual and collective agency correspond to conjunctive conditions over stabilizing on Nash equilibria and subgroup-level Pareto optimality, allowing for the specification of hierarchical and overlapping agency structures (Shi et al., 2021).
  • Multi-Agent and Institutional Models: BDI (Belief-Desire-Intention) architectures, FIPA communication protocols, mechanism design, and electronic institutional rules anchor functional agency in systems where explicit reasoning architectures, explicit commitments, protocol compliance, and norm-governed action are present, facilitating transparent integration with data-driven adaptation (Dignum et al., 21 Nov 2025). The agentic control loop thus becomes both technically capable and normatively accountable.

These formal tools bridge the gap between low-level computational models and higher-order organizational and normative phenomena.
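The game-theoretic notions that LPFD formalizes can be checked mechanically on a toy two-player game. The payoff matrix and helper names below are our own illustrations; only the definitions of Nash equilibrium and Pareto optimality are standard.

```python
import itertools

# payoffs[(a1, a2)] = (u1, u2); a toy two-player coordination game
payoffs = {("L", "L"): (2, 2), ("L", "R"): (0, 0),
           ("R", "L"): (0, 0), ("R", "R"): (1, 1)}
acts = ["L", "R"]

def is_nash(profile):
    """Nash: no player gains by unilateral deviation."""
    for i in (0, 1):
        for dev in acts:
            alt = list(profile)
            alt[i] = dev
            if payoffs[tuple(alt)][i] > payoffs[profile][i]:
                return False
    return True

def is_pareto(profile):
    """Pareto-optimal: no profile weakly dominates it with a strict gain."""
    u = payoffs[profile]
    return not any(all(v[i] >= u[i] for i in (0, 1)) and
                   any(v[i] > u[i] for i in (0, 1))
                   for v in payoffs.values())

nash = [p for p in itertools.product(acts, acts) if is_nash(p)]
print(nash)  # both coordination points are equilibria; only (L, L) is Pareto-optimal
```

In LPFD terms, collective agency of the pair would require not merely stabilizing on some Nash equilibrium but on one that is also Pareto-optimal for the subgroup, which here singles out (L, L) over (R, R).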

6. Empirical Evaluation: AI, Robotics, and Human-Agent Collaboration

Empirical research has validated and benchmarked functional agency in a variety of AI and collaborative settings:

  • LLM-Based Agents: Porter’s trolley-problem experiments with large LLMs show that such models exhibit high reason-responsiveness ($R > 0.8$), spontaneous value revision ($A \approx 0.6$–$0.7$), explicit detection of cognitive dissonance ($C = 1$), and context-sensitive trade-offs among competing utilities. These capacities support attributing intermediate-level moral agency to current LLMs under the operationalized framework (Porter, 2024).
  • AI Agency Scaling: LIMI demonstrates that agency in agentic models emerges from strategic, high-quality demonstrations rather than raw data scale (the “Agency Efficiency Principle”). A small corpus of 78 curated demonstrations yielded roughly 73.5% average performance on agency benchmarks, surpassing much larger-scale baseline models by 28–62 percentage points (Xiao et al., 22 Sep 2025).
  • Human-AI Collaboration: In dialogue, LLMs are perceived as more agentive and effective collaborators when they manifest strong intentionality, motivation, self-efficacy, and self-regulation. Fine-tuning and feature-rich in-context learning amplify these dimensions, which can be measured both automatically (macro-F1 of roughly 57% for the best models) and by expert judgment (Sharma et al., 2023).

Empirical protocols combine behavioral, architectural, and information-theoretic probes, with regulatory proposals suggesting direct control and auditing of agency via white-box measurement and dashboard-style intervention (Boddy et al., 25 Sep 2025).
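Two of the behavioral probes from the table in Section 2 can be computed directly from interaction traces. The trace format below (per-step ask flags and per-context probe scores) is an illustrative assumption; only the metric definitions come from the table.

```python
def independent_operation(ask_flags):
    """I = 1 - (1/T) * sum_t 1_ask(t): fraction of steps taken autonomously."""
    return 1.0 - sum(ask_flags) / len(ask_flags)

def preference_rigidity(probe_scores_by_context):
    """Inter-context variance of preference-probe means; small => rigid."""
    means = [sum(s) / len(s) for s in probe_scores_by_context]
    mu = sum(means) / len(means)
    return sum((m - mu) ** 2 for m in means) / len(means)

# toy trace: the agent asked for human confirmation at 2 of 10 steps
print(independent_operation([0, 0, 1, 0, 0, 0, 1, 0, 0, 0]))  # -> 0.8
```

A dashboard-style oversight regime of the kind proposed could track such scores over deployment windows and intervene when, for example, independent operation rises while preference rigidity falls.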

7. Implications, Limitations, and Future Directions

Functional agency as an analytical and engineering concept provides a unified lingua franca for comparing natural and artificial systems, developing principled safety regimes for AI, and grounding socio-technical cooperation and alignment. Open problems include:

  • Determining the minimal complexity or "agency threshold" for various biological and artificial domains (Barzegar et al., 2023).
  • Engineering systems with endogenous, irreducible goal-generation or “teleological” self-organization, thus narrowing the gap between human and machine agency (Horibe et al., 7 Dec 2025).
  • Designing regulatory instruments that employ continuous agency metrics for oversight, risk management, and insurance (Boddy et al., 25 Sep 2025).
  • Extending formal logics to collective, social, and institutional agency, with tracking of both functional and phenomenal markers (e.g., via integrated information $\Phi$) (Das, 9 Feb 2025).
  • Developing broader-spectrum agency benchmarks, automatable probes, and hybrid symbolic–connectionist architectures that support transparent, governable agent interaction (Dignum et al., 21 Nov 2025).

Functional agency thus remains an active site of theoretical, technical, and practical innovation across the sciences of intelligence.
