Agency Is Frame-Dependent (2502.04403v1)

Published 6 Feb 2025 in cs.AI

Abstract: Agency is a system's capacity to steer outcomes toward a goal, and is a central topic of study across biology, philosophy, cognitive science, and artificial intelligence. Determining if a system exhibits agency is a notoriously difficult question: Dennett (1989), for instance, highlights the puzzle of determining which principles can decide whether a rock, a thermostat, or a robot each possess agency. We here address this puzzle from the viewpoint of reinforcement learning by arguing that agency is fundamentally frame-dependent: Any measurement of a system's agency must be made relative to a reference frame. We support this claim by presenting a philosophical argument that each of the essential properties of agency proposed by Barandiaran et al. (2009) and Moreno (2018) are themselves frame-dependent. We conclude that any basic science of agency requires frame-dependence, and discuss the implications of this claim for reinforcement learning.

Summary

  • The paper argues that agency is not an absolute property but is inherently dependent on the reference frame from which it is observed or measured, challenging conventional views in biology, AI, and philosophy.
  • The authors propose that key components of agency—individuality, source of action, normativity, and adaptivity—are all contingent upon the choice of reference frame.
  • This frame-dependent perspective has significant implications for reinforcement learning and AI, suggesting that evaluations of system behavior require careful consideration of the reference frames applied, and motivating future work on formal mathematical models of such frames.

Essay: Agency is Frame-Dependent

The paper "Agency Is Frame-Dependent" investigates the notion of agency from the perspective of reinforcement learning (RL) and posits that agency is inherently dependent on the reference frame from which it is measured. This exploration explores the philosophical underpinnings of agency within various domains, including biology, cognitive science, artificial intelligence, and philosophy, addressing long-standing questions such as the nature of agency and the conditions under which a system can be said to possess it.

Abstract of Findings

In this examination, agency is framed as a system's capacity to steer outcomes toward specific goals, a characterization that resonates across numerous scientific disciplines. The puzzle of deciding whether a given system, from an inanimate object like a rock to a sophisticated machine like a robot, possesses agency is addressed by arguing that agency is not an absolute attribute but a frame-dependent one. The authors build a philosophical argument on a four-part characterization of agency drawn from Barandiaran et al. (2009) and Moreno (2018): individuality, self-sourced action, normativity, and adaptivity, each of which is argued to be contingent upon the choice of a reference frame.

Key Arguments and Claims

  1. Individuality: Agency requires the system to have a distinct boundary separating it from its environment. This boundary is subjective, with multiple plausible delineations influencing the perception of the agent's individuality.
  2. Source of Action: A system's actions must originate from the system itself, not externally. Kenton et al. suggest that determining an agent within a causal model hinges on the selection of causal variables—a process inherently dependent on the frame of reference.
  3. Normativity: Agency involves meaningful goal-directed behavior. However, the assignment of goals can vary widely; thus, the determination of meaningful versus trivial goals is inherently subjective, depending on predefined criteria or biases.
  4. Adaptivity: Agency requires adaptivity, understood as the responsiveness of a system's actions to its inputs. Zadeh observed that adaptivity is context-sensitive: what one reference frame considers adaptive could be seen as static in another.

Implications for Reinforcement Learning and AI

This frame-dependent view of agency carries significant implications for reinforcement learning and the broader understanding of artificial intelligence. It suggests that evaluations of a system's behavior—whether adaptive, intelligent, or goal-directed—require careful consideration of the reference frames applied.
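
As a concrete illustration of this point (a toy sketch, not a construction from the paper), the example below evaluates one and the same thermostat-like system under two hypothetical reference frames that differ in where the system boundary is drawn and which goal is attributed to the system. The names ReferenceFrame, boundary, attributed_goal, and steers_toward_goal are assumptions introduced here for illustration only.

```python
# Toy sketch only: a "reference frame" is reduced here to two choices, where the
# system boundary is drawn and which goal is attributed to the system. These
# names and this agency test are illustrative assumptions, not the paper's.
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, float]

@dataclass
class ReferenceFrame:
    name: str
    boundary: frozenset                        # which components count as "the system" (individuality)
    attributed_goal: Callable[[State], float]  # goal ascribed to the system (normativity)

def steers_toward_goal(frame: ReferenceFrame, before: State, after: State) -> bool:
    """Toy verdict: did the transition improve the goal this frame attributes to the system?

    Only the attributed goal enters this verdict; the boundary is recorded to
    mirror the individuality argument, which a fuller model would also use.
    """
    return frame.attributed_goal(after) > frame.attributed_goal(before)

# One transition: the thermostat switches the heater on and the room warms up.
before: State = {"room_temp": 17.0, "setpoint": 20.0, "heater_on": 0.0}
after: State = {"room_temp": 19.0, "setpoint": 20.0, "heater_on": 1.0}

# Frame A draws the boundary around the thermostat and attributes to it the goal
# of closing the gap between room temperature and setpoint.
frame_a = ReferenceFrame(
    name="thermostat-as-agent",
    boundary=frozenset({"thermostat"}),
    attributed_goal=lambda s: -abs(s["room_temp"] - s["setpoint"]),
)

# Frame B draws the boundary around thermostat, room, and occupant, and
# attributes the goal of minimizing energy use.
frame_b = ReferenceFrame(
    name="household-as-system",
    boundary=frozenset({"thermostat", "room", "occupant"}),
    attributed_goal=lambda s: -s["heater_on"],
)

for frame in (frame_a, frame_b):
    print(frame.name, "->", steers_toward_goal(frame, before, after))
# thermostat-as-agent -> True
# household-as-system -> False
```

The same transition thus counts as goal-directed under one frame and not under the other, which is the sense in which any evaluation of adaptive or goal-directed behavior depends on the frame applied.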

The argument prompts a reconsideration of foundational aspects of RL, such as system boundaries, causal relationships, and reward structures. It also touches on the interplay between intelligence and agency, asking whether the two concepts are interdependent and to what degree RL methods should account for frame-dependent agency in their design and evaluation.

Future Directions

Building a formal mathematical model of reference frames would provide a rigorous backbone to the philosophical claims made in the paper, potentially informing more universal standards for evaluating agency in artificial systems. Further research may explore methods for selecting optimal reference frames guided by practical utility in prediction or explanation of behavior.
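
As a purely hypothetical sketch of the direction such a formalization might take (none of the symbols below are taken from the paper), one could package the four frame-dependent ingredients into a single object and make every agency judgment explicitly relative to it, with frame selection scored by some utility U over prediction or explanation:

```latex
% Hypothetical sketch; B, C, g, \alpha, Ind, Src, Norm, Adapt, U, and \mathcal{F}
% are placeholders introduced for illustration, not notation from the paper.
\begin{align*}
  F &= (B,\; C,\; g,\; \alpha)
    && \text{frame: boundary, causal attribution, goal, adaptivity criterion} \\
  \mathrm{Agent}_F(S) &\iff \mathrm{Ind}_B(S) \wedge \mathrm{Src}_C(S)
        \wedge \mathrm{Norm}_g(S) \wedge \mathrm{Adapt}_\alpha(S)
    && \text{agency of a system } S \text{ judged only relative to } F \\
  F^{\ast} &= \operatorname*{arg\,max}_{F \in \mathcal{F}} \, U(F;\ \text{observed behavior})
    && \text{frame selection by predictive or explanatory utility}
\end{align*}
```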

Additionally, this work aligns with broader cognitive and philosophical frameworks, such as Dennett's intentional stance and Marr's levels of analysis, suggesting a layered approach to understanding cognitive processes, wherein agency might serve as a bridge between physical systems and abstract constructs like goal-directed behavior.

Conclusion

The paper makes a thought-provoking case that agency, a core concept in many scientific discussions and real-world applications, is not an objective measure but is deeply entwined with the observer's perspective. In grounding agency within chosen reference frames, this research opens new avenues for exploring how we define, measure, and implement agency in artificial and natural systems alike, encouraging a re-evaluation of theoretical and methodological approaches in AI research.
