- The paper argues that agency is not an absolute property but is inherently dependent on the reference frame from which it is observed or measured, challenging conventional views in biology, AI, and philosophy.
- The authors propose that key components of agency—individuality, source of action, normativity, and adaptivity—are all contingent upon the choice of reference frame.
- This frame-dependent perspective has significant implications for reinforcement learning and AI, suggesting that evaluations of system behavior require careful consideration of applied reference frames and potentially leading to formal mathematical models.
Essay: Agency Is Frame-Dependent
The paper "Agency Is Frame-Dependent" investigates the notion of agency from the perspective of reinforcement learning (RL) and posits that agency is inherently dependent on the reference frame from which it is measured. The essay traces the philosophical underpinnings of agency across biology, cognitive science, artificial intelligence, and philosophy, addressing long-standing questions such as what agency is and under what conditions a system can be said to possess it.
Abstract of Findings
In this examination, agency is framed as a system's capacity to steer outcomes toward specific goals, a view that resonates across numerous scientific disciplines. The puzzle of attributing agency, whether to inanimate objects like rocks or to sophisticated systems like robots, is unpacked by asserting that agency cannot be viewed as an absolute attribute but only as a frame-dependent one. The authors offer a philosophical argument that builds on foundational definitions of agency, encapsulating it in the four-part structure originally presented by Barandiaran et al.: individuality, self-sourced action, normativity, and adaptivity, each of which is proposed to be contingent on the choice of a reference frame.
Key Arguments and Claims
- Individuality: Agency requires the system to have a distinct boundary separating it from its environment. Yet this boundary is not given by nature: multiple plausible delineations exist, and each yields a different verdict on the agent's individuality.
- Source of Action: A system's actions must originate from the system itself, not externally. Kenton et al. suggest that determining an agent within a causal model hinges on the selection of causal variables—a process inherently dependent on the frame of reference.
- Normativity: Agency involves meaningful goal-directed behavior. However, the assignment of goals can vary widely; thus, the determination of meaningful versus trivial goals is inherently subjective, depending on predefined criteria or biases.
- Adaptivity: Agency embodies adaptivity, i.e., the responsiveness of a system's actions to its inputs. As Zadeh observed, adaptivity is criterion-relative: behavior that one reference frame counts as adaptive may appear non-adaptive in another.
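The frame-dependence of the "source of action" component can be made concrete with a toy sketch. The thermostat scenario, the variable names, and the helper function below are my own illustrative assumptions, not a formalism from the paper: the same event counts as self-sourced or externally caused depending only on where the frame draws the system boundary.

```python
# Hypothetical sketch: whether an action is "self-sourced" depends on the
# boundary a reference frame draws around the system, not on the physics.

def is_self_sourced(boundary, cause_of_action):
    """An action counts as self-sourced only if its proximate cause
    lies inside the boundary the frame assigns to the system."""
    return cause_of_action in boundary

# A thermostat switches the heater on. The proximate cause is its internal
# set-point comparison, but that comparison is itself driven by room temperature.
frame_narrow = {"set_point", "comparator"}                       # thermostat alone
frame_wide = {"set_point", "comparator", "room_temperature"}     # thermostat + room

print(is_self_sourced(frame_narrow, "comparator"))        # True: cause is internal
print(is_self_sourced(frame_narrow, "room_temperature"))  # False: cause is external
print(is_self_sourced(frame_wide, "room_temperature"))    # True under the wider frame
```

Neither frame is "wrong"; the verdict simply tracks the chosen boundary, which is the paper's point about individuality and the source of action.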
Implications for Reinforcement Learning and AI
This frame-dependent view of agency carries significant implications for reinforcement learning and the broader understanding of artificial intelligence. It suggests that evaluations of a system's behavior—whether adaptive, intelligent, or goal-directed—require careful consideration of the reference frames applied.
The paper prompts a reconsideration of foundational aspects of RL, such as system boundaries, causal relationships, and reward structures. Furthermore, it touches on the interplay between intelligence and agency, probing whether these concepts are interdependent and to what degree RL methods should account for frame-dependent agency in their design and evaluation.
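One way to see why reward structures matter for such evaluations is that the same fixed trajectory can look strongly goal-directed under one reward function and goal-averse under another. The tiny example below is my own sketch under assumed reward functions, not an experiment from the paper:

```python
# Illustrative sketch: "goal-directedness" of a fixed behavior depends on
# which reward function the evaluating frame applies.

def return_under(reward_fn, trajectory):
    """Undiscounted return of a fixed trajectory under a chosen reward frame."""
    return sum(reward_fn(s, a) for s, a in trajectory)

# A trajectory that always moves right through states 0, 1, 2.
trajectory = [(0, "right"), (1, "right"), (2, "right")]

reach_right = lambda s, a: 1.0 if a == "right" else 0.0  # frame A: rightward goal
reach_left = lambda s, a: 1.0 if a == "left" else 0.0    # frame B: leftward goal

print(return_under(reach_right, trajectory))  # 3.0: looks goal-directed
print(return_under(reach_left, trajectory))   # 0.0: looks goal-averse
```

The behavior itself never changes; only the evaluating frame does, which is why judgments of goal-directedness in RL require the applied frame to be stated explicitly.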
Future Directions
Building a formal mathematical model of reference frames would provide a rigorous backbone to the philosophical claims made in the paper, potentially informing more universal standards for evaluating agency in artificial systems. Further research may explore methods for selecting optimal reference frames guided by practical utility in prediction or explanation of behavior.
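To suggest what such a formal model might look like, the sketch below bundles the four components into a single `Frame` object and makes the agency verdict a function of both the system and the frame. This is a minimal hypothetical formalization of my own, assuming invented names (`Frame`, `agency_verdict`) and criteria, not the paper's mathematics:

```python
# A minimal hypothetical formalization: a reference frame fixes the boundary,
# the normativity criterion, and the adaptivity criterion, and the agency
# verdict is relative to that frame.

from dataclasses import dataclass
from typing import Callable, FrozenSet, List

@dataclass(frozen=True)
class Frame:
    boundary: FrozenSet[str]                       # which variables count as "the system"
    is_meaningful_goal: Callable[[str], bool]      # normativity criterion
    is_adaptive: Callable[[List[str]], bool]       # adaptivity criterion over behavior

def agency_verdict(frame: Frame, action_cause: str, goal: str, behavior: List[str]) -> bool:
    """A system counts as an agent under `frame` iff all four components hold."""
    individuality = len(frame.boundary) > 0
    self_sourced = action_cause in frame.boundary
    normative = frame.is_meaningful_goal(goal)
    adaptive = frame.is_adaptive(behavior)
    return individuality and self_sourced and normative and adaptive

# The same system under two frames that differ only in their normativity criterion.
frame_a = Frame(frozenset({"controller"}),
                is_meaningful_goal=lambda g: g == "regulate_temperature",
                is_adaptive=lambda b: len(set(b)) > 1)
frame_b = Frame(frozenset({"controller"}),
                is_meaningful_goal=lambda g: False,  # no goal counts as meaningful
                is_adaptive=lambda b: len(set(b)) > 1)

behavior = ["heat_on", "heat_off"]
print(agency_verdict(frame_a, "controller", "regulate_temperature", behavior))  # True
print(agency_verdict(frame_b, "controller", "regulate_temperature", behavior))  # False
```

A fuller model would replace these boolean predicates with, for instance, causal-graph variable selections and graded adaptivity measures, but even this skeleton shows how frame selection could be made an explicit, comparable object of study.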
Additionally, this work aligns with broader cognitive and philosophical frameworks, such as Dennett's intentional stance and Marr's levels of analysis, suggesting a layered approach to understanding cognitive processes, wherein agency might serve as a bridge between physical systems and abstract constructs like goal-directed behavior.
Conclusion
The paper makes a thought-provoking case that agency, a core concept in many scientific discussions and real-world applications, is not an objective measure but is deeply entwined with the observer's perspective. In grounding agency within chosen reference frames, this research opens new avenues for exploring how we define, measure, and implement agency in artificial and natural systems alike, encouraging a re-evaluation of theoretical and methodological approaches in AI research.