Agency-Driven Interaction Mode
- Agency-driven interaction mode is a paradigm where users or agents maintain meaningful control over computational workflows by balancing automation and manual input.
- It leverages adaptive control strategies, friction points, and real-time feedback loops to enhance user engagement and preserve individual agency.
- This approach is applied across domains such as human–AI collaboration, ridesharing, and XR, demonstrating its impact on efficiency and user empowerment.
Agency-driven interaction mode refers to interface, control, and protocol paradigms in which a human user or artificial agent is designed to maintain meaningful initiative, influence, and authorship over computational workflows, information flows, or collaborative environments. It contrasts both with high-automation, agent-driven systems where users become passive subjects, and with entirely manual systems where all control is retained but efficiency and augmentation potential are lost. The agency-driven paradigm—across human–AI collaboration, autonomous agents, and interactive systems—seeks to rigorously model, preserve, and adapt the locus of control and the capacity for users or agents to shape outcomes according to their own goals, preferences, and self-generated norms.
1. Foundations and Definitions
Agency in interactive systems is operationalized as the degree to which an entity—human or artificial—exerts causal influence on actions, decisions, or outcomes, framed relative to system automation or external constraint. Several complementary definitions and models have emerged:
- Agency as Control Input Allocation: Quantified formally as the norm of the human control input relative to the machine input in the total control law, e.g. a blend of the form $u = \alpha\,u_{\mathrm{h}} + (1-\alpha)\,u_{\mathrm{a}}$, in which the authority allocation parameter $\alpha$ balances agency (human) against automation (machine); a minimal illustrative sketch appears at the end of this section (Langerak, 19 Feb 2025).
- Human Agency in HCI/HAX: Made explicit in Human-Data Interaction as the ongoing capacity to understand, intervene in, and audit both raw data and downstream inferences, formalized as a closed feedback loop over user data, policies, inferences, user appraisal, and corrective action, iterated at each pass through the loop (Mortier et al., 2014).
- Agentic AI Agency: Modeled as an autonomous trajectory through composite action sequences (reasoning, tool use, observation) that emerges from self-directed engagement with the environment rather than from prescription by external scripts (Xiao et al., 22 Sep 2025).
- Mixed-Initiative Taxonomies: Agency is distributed and allocated between human and AI actors across interaction modes (e.g., Human-Driven, AI-Driven, Mixed-Initiative), each mode specified by control over decision-making and the loci of responsibility (Holter et al., 2024).
- Ontological Conditions and Limits: In the context of LLMs, agency is absent unless the system satisfies self-production (closure), endogenous norm creation (intrinsic value-regulation), and interactional asymmetry (origin of action)—which current LLMs lack (Barandiaran et al., 2024).
These formalizations are instantiated in diverse architectures, protocols, and user experiences spanning interface design, autonomous agents, and hybrid intelligent systems.
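As a concrete illustration of the control-input-allocation view, the following minimal sketch (an assumption-laden illustration, not the implementation from the cited work) blends human and automation commands through an authority parameter and reports the human share of the total input norm as a crude agency measure; the function names and the specific convex blending form are assumed for exposition.

```python
import numpy as np

def blended_control(u_human: np.ndarray, u_auto: np.ndarray, alpha: float) -> np.ndarray:
    """Blend human and automation inputs with authority allocation alpha in [0, 1].

    alpha = 1.0 is fully human-driven; alpha = 0.0 is fully automated.
    (Illustrative convex blend; the cited work's control law may differ.)
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return alpha * u_human + (1.0 - alpha) * u_auto

def agency_share(u_human: np.ndarray, u_auto: np.ndarray, alpha: float) -> float:
    """Crude agency measure: norm of the human contribution relative to the blended input."""
    u_total = blended_control(u_human, u_auto, alpha)
    denom = np.linalg.norm(u_total)
    return float(np.linalg.norm(alpha * u_human) / denom) if denom > 0 else 0.0

# Example: mostly-human authority on a 2-DoF input channel
u_h = np.array([0.8, -0.2])   # human command
u_a = np.array([0.5,  0.1])   # automation command
print(blended_control(u_h, u_a, alpha=0.7))
print(round(agency_share(u_h, u_a, alpha=0.7), 3))
```

An adaptive policy, such as the actor-critic scheme discussed in Section 2, would then adjust the authority parameter online from the observed user state rather than fixing it in advance.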
2. Design Patterns and Formal Models
Agency-driven interaction modes are realized through a range of mechanisms rigorously grounded in mathematical models, control theory, and protocol design:
- Shared Control via Optimal Control and Online Adjustment: Multi-input control systems deploy a tunable blending parameter (e.g., an authority allocation weight $\alpha$) between human and automated actuation. Adaptive policies (actor-critic RL) adjust this blend online in response to user state, optimizing cost while preserving perceived authorship (Langerak, 19 Feb 2025).
- Hypertextual Friction for Deliberative Agency: In algorithmic feeds and generative tools, the explicit introduction of “friction points” (decision forks), traceability (visible provenance chains), and structure (user-authored link graphs) implements an agency-driven paradigm by making user action and intention central (Liu et al., 31 Jul 2025).
- Agency-Driven Governance in Negotiation Tasks: Multi-role state machines (Principal, Delegate, Counterparty, Critic) instantiate agency boundaries, information-gated progression, and dual-channel feedback. Authorization predicates and preflight checks strictly delimit agent autonomy, with escalation mechanisms preserving ultimate human oversight (Zhao et al., 9 Nov 2025).
- Feedback and Human-in-the-Loop Correction: Closed-loop architectures allow for user intervention at any stage—data correction, policy adjustment, or inference veto—supporting fine-grained, just-in-time agency modulation (Mortier et al., 2014).
- Operationalization in Learning Environments: Agency is coded as observable epistemic (initiation, challenge, integration) and regulatory (reflection, coordination) acts, with transition networks and sequential motif mining revealing agency emergence and its modulation by AI participant persona (Jin et al., 20 Dec 2025).
- Quantitative and Qualitative Agency Metrics: Composite metrics across friction index, structure density, trace depth, task-correction rate, and perceived control (Likert or functionally measurable) allow comparative evaluation of agency-supporting systems, as sketched below (Liu et al., 31 Jul 2025, Mortier et al., 2014, Adenuga et al., 2023).
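The composite-metric item above can be made concrete with a small sketch; the metric names, the normalization to [0, 1], and the uniform default weights are illustrative assumptions rather than the exact indices defined in the cited works.

```python
def agency_index(metrics: dict[str, float], weights: dict[str, float] | None = None) -> float:
    """Weighted, normalized sum of agency-supporting metrics, each pre-scaled to [0, 1].

    Illustrative keys: friction_index, structure_density, trace_depth,
    task_correction_rate, perceived_control.
    """
    weights = weights or {k: 1.0 for k in metrics}
    total_weight = sum(weights[k] for k in metrics)
    clipped = {k: max(0.0, min(1.0, v)) for k, v in metrics.items()}
    return sum(weights[k] * clipped[k] for k in metrics) / total_weight

# Example: scoring one hypothetical system on the five normalized metrics
system_a = {
    "friction_index": 0.6, "structure_density": 0.7, "trace_depth": 0.8,
    "task_correction_rate": 0.5, "perceived_control": 0.9,
}
print(round(agency_index(system_a), 3))  # 0.7 with uniform weights
```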
3. Practical Applications and Case Studies
Agency-driven interaction is manifested across multiple domains:
| Domain | Implementation Patterns | Key Supporting Work |
|---|---|---|
| Human–AI Data Science Collaboration | Open-ended chat (high agency) vs structured wizards (low agency); user can control both planning and execution | (Guo et al., 2024) |
| Ridesharing Platforms | Transparent assignment, configurable preferences, feedback loops, dispute redress | (Adenuga et al., 2023) |
| Live Collective Control | Audience voting drives robot performance, but choreography/framing shape outcomes; felt vs actual agency divergence is explicit | (Sathya et al., 11 Jun 2025) |
| Generative AI Economy | Agents represent users/businesses, negotiating via A2A protocols for unscripted, sometimes unrestricted, market operations | (Rothschild et al., 21 May 2025, Cui et al., 11 Dec 2025) |
| Accessible 3D Models | Layered modalities (tactile, gesture, proactive voice); tiered agency from user-driven to mixed-initiative | (Reinders et al., 2020) |
| Cognitive Rehabilitation / XR | FoA computed online from EEG markers modulates interface affordances to scaffold agency, with direct mapping from neurodynamics to UI adaptation | (Hila, 9 Sep 2025) |
Additional cases include AI-driven negotiation (bounded by preflight authorization and information-gated progression) (Zhao et al., 9 Nov 2025), agent-based institutional design (BDI+FIPA protocols for transparency and accountable autonomy) (Dignum et al., 21 Nov 2025), and data-centric HDI systems instrumented for correction loops and negotiation (Mortier et al., 2014).
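The negotiation-governance pattern above (multi-role state machine, preflight authorization, information-gated progression, escalation to the human principal) can be sketched roughly as follows; the role names, stages, and predicate logic are assumptions for illustration, not the protocol specified in the cited work.

```python
from enum import Enum, auto

class Stage(Enum):
    DRAFT = auto()
    PREFLIGHT = auto()
    NEGOTIATING = auto()
    ESCALATED = auto()
    CLOSED = auto()

class DelegateAgent:
    """A Delegate negotiating on behalf of a Principal within explicit authority bounds."""

    def __init__(self, budget_cap: float, allowed_terms: set[str]):
        self.budget_cap = budget_cap
        self.allowed_terms = allowed_terms
        self.stage = Stage.DRAFT

    def preflight(self, proposal: dict) -> bool:
        """Authorization predicate: the opening proposal must sit inside delegated authority."""
        self.stage = Stage.PREFLIGHT
        ok = (proposal["price"] <= self.budget_cap
              and set(proposal["terms"]) <= self.allowed_terms)
        self.stage = Stage.NEGOTIATING if ok else Stage.ESCALATED
        return ok

    def handle_counteroffer(self, offer: dict) -> str:
        """Information-gated progression: out-of-bounds counteroffers escalate to the Principal."""
        if self.stage is not Stage.NEGOTIATING:
            return "blocked"
        if offer["price"] > self.budget_cap or not set(offer["terms"]) <= self.allowed_terms:
            self.stage = Stage.ESCALATED
            return "escalate_to_principal"
        self.stage = Stage.CLOSED
        return "accept"

# Example: a counteroffer above the delegated budget triggers escalation rather than autonomous acceptance
delegate = DelegateAgent(budget_cap=1000.0, allowed_terms={"net30", "standard_sla"})
delegate.preflight({"price": 900.0, "terms": ["net30"]})
print(delegate.handle_counteroffer({"price": 1200.0, "terms": ["net30"]}))  # escalate_to_principal
```

In the full pattern, Counterparty and Critic roles would supply the opposing offers and audit the transcript through the dual-channel feedback described above.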
4. Agency in Artificial Agents and Emergent Machine Agency
Recent advances treat agency not as a static property but as an emergent, learnable trait in artificial systems:
- Trajectory-Based Agency Benchmarks: In LIMI, agency is the capacity to generate autonomous, multi-step solution trajectories via model reasoning, tool invocation, and outcome verification. Agency emerges through exposure to strategically curated, high-signal agentic demonstrations rather than data abundance, inverting classic scaling relations (Xiao et al., 22 Sep 2025).
- Mechanistic Interpretability: Research into the mechanistic representation of agency in deep neural networks and the reinforcement learning of internal goals is proposed as foundational for safe, alignment-robust agentic systems (Xiao et al., 22 Sep 2025, Dignum et al., 21 Nov 2025).
- Formal Agency Spectrum: In predictive coding and active inference agents, agency-driven modes correspond to specific configurations of model complexity (e.g., the KL-regularization weight in the ELBO), which tune the agent from egocentric leadership (tight complexity, strong self-prior) to follower imitation (loose complexity, high adaptation to others); a schematic form of this weighted objective appears after this list (Ohata et al., 2020). The structural vs teleological distinction underscores whether autonomy is self-generated (biological/teleological systems) or externally imposed (structural artificial systems) (Horibe et al., 7 Dec 2025).
- Limits of Current LLMs: Current LLMs, when isolated, fail to satisfy the conditions for autonomous agency (self-production, intrinsic normativity, interactional asymmetry) but may still produce new forms of “midtended” or hybrid human–machine agency when tightly coupled with human activity (Barandiaran et al., 2024).
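To make the complexity-weighting idea concrete, a schematic weighted objective is shown below; the symbols and the placement of the weight are illustrative and may differ from the notation of the cited work.

```latex
% Schematic weighted evidence lower bound (ELBO) for an active-inference agent.
% The weight w scales the complexity (KL) term: a large w keeps the posterior close
% to the self-prior (egocentric, leader-like behaviour); a small w loosens the prior
% and favours adaptation to observed others (follower-like imitation).
\mathcal{L}_{w}
  = \underbrace{\mathbb{E}_{q(z)}\!\left[\log p(x \mid z)\right]}_{\text{accuracy}}
  \; - \; w \, \underbrace{D_{\mathrm{KL}}\!\left(q(z)\,\middle\|\,p(z)\right)}_{\text{complexity}}
```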
5. Evaluation, Measurement, and Trade-offs
Evaluation of agency-driven modes combines quantitative measurement, user/performance outcomes, and nuanced trade-off analyses:
- Composite Scoring and Agency Indices: Normalized sums of friction, traceability, and structure metrics, cognitive load reduction ratios (e.g., 72%–88% less vs RAG/manual workflows in SoDA), perceived control scores, and task correction/completion rates are all in operational use (Liu et al., 31 Jul 2025, Cui et al., 11 Dec 2025, Mortier et al., 2014).
- Qualitative Perceptions, Trust, and Satisfaction: Perceived agency, satisfaction, and workload (assessed via instruments such as NASA-TLX, SUS, or tailored sense-of-agency scales) relate closely to the interaction mode; shared or adaptive agency significantly outperforms naive automation or rigid manual control (Langerak, 19 Feb 2025, Jin et al., 20 Dec 2025).
- Friction–Safety–Performance Triad: Increasing user agency may trade off with efficiency, workload, and (if unchecked) risk—necessitating context-sensitive adaptation, transparency, and the capacity to reconfigure authority allocation in situ (Langerak, 19 Feb 2025, Zhao et al., 9 Nov 2025, Cui et al., 11 Dec 2025).
- Feelings of Agency (FoA) as a Design Signal: In neuroadaptive systems, real-time computation of FoA from affective engagement and volitional attention guides instant interface adaptation, closing the loop between phenomenological state and environmental affordance (Hila, 9 Sep 2025).
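As an illustration of the FoA-driven loop in the last item, the sketch below combines hypothetical, already-normalized affective-engagement (AE) and volitional-attention (VA) indicators into a single FoA estimate and maps it onto an interface scaffolding regime; the indicator names, the linear combination, and the thresholds are assumptions for exposition, not the pipeline of the cited work.

```python
def _clamp01(x: float) -> float:
    return max(0.0, min(1.0, x))

def estimate_foa(ae: float, va: float, w_ae: float = 0.5) -> float:
    """Combine normalized affective engagement (AE) and volitional attention (VA),
    each expected in [0, 1], into a single feeling-of-agency (FoA) score."""
    return w_ae * _clamp01(ae) + (1.0 - w_ae) * _clamp01(va)

def scaffolding_level(foa: float) -> str:
    """Map FoA onto an affordance regime: lower FoA receives stronger scaffolding."""
    if foa < 0.3:
        return "high_scaffolding"   # more guidance, simpler choices
    if foa < 0.7:
        return "mixed_initiative"   # shared control, confirmation prompts
    return "user_driven"            # minimal intervention, full user authorship

# Example: moderate engagement with strong volitional attention lands in mixed-initiative mode
foa = estimate_foa(ae=0.45, va=0.8)
print(round(foa, 3), scaffolding_level(foa))
```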
6. Open Challenges and Future Research Directions
Despite rapid progress, several challenges remain:
- Standardization and Protocol Interoperability: The agentic web demands broad adoption of common message schemas (e.g., MCP, A2A, UPDL), seamless migration of user memory, and decentralized yet trustworthy cross-agent discovery (Rothschild et al., 21 May 2025, Cui et al., 11 Dec 2025).
- Scalable and Auditable Governance: As agentic modes pervade negotiation, commerce, and B2B interaction, architectures must enforce bounded autonomy, auditability, and escalation protocols—balancing performance and risk (Zhao et al., 9 Nov 2025, Dignum et al., 21 Nov 2025).
- Human–AI Co-regulation and Equity: Dynamics of power, psychological safety, and ownership arise as agency distribution emerges (e.g., contrarian vs supportive AI personas in collaborative creativity reveal trade-offs between productive friction and affective safety in the resulting artifacts) (Jin et al., 20 Dec 2025).
- Translational Neuroscience and Enactivist HCI: New models of agency—focused on dynamical enaction, endogenous motivation, and the negotiation of affective-volitional coupling—inform next-generation interface tuning and adaptive tooling, but require robust, scalable real-time bio-signal integration (Hila, 9 Sep 2025).
- Theoretical Foundations and Ontological Criteria: Advancing agency in artificial systems will require architectures that go beyond script-following and reaction, embedding continual self-production, goal formation, and bidirectional coupling with open environments—pushing current models toward embodied, normative, and asymmetrically interactive regimes (Barandiaran et al., 2024, Horibe et al., 7 Dec 2025).
7. Summary Table: Agency-Driven Interaction Mode Mechanisms
| Mechanism/Pattern | Domain/Example | Formal/Metric Features |
|---|---|---|
| Optimal control with adjustable authority $\alpha$ | Haptic guidance/robotics | Cost blend, input norms, user effort |
| Hypertextual friction & structure | Web interfaces | Friction-index, trace-depth, density |
| Info-gated progression, state machines | B2B AI negotiation | TCI, escalation, safety invariants |
| Multi-modal, tiered interaction | Accessible 3D models | Mode selection, proactive/reactive |
| Strategic curation for model agency | AI agent training (LIMI) | Benchmark SR@3, FTFC, data efficiency |
| FoA-driven adaptive affordances | XR/BCI, generative AI | AE/VA indicators, PAC, FAA, alpha/beta |
In summary, agency-driven interaction modes represent a research frontier spanning optimal interface and control design, robust governance and protocol interoperability, and deep questions of autonomy, normativity, and co-regulation. These approaches prioritize the user or agent’s ongoing capacity to shape, understand, and adapt outcomes—pushing beyond static alignment and toward evolving, participatory, and auditable intelligence systems (Langerak, 19 Feb 2025, Liu et al., 31 Jul 2025, Cui et al., 11 Dec 2025, Adenuga et al., 2023, Zhao et al., 9 Nov 2025, Xiao et al., 22 Sep 2025, Holter et al., 2024, Hila, 9 Sep 2025, Mortier et al., 2014, Barandiaran et al., 2024).