Human Agency Scale (HAS)

Updated 30 June 2025
  • Human Agency Scale (HAS) is a multidimensional framework that measures individuals’ capacity for goal-directed action in technologically mediated environments.
  • It quantifies agency by integrating behavioral, social, and technological factors such as trust, self-efficacy, perceived risk, and machine affordances.
  • HAS finds application in diverse fields like social media, healthcare, and decision support systems, guiding empirical research and system design.

The Human Agency Scale (HAS) denotes a multidimensional approach to assessing human capacity for goal-directed action, control, and influence within increasingly complex human-machine networks (HMNs) and human-AI collaborative systems. It is rooted in empirical frameworks developed to capture the dynamic interplay between human and machine agency, and it is designed to account systematically for the behavioral, psychological, and social factors that enable or constrain individual and collective action in technologically mediated environments.

1. Conceptual Foundations and Theoretical Model

At its core, HAS operationalizes agency as “the capacity to perform goal-directed actions in the network” (1702.04537). In contemporary HMNs, both human and machine actors exhibit agency:

  • Human agency: Grounded in conscious intentionality, autonomy, and self-efficacy.
  • Machine agency: The actual or perceived ability of machines to perform actions, make decisions, and shape human experience, well beyond mere automation.

The HAS model situates human agency within a system of interlocking constructs, including:

  • Trust: Confidence in people and technical systems to act with reliability and integrity.
  • Perceived Risk: Subjective hazards or uncertainties influencing engagement.
  • Regulation: Legal, technical, and procedural standards shaping both agency and perceived risk.
  • Social Norms: Social group expectations and influences.
  • Self-Efficacy: The belief in one’s own capability and competence in using technological resources.
  • Machine Affordances: The supportive or limiting characteristics of algorithmic agents.

A visual model synthesized in (1702.04537) demonstrates the hypothesized relations among these constructs, informing the HAS’s measurement design.
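
For concreteness, one way to organize these constructs for measurement is sketched below in Python; the field names, the 1–5 scoring range, and the example values are illustrative assumptions rather than part of the published model.

```python
from dataclasses import dataclass, asdict

@dataclass
class HASConstructProfile:
    """Hypothetical per-respondent profile over the HAS constructs.

    Each field holds a subscale score; the 1-5 range and the field names
    are illustrative assumptions, not part of the published model.
    """
    trust: float                 # confidence in people and technical systems
    perceived_risk: float        # subjective hazards or uncertainties
    regulation: float            # awareness of legal/technical/procedural standards
    social_norms: float          # perceived group expectations and influences
    self_efficacy: float         # belief in one's capability with technology
    machine_affordances: float   # supportive vs. limiting machine characteristics

profile = HASConstructProfile(
    trust=4.2, perceived_risk=2.5, regulation=3.0,
    social_norms=3.8, self_efficacy=4.5, machine_affordances=3.6,
)
print(asdict(profile))
```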

2. Key Dimensions and Measurement Principles

Research across domains has consistently emphasized that HAS must capture the following dimensions:

  • Material and Experiential Aspects: Agency encompasses both the material ability to act and the subjective sense (or feeling) of having agency (2301.12490). Scales should distinguish between externally observable control and internal perceptions.
  • Time Scale: Agency’s relevance varies across micro-interactions, bounded episodes, and extended engagement (days–years), with different patterns emerging across timescales (2301.12490).
  • Social Context and Interdependence: Agency in HMNs is shaped not only by individual action, but also by collective or “proxy agency,” where technology enables people to act through or with others (including machines) (1702.04537, 2310.15065).
  • Context Sensitivity: Both enabling and inhibiting factors (e.g., over-regulation, risk, negative social influence) and the nature of the machine agency (autonomy level, transparency) must be considered.
  • Dynamic and Nonlinear Emergence: Agency often emerges through gradual or punctuated transitions and coordination, as seen in developmental studies (2212.03123).
  • Epistemic and Decision Agency: Some contexts center on control over knowledge production and interpretation (epistemic agency), rather than mere task execution (2408.08846, 2505.03105).
  • Interplay with Trust: Trust mediates the translation of agency into networked or system behavior and must be reflected in comprehensive assessments (1702.04537).

3. Methodological Approaches and Quantification

HAS deployment draws on a mixture of self-report, behavioral, and system-level quantitative measures, including:

Structural Model Formulations

A typical structural equation modeling (SEM) representation for agency in HMNs is

\begin{align*}
\text{Behaviour}_{\mathrm{HMN}} &= \beta_1 \cdot \text{Trust}_{\mathrm{HMN}} + \beta_2 \cdot \text{SelfEfficacy} + \beta_3 \cdot \text{HumanAgency} + \beta_4 \cdot \text{SocialNorms} + \varepsilon \\
\text{HumanAgency} &= \gamma_1 \cdot \text{Trust}_{\mathrm{HMN}} + \gamma_2 \cdot \text{MachineAgency} + \gamma_3 \cdot \text{Regulation} + \zeta \\
\text{PerceivedRisk} &= \delta_1 \cdot \text{Regulation} + \delta_2 \cdot \text{SocialNorms} + \delta_3 \cdot \text{SelfEfficacy} + \eta
\end{align*}

where coefficients are estimated empirically to reflect the strength of influence among constructs (1702.04537).
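
A minimal numerical sketch of these structural equations is given below (Python with NumPy); the coefficient values, noise scales, and sample size are placeholders chosen for illustration, not estimates reported in (1702.04537).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000  # simulated respondents

# Exogenous constructs (standardized scores; illustrative)
trust = rng.normal(size=n)
machine_agency = rng.normal(size=n)
regulation = rng.normal(size=n)
social_norms = rng.normal(size=n)
self_efficacy = rng.normal(size=n)

# Structural equations from the SEM above, with placeholder coefficients
human_agency = (0.5 * trust + 0.3 * machine_agency + 0.2 * regulation
                + rng.normal(scale=0.5, size=n))                          # zeta
perceived_risk = (0.4 * regulation + 0.2 * social_norms + 0.1 * self_efficacy
                  + rng.normal(scale=0.5, size=n))                        # eta
behaviour_hmn = (0.4 * trust + 0.3 * self_efficacy + 0.5 * human_agency
                 + 0.2 * social_norms + rng.normal(scale=0.5, size=n))    # epsilon

# Sanity check: recover the behaviour coefficients by ordinary least squares
X = np.column_stack([trust, self_efficacy, human_agency, social_norms])
beta_hat, *_ = np.linalg.lstsq(X, behaviour_hmn, rcond=None)
print(np.round(beta_hat, 2))  # approximately [0.4, 0.3, 0.5, 0.2]
```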

Scale Item Construction

Suggested survey items are designed to measure:

  • Perceived control and freedom of action
  • Reliance on or challenge to machine “proxy” agency
  • Trust in both social and technical systems
  • Risk perceptions and regulatory awareness
  • Social affirmation and perceived social pressure

The measurement must give perceived agency (what users believe they can do) as much weight as objective affordances.
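
As an illustration (these are not items from a validated instrument), a handful of hypothetical item wordings might be keyed to the dimensions above as follows, with reverse-scored items flagged:

```python
# Hypothetical HAS item bank (illustrative wordings, 5-point Likert agreement scale)
items = [
    {"construct": "perceived_control", "reverse": False,
     "text": "I feel free to decide how I use this system."},
    {"construct": "proxy_agency", "reverse": True,
     "text": "I rely on the system's recommendations even when I disagree with them."},
    {"construct": "trust", "reverse": False,
     "text": "I trust the people and technology in this network to act reliably."},
    {"construct": "perceived_risk", "reverse": True,
     "text": "Using this system exposes me to risks I cannot control."},
    {"construct": "social_norms", "reverse": False,
     "text": "People important to me support how I use this system."},
]

def score_item(response: int, reverse: bool, scale_max: int = 5) -> int:
    """Map a raw 1..scale_max Likert response to a scored value."""
    return (scale_max + 1 - response) if reverse else response
```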

4. Practical Applications and Deployment Contexts

HAS is designed for broad applicability in HMNs, including:

  • Social Media and E-commerce: Measuring how recommendation, feedback, and privacy settings affect users’ agency in online interactions and purchasing behavior.
  • Healthcare Networks: Assessing how digital platforms, telemedicine, and AI-driven support systems empower or constrain practitioners and patients.
  • Decision Support Systems: Evaluating the impact of AI and machine learning in enhancing or inhibiting decision autonomy for operators in high-stakes domains (e.g., crisis management, transportation) (1702.07480).
  • Organizational and Workplace Settings: Studying how digital workflows, collaborative platforms, and automation influence employee and manager agency.
  • Educational and Learning Technologies: Assessing student and instructor autonomy, especially in adaptive or intelligent learning environments.

5. Implications, Limitations, and Future Directions

The establishment of HAS creates a foundation for rigorously assessing and guiding system design towards agency enhancement. However, several challenges and caveats are acknowledged:

  • Complex Interdependency: Agency is rarely unidimensional; human and machine agency are interdependent, sometimes synergistically, sometimes in tension.
  • Subjectivity and Contextuality: Perceptions of agency—especially as mediated by trust, regulation, and social contexts—are not invariant across individuals, demographics, or network contexts.
  • Dynamic Adjustments: The HAS framework must remain adaptable, accounting for temporal variations (e.g., shifting roles in crisis), as well as for changes in regulatory environments or technological affordances.

Planned future directions include longitudinal validation, integration of behavioral analytics, and refinement of subscales to reflect evolving HMNs and societal priorities.

6. Table: Factors Enabling and Inhibiting Agency in HMNs

Factor Type         Examples
Agency-inhibiting   Perceived risk, over-regulation, lack of machine support, negative norms
Agency-enabling     High self-efficacy, transparent design, supportive regulation, positive norms

7. Summary and Recommendations for HAS Construction

The Human Agency Scale should be constructed to:

  • Capture the multidimensional, context-sensitive, and dynamic attributes of agency in HMNs.
  • Integrate constructs from trust, risk, social influence, regulation, and self-efficacy.
  • Emphasize both individual and socio-technical forms of agency, with sensitivity to both perceived and actual capacities.
  • Serve both as a research tool for empirical investigation and as a design guidance framework for engineers and system architects seeking to optimize user empowerment within complex digital infrastructures.

By grounding agency assessment in the validated constructs and empirically supported relationships detailed in (1702.04537), the HAS enables both rigorous study and practical advancement of human-centered, participatory digital ecosystems.
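
As a closing sketch of how these recommendations might translate into scoring, the snippet below aggregates hypothetical item responses into subscale and composite scores; the construct grouping, the unweighted mean, and the exclusion of perceived risk from the composite are illustrative assumptions rather than a published scoring rule.

```python
from statistics import mean

# Hypothetical scored responses (1-5, after reverse-coding), grouped by construct
responses = {
    "perceived_control": [4, 5, 4],
    "trust": [4, 3, 4],
    "perceived_risk": [2, 3],      # higher = more perceived risk
    "self_efficacy": [5, 4],
    "social_norms": [3, 4],
}

# Subscale scores: mean of the items keyed to each construct
subscales = {construct: mean(scores) for construct, scores in responses.items()}

# Composite score: unweighted mean of the agency-enabling subscales,
# treated here (illustratively) as everything except perceived risk
composite = mean(v for k, v in subscales.items() if k != "perceived_risk")
print(subscales, round(composite, 2))
```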