Human-Centered Requirements Framework

Updated 8 August 2025
  • Human-Centered Requirements Framework is a systematic approach that embeds human values, ethical priorities, and stakeholder needs into AI system design.
  • It employs layered architectures, artifact-based integration, and iterative feedback to align technical specifications with diverse human requirements.
  • The framework enhances human–machine interaction, safety, and explainability across various domains such as robotics, software, and responsible AI.

A human-centered requirements framework systematically integrates human values, needs, and ethical priorities into the specification, engineering, analysis, and operationalization of AI-based systems or intelligent agents. This paradigm extends beyond technical correctness or efficiency, embedding considerations such as trust, inclusivity, cognitive fit, explainability, and societal impact directly into requirements engineering, organizational processes, and system evaluation. The frameworks described in recent research manifest through layered architectures, multidimensional checklists, process blueprints, and empirical validation in various AI application domains.

1. Foundational Dimensions and Key Properties of Human-Centered Requirements

Human-centered requirements frameworks universally address multidimensional criteria that shape how AI systems interact with human stakeholders. Across domains—such as trustworthy autonomous robotics, agent-based assistants, software engineering, and explainable AI—the following foundational classes of requirements recur:

| Dimension | Typical Requirements | Example Citation |
|---|---|---|
| Safety | Avoidance of harm in dynamic contexts | (He et al., 2021) |
| Security | Resistance to cyberattacks, privacy | (He et al., 2021) |
| System Health / Fault Tolerance | Self-diagnosis, resilience | (He et al., 2021; Xu et al., 2023) |
| Human–Machine Interaction (HMI) | Usability, transparency, controllability | (He et al., 2021; D'Oro et al., 18 Jul 2025) |
| Ethics and Law | Fairness, compliance, accountability | (He et al., 2021; Pyae, 5 Feb 2025) |

Safety requirements ensure that systems behave predictably and defensively in stochastic, human-populated environments. Security requirements entail not only informational safeguards but also the seamless blending of privacy with physical safety. System health involves continuous monitoring of faults and adverse conditions, via model-based or data-driven means, with an emphasis on self-correction and graceful degradation. HMI requirements prescribe intuitive, bidirectional interaction modalities and error-transparent design, tightly coupled to explainability and trust-building. Ethical/legal compliance, often articulated through design principles and checklists, incorporates fairness audits, traceability, and value alignment.
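
To make these dimensions concrete, the following minimal sketch attaches one acceptance predicate to each requirement class from the table above. The state fields, predicate logic, and the 0.5 m safety margin are illustrative assumptions, not prescriptions from the cited frameworks.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical system snapshot that the predicates inspect.
@dataclass
class SystemState:
    min_obstacle_distance_m: float
    data_encrypted: bool
    fault_detected: bool
    explanation_available: bool
    fairness_audit_passed: bool

# One illustrative acceptance predicate per requirement dimension.
REQUIREMENT_CHECKS: dict[str, Callable[[SystemState], bool]] = {
    "safety": lambda s: s.min_obstacle_distance_m > 0.5,  # assumed margin
    "security": lambda s: s.data_encrypted,
    "system_health": lambda s: not s.fault_detected,
    "hmi": lambda s: s.explanation_available,
    "ethics_law": lambda s: s.fairness_audit_passed,
}

def evaluate(state: SystemState) -> dict[str, bool]:
    """Check a state against every human-centered requirement dimension."""
    return {dim: check(state) for dim, check in REQUIREMENT_CHECKS.items()}

state = SystemState(1.2, True, False, True, True)
print(evaluate(state))  # all five dimensions pass for this snapshot
```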

2. Framework Architectures and Methodological Structures

Research converges on multi-layered and artifact-centric architectural blueprints for operationalizing human-centered requirements.

Layered Frameworks:

Most frameworks introduce hierarchical or multi-layered models. For example, one taxonomy (Pyae, 5 Feb 2025) classifies 26 attributes into four tiers:

  • Ethical Foundations: fairness, human values, trust, privacy.
  • Usability: user-friendliness, decision-making support, controllability.
  • Emotional & Cognitive: empathy, well-being, involvement, feedback.
  • Personalization: user models, cognitive styles, emotional adaptation.

The RE4HCAI framework (Ahmad et al., 2023) structures requirements elicitation into:

  1. User Needs
  2. Model Needs
  3. Data Needs
  4. Feedback & Control
  5. Explainability & Trust
  6. Errors & Failure

These are supported by catalog checklists, modeling languages with domain-specific overlays, and explicit mapping to lifecycle phases.
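
As an illustration of such a catalog, the sketch below encodes RE4HCAI's six elicitation areas as a checklist structure with a notional mapping to lifecycle phases. The sample questions and phase labels are paraphrased assumptions, not the published catalog.

```python
# Illustrative elicitation catalog keyed by RE4HCAI's six areas.
CATALOG = {
    "user_needs": {
        "questions": ["Who are the primary users?",
                      "What goals must the system support?"],
        "lifecycle_phase": "problem framing",
    },
    "model_needs": {
        "questions": ["What task must the model perform?",
                      "What accuracy is acceptable to stakeholders?"],
        "lifecycle_phase": "modeling",
    },
    "data_needs": {
        "questions": ["What data sources exist?",
                      "Are they representative of all user groups?"],
        "lifecycle_phase": "data integration",
    },
    "feedback_and_control": {
        "questions": ["How can users correct the system?",
                      "Which behaviors can users override?"],
        "lifecycle_phase": "deployment",
    },
    "explainability_and_trust": {
        "questions": ["Which decisions need explanations?",
                      "For which audience?"],
        "lifecycle_phase": "validation",
    },
    "errors_and_failure": {
        "questions": ["How are errors surfaced to users?",
                      "What is the fallback behavior?"],
        "lifecycle_phase": "operation",
    },
}

def workshop_agenda(catalog: dict) -> None:
    """Print a structured elicitation agenda, one block per area."""
    for area, entry in catalog.items():
        print(f"[{entry['lifecycle_phase']}] {area}:")
        for q in entry["questions"]:
            print(f"  - {q}")

workshop_agenda(CATALOG)
```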

Artifact-Based Integration:

Design Thinking and Requirements Engineering can be explicitly linked through artifact models, layering context (user/business needs), black-box requirements, and system architecture artifacts (Hehn et al., 2021). Cross-disciplinary artifacts (e.g., "Design Challenge," "Objectives & Goals") serve as bridges across creative and technical domains, enabling traceable engineering from empathy-guided ideation to system implementation.
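
A minimal sketch of such an artifact model follows, assuming a simple derived_from link between layers; artifact names other than "Design Challenge" and "Objectives & Goals" are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str
    layer: str  # "context" | "requirements" | "architecture"
    derived_from: list["Artifact"] = field(default_factory=list)

# Context layer (Design Thinking side).
challenge = Artifact("Design Challenge", "context")
goals = Artifact("Objectives & Goals", "context", [challenge])

# Black-box requirements layer (RE side); requirement text is hypothetical.
req = Artifact("REQ-01: Explain every automated decision",
               "requirements", [goals])

# System architecture layer; component name is hypothetical.
component = Artifact("ExplanationService", "architecture", [req])

def trace(artifact: Artifact) -> list[str]:
    """Walk derived_from links back to the originating context artifact."""
    chain = [artifact.name]
    while artifact.derived_from:
        artifact = artifact.derived_from[0]  # follow the primary link
        chain.append(artifact.name)
    return chain

print(" <- ".join(trace(component)))
# ExplanationService <- REQ-01 <- Objectives & Goals <- Design Challenge
```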

Process Models:

Frameworks consistently merge iterative human-centered design cycles (e.g., the “double diamond” discovery–definition–development–delivery) with the full AI lifecycle (problem framing, data/knowledge integration, modeling, validation, governance) (Xu et al., 2023); one such merged iteration is sketched after the list below. Three-level deployment strategies often span:

  • Societal (macro): standards, policy, education.
  • Organizational: guidelines, standardization, culture.
  • Project/Team: interdisciplinary collaboration, actionable processes.
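
The sketch below illustrates one such merged process as a simple iteration loop. The phase-to-stage mapping and the acceptance test are illustrative assumptions, not a prescribed procedure.

```python
# Double-diamond design phases interleaved with AI lifecycle stages.
DOUBLE_DIAMOND = ["discover", "define", "develop", "deliver"]
AI_LIFECYCLE = {
    "discover": "problem framing",
    "define": "data/knowledge integration",
    "develop": "modeling and validation",
    "deliver": "deployment and governance",
}

def run_iteration(cycle: int) -> bool:
    """Run one design cycle; return True once stakeholders accept."""
    for phase in DOUBLE_DIAMOND:
        print(f"cycle {cycle}: {phase} -> {AI_LIFECYCLE[phase]}")
    return cycle >= 2  # stand-in for a real stakeholder acceptance check

cycle = 1
while not run_iteration(cycle):  # iterate until requirements are accepted
    cycle += 1
```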

3. Methodologies for Elicitation, Validation, and Operationalization

Empirical and Participatory Approaches:

Frameworks are increasingly validated through mixed-method studies involving practitioners, expert panels, and surveys (Pyae, 5 Feb 2025). Requirements are elicited by mapping industrial guidelines (Google PAIR, Apple HIG, Microsoft guidelines) with literature-derived constructs, followed by iterative validation and ranking using Likert-scale surveys and consensus scoring.
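
A minimal sketch of this kind of survey aggregation follows, assuming hypothetical ratings and an illustrative retention rule (at least 70% of raters scoring an attribute 4 or higher on a 1-5 Likert scale).

```python
import statistics

# Hypothetical Likert ratings (1-5) from a five-person expert panel.
ratings = {
    "fairness":        [5, 5, 4, 4, 5],
    "controllability": [4, 3, 4, 5, 4],
    "empathy":         [3, 2, 4, 3, 3],
}

def consensus(scores: list[int], agree_at: int = 4,
              quorum: float = 0.7) -> bool:
    """Retain an attribute if >= quorum of raters score it agree_at or above."""
    return sum(s >= agree_at for s in scores) / len(scores) >= quorum

# Rank attributes by mean rating and report the consensus decision.
for attr, scores in sorted(ratings.items(),
                           key=lambda kv: statistics.mean(kv[1]),
                           reverse=True):
    print(f"{attr}: mean={statistics.mean(scores):.2f} "
          f"retained={consensus(scores)}")
```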

Checklists and Catalogs:

Systematic requirements catalogs enumerate contextual questions for each requirement domain (e.g., user identification, system capabilities, fairness trade-offs, explainability metrics) (Ahmad et al., 2023). These catalogs guide structured workshops or stakeholder interviews, facilitating inclusive coverage of both technical and human priorities.

Visual and Formal Models:

Conceptual UML-inspired diagrams, TikZ/LaTeX-based tier illustrations, and domain-specific modeling notations reduce cognitive load and foster shared understanding among interdisciplinary teams and non-technical stakeholders (Ahmad et al., 2023, Pyae, 5 Feb 2025).

Metrics and Acceptance Criteria:

Acceptance models, such as the "worthiness/trustiness" dual-axis for robotic and autonomous systems (RAS) (He et al., 2021), quantitatively encapsulate systemic sufficiency and ethical soundness:

  • Worthiness: technical performance.
  • Trustiness: aggregate of trust properties (safety, security, health, HMI, ethics).

Threshold-based scoring (with ternary or continuous scales) concretizes the compliance of AI outputs against human-centric checklists (e.g., threshold of 7.0/12.0 for Copilot output acceptance (Heydari, 5 Aug 2025)).
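
The sketch below illustrates this scoring pattern. The twelve criterion names and the ternary 0/0.5/1 scale are assumptions for illustration; only the 7.0/12.0 acceptance threshold is taken from the cited work.

```python
# Hypothetical twelve-item human-centric checklist, one point each.
CRITERIA = [
    "correctness", "readability", "security", "privacy",
    "fairness", "transparency", "accessibility", "robustness",
    "maintainability", "documentation", "error handling", "user control",
]

def accept(scores: dict[str, float], threshold: float = 7.0) -> bool:
    """Accept an AI output if its summed checklist score meets the threshold."""
    total = sum(scores.values())  # each criterion scored 0, 0.5, or 1
    print(f"total = {total:.1f} / {len(CRITERIA):.1f}")
    return total >= threshold

scores = {c: 0.5 for c in CRITERIA}            # ternary scoring: 0 / 0.5 / 1
scores["correctness"] = scores["security"] = 1.0
print(accept(scores))                          # 7.0 / 12.0 -> accepted
```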

4. Application Domains and Case Studies

Robotics and Autonomous Systems:

Comprehensive human-centered frameworks for RAS focus on fault tolerance, robust HMI, and the legal/ethical landscape critical for social acceptance. Specific analytical models bridge control-theoretic safety (e.g., y = Λ(f(v))) with human trust and interaction modalities (He et al., 2021).
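
One plausible reading of this composition, assuming f is a nominal controller and Λ a safety filter that projects commands into a verified-safe set; both functions here are hypothetical stand-ins, not the cited model.

```python
def f(v: float) -> float:
    """Nominal controller: proposes a velocity command from sensed input v."""
    return 2.0 * v

def safety_filter(u: float, u_max: float = 1.0) -> float:
    """Lambda: clamp the proposed command into the safe set [-u_max, u_max]."""
    return max(-u_max, min(u_max, u))

y = safety_filter(f(0.8))  # y = Lambda(f(v)) = min(1.0, 1.6) = 1.0
print(y)
```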

Software-Intensive Systems:

Artifact-based and layered frameworks are employed in requirements elicitation for complex applications, such as VR video enhancement or mobile health interventions (Ahmad et al., 2023). Real-world case studies consistently reveal that progressing from traditional RE to human-centered approaches uncovers previously neglected dimensions of bias, usability, and adaptability.

Human–AI Collaboration and Agent Design:

The ADEPTS capability framework articulates minimal user-facing principles—autonomous actuation, disambiguation, evaluation, personalization, transparency, and proactive safety—as functional benchmarks for agent-based systems (D'Oro et al., 18 Jul 2025). In collaborative settings, dynamic function allocation, shared situation awareness, and continuous trust calibration are explicit requirements (Gao et al., 28 May 2025).
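
A minimal audit sketch over the six ADEPTS capabilities named above; the boolean self-assessment format is an assumption for illustration.

```python
ADEPTS = [
    "autonomous actuation", "disambiguation", "evaluation",
    "personalization", "transparency", "proactive safety",
]

def coverage(assessment: dict[str, bool]) -> float:
    """Fraction of ADEPTS capabilities an agent design claims to satisfy."""
    return sum(assessment.get(c, False) for c in ADEPTS) / len(ADEPTS)

agent_design = {c: True for c in ADEPTS}
agent_design["proactive safety"] = False   # flagged gap for design review
print(f"ADEPTS coverage: {coverage(agent_design):.0%}")  # 83%
```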

Explainable and Responsible AI:

Frameworks for XAI evaluation, such as OpenHEXAI, standardize human-in-the-loop user studies with comprehensive objective (accuracy, F1, AAOD, EOD) and subjective (user trust, perceived fairness) metrics (Ma et al., 20 Feb 2024). Advanced LLM-centric frameworks deploy dual segmented explanations (for expert transparency and non-expert clarity) in a reproducible, data/model/explanation-agnostic pipeline (Paraschou et al., 13 Jun 2025).
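
For concreteness, the sketch below computes EOD (equal opportunity difference) and AAOD (average absolute odds difference) under their standard fairness-toolkit definitions; whether OpenHEXAI computes them identically is not assumed here.

```python
import numpy as np

def rates(y_true, y_pred, mask):
    """TPR and FPR restricted to one demographic group (boolean mask)."""
    yt, yp = y_true[mask], y_pred[mask]
    tpr = np.mean(yp[yt == 1]) if (yt == 1).any() else 0.0
    fpr = np.mean(yp[yt == 0]) if (yt == 0).any() else 0.0
    return tpr, fpr

def eod_aaod(y_true, y_pred, group):
    """EOD and AAOD between unprivileged (group==0) and privileged (group==1)."""
    tpr_u, fpr_u = rates(y_true, y_pred, group == 0)
    tpr_p, fpr_p = rates(y_true, y_pred, group == 1)
    eod = tpr_u - tpr_p
    aaod = 0.5 * (abs(fpr_u - fpr_p) + abs(tpr_u - tpr_p))
    return float(eod), float(aaod)

# Toy data: labels, predictions, and a binary group attribute.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(eod_aaod(y_true, y_pred, group))  # (-0.5, 0.5)
```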

Educational Technology:

The ARCHED instructional design framework establishes staged human–AI workflows in which educators retain agency over learning objectives, and alignment with pedagogical taxonomies (e.g., Bloom's) is ensured through cascaded agent–human evaluation and transparent feedback (Li et al., 11 Mar 2025).

5. Trust, Explainability, and Continuous Feedback

Across all frameworks, trust formation, calibration, and maintenance constitute essential human-centric requirements. Iterative feedback loops—whether achieved through user studies, in situ monitoring, or agile co-creation—reinforce alignment with stakeholder values and expectations (Tjondronegoro et al., 2022, Silva et al., 14 Apr 2025). Architectures commonly provide mechanisms for real-time explanation personalization, continuous trust monitoring, and adaptive tuning based on explicit or implicit user feedback (Xu et al., 2023, Silva et al., 14 Apr 2025).

Explainability and transparency are operationalized not only through model-intrinsic features (e.g., Grad-CAM heatmaps, confidence quantification) but also through user-adaptive explanation layers and dynamic, participatory refinement cycles (Silva et al., 14 Apr 2025, Habibi et al., 11 May 2024). A plausible implication is that the sustainability of trust in high-stakes AI systems depends as much on these continuous, contextually-calibrated feedback mechanisms as on static design choices.
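
A minimal sketch of such a feedback loop follows, assuming an exponential moving average over explicit accept/override signals and three illustrative explanation-detail tiers; the signal encoding, update rule, and tier boundaries are all assumptions.

```python
class TrustMonitor:
    """Continuous trust calibration driving explanation personalization."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self.trust = 0.5  # neutral prior

    def update(self, feedback: float) -> None:
        """feedback in [0, 1]: e.g., 1.0 = accepted suggestion, 0.0 = override."""
        self.trust = (1 - self.alpha) * self.trust + self.alpha * feedback

    def explanation_level(self) -> str:
        """Lower trust triggers more detailed, evidence-backed explanations."""
        if self.trust < 0.4:
            return "detailed"
        if self.trust < 0.7:
            return "summary"
        return "minimal"

monitor = TrustMonitor()
for signal in [0.0, 0.0, 1.0, 1.0, 1.0]:  # two overrides, then acceptances
    monitor.update(signal)
    print(f"trust={monitor.trust:.2f} -> {monitor.explanation_level()}")
```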

6. Societal and Macro-Level Integration

Certain frameworks explicitly extend human-centered requirements to macro-level societal objectives, including the United Nations Sustainable Development Goals (Mortezapour et al., 5 Jul 2025). The "Dual Pyramid" model, for example, mandates that designers address both micro-level interaction requirements (e.g., effectiveness, safety, empathy in one-on-one human–robot interaction) and macro-level, societal needs (accountability, inclusiveness, adaptability, and contribution to SDGs).

Three-layer strategies for practical deployment further strengthen the embedding of human-centeredness by spanning governmental, organizational, and project-team actions (education, policy, standardized practices, and interdisciplinary training), thereby ensuring that requirements are maintained across changing contexts and lifecycle phases (Xu et al., 2023).

7. Future Directions and Benchmarking

Current frameworks highlight the necessity for continuous evolution and benchmarking:

  • Expanding and validating requirement attributes in diverse cultural and industrial domains (Pyae, 5 Feb 2025).
  • Developing metrics and standardized evaluation experiments for emerging AI paradigms (e.g., LLM agents, generative models, cross-modal IVAs) (Sung et al., 16 Mar 2025, Guo et al., 9 Oct 2024).
  • Refining methodologies for interdisciplinary collaboration, dynamic requirements evolution, and integration of human feedback at multiple abstraction levels (Xu et al., 2023).

Research agendas call for deeper theoretical models specific to human–AI teaming, leadership frameworks for human oversight in autonomous collaboration, and more robust tools for adaptive, explainable, and inclusive requirement management (Gao et al., 28 May 2025).


In summary, a human-centered requirements framework in AI is characterized by its multi-dimensional, layered structure, its empirically validated criteria spanning ethical, usability, cognitive, and personalization tiers, and its integration of iterative, participatory processes for requirements elicitation, validation, and lifecycle management. Such frameworks enable the systematic translation of human values, needs, and expectations into technical specifications, thereby fostering systems that are robust, transparent, ethically aligned, and socially beneficial.
