RE4HCAI: Framework for Human-Centered AI
- The RE4HCAI Framework is a formal, multi-layered approach that integrates human-centered guidelines to capture, specify, and validate AI requirements focused on fairness and inclusivity.
- It employs a structured reference map, elicitation catalog, and UML-inspired visual modeling to comprehensively document user, model, and data needs.
- Case study validations demonstrate its effectiveness in mitigating challenges like algorithmic bias, opaque behaviors, and insufficient stakeholder engagement.
The RE4HCAI Framework (Requirements Engineering for Human-centered Artificial Intelligence) constitutes a formal, multi-layered approach to systematically capture, specify, and visualize requirements that ensure AI-based software solutions are responsible, unbiased, inclusive, and attuned to human values and needs. Developed in response to the persistent neglect of human-centric aspects in AI system engineering, the framework operationalizes current human-centered guidelines and provides structured artifacts for practitioners to elicit, analyze, and represent requirements, facilitating coverage validation and the mitigation of common pitfalls such as algorithmic bias and opaque model behavior (Ahmad et al., 2023).
1. Conceptual Foundation and Motivation
The framework arises from the recognition that AI solutions frequently prioritize technical functionality over human-centered concerns—such as explainability, fairness, diversity, error management, and user agency. To redress this imbalance, RE4HCAI integrates principles from leading industrial guidelines (Google PAIR, Microsoft, and Apple's machine learning guidelines), systematic literature reviews, and practitioner surveys, mapping them into an actionable requirements engineering artifact that targets both system-level and framework-level requirements. The underlying rationale is that robust inclusion of human-centered requirements at the initial stages is critical for responsible AI deployment and for preventing later-stage ethical and usability failures.
2. Multi-layered Structure: Model, Catalog, and Visual Notation
RE4HCAI’s architecture is organized into three layers, each serving a distinct purpose in requirements engineering:
- Guidelines Reference Map (Layer 1):
- Synthesizes industry standards and literature into six main requirement areas for human-centered AI:
- User Needs
- Model Needs
- Data Needs
- Feedback & User Control
- Explainability & Trust
- Errors & Failure
- Figure 1 of the paper visually maps relative coverage and integration of these aspects.
- Requirements Elicitation Catalog (Layer 2):
- Structured checklist of elicitation prompts and attributes, mapped to the six areas.
- Designed for systematic, exhaustive interrogation of stakeholders and domain experts, yielding requirements not habitually captured in conventional RE.
  - Example prompts include identification of system users and stakeholders, explicit trade-offs between automation and augmentation, requirements for visibility of AI features, and reward function specification (e.g., the choice between precision and recall, with Precision = TP / (TP + FP) and Recall = TP / (TP + FN), where TP = true positives, FP = false positives, and FN = false negatives).
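To make the precision-versus-recall trade-off in reward function specification concrete, here is a minimal sketch (illustrative only, not code from the paper) computing both metrics from confusion-matrix counts:

```python
def precision(tp: int, fp: int) -> float:
    # Precision = TP / (TP + FP): of everything the model flagged, how much was correct.
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    # Recall = TP / (TP + FN): of everything relevant, how much the model found.
    return tp / (tp + fn) if (tp + fn) else 0.0

# A reward function favoring recall tolerates more false positives,
# and vice versa; these counts are invented for illustration.
tp, fp, fn = 80, 40, 10
print(round(precision(tp, fp), 3))
print(round(recall(tp, fn), 3))
```

Choosing which metric the reward function optimizes is exactly the kind of stakeholder-facing trade-off the catalog prompts are designed to surface early.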
- Conceptual Modeling Language (Layer 3):
- Notation for visual representation and decomposition of requirement classes.
- Uses UML-inspired graphical symbols optimized for cognitive tractability:
- Ovals for main goals,
- Dashed squares for "Needs",
- Color-coded rectangles for capabilities/limitations,
- Hexagons for processes/tasks,
- Octagons for trade-offs.
- System-level and framework-level requirements are distinguished, and relationships are depicted using directional connectors. Hierarchical decomposition is provided: Level 1 (holistic system view), Level 2 (area-specific submodels).
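The paper's notation is graphical, but the underlying hierarchy can be encoded as nested nodes for tooling or checking. The following sketch is a hypothetical encoding (class and kind names are invented, not part of RE4HCAI) that mirrors the shape vocabulary and the Level 1 / Level 2 decomposition:

```python
from dataclasses import dataclass, field

# Node kinds mirroring the Layer 3 shape vocabulary:
# oval = goal, dashed square = need, rectangle = capability/limitation,
# hexagon = process, octagon = trade-off.
KINDS = {"goal", "need", "capability", "limitation", "process", "tradeoff"}

@dataclass
class ReqNode:
    name: str
    kind: str                               # one of KINDS
    level: int = 1                          # 1 = holistic view, 2 = area submodel
    children: list["ReqNode"] = field(default_factory=list)

    def add(self, child: "ReqNode") -> "ReqNode":
        assert child.kind in KINDS, f"unknown kind: {child.kind}"
        self.children.append(child)
        return child

    def render(self, indent: int = 0) -> str:
        # Textual stand-in for the graphical model.
        lines = [f"{'  ' * indent}[{self.kind}] {self.name} (L{self.level})"]
        for c in self.children:
            lines.append(c.render(indent + 1))
        return "\n".join(lines)

root = ReqNode("Human-centered AI system", "goal")
user = root.add(ReqNode("User Needs", "need", level=2))
user.add(ReqNode("Automation vs. augmentation", "tradeoff", level=2))
print(root.render())
```

A textual rendering like this is no substitute for the visual models, but it shows how the hierarchical decomposition (system view down to area submodels) can be traversed programmatically.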
3. Six Requirement Areas: Detailed Scope and Attributes
| Area | Scope (per paper) | Sample Attributes/Catalog Questions |
|---|---|---|
| User Needs | Stakeholders, interaction modes, user inclusion, automation vs. augmentation, reward functions, visibility of AI features | “What users are targeted?”, “Which functions should be invisible/visible to users?”, “How is user inclusion managed?” |
| Model Needs | Algorithm selection/tuning, feedback integration, model scalability/evaluation, trade-offs between accuracy and explainability | “What types of algorithms are required?”, “How is feedback incorporated?”, “How do trade-offs influence model design?” |
| Data Needs | Data sourcing/validation, privacy, fairness/bias mitigation, compliance, diversity, ownership, accuracy | “What diversity characteristics must data encompass?”, “Who maintains data quality?”, “How is privacy ensured?” |
| Feedback & User Control | Modalities of feedback, calibration, privacy/security for user input, control mechanisms | “What user control modalities are required?”, “How is feedback privacy ensured?” |
| Explainability & Trust | Transparency, intended audience, scope/timing of explanation, trade-offs with other attributes, regulatory compliance | “To whom are explanations aimed?”, “What should be explained and when?”, “How does explanation interact with compliance?” |
| Errors & Failure | Error types (user-visible, contextual, background), mitigation, categorization, risk planning | “What failure modes are anticipated?”, “How are errors detected, categorized, and mitigated?” |
Each area is visually represented (see Figures 2–4 in the paper) with notational differentiation of system-level versus framework-level requirements.
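The six-area catalog lends itself to a simple machine-readable structure. Below is a hypothetical sketch (the area names and sample questions come from the table above; the data structure and helper are invented for illustration) that flattens the catalog into an ordered interview script:

```python
# Layer 2 catalog: the six areas mapped to sample elicitation prompts.
CATALOG = {
    "User Needs": [
        "What users are targeted?",
        "Which functions should be invisible/visible to users?",
    ],
    "Model Needs": [
        "What types of algorithms are required?",
        "How is feedback incorporated?",
    ],
    "Data Needs": [
        "What diversity characteristics must data encompass?",
        "How is privacy ensured?",
    ],
    "Feedback & User Control": [
        "What user control modalities are required?",
        "How is feedback privacy ensured?",
    ],
    "Explainability & Trust": [
        "To whom are explanations aimed?",
        "What should be explained and when?",
    ],
    "Errors & Failure": [
        "What failure modes are anticipated?",
        "How are errors detected, categorized, and mitigated?",
    ],
}

def interview_script(catalog=CATALOG):
    # Yield every prompt prefixed with its area, in catalog order,
    # ready to drive a stakeholder interview or workshop.
    for area, questions in catalog.items():
        for q in questions:
            yield f"{area}: {q}"

for prompt in interview_script():
    print(prompt)
```

Keeping the catalog in one structure makes it easy to verify that every area is actually covered during elicitation rather than relying on ad hoc interview notes.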
4. Process for Elicitation, Iterative Refinement, and Validation
The operational process proceeds as follows:
- Guidelines Synthesis: Reference Layer 1 for mapping requirement scope.
- Catalog-driven Stakeholder Engagement: Use Layer 2 elicitation catalog for interviews, workshops, and documentation; systematically capture requirements per area.
- Visual Modeling: Apply Layer 3 notation for communicating, decomposing, and updating requirements; highlight trade-offs and limitations.
- Iterative Update: As system development proceeds, refine requirements in response to emergent model/data behavior—particularly relevant given AI’s black-box dynamics.
- Coverage and Gap Validation: Conduct surveys or expert reviews to assess requirement completeness, prioritizing domains based on practitioner needs.
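The final step, coverage and gap validation, amounts to a completeness check over the six requirement areas. A minimal sketch follows; the captured requirements shown are toy data (loosely echoing the VR case study), not taken from the paper:

```python
# The six requirement areas of the Layer 1 reference map.
AREAS = [
    "User Needs", "Model Needs", "Data Needs",
    "Feedback & User Control", "Explainability & Trust", "Errors & Failure",
]

# Requirements captured so far, keyed by area (illustrative toy data).
elicited = {
    "User Needs": ["Target VR users identified", "Enhancement runs offline"],
    "Data Needs": ["Training clips cover diverse motion and scenes"],
}

def coverage_gaps(captured: dict) -> list:
    """Return areas with no captured requirement: candidates for another
    elicitation pass or expert review."""
    return [area for area in AREAS if not captured.get(area)]

print(coverage_gaps(elicited))
```

Areas reported as gaps would be revisited in the next stakeholder workshop, which matches the framework's iterative refinement loop.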
5. Case Study: Application to Deep Learning-based VR Video Enhancement
The framework was validated on a realistic project—an AI system for resolution enhancement of 360° videos for VR users. Applying RE4HCAI yielded:
- Explicit user needs documentation (target user profiles, offline enhancement modalities, invisible AI feature delivery).
- Model needs clarification (focus on regression loss rather than explainability; deferred human evaluation).
- Data needs mapping (diversity in motion, scene; privacy-maintaining protocol).
- Visual models (Layer 3) efficiently exposed capability limitations (requirement for high-end VR equipment, non-real-time enhancement).
Surveyed practitioners credited the framework for highlighting overlooked factors (e.g., insufficient data diversity), surfacing limitations, and triggering trade-off discussions (precision vs. recall, enhancement regression objectives). Iterative modeling increased clarity and alignment across stakeholders, fostering improved technical and ethical process management.
6. Comparison to Existing Guidelines and Generalization
Layer 1 is directly derived from authoritative sources (Google PAIR, Apple, Microsoft; see Table 1 and the Appendix) and the literature. Practitioner validation (n = 29, via survey and workshops) confirmed industrial relevance, especially for data requirements and human-control aspects that are frequently neglected. The framework is presented as domain-agnostic; while the case study focuses on VR video enhancement, the artifacts and process steps are designed to generalize to any AI-based system requiring human-centric robustness and transparency.
7. Key Contributions and Implications
RE4HCAI advances the state of practice by providing:
- A concrete, actionable structure for embedding human-centered values in AI requirements engineering processes.
- Systematic elicitation tools and visual modeling, ensuring holistic coverage across technical and human domains.
- Iterative adaptability to account for evolving AI system characteristics.
- Mechanisms for surfacing trade-offs (automation vs. augmentation, precision vs. recall) in ways directly meaningful to practitioners.
- Documentation of gaps and areas for further validation or context-specific refinement.
This suggests that the framework is especially effective for teams seeking to move beyond superficial inclusion of human-centered principles and toward comprehensive, measurable, and replicable responsible AI engineering practices.
Summary Table: Core RE4HCAI Artifacts
| Artifact | Description | Utility |
|---|---|---|
| Guidelines Reference Map | Synthesis of coverage areas | Ensures global alignment |
| Elicitation Catalog | Structured domain-specific checklists/questions | Drives complete requirements gathering |
| Conceptual Modeling | UML-inspired visual decomposition | Enhances communication, clarity |
| Case Study Models | Real-system sub-models per requirement area | Validates practical effectiveness |
Conclusion
The RE4HCAI Framework operationalizes leading human-centered AI guidelines in a formalized, multi-layered artifact for requirements engineering. It delivers systematic, actionable processes and tools that enable complete, responsible, and inclusive specification of AI-based software systems, validated through industrial case study and practitioner engagement. The framework fills a critical gap in conventional AI system development, establishing a measurable pathway for human-centric engineering and robust ethical compliance (Ahmad et al., 2023).