
Human-Centred Artificial Intelligence

Updated 30 July 2025
  • Human-centred AI is an approach that embeds human values, cognitive labour, and ethical oversight into the design and deployment of intelligent systems.
  • It integrates methodologies such as human-in-the-loop, double diamond cycle, and interdisciplinary teamwork to ensure transparency and trustworthiness.
  • Applications range from autonomous robotics to FinTech, aiming to augment human capabilities while maintaining accountability and regulatory compliance.

Human-centred AI is broadly defined as the intentional design, development, and deployment of AI systems that place human values, needs, cognition, agency, and well-being at the core of computational architectures, workflows, and societal integration (Riedl, 2019, Xu et al., 2023, Pyae, 5 Feb 2025, Xu et al., 2023, Guest, 26 Jul 2025). Rather than focusing solely on technical performance, human-centred AI treats intelligent systems as components of larger sociotechnical relationships, where tasks of cognitive labour are redistributed, augmented, or performed in collaboration with humans. This paradigm emphasizes transparency, responsibility, and alignment with social and ethical norms at all stages of the AI lifecycle.

1. Foundational Principles and Definitions

Research identifies several foundational elements underpinning human-centred AI:

  • Relationality of Cognitive Labour: All AI artefacts perform aspects of human cognitive labour—calculating, perceiving, planning, or making decisions—thereby redefining the relationship between technology and human cognition. This process can be structured as augmentative (enhancement), substitutive (replacement), or detrimental (displacement), each impacting skill retention, autonomy, or deskilling in distinctive ways (Guest, 26 Jul 2025).
  • Hierarchical Frameworks: Empirical frameworks feature a hierarchy spanning ethical foundations (fairness, transparency, privacy, dignity), usability (user-friendliness, autonomy, control), emotional/cognitive factors (empathy, well-being, involvement), and personalization dimensions (user models, stakeholder engagement, human cognition) (Pyae, 5 Feb 2025).
  • Risk-based Regulatory Anchoring: Regulatory frameworks such as the EU AI Act demand “human-centricity,” requiring AI to uphold democratic and humanistic values through mechanisms of oversight, transparency, and explanation, especially in high-risk systems (Valdez et al., 22 Feb 2024, Calvano et al., 14 Jan 2025).
  • Methodological Holism: State-of-the-art frameworks integrate interdisciplinary teams, process models (such as the “double diamond” design cycle), and explicit mapping from design goals (trustworthiness, scalability, responsibility, augmentative capability) to actionable principles and methods (Xu et al., 2023, Xu et al., 2023).

The primary aim is not simply to avoid harm but to enhance human capacities, preserve interpretability and control, and ensure that AI continually serves human well-being and societal progress.

2. Typologies and Sociotechnical Relationships

Human-centred AI has been analyzed through the lens of explicit typologies delineating how AI artefacts engage with human cognition (Guest, 26 Jul 2025):

| Category | Definition | Implication for AI System Design |
|---|---|---|
| Enhancement | AI augments human skills without degrading them | Retain human-in-the-loop, explicit support |
| Replacement | AI substitutes human cognitive labour; output fixed | Watch for skill erasure, ethical opacity |
| Displacement | AI offloads cognitive work detrimentally (deskilling) | Risk of diminished capacity and reduced transparency |

Design decisions must preserve human involvement where enhancement is intended, or address the risk of obscured human labour in replacement/displacement contexts. The obfuscation of cognition—where human input is hidden by the artefact’s operation—poses risks for deskilling, misattributed capability, and ethical ambiguity (the “ghost in the machine” phenomenon).
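
This typology can be sketched as a small design-review aid. The following is a minimal, illustrative sketch: the category names come from the typology above, but the `ArtefactAssessment` class, its fields, and the heuristic flags are hypothetical constructs for exposition, not part of the cited work.

```python
from dataclasses import dataclass
from enum import Enum

class CognitiveLabourRelation(Enum):
    """The three ways an AI artefact can relate to human cognitive labour."""
    ENHANCEMENT = "augments human skills without degrading them"
    REPLACEMENT = "substitutes human cognitive labour"
    DISPLACEMENT = "offloads cognitive work detrimentally (deskilling)"

@dataclass
class ArtefactAssessment:
    name: str
    relation: CognitiveLabourRelation
    human_in_the_loop: bool

    def design_flags(self) -> list[str]:
        """Return design concerns implied by the typology (illustrative heuristics)."""
        flags = []
        if (self.relation is CognitiveLabourRelation.ENHANCEMENT
                and not self.human_in_the_loop):
            flags.append("enhancement claimed but no human-in-the-loop: "
                         "risk of obscured human labour")
        if self.relation is CognitiveLabourRelation.REPLACEMENT:
            flags.append("watch for skill erasure and ethical opacity")
        if self.relation is CognitiveLabourRelation.DISPLACEMENT:
            flags.append("risk of diminished capacity; ensure transparency")
        return flags

# Example: a system that detrimentally offloads cognitive work, with no human oversight.
assessment = ArtefactAssessment("route planner",
                                CognitiveLabourRelation.DISPLACEMENT,
                                human_in_the_loop=False)
print(assessment.design_flags())
```

A review of this kind makes the "ghost in the machine" risk explicit at design time rather than discovering it after deployment.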

3. Framework Components and Methodological Structure

Human-centred AI frameworks operate through hierarchical and multi-layered structures, combining normative, procedural, and technical guidance (Xu et al., 2023, Xu et al., 2023):

  • Requirement Hierarchies: Design goals (e.g., trustworthy, responsible, augmenting) are decomposed into specific design principles, implementation approaches (e.g., hybrid intelligence, explainable AI, human-in-the-loop mechanisms), interdisciplinary methods (e.g., human-centred ML, algorithmic nudge), and processes spanning problem discovery through deployment and monitoring.
  • Implementation Taxonomy: Typical models enumerate 15+ implementation strategies and 20+ interdisciplinary methods spanning HCI, AI, behavioral science, and ethics (e.g., human state modeling, neuroergonomics, participatory design).
  • Process Integration: The “double diamond” process—discovery, definition, development, delivery—is overlaid with the AI lifecycle (problem definition, data gathering, modeling, deployment, monitoring), with HCAI guidance at each stage (Xu et al., 2023).
  • Three-Layer Strategy: Practical adoption requires alignment of the broader social context (policy, standards, cross-industry collaboration), organizational culture (guidelines, governance), and multidisciplinary project teams (skills development, actionable process) (Xu et al., 2023, Xu et al., 2023).
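
The double-diamond overlay can be made concrete as a simple lookup structure. The phase names below follow the double diamond and the lifecycle stages follow the text; the exact grouping of stages under phases is an illustrative reading, not a verbatim mapping from the cited paper.

```python
# Overlay of the "double diamond" design phases on the AI lifecycle stages
# named in the text (problem definition, data gathering, modeling,
# deployment, monitoring). Groupings are illustrative assumptions.
DOUBLE_DIAMOND_TO_LIFECYCLE = {
    "discover": ["problem definition"],
    "define": ["data gathering"],
    "develop": ["modeling"],
    "deliver": ["deployment", "monitoring"],
}

def phase_of(stage: str) -> str:
    """Look up which design phase a given AI lifecycle stage falls under."""
    for phase, stages in DOUBLE_DIAMOND_TO_LIFECYCLE.items():
        if stage in stages:
            return phase
    raise ValueError(f"unknown lifecycle stage: {stage!r}")

print(phase_of("monitoring"))  # deliver
```

Such a mapping lets teams attach stage-specific HCAI guidance (e.g., participatory design at discovery, monitoring dashboards at delivery) to a shared process vocabulary.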

Conceptually, such frameworks can be rendered algebraically as:

$$\text{HCAI}_\text{Score} = \alpha_1 \cdot \text{Ethics} + \alpha_2 \cdot \text{Usability} + \alpha_3 \cdot \text{Emotional Intelligence} + \alpha_4 \cdot \text{Personalization}$$

where the weights $\alpha_i$ are set per application or by empirical validation (Pyae, 5 Feb 2025).
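
A weighted-sum score of this form is straightforward to compute. The sketch below assumes, for illustration, that each dimension is pre-scored on [0, 1] and that the weights are normalized to sum to 1; neither convention is specified in the source.

```python
def hcai_score(ethics, usability, emotional_intelligence, personalization,
               weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted-sum HCAI score; weights are application-specific (Pyae, 2025).

    Assumes dimension scores in [0, 1] and weights summing to 1.
    """
    a1, a2, a3, a4 = weights
    if abs(a1 + a2 + a3 + a4 - 1.0) > 1e-9:
        raise ValueError("weights should sum to 1 for a normalized score")
    return (a1 * ethics + a2 * usability
            + a3 * emotional_intelligence + a4 * personalization)

# Example: an ethics-weighted profile for a high-risk application.
score = hcai_score(0.9, 0.8, 0.6, 0.7, weights=(0.4, 0.3, 0.2, 0.1))
print(round(score, 2))  # 0.79
```

The choice of weights is itself a human-centred design decision: a regulator-facing deployment might weight ethics heavily, while a consumer product might emphasize usability and personalization.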

4. Key Domains and Applications

Human-centred AI frameworks are realized across numerous application domains:

  • Autonomous Robotics: Embedding HCAI within robotic architectures (notably using IBM’s MAPE-K framework) ensures that systems are adaptive, transparent, and reliable, balancing autonomous operation with explicit human oversight and feedback (Casini et al., 28 Apr 2025). Processes such as human-robot teaming, shared situational awareness, and dynamic control allocation are central to safety and trust.
  • FinTech and Personalization: In financial technology, HCAI powers user-centric services through AI-powered analytics, natural language processing for virtual assistants, robo-advisory services, and dynamic fraud detection—always driven by continuous user feedback and regulatory compliance (Adedoyin et al., 18 Jun 2025).
  • Human-AI Collaboration: Human-centred collaboration involves human-led ultimate control and AI-empowered support, with mutual situation awareness, dynamic task reallocation, transparent communication, and ongoing trust calibration, demonstrated in domains such as autonomous vehicles and air traffic management (Gao et al., 28 May 2025).
  • Fairness and Accountability: Designing interfaces (e.g., FairHIL) that enable human-in-the-loop fairness assessments promotes transparency, tailored explanations, and sustained accountability, ensuring that AI recommendations are scrutinizable and adjustable by diverse stakeholders (Nakao et al., 2022).
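
The MAPE-K pattern (Monitor, Analyze, Plan, Execute over a shared Knowledge base) mentioned for robotics can be sketched with an explicit human-oversight hook. This is a toy loop, not the architecture from the cited paper: the sensor names, the anomaly threshold, and the approval callback are all illustrative assumptions.

```python
class MAPEKLoop:
    """Minimal MAPE-K control loop with human approval of safety-relevant actions."""

    def __init__(self, approve):
        self.knowledge = {}     # K: shared knowledge base
        self.approve = approve  # human oversight callback: action -> bool

    def monitor(self, sensors):
        self.knowledge["readings"] = sensors()

    def analyze(self):
        # Toy anomaly rule: any reading above 1.0 counts as anomalous.
        readings = self.knowledge["readings"]
        self.knowledge["anomaly"] = any(v > 1.0 for v in readings.values())

    def plan(self):
        self.knowledge["action"] = "slow_down" if self.knowledge["anomaly"] else "continue"

    def execute(self, actuate):
        action = self.knowledge["action"]
        # HCAI hook: safety-relevant actions require explicit human approval.
        if action != "continue" and not self.approve(action):
            action = "continue"
        actuate(action)

log = []
loop = MAPEKLoop(approve=lambda action: True)  # operator approves everything here
loop.monitor(lambda: {"vibration": 1.5})
loop.analyze()
loop.plan()
loop.execute(log.append)
print(log)  # ['slow_down']
```

Placing the approval check inside `execute` keeps the human decision point visible in the architecture, which is the point of embedding HCAI in the loop rather than bolting it on afterwards.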

The outcome across domains is the transition from passive human use to active engagement with, supervision of, and improvement of AI systems, supported by transparent, explainable interactions.

5. Regulatory and Ethical Anchoring

Regulation is a pillar of recent advances. The EU AI Act provides a prototypical legal anchor, emphasizing:

  • Risk-based assessment: AI systems are assigned to categories (minimal, high, unacceptable risk), with commensurate requirements for oversight, transparency, and user empowerment (Valdez et al., 22 Feb 2024, Calvano et al., 14 Jan 2025).
  • Human oversight and documentation: High-risk systems must support auditability, intervention, systematic user education, and clear delineation of operator responsibilities.
  • Alignment with fundamental rights: Legal mandates embed fairness and protection, with the additional challenge of operationalizing subjective properties (e.g., “explainability”) into measurable criteria for both certification and assurance.
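
The risk-tier structure can be expressed as a small lookup. The tier names follow the text; the requirement lists attached to each tier below are a deliberate simplification for illustration, not the Act's actual legal obligations.

```python
# Illustrative mapping of EU AI Act-style risk tiers to oversight requirements.
# Tier names follow the text; requirement lists are simplified assumptions.
RISK_TIERS = {
    "minimal": [],
    "high": ["human oversight", "auditability",
             "technical documentation", "transparency to users"],
    "unacceptable": ["prohibited"],
}

def requirements_for(tier: str) -> list[str]:
    """Return the oversight requirements attached to a risk tier."""
    try:
        return RISK_TIERS[tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}")

print(requirements_for("high"))
```

Encoding the tiers this way makes commensurate requirements checkable at design time, e.g. as part of a compliance checklist generated per system.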

The interplay of legal, technical, and social theory is critical to synchronizing actual system behavior with humanistic values.

6. Future Directions and Open Problems

Despite recent methodological rigor, open questions persist:

  • Standardization of Evaluation: There is no universally accepted metric for human-centredness, fairness, or explainability, complicating assessment and compliance (Calvano et al., 14 Jan 2025, Pyae, 5 Feb 2025).
  • Interdisciplinary Integration: Genuine human-centred AI necessitates ongoing fusion of technical, ethical, cognitive, and psychological expertise, demanding novel educational curricula and rigorous teamwork across historically siloed disciplines (Xu et al., 2023, Yue et al., 2023).
  • Addressing Obfuscation: Ensuring that the human-in-the-loop is not functionally replaced or rendered invisible by automation or black-box AI is a critical challenge. The obfuscation of cognition, if unchecked, risks deskilling, ethical ambiguity, and distorted societal understanding of technological agency (Guest, 26 Jul 2025).
  • Dynamic Adaptation: Adjusting the ratio of human control (α) to AI-driven automation in real time is necessary to achieve true symbiosis; this remains a technical and operational open problem (Calvano et al., 14 Jan 2025, Gao et al., 28 May 2025).
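
The dynamic-adaptation problem can be illustrated with a toy update rule for the human-control ratio α. The thresholds, step size, and the trust/workload signals are all assumptions for exposition; real systems would need empirically calibrated trust and workload models.

```python
def allocate_control(alpha, trust, workload, step=0.1):
    """Nudge the human-control ratio alpha in [0, 1] from runtime signals.

    Toy rule: low trust in the automation shifts control toward the human;
    high human workload shifts control toward the AI. All thresholds are
    illustrative assumptions.
    """
    if trust < 0.5:            # low trust in automation -> more human control
        alpha = min(1.0, alpha + step)
    elif workload > 0.8:       # overloaded human -> more AI automation
        alpha = max(0.0, alpha - step)
    return round(alpha, 3)

print(allocate_control(0.5, trust=0.3, workload=0.2))  # 0.6
print(allocate_control(0.5, trust=0.9, workload=0.9))  # 0.4
```

Even this toy rule exposes the open problems the text names: the update must stay stable under noisy trust estimates, and the handover itself must remain transparent to the human operator.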

Frameworks call for continuous empirical research—across cultures and domains—to ensure that HCAI attributes remain relevant and effective amidst evolving technologies (Pyae, 5 Feb 2025).

7. Summary Table of Notable HCAI Framework Dimensions

| Dimension | Example Attributes / Features | Source Paper(s) |
|---|---|---|
| Ethical Foundations | Fairness, transparency, privacy, dignity | Pyae, 5 Feb 2025; Calvano et al., 14 Jan 2025 |
| Usability/Autonomy | User-friendliness, human control, operational autonomy | Xu et al., 2023; Serafini et al., 2021 |
| Emotional/Cognitive | Empathy, feedback loops, emotional intelligence, shared mental models | Pyae, 5 Feb 2025; Gao et al., 28 May 2025 |
| Personalization | User models, stakeholder engagement, adaptive interfaces | Pyae, 5 Feb 2025; Adedoyin et al., 18 Jun 2025 |
| Regulatory/Legal | Risk-based assessment, oversight, documentation, accountability | Valdez et al., 22 Feb 2024; Calvano et al., 14 Jan 2025 |
| Methodological | Interdisciplinary teams, double diamond lifecycle, human-in-the-loop collaboration | Xu et al., 2023; Xu et al., 2023 |

Conclusion

Human-centred artificial intelligence is rigorously defined by frameworks prioritizing ethical foundations, usability, personalization, and transparency in both the sociotechnical relationship and the architectural design of AI systems (Xu et al., 2023, Pyae, 5 Feb 2025, Valdez et al., 22 Feb 2024, Guest, 26 Jul 2025). It explicitly recognizes the necessity of human oversight, the risks of cognitive obfuscation, and the potential for enhanced well-being via augmentation rather than replacement or displacement. The field is shaped by regulatory imperatives, empirical practitioner input, and a sustained demand for hybrid intelligence that is adaptable, responsible, and continually aligned with human needs and values. Significant challenges remain, particularly in standardization, dynamic control allocation, and the mitigation of deskilling; future development of HCAI will require persistent empirical study, collaborative interdisciplinarity, and ongoing integration of societal values into technical systems.