
Human-Centered AI Maturity Model

Updated 24 December 2025
  • Human-Centered AI Maturity Model (HCAI-MM) is a framework that defines five maturity stages to enhance AI practices through measurable metrics and structured governance.
  • It integrates both technical performance and socio-technical dynamics by employing quantitative indicators, iterative design cycles, and standardized tools.
  • The model guides organizations from initial HCAI efforts to optimized practices, validated by real-world case studies in healthcare and technology.

The Human-Centered AI Maturity Model (HCAI-MM) is a staged organizational framework designed to systematically assess and advance an enterprise’s capability to design, develop, deploy, and govern AI systems that prioritize human needs, values, and experiences. It offers a roadmap from basic, ad-hoc HCAI efforts to optimized, industry-leading organizational practices, coupling technical and social dimensions through quantifiable metrics, structured governance, standardized tools, and a documented methodology that interweaves organizational design with HCAI progression (Winby et al., 17 Dec 2025).

1. Conceptual Foundations and Scope

HCAI-MM is defined as a maturity model comprising five sequential stages by which organizations can evaluate, monitor, and incrementally enhance the design and implementation of human-centered AI (HCAI) practices. The scope encompasses all elements required for robust HCAI: human-AI collaboration, explainability, fairness, and user experience. The core purposes are to (1) articulate a staged progression from novice to leader, (2) provide metrics and tools for self-assessment, and (3) institutionalize organizational mechanisms that ensure continuous, measurable enhancement of HCAI capabilities. HCAI-MM uniquely integrates organizational design perspectives directly into the progression framework, unlike prior models that treat socio-technical and technical change in isolation.

2. Maturity Stages: Structure, Criteria, and Objectives

HCAI-MM delineates five progressive stages of maturity, each defined by specific practices, metrics, governance structures, and benchmarks:

Level 1 (Initial): Isolated HCAI pilots, reactive AI, low awareness. Metrics: $M_{\text{entry}}$, $M_{\text{train}}$. Objectives: executive sanction, readiness assessment, awareness.
Level 2 (Developing): Emerging frameworks, basic user research/testing. Metrics: $\Delta_{\text{DP}}$, $F_{\text{feedback}}$. Objectives: institute frameworks, social/technical analysis.
Level 3 (Defined): Formal governance body, published guidelines. Metrics: $S_{\text{interp}}$, $U_{\text{success}}$. Objectives: standardize user input, launch multi-disciplinary training.
Level 4 (Managed): HCAI embedded in KPIs, lifecycle integration. Metrics: $A_{\text{compliance}}$, social impact index. Objectives: audit mechanisms, organization-wide HCAI dashboards.
Level 5 (Optimizing): Continuous innovation, external advocacy, co-design. Metrics: $R_{\text{CI}}$, stakeholder engagement. Objectives: shape standards, maintain user communities.
  • $M_{\text{entry}} = \frac{\text{# completed entry tasks}}{\text{# defined entry tasks}}\times 100\%$
  • $M_{\text{train}} = \frac{\text{# stakeholders trained}}{\text{# total stakeholders}}\times 100\%$
  • $\Delta_{\text{DP}} = \left|P(\hat y=1 \mid A=0) - P(\hat y=1 \mid A=1)\right|$
  • $F_{\text{feedback}} = \frac{\text{# feedback events}}{\text{time period}}$
  • $S_{\text{interp}} = \frac{1}{N}\sum_{i=1}^{N} s_i,\; s_i \in [1,5]$
  • $U_{\text{success}} = \frac{\text{# tasks completed successfully}}{\text{# tasks tested}}\times 100\%$
  • $A_{\text{compliance}} = \frac{\text{# projects passing ethics audit}}{\text{# audited projects}}\times 100\%$
  • $R_{\text{CI}} = \frac{\text{# iterative changes based on feedback}}{\text{time period}}$

Progression relies on both quantitative measures (e.g., audit scores, usability rates) and qualitative practices (e.g., establishing cross-functional governance).
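The per-stage formulas above are simple ratios and rates; a minimal sketch of how an organization might compute them, using hypothetical counts (not data from the paper), is:

```python
# Illustrative computation of the stage metrics defined above; all counts
# are hypothetical examples, not data from the paper.

def pct(numerator: float, denominator: float) -> float:
    """Ratio expressed as a percentage."""
    return 100.0 * numerator / denominator

# Level 1: entry-task completion and stakeholder-training coverage.
m_entry = pct(7, 10)      # 7 of 10 defined entry tasks completed -> 70.0
m_train = pct(45, 60)     # 45 of 60 stakeholders trained -> 75.0

# Level 2: demographic-parity gap between protected groups A=0 and A=1.
p_pos_a0, p_pos_a1 = 0.62, 0.55
delta_dp = abs(p_pos_a0 - p_pos_a1)   # ~0.07

# Level 3: mean interpretability rating on a 1-5 scale.
ratings = [4, 5, 3, 4, 4]
s_interp = sum(ratings) / len(ratings)   # 4.0

# Level 4: share of audited projects passing the ethics audit.
a_compliance = pct(9, 12)   # 9 of 12 audited projects pass -> 75.0

print(m_entry, m_train, round(delta_dp, 2), s_interp, a_compliance)
```

The rate metrics ($F_{\text{feedback}}$, $R_{\text{CI}}$) follow the same pattern with a time period as the denominator.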

3. Metrics for Human-Centered Maturity

Metrics in HCAI-MM are defined across four central dimensions:

  • Human-AI Collaboration Index (HAC):

$HAC = w_h\cdot\frac{\text{# successful human-AI tasks}}{\text{# total tasks}} + w_c\cdot\frac{\text{# collaborative sessions}}{\text{time period}}$

with $w_h + w_c = 1$.

  • Explainability Score (EXP):

$EXP = \frac{1}{N} \sum_{i=1}^{N} \left[ s^{\text{local}}_i + s^{\text{global}}_i \right], \quad s \in [0,1]$

integrating local and global model explanation ratings.

  • Fairness Gap (FG):

$FG = \max_{a \neq b} \left| P(\hat y=1 \mid A=a) - P(\hat y=1 \mid A=b) \right|$

quantifying disparities across protected attributes.

  • User Experience Composite (UX):

$UX = \alpha \cdot S_{\text{sus}} + \beta \cdot U_{\text{success}} + \gamma \cdot T_{\text{task}}$

with $S_{\text{sus}}$ the System Usability Scale score, $U_{\text{success}}$ the task completion rate, $T_{\text{task}}$ the normalized task time, and $\alpha+\beta+\gamma=1$.

These measurements support evidence-based benchmarking and progression across maturity stages.
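The four composite metrics can be sketched directly from their definitions; the weights and input values below are illustrative choices, not values prescribed by the model:

```python
# Hypothetical sketch of the four HCAI-MM composite metrics; weights and
# inputs are illustrative examples, not values prescribed by the model.

def hac(success_ratio, session_rate, w_h=0.7, w_c=0.3):
    """Human-AI Collaboration Index: weighted task success + session rate."""
    assert abs(w_h + w_c - 1.0) < 1e-9
    return w_h * success_ratio + w_c * session_rate

def exp_score(local_scores, global_scores):
    """Explainability Score: mean of summed local + global ratings in [0, 1]."""
    return sum(l + g for l, g in zip(local_scores, global_scores)) / len(local_scores)

def fairness_gap(pos_rates):
    """Max pairwise gap in positive-prediction rate across protected groups."""
    return max(pos_rates.values()) - min(pos_rates.values())

def ux(s_sus, u_success, t_task, alpha=0.5, beta=0.3, gamma=0.2):
    """User Experience Composite over normalized inputs; weights sum to 1."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return alpha * s_sus + beta * u_success + gamma * t_task

print(round(hac(0.8, 0.5), 2))                                   # ~0.71
print(round(exp_score([0.6, 0.8], [0.7, 0.9]), 2))               # ~1.5
print(round(fairness_gap({"A=0": 0.62, "A=1": 0.55, "A=2": 0.58}), 2))  # ~0.07
print(round(ux(0.85, 0.9, 0.7), 3))                              # ~0.835
```

Note that because the maximum pairwise gap of scalar rates is the range, `fairness_gap` reduces $\max_{a \neq b}|P_a - P_b|$ to `max - min` over the group rates.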

4. Governance Structures, Toolkits, and Best Practices

Governance mechanisms and tooling are stage-specific, scaling in complexity and organizational embedment as maturity increases:

Level 1: Tools: self-assessment surveys. Practices: assign an HCAI sponsor, run awareness workshops.
Level 2: Tools: user feedback platforms, IBM AI Fairness 360, draft guidelines. Practices: pilot usability/fairness tests.
Level 3: Tools: design-lab environment, LIME/SHAP, HCAI committee, published design guidelines. Practices: stakeholder sign-off across the lifecycle.
Level 4: Tools: CI/CD dashboards, Microsoft Fairness Dashboard, internal/external audits. Practices: quarterly HCAI reviews, impact assessments.
Level 5: Tools: co-design portals, live analytics, public ethics reports. Practices: annual summits, external research grants.

Tool adoption and best practices are mapped to maturity level, with compliance and ongoing audit institutionalized from stage 4 onward.
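Because the stages are sequential, a self-assessment can be sketched as a lookup that reports the highest contiguous level satisfied; the criteria strings below paraphrase the table and the gating rule (all lower stages required first) is an assumption consistent with staged maturity models:

```python
# Hypothetical self-assessment helper. The criteria paraphrase the stage
# table above; requiring all lower stages first is the usual gating rule
# in staged maturity models.

STAGE_CRITERIA = {
    1: "HCAI sponsor assigned, self-assessment survey completed",
    2: "user feedback platform and draft guidelines in place",
    3: "HCAI committee formed, design guidelines published",
    4: "HCAI dashboards live, internal/external audits running",
    5: "co-design portals and public ethics reporting established",
}

def maturity_level(satisfied: set) -> int:
    """Highest level N such that levels 1..N are all satisfied (0 if none)."""
    level = 0
    for stage in sorted(STAGE_CRITERIA):
        if stage not in satisfied:
            break
        level = stage
    return level

print(maturity_level({1, 2, 3}))   # 3
print(maturity_level({1, 2, 4}))   # 2 -- the level 3 gap blocks progression
```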

5. Organizational Design and Socio-Technical Cycle

HCAI-MM embeds progression in a five-phase socio-technical design cycle:

$\text{Entry} \longrightarrow \text{Research \& Analysis} \longrightarrow \text{Design Lab} \longrightarrow \text{Implementation} \longrightarrow \text{Adaptation}$

Phases are:

  • Entry & Sanction: Secure executive buy-in and conduct readiness scan.
  • Research & Analysis: Perform both technical (process mapping, variance identification) and social analyses (user research, task analysis).
  • Design Lab: Iterative prototyping and multi-stakeholder deliberation, integrating ethical frameworks.
  • Implementation: Pilot deployment, training, and establishment of feedback loops.
  • Adaptation: Continuous monitoring, detection and correction of variances, refinement of governance mechanisms.

A simplified TikZ representation formalizes the workflow for organizational communication and planning.
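One possible reconstruction of such a diagram (an illustrative sketch, not the paper's original figure) is:

```latex
% Reconstructed sketch of the five-phase cycle (not the paper's original figure).
\documentclass[tikz]{standalone}
\usetikzlibrary{arrows.meta, positioning}
\begin{document}
\begin{tikzpicture}[
    phase/.style={draw, rounded corners, align=center, font=\small},
    node distance=0.8cm, >=Stealth]
  \node[phase]                  (entry) {Entry \&\\Sanction};
  \node[phase, right=of entry]  (res)   {Research \&\\Analysis};
  \node[phase, right=of res]    (lab)   {Design\\Lab};
  \node[phase, right=of lab]    (impl)  {Implementation};
  \node[phase, right=of impl]   (adapt) {Adaptation};
  \draw[->] (entry) -- (res);
  \draw[->] (res)   -- (lab);
  \draw[->] (lab)   -- (impl);
  \draw[->] (impl)  -- (adapt);
  % Adaptation feeds back into renewed analysis (continuous improvement).
  \draw[->] (adapt.south) to[bend left=20] (res.south);
\end{tikzpicture}
\end{document}
```

The feedback arrow from Adaptation back to Research & Analysis reflects the continuous-monitoring loop described for the Adaptation phase.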

6. Empirical Validation: Case Studies

Empirical case studies illustrate real-world progression across maturity levels:

  • Mayo Clinic (Healthcare, Level 2 → 3): Transitioned from an NLP-based clinical scheduling pilot with co-design to institution-wide deployment by formalizing HCAI guidelines, instituting governance checkpoints, and conducting systematic usability testing. This resulted in published design principles and cross-departmental tool scaling.
  • IBM HR (Technology, Level 2 → 4): Advanced from initial explainable dashboards and manager feedback (Level 2) to Level 4 by incorporating fairness audits in HR processes, forming an AI Ethics Committee, and embedding HCAI KPIs and dashboards company-wide.

These exemplars validate the staged approach and highlight the criticality of embedding governance, continuous measurement, and structured feedback at each step.

7. Significance, Utility, and Progression Pathways

HCAI-MM enables organizations to benchmark current HCAI practices, select and implement appropriate governance structures and tools at each stage, operationalize systematic socio-technical design cycles, and accelerate progress by learning from peer case studies. By institutionalizing quantitative and qualitative measurement of human-AI collaboration, explainability, fairness, and user experience, and integrating these into both technical and organizational subsystems, the model provides a foundation for cultivating human-centered, ethically grounded, and continuously evolving AI capabilities (Winby et al., 17 Dec 2025).
