Human-Centered Safety Models in HRI

Updated 16 August 2025
  • Human-centered safety models are frameworks that integrate physical metrics with psychological and contextual factors to assess and enhance safety in human–machine interactions.
  • They employ adaptive personalization using parameters like ρ and clustering based on empirical human studies to tailor safety interventions across diverse scenarios.
  • These models improve both physical safety and user trust in critical settings such as autonomous vehicles, robotics, and healthcare through dynamic, context-aware adjustments.

A human-centered safety model systematically accounts for the safety, trust, and well-being of human participants within cyber-physical and AI-enabled systems by explicitly modeling not just physical metrics (such as relative distance and velocity), but also psychological, behavioral, and contextual dimensions. These models integrate formal, data-driven, and adaptive approaches to ensure that safety interventions are reliable, interpretable, contextually appropriate, and capable of supporting human–machine collaboration in safety-critical domains such as autonomous vehicles, robotics, and industrial automation.

1. Integrated Measurement of Physical and Perceived Safety

Recent work demonstrates that purely sensor-based safety measures (e.g., using proximity or collision risk) are inadequate for capturing the diversity of human safety perception in human–robot interaction (HRI) and related domains. To address this, human-centered safety models, such as the parameterized General Safety Index (GSI), introduce a personalization hyperparameter ρ that modulates the mapping from measurable physical quantities to subjective safety perception (Pandey et al., 9 Jul 2025). The GSI formula is:

$$\text{GSI}_{h_i}(d_{h_i,r}, v_{h_i,r}; \rho) = \text{clip}\left( \left[ \frac{d_{h_i,r} - \text{s}(v_{h_i,r}) \frac{v_{h_i,r}^2}{2 A_{\text{max}}} - D_{\text{min}}}{D_{\text{max}} - D_{\text{min}}} \right]^{\rho},\ 0,\ 1 \right)$$

where $d_{h_i,r}$ is the Euclidean distance between human $h_i$ and robot $r$, $v_{h_i,r}$ is their relative velocity, $A_{\text{max}}$ is the maximum deceleration, $D_{\text{min}}$ and $D_{\text{max}}$ bound the proxemics-based zones, and $\text{s}(\cdot)$ is a sign function. The ρ parameter personalizes the mapping, allowing for cautious (higher ρ) or tolerant (lower ρ) safety profiles.
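
As a concrete illustration, the following is a minimal sketch of how the GSI could be computed. The threshold values, the clip-before-exponentiation ordering (equivalent on [0, 1] and numerically safer), and all variable names are assumptions made here for illustration, not details taken from the cited paper.

```python
import numpy as np

def gsi(d, v, rho, a_max=1.0, d_min=0.5, d_max=3.0):
    """Sketch of the parameterized General Safety Index (illustrative values).

    d     : Euclidean human-robot distance (m)
    v     : relative velocity (m/s), positive when approaching
    rho   : personalization exponent (higher = cautious, lower = tolerant)
    a_max : assumed maximum deceleration (m/s^2)
    d_min, d_max : assumed proxemics-zone bounds (m)
    """
    # Braking-distance correction, applied with the sign of the relative
    # velocity as in the formula above.
    braking = np.sign(v) * v**2 / (2.0 * a_max)
    normalized = (d - braking - d_min) / (d_max - d_min)
    # Clipping before exponentiation avoids NaNs for negative bases and is
    # equivalent to the formula's clip on [0, 1] for positive rho.
    return np.clip(normalized, 0.0, 1.0) ** rho

# Same geometry, cautious (rho = 2) vs. tolerant (rho = 0.5) profile.
print(gsi(d=2.0, v=0.8, rho=2.0), gsi(d=2.0, v=0.8, rho=0.5))
```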

This parameterization flexibly bridges absolute physical safety (APS) and perceived physical safety (PPS), enabling adaptation of robot behavior to suit individual users’ comfort levels and application domains (e.g., healthcare vs. delivery robots).

2. Empirical Characterization and Personalization via Human Studies

Empirical human-subject studies using the parameterized GSI framework validate the model’s ability to capture meaningful individual differences in safety perception (Pandey et al., 9 Jul 2025). In a simulated rescue (MEDEVAC) scenario, 61 participants experienced both direct (casualty/transported) and indirect (bystander/observation) roles in robot interaction across multiple operating modes (slow, fast, teleop). Measures included:

  • Emotional state (relaxation, calmness, comfort, predictability)
  • Perceived safety via post-trial questionnaires
  • Trust in task consistency and robot behavior

Maximum likelihood estimation was used to fit individualized ρ parameters, revealing that positive emotional states and higher trust consistently align with higher perceived safety (higher GSI), confirming that positive affect and predictable robot behavior facilitate psychological comfort.
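
As a rough sketch of how an individual ρ could be fitted by maximum likelihood, one might assume a Gaussian observation model linking rescaled questionnaire ratings to GSI values. That model, the use of scipy, and the bounds on ρ are illustrative assumptions, and the sketch reuses the hypothetical gsi() helper above; it is not the estimator from the cited paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_rho(distances, velocities, ratings, sigma=0.1):
    """Fit an individual rho by maximum likelihood (illustrative sketch).

    distances, velocities : per-trial physical measurements
    ratings               : perceived-safety ratings rescaled to [0, 1]
    sigma                 : assumed noise level of the rating model
    """
    d = np.asarray(distances)
    v = np.asarray(velocities)
    y = np.asarray(ratings)

    def neg_log_likelihood(rho):
        # Gaussian observation model: ratings ~ N(GSI(d, v; rho), sigma^2).
        residuals = y - gsi(d, v, rho)
        return np.sum(residuals**2) / (2.0 * sigma**2)

    result = minimize_scalar(neg_log_likelihood, bounds=(0.1, 10.0), method="bounded")
    return result.x
```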

Clustering analyses showed that participants do not represent a continuum but rather fall into a small set of user "types," each with a typical ρ. Bystanders exhibited greater diversity than casualties, while casualties generally reported lower ρ values (greater tolerance for close interaction).

This clustering supports adaptive personalization: the robot system can tailor motion planning and safety responses based on user type or a dynamically inferred ρ.
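
A simple way to recover such user types from fitted ρ values is one-dimensional clustering. The sketch below uses k-means with an assumed number of clusters, an illustrative choice rather than the analysis reported in the cited study.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_user_types(fitted_rhos, n_types=3):
    """Group fitted rho values into a small set of user 'types' (sketch)."""
    rhos = np.asarray(fitted_rhos, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit(rhos)
    # Each cluster centroid serves as the typical rho of one user type.
    return km.labels_, km.cluster_centers_.ravel()
```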

3. Influence of Predictability, Trust, and Contextual Factors

Predictability and operational consistency of the robot are critical determinants of perceived safety. Both qualitative and quantitative analyses found a strong positive correlation between perceived predictability and GSI scores among participants (Pandey et al., 9 Jul 2025). Unpredictable or erratic robot behavior, even if physically safe, undermines user trust and leads to lower perceived safety—a key empirical result indicating that APS is insufficient for HRI.

Role and repeated exposure strongly shape the perception of safety. Transported participants (casualties) reported decreasing perceived safety with repeated direct physical interaction, while bystander ratings remained stable. This suggests that physical interaction and accumulated experience modulate the preferred ρ and comfort with proximity, highlighting the importance of longitudinal adaptation in safety models.

4. Towards Adaptive Safety Planning and Personalization

The presence of distinct user clusters enables the development of adaptive planning algorithms that dynamically adjust safety margins and robot behavior based on inferred user type or real-time feedback. During extended operation, ρ can be updated based on physiological signals, behavioral cues, or explicit feedback, allowing for a closed-loop, context-adaptive approach to safety in HRI. This strategy supports not only the avoidance of collisions or unsafe proximity, but also the continuous maintenance of user trust and comfort.
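
A minimal sketch of such a closed-loop update is shown below; the multiplicative smoothing rule, the comfort signal in [0, 1], and the bounds on ρ are assumptions made here for illustration.

```python
def update_rho(rho, observed_comfort, predicted_comfort, learning_rate=0.1):
    """Online adjustment of rho from user feedback (illustrative sketch).

    observed_comfort  : latest comfort estimate in [0, 1], e.g. from
                        physiological signals or explicit feedback
    predicted_comfort : comfort predicted by the GSI under the current rho
    """
    # If the user is less comfortable than predicted, raise rho (more
    # cautious); if more comfortable, lower it (more tolerant).
    error = predicted_comfort - observed_comfort
    new_rho = rho * (1.0 + learning_rate * error)
    # Keep rho within a plausible range.
    return float(min(max(new_rho, 0.1), 10.0))
```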

Adaptive personalization is particularly salient in domains with diverse or changing user populations and in scenarios where users transition between observer and direct participant roles.

5. Integration of Psychological and Physical Dimensions in Safety Certification

The integration of psychological (affect, trust, predictability) and traditional physical (distance, velocity, acceleration) metrics closes the gap between sensor-based models and user-centered experiences (Pandey et al., 9 Jul 2025). By capturing both sets of variables within a unified safety index, the model allows robotic systems to objectively measure and actively manage both physical integrity and user perception.

Such holistic models are foundational for future safety certification frameworks that require demonstration of not only APS, but also sustained PPS across operational scenarios. The inclusion of a quantifiable personalization parameter (such as ρ) and empirical validation through human-centric studies provides a concrete methodology for certifying both physical and psychological safety in critical domains such as healthcare, disaster response, and public assistance robotics.
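
One way such a dual requirement could be operationalized is to verify that both an APS margin and a PPS threshold hold over all logged operational scenarios. The thresholds, data format, and pass/fail rule in the sketch below are illustrative assumptions, not part of any existing certification standard, and the gsi() helper is the hypothetical one sketched earlier.

```python
def passes_dual_safety_check(scenarios, rho, d_min_abs=0.5, pps_threshold=0.6):
    """Check APS and PPS requirements over logged scenarios (sketch).

    scenarios : iterable of (distance, velocity) samples
    rho       : personalization parameter of the evaluated user group
    """
    for distance, velocity in scenarios:
        aps_ok = distance >= d_min_abs                            # absolute margin
        pps_ok = gsi(distance, velocity, rho) >= pps_threshold    # perceived safety
        if not (aps_ok and pps_ok):
            return False
    return True
```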

6. Implications for Future Human-Robot Interaction and Broader Safety Domains

Incorporating both objective and subjective factors into safety models marks a significant evolution in the design and deployment of human-centered autonomous systems. These models inform the development of adaptive robotic controllers, context-aware user interfaces, and long-term trust management strategies. The recognition that safety is not a purely mechanical constraint but is co-constructed through user experience has far-reaching implications, extending beyond robotics to all domains where human–AI collaboration is safety critical.

Clustering results and longitudinal effects also suggest fertile ground for the development of generalized user-type-based safety policies and for systems that continuously learn and update user models to maximize both safety and psychological comfort over time.


In summary, parameterized and adaptive human-centered safety models, exemplified by the ρ-modulated General Safety Index (GSI), provide a rigorous approach to integrating physical and psychological safety considerations in HRI. Validated through human-subject experiments, these models support individualized and context-sensitive safety planning, thereby enabling more trustworthy, effective, and certifiable operation in safety-critical environments (Pandey et al., 9 Jul 2025).
