
Moby: Standing Support Mobility Robot

Updated 29 August 2025
  • Moby is a standing support mobility robot that keeps users upright and independent, reducing the need for the transfers that traditional wheelchairs require.
  • It employs a ROS-based control architecture with sensor fusion from LiDAR and encoders, enabling adaptive assistance during sit-to-stand transitions and autonomous navigation.
  • Experimental evaluations show improved task completion times, reduced user workload, and enhanced safety, affirming its promise for real-world mobility support.

Standing support mobility robots, exemplified by the system "Moby," are designed to enhance independence and safety for elderly individuals during daily activities by maintaining users in an upright posture. Unlike conventional mobility aids such as wheelchairs, which require seated operation and frequent transfers, standing support robots enable users to engage in mobility tasks while remaining vertical, supporting both physical and psychosocial well-being. Moby utilizes a vertically mounted column with adjustable cushions for shin and abdominal support, powered drive mechanisms, an ergonomic control interface, and an integrated navigation and safety system. Recent advances enable hands-free operation, robust autonomous navigation, adaptive assistance for sit-to-stand transitions, and comprehensive risk management, underpinned by validated biomechanical modeling, sensor fusion, and feedback-driven control.

1. System Architecture and Control Frameworks

Moby is constructed around a rigid vertical column and a dual-cushion mechanism: the shin pad provides lower-leg alignment, while the abdominal pad ensures trunk stability during transfers and short-distance navigation (Manríquez-Cisterna et al., 27 Aug 2025). The robot operates in both powered and passive modes, enabling manual movement when powered off and active mobility support under programmatic control. Its lightweight frame (27.8 kg) facilitates repositioning by the user or a caregiver. The core control architecture is based on the Robot Operating System (ROS), utilizing modular ROS nodes for state management and sensor fusion. The high-level control is executed on a Raspberry Pi 5, which also interfaces with the user through a USB HID device and processes sensory inputs including LiDAR (/scan), odometry (/odom), and battery state (/battery_state).
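
As a concrete illustration, a minimal rclpy node wiring up the three inputs named above might look like the sketch below; the node structure, names, and thresholds are illustrative, not taken from Moby's firmware.

```python
# Illustrative high-level state node subscribing to the topics listed above.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan, BatteryState
from nav_msgs.msg import Odometry


class MobyStateNode(Node):
    def __init__(self):
        super().__init__('moby_state_node')
        self.closest_obstacle = float('inf')
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)
        self.create_subscription(Odometry, '/odom', self.on_odom, 10)
        self.create_subscription(BatteryState, '/battery_state', self.on_battery, 10)

    def on_scan(self, msg):
        # Track the nearest valid LiDAR return for the safety layer.
        self.closest_obstacle = min((r for r in msg.ranges if r > 0.0),
                                    default=float('inf'))

    def on_odom(self, msg):
        self.pose = msg.pose.pose  # latest pose estimate from the encoders

    def on_battery(self, msg):
        if msg.percentage < 0.15:  # illustrative low-battery threshold
            self.get_logger().warn('Battery low')


def main():
    rclpy.init()
    rclpy.spin(MobyStateNode())


if __name__ == '__main__':
    main()
```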

Low-level actuation is managed by an ATmega32U4 microcontroller via a CAN bus (MCP2515, TJA1050), providing real-time command of DC motors with cycloidal drives and active braking. A custom firmware coordinates transitions between sit-to-stand and mobile phases, monitoring the duty cycle as:

$$\text{Duty Cycle} = \frac{\text{Load Factor}_{\text{actual}}}{\text{Maximum Load Factor}} \times 100\%$$

ensuring motor protection and system reliability.
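
A minimal sketch of how this duty-cycle check could gate motor commands follows; the 90% threshold and the load-factor readout are assumptions, not values from the paper.

```python
def duty_cycle_pct(load_actual: float, load_max: float) -> float:
    """Duty cycle as defined above, in percent."""
    return load_actual / load_max * 100.0


def guard_command(cmd: float, load_actual: float, load_max: float,
                  limit_pct: float = 90.0) -> float:
    """Scale the motor command back proportionally once the duty cycle
    approaches its limit; limit_pct is an illustrative safety margin."""
    dc = duty_cycle_pct(load_actual, load_max)
    return cmd if dc < limit_pct else cmd * (limit_pct / dc)
```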

The software stack integrates NAV2 (ROS 2 navigation) and Hokuyo 2D LiDAR for autonomous indoor navigation. Autonomous operation leverages 2D range measurement for mapping, obstacle avoidance, and trajectory planning, with inputs from voice command interfaces and IoT frameworks.
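
For context, dispatching a navigation goal through NAV2 can be done with the nav2_simple_commander helper, as in the sketch below; the target pose and frame values are hypothetical.

```python
# Sending one NAV2 goal via the BasicNavigator convenience API.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

rclpy.init()
nav = BasicNavigator()
nav.waitUntilNav2Active()  # block until the NAV2 stack is up

goal = PoseStamped()
goal.header.frame_id = 'map'
goal.header.stamp = nav.get_clock().now().to_msg()
goal.pose.position.x = 2.0   # hypothetical target in the LiDAR-built map
goal.pose.position.y = 1.0
goal.pose.orientation.w = 1.0

nav.goToPose(goal)           # NAV2 plans and replans around obstacles
while not nav.isTaskComplete():
    pass                     # optionally poll nav.getFeedback() here
```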

2. Human–Robot Interaction and Interface Design

Moby’s interaction design prioritizes cognitive acceptance and physiological comfort, drawing from guidelines established for elderly-assistive SuperLimb systems (Wu et al., 2020). The upright posture support minimizes physical strain compared to seated devices. User interaction is facilitated via an ergonomic joystick and display panel; in advanced implementations, hands-free control is achieved through a torso-based interface with compliant coupling and force-sensitive resistors (FSRs) (Chen et al., 2023, Chen et al., 2020). The compliant torso support physically couples the user to the robot and deflects in response to trunk bending, generating sensor signals that are mapped to velocity commands. For example, the center-of-pressure (COP) deflection estimated from the FSR readings is computed as:

$$\delta = \frac{\sum_{i} \alpha_i \lambda_i s_i}{\sum_{i} \lambda_i}$$

where $\lambda_i$ are the sensor outputs, $\alpha_i$ their weights, and $s_i$ their spatial locations.
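
A direct transcription of this weighted-centroid mapping, with a hypothetical linear gain from deflection to velocity command:

```python
import numpy as np


def cop_deflection(lam: np.ndarray, alpha: np.ndarray, s: np.ndarray) -> float:
    """delta = sum_i(alpha_i * lambda_i * s_i) / sum_i(lambda_i), per the
    formula above; lam are FSR outputs, alpha weights, s sensor locations."""
    return float(np.sum(alpha * lam * s) / np.sum(lam))


def velocity_cmd(delta: float, k_v: float = 0.8, deadband: float = 0.01) -> float:
    """Hypothetical linear map from torso deflection to forward velocity,
    with a small deadband so resting posture does not drive the robot."""
    return 0.0 if abs(delta) < deadband else k_v * delta
```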

User studies demonstrate that the hands-free interface achieves control performance comparable to joystick operation (10% longer completion time, ~0.12 m average cross error, 4.9% less average acceleration), with distinct advantages when simultaneous object manipulation is required.

3. Assistive Trajectories and Biomechanical Modeling

Moby and related standing support robots utilize subject-specific motion data to reproduce natural standing trajectories. Gyroscope or motion capture systems record joint kinematics; a genetic algorithm optimizes the time-varying joint angles for minimal load and maximum comfort (Kusui et al., 18 May 2025). The principal reference trajectories are:

  • Hip center: S-shaped curve.
  • Knee center: arc-shaped trajectory.

Mechanically, a four-link (or four-bar) mechanism with variable geometry is implemented to drive the seat along these trajectories. The key parameters are determined from user-specific anthropometric data, particularly for link D, ensuring proper alignment with joint centers.
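
To make the two reference shapes concrete, the sketch below generates one plausible parameterization (a logistic S-curve for the hip and a circular arc for the knee); the geometry values are illustrative and would in practice come from the subject-specific genetic-algorithm optimization.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100)      # normalized sit-to-stand phase

# Hip center: S-shaped rise, here a logistic profile over 400 mm of travel
# (matching the travel scale reported in the evaluation below).
hip_travel = 0.400                   # m
hip_z = hip_travel / (1.0 + np.exp(-10.0 * (t - 0.5)))

# Knee center: arc-shaped trajectory, here a quarter circle of assumed radius.
radius = 0.25                        # m, illustrative
theta = np.pi * (1.0 - 0.5 * t)     # sweep from 180 deg to 90 deg
knee_x = radius * np.cos(theta)
knee_z = radius * np.sin(theta)
```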

Feedforward speed control of the stepping motor, using discrete actuator length increments,

$$f_m = \frac{\Delta L}{\text{step}_{\text{size}}} \times 100$$

(governed by the cycle period and actuator mechanics), assures reproducibility. Evaluation with optical motion tracking and RMSE metrics demonstrates errors under 4% of total travel (hip: ~12 mm over 400 mm; knee: ~6 mm over 400 mm), validating fidelity to natural motion.
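
Read literally, the feedforward rule converts the per-cycle length increment into a step rate. The sketch below assumes the ×100 factor corresponds to a 100 Hz control cycle, which is our reading rather than a stated fact.

```python
def motor_frequency(delta_L: float, step_size: float,
                    cycle_rate_hz: float = 100.0) -> float:
    """Step frequency f_m: commanded actuator increment delta_L (m) divided
    by the per-step travel (m/step), scaled by the control rate. The 100 Hz
    value is an assumption standing in for the x100 factor above."""
    return delta_L / step_size * cycle_rate_hz
```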

4. Autonomous Navigation and Docking

Robust indoor navigation is achieved by leveraging NAV2 and LiDAR-based mapping (Manríquez-Cisterna et al., 27 Aug 2025). For autonomous docking to furniture or support zones, virtual landmark-based control is implemented (Chen et al., 2021). The controller uses a nonlinear feedback approach with state variables for the robot pose ($\rho$, $\alpha$, $\phi$), constrained to keep the landmark within the camera’s field of view (FOV), employing a Lyapunov candidate and control laws of the form:

V=12ρ2+12sin2α+12ϕ2 v=k1ρcosα ω=k2sinαcosαk3ϕ(sin2αˉsin2α)V = \frac{1}{2}\rho^2 + \frac{1}{2}\sin^2\alpha + \frac{1}{2}\phi^2 \ v = k_1 \rho \cos\alpha \ \omega = k_2 \sin\alpha \cos\alpha - k_3 \phi (\sin^2\bar{\alpha} - \sin^2\alpha^*)

where the gains $k_1$, $k_2$, $k_3$ tune the convergence. Extended Kalman Filter (EKF) sensor fusion accommodates measurement noise from multiple cameras, increasing robustness. Numerical methods define a feasible region for guaranteed convergence ($\rho$, $\alpha$, $\phi$ bounded), optimizing safety in real-world settings.
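
A minimal transcription of this feedback law is sketched below; the gain values are illustrative, and treating the bearing parameters $\bar{\alpha}$ and $\alpha^*$ (which arise from the FOV constraint in the cited work) as fixed inputs is our simplification.

```python
import numpy as np


def docking_cmd(rho, alpha, phi, alpha_bar, alpha_star=0.0,
                k1=0.6, k2=1.2, k3=0.4):
    """Nonlinear feedback from pose errors (rho, alpha, phi) to (v, omega),
    transcribing the control law above; gain values are illustrative."""
    v = k1 * rho * np.cos(alpha)
    omega = (k2 * np.sin(alpha) * np.cos(alpha)
             - k3 * phi * (np.sin(alpha_bar) ** 2 - np.sin(alpha_star) ** 2))
    return v, omega
```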

5. Balance Assistance and Whole-Body Support

For users with pronounced motor deficits or neurological disorders, Moby analogs integrate mobile collaborative robots with force-torque sensor handles and admittance control (Ruiz-Ruiz et al., 2021). The device monitors the user’s center of mass (CoM) and center of pressure (CoP) relative to a support polygon, using real-time kinematic feedback to detect instability. Balance restoration strategies include:

  • Fixed Spring Assistance (FSA): robot exerts a spring force along the direction of deviation with critical damping.
  • Mirrored Balance Assistance (MBA): robot reference pose is modulated by the mirrored CoP, providing adaptive corrective action.

The admittance control law is expressed in the Laplace domain:

$$X_d(s) = \frac{\hat{\Lambda}_h(s) + K_{adm} X_{ref}(s)}{M_{adm} s^2 + D_{adm} s + K_{adm}}$$

where $M_{adm}$, $D_{adm}$, and $K_{adm}$ are the virtual mass, damping, and stiffness tuning matrices.
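
In the time domain this transfer function is the mass-spring-damper update $M_{adm}\ddot{x}_d + D_{adm}\dot{x}_d + K_{adm}(x_d - x_{ref}) = \hat{\Lambda}_h$. A minimal discrete-time sketch follows, with scalar rather than matrix gains and illustrative tuning values.

```python
class AdmittanceFilter:
    """Scalar discrete-time realization of the admittance law above;
    the cited work uses matrix gains, and these values are illustrative."""

    def __init__(self, M=10.0, D=40.0, K=100.0, dt=0.002):
        self.M, self.D, self.K, self.dt = M, D, K, dt
        self.x = 0.0      # desired pose x_d
        self.xdot = 0.0

    def step(self, f_human: float, x_ref: float) -> float:
        # M x'' + D x' + K (x - x_ref) = f_human, semi-implicit Euler.
        xddot = (f_human + self.K * (x_ref - self.x)
                 - self.D * self.xdot) / self.M
        self.xdot += xddot * self.dt
        self.x += self.xdot * self.dt
        return self.x     # x_d handed to the robot's motion controller
```

Under MBA the reference $x_{ref}$ would be modulated by the mirrored CoP, while under FSA it stays fixed so the stiffness term acts as the restoring spring.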

Experimental results indicate MBA delivers lower maximal deviations and smoother recovery forces than benchmark approaches, with higher user trust and reliability ratings.

6. Empirical Performance and Comparative Evaluation

Experimental validations reveal that Moby outperforms wheelchairs in task completion time for toilet transfers (241 s vs. 304 s), achieving performance near assisted walking (241 s vs. 226 s) but without requiring caregiver intervention (Manríquez-Cisterna et al., 27 Aug 2025). NASA-TLX subjective workload analysis shows markedly lower mental and physical demand scores for Moby (mental: 27 vs. 93 for wheelchair).

Balance assistance strategies in whole-body collaborative robots yield gentler corrective forces and improved recovery metrics, as measured by CoP deviation, force amplitude, and user questionnaires (Ruiz-Ruiz et al., 2021).

Hands-free control interfaces achieve competitive smoothness (jerk metrics) compared to joystick, although with modest increases in task time; user feedback indicates enhanced anthropomorphic and safety perception (Chen et al., 2020, Chen et al., 2023).

7. Adaptation, Versatility, and Future Directions

Moby’s design enables dynamic adaptation through personalized parameterization of mechanical geometry, sensor calibration, and control mappings. Integration with smart home assistants and multi-modal sensors is underway to expand the system’s capabilities beyond mobility support, aiming for conversational assistance and contextual guidance (Manríquez-Cisterna et al., 27 Aug 2025). Future research focuses on:

  • Enhanced adaptability for users with limited dexterity or cognition (shared autonomy, intent recognition).
  • Longitudinal field trials in home and clinical environments to assess reliability and individual variability.
  • Optimization of sensor fusion, learning-based perception, and natural language control.

A plausible implication is that standing support mobility robots such as Moby will progressively supplant conventional aids for elderly daily living, yielding higher autonomy, safety, and engagement, contingent on continued refinement of compliance (mechanical and algorithmic), navigation robustness, and user-centric design.
