
Fusing uncalibrated IMUs and handheld smartphone video to reconstruct knee kinematics (2405.17368v1)

Published 27 May 2024 in cs.CV

Abstract: Video and wearable sensor data provide complementary information about human movement. Video provides a holistic understanding of the entire body in the world while wearable sensors provide high-resolution measurements of specific body segments. A robust method to fuse these modalities and obtain biomechanically accurate kinematics would have substantial utility for clinical assessment and monitoring. While multiple video-sensor fusion methods exist, most assume that a time-intensive, and often brittle, sensor-body calibration process has already been performed. In this work, we present a method to combine handheld smartphone video and uncalibrated wearable sensor data at their full temporal resolution. Our monocular, video-only, biomechanical reconstruction already performs well, with only several degrees of error at the knee during walking compared to markerless motion capture. Reconstructing from a fusion of video and wearable sensor data further reduces this error. We validate this in a mixture of people with no gait impairments, lower limb prosthesis users, and individuals with a history of stroke. We also show that sensor data allows tracking through periods of visual occlusion.

Citations (2)

Summary

  • The paper introduces a calibration-free method that fuses smartphone video with uncalibrated IMU data to reconstruct knee angles.
  • It employs a biomechanically grounded forward kinematic model in MuJoCo with 3D keypoints from the MeTRAbs-ACAE model to achieve a median mean adjusted MAE of 2.9°.
  • The approach streamlines clinical assessment by eliminating tedious sensor calibration, enhancing motion tracking for rehabilitation and real-world applications.

Overview of "Fusing Uncalibrated IMUs and Handheld Smartphone Video to Reconstruct Knee Kinematics"

The paper presents a novel method for reconstructing knee kinematics by fusing handheld smartphone video with data from uncalibrated Inertial Measurement Units (IMUs). The authors aim to address the limitations of clinical kinematic assessment, which often requires cumbersome setup and calibration processes.

Methodology

The authors extend their previous differentiable biomechanics approach to fuse uncalibrated IMU data with smartphone video. The system utilizes a biomechanically grounded forward kinematic model implemented in MuJoCo. Here's a concise breakdown of the methodology:

  • Data Collection: A cohort of 38 individuals, including unimpaired controls, lower limb prosthesis users, and stroke survivors, was recorded with both a multiview markerless motion capture system and a Portable Biomechanics Laboratory (PBL), which integrates a Samsung Galaxy S20 smartphone and custom IMUs.
  • Biomechanical Fusion: The authors employ an implicit function to optimize kinematic trajectories from monocular video, leveraging 3D keypoints detected via the MeTRAbs-ACAE model. This is fused with sensor data to improve accuracy, particularly in reconstructing knee angles.
  • Sensor Calibration: A novel calibration solution allows arbitrary sensor placement without any pre-calibration, removing the traditionally tedious and often brittle sensor-body calibration step.
  • Evaluation and Results: The fused system produced mean absolute errors (MAE) for knee angles that were significantly lower than video alone, demonstrating robustness across control and clinical populations. During artificial occlusion tests, the fusion retained accuracy, unlike video-only methods.
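The bullet points above describe an optimization that jointly matches video-derived poses and IMU measurements. A toy single-joint sketch of that idea, assuming one knee angle per frame and gyro rates already resolved about the knee flexion axis (the function name, weights, and solver are illustrative assumptions, not the authors' implementation, which optimizes a full-body model in MuJoCo):

```python
import numpy as np

def fit_knee_trajectory(keypoint_angles, gyro_rates, dt,
                        w_video=1.0, w_imu=0.5, lr=0.02, iters=2000):
    """Toy video-IMU fusion: find a knee-angle trajectory theta (radians)
    whose values track the video-derived angles and whose finite-difference
    velocity tracks the gyro angular rates. All names are illustrative.

    keypoint_angles : angles estimated from 3D video keypoints, shape (T,)
    gyro_rates      : angular velocity about the knee axis, shape (T-1,)
    dt              : time between frames in seconds
    """
    theta = keypoint_angles.copy()        # initialize from video alone
    T = len(theta)
    for _ in range(iters):
        # gradient of the video term: w_video * mean((theta - keypoints)^2)
        grad = 2.0 * w_video * (theta - keypoint_angles) / T
        # residual of the IMU term: finite-difference velocity vs gyro rate
        r = np.diff(theta) / dt - gyro_rates
        g_imu = np.zeros(T)
        g_imu[:-1] -= r / dt              # adjoint of the difference operator
        g_imu[1:] += r / dt
        grad += 2.0 * w_imu * g_imu / (T - 1)
        theta -= lr * grad                # plain gradient descent
    return theta
```

With clean gyro rates and noisy keypoints, the fused trajectory lands closer to the truth than video alone, mirroring the paper's qualitative finding that sensor data reduces the error of the video-only reconstruction.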

Numerical Results and Implications

The fusion approach achieved a median mean adjusted MAE of 2.9 degrees for knee angles, outperforming video-only estimates. Pearson’s correlation coefficients showed significant improvement with sensor integration, indicating enhanced temporal consistency in kinematic tracking.
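For concreteness, the two metrics quoted above can be sketched as follows; reading "mean adjusted MAE" as the mean absolute error after removing the constant offset between estimated and reference traces is an assumption, as are the function names:

```python
import numpy as np

def mean_adjusted_mae(estimate, reference):
    """MAE after subtracting the mean offset between the two traces, so a
    constant bias in the estimate is not penalized (assumed reading of the
    paper's 'mean adjusted MAE'). Inputs in degrees, shape (T,)."""
    offset = np.mean(estimate - reference)
    return float(np.mean(np.abs(estimate - offset - reference)))

def pearson_r(a, b):
    """Pearson correlation between two kinematic traces, a measure of
    how well the estimate tracks the reference waveform over time."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))
```

Because the offset is removed, mean adjusted MAE isolates waveform-shape error, while Pearson's r captures temporal consistency; a fused estimate can improve both even when the video-only estimate carries a constant bias.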

Implications for Clinical Practice

This method eliminates the need for extensive body-sensor calibration, making it accessible for routine clinical use. The reduction in setup complexity and improved accuracy potentially enable widespread adoption in rehabilitation monitoring and assessment.

Future Directions

The integration of monocular video and IMU data promises advancements in tracking human motion in real-world settings. Future improvements might include refining occlusion handling techniques and expanding the system to track other joint movements or include additional sensor types.

In conclusion, the paper contributes a significant improvement to the field of kinematic analysis by presenting an accessible and efficient method for combining smartphone video and uncalibrated sensor data. This could set a new standard in clinical settings, enhancing the ability to conduct accurate motion analysis with minimal technical barriers.
