
PresSim: An End-to-end Framework for Dynamic Ground Pressure Profile Generation from Monocular Videos Using Physics-based 3D Simulation (2302.00391v1)

Published 1 Feb 2023 in cs.CV, cs.AI, and cs.GR

Abstract: Ground pressure exerted by the human body is a valuable source of information for human activity recognition (HAR) in unobtrusive pervasive sensing. Because collecting data from pressure sensors to develop HAR solutions requires significant resources and effort, we present PresSim, a novel end-to-end framework that synthesizes sensor data from videos of human activities, reducing this effort substantially. PresSim adopts a three-stage process: first, extract 3D activity information from videos with computer vision architectures; second, simulate floor-mesh deformation profiles from the 3D activity information using a gravity-included physics simulation; finally, generate the simulated pressure sensor data with deep learning models. We explored two approaches to obtaining the 3D activity information: inverse kinematics with mesh re-targeting, and volumetric pose and shape estimation. We validated PresSim with an experimental setup in which a monocular camera provided input and a pressure-sensing fitness mat (80x28 spatial resolution) provided the sensor ground truth, where nine participants performed a set of predefined yoga sequences.
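The three-stage pipeline described in the abstract can be sketched as below. This is a minimal illustrative stand-in, not the paper's implementation: the function names, the toy contact heuristic, and the fixed linear gain are all assumptions; the real system uses learned pose estimators, a physics-based floor-mesh simulation, and a trained deep model for the final stage.

```python
# Hypothetical sketch of the PresSim three-stage pipeline.
# All functions and constants here are illustrative placeholders.

GRID_H, GRID_W = 80, 28  # spatial resolution of the pressure mat (from the paper)

def extract_3d_pose(frame):
    """Stage 1 stand-in: return 3D joint positions for one video frame.
    The paper explores inverse kinematics with mesh re-targeting and
    volumetric pose/shape estimation; here we return fixed toy joints."""
    # Two 'feet' joints resting on the floor plane (y = 0), positions in [0, 1).
    return [(0.5, 0.0, 0.3), (0.7, 0.0, 0.3)]

def simulate_floor_deformation(joints):
    """Stage 2 stand-in: map floor-contacting joints onto a deformation grid.
    The paper instead runs a gravity-included physics simulation of a floor mesh."""
    grid = [[0.0] * GRID_W for _ in range(GRID_H)]
    for x, y, z in joints:
        if y < 0.05:  # crude contact test: joint is near the floor plane
            r = min(GRID_H - 1, int(x * GRID_H))
            c = min(GRID_W - 1, int(z * GRID_W))
            grid[r][c] += 1.0  # unit indentation per contact point
    return grid

def pressure_from_deformation(grid):
    """Stage 3 stand-in: the paper trains a deep model to map deformation
    profiles to sensor readings; here a fixed linear gain suffices."""
    GAIN = 10.0
    return [[d * GAIN for d in row] for row in grid]

def pressim(frame):
    """End-to-end: video frame -> simulated 80x28 pressure frame."""
    joints = extract_3d_pose(frame)
    deformation = simulate_floor_deformation(joints)
    return pressure_from_deformation(deformation)
```

The value of the staged design is that each component can be swapped independently, e.g. replacing the pose extractor without retraining the deformation-to-pressure model.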
