THÖR-MAGNI: A Large-scale Indoor Motion Capture Recording of Human Movement and Robot Interaction (2403.09285v1)

Published 14 Mar 2024 in cs.RO

Abstract: We present a new large dataset of indoor human and robot navigation and interaction, called THÖR-MAGNI, that is designed to facilitate research on social navigation: e.g., modelling and predicting human motion, analyzing goal-oriented interactions between humans and robots, and investigating visual attention in a social interaction context. THÖR-MAGNI was created to fill a gap in available datasets for human motion analysis and HRI. This gap is characterized by a lack of comprehensive inclusion of exogenous factors and essential target agent cues, which hinders the development of robust models capable of capturing the relationship between contextual cues and human behavior in different scenarios. Unlike existing datasets, THÖR-MAGNI includes a broader set of contextual features and offers multiple scenario variations to facilitate factor isolation. The dataset includes many social human-human and human-robot interaction scenarios, rich context annotations, and multi-modal data, such as walking trajectories, gaze tracking data, and lidar and camera streams recorded from a mobile robot. We also provide a set of tools for visualization and processing of the recorded data. THÖR-MAGNI is, to the best of our knowledge, unique in the amount and diversity of sensor data collected in a contextualized and socially dynamic environment, capturing natural human-robot interactions.
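The abstract mentions multi-modal recordings (walking trajectories, gaze tracking, lidar and camera streams) together with tools for visualization and processing. As a minimal illustrative sketch only, and not the authors' released tooling, the snippet below shows one plausible way to load and plot per-agent 2D trajectories from such a dataset, assuming they are exported as a CSV file; the file name and the columns frame, agent_id, x, and y are hypothetical and not taken from the paper.

# Minimal sketch (not the official THOR-MAGNI tooling). Assumes per-frame
# agent positions are stored in a CSV with hypothetical columns:
# 'frame', 'agent_id', 'x', 'y' (positions in metres, world frame).
import pandas as pd
import matplotlib.pyplot as plt

def load_trajectories(csv_path):
    """Load per-frame agent positions from a CSV export (assumed schema)."""
    df = pd.read_csv(csv_path)
    # Sort so each agent's samples are in temporal order.
    return df.sort_values(["agent_id", "frame"])

def plot_trajectories(df):
    """Draw each agent's 2D path for a quick overview of one recording."""
    fig, ax = plt.subplots(figsize=(6, 6))
    for _, track in df.groupby("agent_id"):
        ax.plot(track["x"], track["y"], linewidth=1)
    ax.set_xlabel("x [m]")
    ax.set_ylabel("y [m]")
    ax.set_aspect("equal")
    plt.show()

if __name__ == "__main__":
    # Hypothetical file name, for illustration only.
    trajectories = load_trajectories("thor_magni_scenario1_run1.csv")
    plot_trajectories(trajectories)

For the actual file layout, column names, and the provided visualization and processing tools, consult the dataset documentation accompanying the paper.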
