HOH: Markerless Multimodal Human-Object-Human Handover Dataset with Large Object Count (2310.00723v6)
Abstract: We present the HOH (Human-Object-Human) Handover Dataset, a large-object-count dataset with 136 objects, to accelerate data-driven research on handover studies, human-robot handover implementation, and AI-based estimation of handover parameters from 2D and 3D data of person-to-person interactions. HOH contains multi-view RGB and depth data, skeletons, fused point clouds, grasp type and handedness labels, 2D and 3D segmentations of the object, giver hand, and receiver hand, giver and receiver comfort ratings, and paired object metadata with aligned 3D models for 2,720 handover interactions spanning 136 objects and 20 giver-receiver pairs (40 when role reversal is counted), drawn from 40 participants. We also report experimental results of neural networks trained on HOH to perform grasp, orientation, and trajectory prediction. As the only fully markerless handover capture dataset, HOH represents natural human-human handover interactions, avoiding the limitations of marker-based datasets, which require specialized suits for body tracking and lack high-resolution hand tracking. To date, HOH is the largest handover dataset in number of objects, participants, pairs with role reversal accounted for, and total interactions captured.
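To make the per-interaction contents listed above concrete, the following is a minimal sketch of how one handover record could be organized. The field names, types, and file layout are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical sketch of one HOH handover record, inferred from the modalities
# named in the abstract; field names and types are assumptions, not the
# dataset's published schema.
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List


@dataclass
class HandoverInteraction:
    """One giver-receiver handover of a single object."""
    object_id: str        # one of the 136 objects
    pair_id: str          # one of the 20 giver-receiver pairs
    giver_id: str
    receiver_id: str
    grasp_type: str       # grasp-type label
    handedness: str       # e.g. "left" or "right"
    giver_comfort: int    # giver comfort rating
    receiver_comfort: int  # receiver comfort rating
    # Per-camera frame sequences (multi-view RGB and depth).
    rgb_frames: Dict[str, List[Path]] = field(default_factory=dict)
    depth_frames: Dict[str, List[Path]] = field(default_factory=dict)
    # Per-frame skeletons and fused point clouds.
    skeletons: List[Path] = field(default_factory=list)
    fused_point_clouds: List[Path] = field(default_factory=list)
    # 2D and 3D segmentations keyed by "object", "giver_hand", "receiver_hand".
    segmentations_2d: Dict[str, List[Path]] = field(default_factory=dict)
    segmentations_3d: Dict[str, List[Path]] = field(default_factory=dict)
    # Aligned 3D model of the handed-over object.
    object_model: Path = Path()


if __name__ == "__main__":
    # 2,720 interactions is consistent with 136 objects x 20 pairs.
    rec = HandoverInteraction(
        object_id="obj_001", pair_id="pair_03",
        giver_id="P05", receiver_id="P06",
        grasp_type="power", handedness="right",
        giver_comfort=5, receiver_comfort=4,
    )
    print(rec.object_id, rec.pair_id)
```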