MOTPose: Multi-object 6D Pose Estimation for Dynamic Video Sequences using Attention-based Temporal Fusion (2403.09309v1)
Abstract: Cluttered bin-picking environments are challenging for pose estimation models. Despite the impressive progress enabled by deep learning, single-view RGB pose estimation models perform poorly in cluttered, dynamic environments. Incorporating the rich temporal information contained in video of such scenes has the potential to enhance a model's ability to cope with the adverse effects of occlusion and the dynamic nature of the environment. Moreover, joint object detection and pose estimation models are better suited to leverage the co-dependent nature of the two tasks to improve the accuracy of both. To this end, we propose attention-based temporal fusion for multi-object 6D pose estimation that accumulates information across multiple frames of a video sequence. Our MOTPose method takes a sequence of images as input and performs joint object detection and pose estimation for all objects in one forward pass. It learns to aggregate both object embeddings and object parameters over multiple time steps using cross-attention-based fusion modules. We evaluate our method on the physically realistic cluttered bin-picking dataset SynPick and on the YCB-Video dataset, demonstrating improved pose estimation accuracy as well as better object detection accuracy.
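The abstract states that MOTPose aggregates per-object embeddings across time steps with cross-attention-based fusion modules, but the implementation details are not given here. The sketch below is only a minimal illustration of that general idea, assuming a DETR-style decoder that yields one embedding per object query at every frame; the module name `TemporalFusion`, the dimensions, and the residual/normalization layout are all hypothetical and not the authors' code.

```python
# Minimal sketch (not the authors' implementation) of cross-attention temporal
# fusion: current-frame object query embeddings attend to embeddings buffered
# from earlier frames. All names and shapes are hypothetical.
import torch
import torch.nn as nn


class TemporalFusion(nn.Module):
    """Fuse current-frame object embeddings with past-frame embeddings via
    multi-head cross-attention (queries = current frame, keys/values = memory)."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, current: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # current: (B, Q, D)   object embeddings of the current frame
        # memory:  (B, T*Q, D) embeddings accumulated over T previous frames
        fused, _ = self.attn(query=current, key=memory, value=memory)
        return self.norm(current + fused)  # residual connection


if __name__ == "__main__":
    B, T, Q, D = 2, 4, 20, 256          # batch, past frames, queries, embed dim
    fusion = TemporalFusion(dim=D)
    current = torch.randn(B, Q, D)
    memory = torch.randn(B, T * Q, D)
    print(fusion(current, memory).shape)  # torch.Size([2, 20, 256])
```

Using the current frame's embeddings as attention queries lets each object pull in as much or as little past information as is useful, which is one plausible way to realize the accumulation across frames described in the abstract.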