Towards Unified 3D Object Detection via Algorithm and Data Unification (2402.18573v5)

Published 28 Feb 2024 in cs.CV

Abstract: Realizing unified 3D object detection, covering both indoor and outdoor scenes, holds great importance in applications like robot navigation. However, training models on data from diverse scenarios is challenging because of their significantly distinct characteristics, e.g., diverse geometry properties and heterogeneous domain distributions. In this work, we address these challenges from two perspectives: the algorithm perspective and the data perspective. From the algorithm perspective, we first build a monocular 3D object detector based on the bird's-eye-view (BEV) detection paradigm, where explicit feature projection helps resolve geometry learning ambiguity. In this detector, we split the classical BEV detection architecture into two stages and propose an uneven BEV grid design to handle the convergence instability caused by geometry differences between scenarios. In addition, we develop a sparse BEV feature projection strategy to reduce computational cost and a unified domain alignment method to handle heterogeneous domains. From the data perspective, we incorporate depth information to improve training robustness. Specifically, we build the first unified multi-modal 3D object detection benchmark, MM-Omni3D, and extend the aforementioned monocular detector to a multi-modal version, which is the first unified multi-modal 3D object detector. We name the monocular and multi-modal detectors UniMODE and MM-UniMODE, respectively. The experimental results reveal several insightful findings highlighting the benefits of multi-modal data and confirm the effectiveness of all the proposed strategies.
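The uneven BEV grid mentioned in the abstract addresses the large range mismatch between indoor scenes (objects within a few meters) and outdoor scenes (objects tens of meters away). The sketch below is a hypothetical illustration of that general idea, not the paper's actual parameterization: depth bin edges are spaced non-uniformly so cells are fine near the camera and coarse far away, letting one grid cover both scene types. The function name and the power-law spacing are assumptions for illustration only.

```python
import numpy as np

def uneven_bev_depth_bins(num_bins=64, min_depth=0.5, max_depth=60.0, power=2.0):
    """Hypothetical non-uniform depth binning for a BEV grid.

    Bins are narrow near the camera (indoor-scale geometry) and
    progressively wider at range (outdoor-scale geometry).
    """
    t = np.linspace(0.0, 1.0, num_bins + 1)
    # Power-law warp of uniform samples produces increasing bin widths.
    edges = min_depth + (max_depth - min_depth) * t ** power
    return edges

edges = uneven_bev_depth_bins()
widths = np.diff(edges)
# The nearest bin is much narrower than the farthest one.
print(widths[0], widths[-1])
```

With `power=1.0` this reduces to an ordinary uniform grid, so the warp exponent is the single knob trading near-field resolution against far-field coverage.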

