
DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries (2110.06922v1)

Published 13 Oct 2021 in cs.CV, cs.AI, cs.LG, and cs.RO

Abstract: We introduce a framework for multi-camera 3D object detection. In contrast to existing works, which estimate 3D bounding boxes directly from monocular images or use depth prediction networks to generate input for 3D object detection from 2D information, our method manipulates predictions directly in 3D space. Our architecture extracts 2D features from multiple camera images and then uses a sparse set of 3D object queries to index into these 2D features, linking 3D positions to multi-view images using camera transformation matrices. Finally, our model makes a bounding box prediction per object query, using a set-to-set loss to measure the discrepancy between the ground-truth and the prediction. This top-down approach outperforms its bottom-up counterpart in which object bounding box prediction follows per-pixel depth estimation, since it does not suffer from the compounding error introduced by a depth prediction model. Moreover, our method does not require post-processing such as non-maximum suppression, dramatically improving inference speed. We achieve state-of-the-art performance on the nuScenes autonomous driving benchmark.

Authors (6)
  1. Yue Wang (678 papers)
  2. Vitor Guizilini (47 papers)
  3. Tianyuan Zhang (46 papers)
  4. Yilun Wang (39 papers)
  5. Hang Zhao (156 papers)
  6. Justin Solomon (86 papers)
Citations (601)

Summary

  • The paper introduces a novel framework that directly predicts 3D bounding boxes from multi-view images without relying on explicit depth estimation.
  • Its 3D-to-2D query mechanism leverages camera matrices and multi-head self-attention to align 2D features with 3D spatial predictions.
  • On the nuScenes benchmark, DETR3D significantly outperforms traditional methods, enhancing both accuracy and inference speed for autonomous driving.

DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries

The paper "DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries" presents an advanced framework for multi-camera 3D object detection. In a departure from traditional methods, which typically rely on depth prediction networks or directly estimate 3D bounding boxes from single images, the authors propose a novel architecture that manipulates predictions directly in 3D space. The performance of this system is validated on the nuScenes autonomous driving benchmark, achieving state-of-the-art results.

Methodology

The DETR3D framework efficiently bridges 2D observations and 3D spatial predictions, eliminating the need for explicit depth prediction modules. The architecture consists of several key components, each illustrated with a short code sketch after the list:

  1. 2D Feature Extraction: The framework begins by employing a shared ResNet backbone along with a Feature Pyramid Network (FPN) to process a set of multi-view images, extracting 2D features across multiple scales.
  2. 3D-to-2D Query Mechanism: It introduces a sparse set of 3D object queries that index into these 2D features, effectively linking 3D positions with multi-view images using camera transformation matrices. This connection facilitates extracting image features that are relevant for 3D object prediction.
  3. Attention-based Refinement: The model incorporates a multi-head self-attention layer to allow features gathered by the queries to interact, refining the object predictions iteratively.
  4. Direct 3D Box Prediction: By bypassing explicit depth reconstruction, the framework directly predicts 3D bounding boxes, using a set-to-set loss consistent with the DETR methodology.
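
For step 1, the following is a minimal PyTorch sketch of the shared backbone-plus-FPN feature extraction. It assumes torchvision 0.13 or newer for the `resnet_fpn_backbone` helper; the six images match nuScenes' surround-view cameras, while the input resolution and backbone variant are illustrative rather than the paper's exact configuration.

```python
# A minimal sketch of multi-view 2D feature extraction (step 1),
# assuming torchvision >= 0.13 for the resnet_fpn_backbone helper.
import torch
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

backbone = resnet_fpn_backbone(backbone_name="resnet101", weights=None)
images = torch.randn(6, 3, 448, 800)   # one frame: six surround-view cameras
features = backbone(images)            # OrderedDict of FPN levels
print({name: tuple(f.shape) for name, f in features.items()})
# each level has shape (6, 256, H_l, W_l); weights are shared across cameras
```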
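Step 2 is the heart of the method. The sketch below shows one way the 3D-to-2D feature gathering could look: each query's 3D reference point is projected into every camera with its projection matrix, and bilinear sampling gathers the feature at that location. The function and variable names are hypothetical, and details of the actual implementation (sigmoid-space reference points, multi-level sampling, per-level coordinate scaling) are omitted.

```python
# A minimal sketch of the 3D-to-2D query mechanism (step 2); names and
# conventions are illustrative, not the paper's reference implementation.
import torch
import torch.nn.functional as F

def sample_image_features(ref_points_3d, feat_maps, cam_proj):
    """Gather per-query image features by projecting 3D points into each view.

    ref_points_3d: (Q, 3) query reference points in the ego frame.
    feat_maps:     (N, C, H, W) one feature level from N cameras.
    cam_proj:      (N, 3, 4) projection matrices, assumed to map ego-frame
                   points into this feature level's pixel grid.
    Returns sampled features (N, Q, C) and a validity mask (N, Q).
    """
    N, C, H, W = feat_maps.shape
    Q = ref_points_3d.shape[0]
    pts_h = torch.cat([ref_points_3d, ref_points_3d.new_ones(Q, 1)], dim=-1)
    proj = torch.einsum("nij,qj->nqi", cam_proj, pts_h)   # (N, Q, 3)
    depth = proj[..., 2:3].clamp(min=1e-5)
    uv = proj[..., :2] / depth                            # pixel coordinates
    in_front = proj[..., 2] > 0                           # ahead of the camera
    # grid_sample expects coordinates normalized to [-1, 1]
    grid = torch.stack([uv[..., 0] / (W - 1), uv[..., 1] / (H - 1)], dim=-1) * 2 - 1
    in_image = (grid.abs() <= 1).all(dim=-1)
    valid = in_front & in_image                           # (N, Q)
    sampled = F.grid_sample(feat_maps, grid.unsqueeze(2), align_corners=True)
    return sampled.squeeze(-1).permute(0, 2, 1), valid    # (N, Q, C)
```

Roughly speaking, the paper then averages the features sampled across cameras and feature levels, masking out invalid projections, before folding them back into the query embeddings for the next refinement round.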
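Step 3 amounts to standard multi-head self-attention over the query embeddings. The dimensions below (900 queries, 256 channels) are typical DETR-style values assumed for illustration, not figures quoted from the paper.

```python
# A minimal sketch of query refinement via self-attention (step 3).
import torch
import torch.nn as nn

object_queries = torch.randn(900, 1, 256)  # (num_queries, batch, embed_dim)
self_attn = nn.MultiheadAttention(embed_dim=256, num_heads=8)
refined_queries, _ = self_attn(object_queries, object_queries, object_queries)
# refined_queries feed the next refinement round and the box/class heads
```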

A significant advantage of this top-down approach is the dramatic improvement in inference speed due to the elimination of post-processing steps like non-maximum suppression (NMS).
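
To make the NMS-free property concrete, here is a minimal sketch of DETR-style set-to-set assignment using SciPy's Hungarian solver. The cost used here, negative class probability plus an L1 box distance, is an illustrative stand-in; the paper's exact cost terms may differ.

```python
# A minimal sketch of set-to-set (bipartite) matching between predictions
# and ground truth, with an assumed cost function.
import torch
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred_boxes, pred_logits, gt_boxes, gt_labels):
    """One-to-one assignment between Q predictions and G ground-truth boxes.

    pred_boxes:  (Q, D) predicted box parameters.
    pred_logits: (Q, K) class logits.
    gt_boxes:    (G, D) ground-truth boxes.
    gt_labels:   (G,)  ground-truth class indices.
    """
    prob = pred_logits.softmax(-1)                     # (Q, K)
    cost_cls = -prob[:, gt_labels]                     # (Q, G) class cost
    cost_box = torch.cdist(pred_boxes, gt_boxes, p=1)  # (Q, G) L1 box cost
    cost = (cost_cls + cost_box).cpu().numpy()
    rows, cols = linear_sum_assignment(cost)           # optimal assignment
    return rows, cols  # matched prediction / ground-truth indices
```

Because each ground-truth box is matched to exactly one query during training, duplicate predictions are explicitly penalized, which is why no NMS pass is needed at inference time.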

Results and Implications

Experiments conducted on the nuScenes dataset demonstrate the efficacy of DETR3D in improving the accuracy of 3D object detection. The framework notably outperforms existing methods such as CenterNet and FCOS3D, particularly in settings involving camera overlap regions where object detection is challenging. It also demonstrates robustness against depth prediction errors, which are a notable source of compounding errors in conventional methods.

The DETR3D framework opens several avenues for future exploration. The query-based detection head is versatile and could be adapted to integrate other sensor modalities such as LiDAR or RADAR, potentially enhancing detection performance and robustness across varied environments. Moreover, its principles might be applicable to other domains beyond autonomous driving, including indoor navigation and robotic manipulation.

In conclusion, DETR3D represents a significant step forward in 3D object detection by leveraging multi-view image queries in a computationally efficient manner, paving the way for more precise and faster autonomous systems. By removing the reliance on intermediate depth representations, it points toward integrated architectures that holistically address the challenges of 3D perception in real-world scenarios.