
Bridging the View Disparity Between Radar and Camera Features for Multi-modal Fusion 3D Object Detection (2208.12079v2)

Published 25 Aug 2022 in cs.CV

Abstract: Environmental perception with the multi-modal fusion of radar and camera is crucial in autonomous driving to increase accuracy, completeness, and robustness. This paper focuses on utilizing millimeter-wave (MMW) radar and camera sensor fusion for 3D object detection. A novel method that realizes feature-level fusion under the bird's-eye view (BEV) for a better feature representation is proposed. Firstly, radar points are augmented with temporal accumulation and sent to a spatial-temporal encoder for radar feature extraction. Meanwhile, multi-scale 2D image features that adapt to various spatial scales are obtained by an image backbone and neck. Then, image features are transformed to BEV with the designed view transformer. In addition, this work fuses the multi-modal features with a two-stage fusion model whose stages are called point-fusion and ROI-fusion. Finally, a detection head regresses object categories and 3D locations. Experimental results demonstrate that the proposed method achieves state-of-the-art (SOTA) performance on the most crucial detection metrics: mean average precision (mAP) and nuScenes detection score (NDS) on the challenging nuScenes dataset.
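
The abstract describes a five-stage architecture: a radar branch with temporal accumulation and a spatial-temporal encoder, an image branch with backbone and neck, a view transformer lifting image features to BEV, two-stage point-fusion/ROI-fusion, and a detection head. The sketch below is a minimal, runnable PyTorch illustration of that data flow, not the authors' implementation: every module, channel count, and shape here is an assumption, and the view transformer is replaced by a crude learned projection with resizing rather than the paper's geometry-aware design.

```python
# Illustrative sketch of the radar-camera BEV fusion pipeline from the
# abstract. All names (radar_encoder, view_transformer, point_fusion,
# roi_fusion, ...) are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RadarCameraBEVFusion(nn.Module):
    """Feature-level radar-camera fusion in BEV (toy stand-in modules)."""

    def __init__(self, radar_in=8, img_in=3, bev_c=64,
                 bev_hw=(128, 128), num_classes=10):
        super().__init__()
        self.bev_hw = bev_hw
        # 1) Spatial-temporal encoder for temporally accumulated radar
        #    points, assumed already rasterized into a BEV grid.
        self.radar_encoder = nn.Sequential(
            nn.Conv2d(radar_in, bev_c, 3, padding=1), nn.ReLU(),
            nn.Conv2d(bev_c, bev_c, 3, padding=1), nn.ReLU(),
        )
        # 2) Image backbone + neck (stand-in for e.g. ResNet + FPN).
        self.img_backbone = nn.Sequential(
            nn.Conv2d(img_in, bev_c, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(bev_c, bev_c, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 3) View transformer: the paper lifts perspective features to BEV;
        #    a 1x1 conv plus resize keeps this sketch self-contained.
        self.view_transformer = nn.Conv2d(bev_c, bev_c, 1)
        # 4a) Point-fusion: element-wise fusion of the two aligned BEV maps.
        self.point_fusion = nn.Conv2d(2 * bev_c, bev_c, 1)
        # 4b) ROI-fusion: second-stage refinement (proposal pooling omitted).
        self.roi_fusion = nn.Conv2d(bev_c, bev_c, 3, padding=1)
        # 5) Detection head: per-cell class scores and a 7-DoF box
        #    regression (x, y, z, w, l, h, yaw).
        self.cls_head = nn.Conv2d(bev_c, num_classes, 1)
        self.reg_head = nn.Conv2d(bev_c, 7, 1)

    def forward(self, radar_bev, image):
        r = self.radar_encoder(radar_bev)              # radar features in BEV
        f = self.img_backbone(image)                   # perspective features
        f = F.interpolate(f, size=self.bev_hw)         # align grids (toy warp)
        c = self.view_transformer(f)                   # "BEV" camera features
        fused = self.point_fusion(torch.cat([r, c], 1))  # stage 1
        fused = fused + self.roi_fusion(fused)           # stage 2
        return self.cls_head(fused), self.reg_head(fused)


if __name__ == "__main__":
    model = RadarCameraBEVFusion()
    radar = torch.randn(1, 8, 128, 128)   # rasterized, temporally stacked radar
    image = torch.randn(1, 3, 256, 704)   # camera frame
    cls, box = model(radar, image)
    print(cls.shape, box.shape)           # (1, 10, 128, 128), (1, 7, 128, 128)
```

The key design point the abstract emphasizes is that fusion happens only after both modalities are expressed in the same BEV frame, which is why the camera branch needs an explicit view transformation before the point-fusion stage.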

Authors (6)
  1. Taohua Zhou (2 papers)
  2. Yining Shi (21 papers)
  3. Junjie Chen (89 papers)
  4. Kun Jiang (128 papers)
  5. Mengmeng Yang (35 papers)
  6. Diange Yang (37 papers)
Citations (62)
