Dual Radar: A Multi-modal Dataset with Dual 4D Radar for Autonomous Driving (2310.07602v3)

Published 11 Oct 2023 in cs.CV

Abstract: Radar has stronger adaptability than the widely adopted cameras and LiDARs for environmental perception in adverse autonomous-driving scenarios. Compared with commonly used 3D radars, the latest 4D radars offer precise vertical resolution and higher point cloud density, making them highly promising sensors for perception in complex environments. However, because their noise is much higher than LiDAR's, manufacturers choose different filtering strategies, resulting in an inverse relationship between noise level and point cloud density. There is still a lack of comparative analysis on which strategy benefits deep learning-based perception algorithms for autonomous driving. One of the main reasons is that current datasets adopt only one type of 4D radar, making it difficult to compare different 4D radars in the same scene. Therefore, in this paper, we introduce a novel large-scale multi-modal dataset featuring, for the first time, two types of 4D radars captured simultaneously. This dataset enables further research into effective 4D radar perception algorithms. Our dataset consists of 151 consecutive sequences, most of which last 20 seconds, and contains 10,007 meticulously synchronized and annotated frames. Moreover, our dataset captures a variety of challenging driving scenarios, including many road conditions, weather conditions, and nighttime and daytime periods with different lighting intensities. Our dataset annotates consecutive frames, which can be applied to 3D object detection and tracking, and also supports the study of multi-modal tasks. We experimentally validate our dataset, providing valuable results for studying different types of 4D radars. This dataset is released at https://github.com/adept-thu/Dual-Radar.

Authors (14)
  1. Xinyu Zhang (296 papers)
  2. Li Wang (470 papers)
  3. Jian Chen (257 papers)
  4. Cheng Fang (32 papers)
  5. Lei Yang (372 papers)
  6. Ziying Song (23 papers)
  7. Guangqi Yang (2 papers)
  8. Yichen Wang (61 papers)
  9. Xiaofei Zhang (36 papers)
  10. Jun Li (778 papers)
  11. Zhiwei Li (66 papers)
  12. Qingshan Yang (4 papers)
  13. Zhenlin Zhang (8 papers)
  14. Shuzhi Sam Ge (23 papers)
Citations (14)

Summary

Dual Radar: A Multi-modal Dataset with Dual 4D Radar for Autonomous Driving

The paper "Dual Radar: A Multi-modal Dataset with Dual 4D Radar for Autonomous Driving" details the development of a comprehensive multi-modal dataset dedicated to enhancing environmental perception in autonomous vehicles. This paper reinforces the comparative paucity of 4D radar datasets and introduces a dataset designed to facilitate rigorous analysis of dual 4D radar systems, addressing the need for robust comparative studies and formulation of perception algorithms using 4D radar data.

Overview and Dataset Characteristics

4D radars offer significant potential for autonomous driving, particularly due to their enhanced vertical resolution and point cloud density compared with traditional 3D radars. The authors recognize the challenges posed by the noise inherent in 4D radar data, emphasizing the trade-off between noise level and point cloud density. They also underscore radars' operational strengths, such as resilience in adverse weather, which make them a valuable complement to cameras and LiDARs, both of which have notable limitations under specific environmental conditions.
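
To make this noise-density trade-off concrete, the brief sketch below shows how a single power/confidence threshold applied to a raw radar point cloud trades point density against noise: a stricter filter yields a sparser but cleaner cloud, a looser one a denser but noisier cloud. The array layout, field order, and threshold values are illustrative assumptions and do not correspond to either radar vendor's actual filtering pipeline.

```python
import numpy as np

def filter_radar_points(points: np.ndarray, power_threshold: float) -> np.ndarray:
    """Keep only returns whose power/confidence exceeds the threshold.

    `points` is assumed to be an (N, 5) array of [x, y, z, doppler, power];
    this layout is an illustrative assumption, not the dataset's actual schema.
    """
    return points[points[:, 4] >= power_threshold]

# A stricter threshold yields a cleaner but sparser cloud; a looser one keeps
# more points but also more noise, which is the trade-off the paper compares
# across the two 4D radars.
raw = np.random.rand(10_000, 5)              # stand-in for one raw radar frame
sparse_clean = filter_radar_points(raw, 0.9)
dense_noisy = filter_radar_points(raw, 0.3)
print(len(sparse_clean), len(dense_noisy))
```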

The proposed dataset is, to the authors' knowledge, the first to capture two types of 4D radar simultaneously, alongside data from a high-resolution camera and a LiDAR. The result is a rich multi-modal dataset of roughly ten thousand frames covering a variety of challenging driving scenarios, including complex lighting conditions and diverse weather. The inclusion of two different 4D radar point clouds in a single setup is particularly noteworthy, as it allows researchers to evaluate how different radar characteristics affect perception algorithms within the same scenes.

Dataset Composition and Design

The dataset comprises 151 sequences, most lasting about 20 seconds, featuring 10,007 synchronized and carefully annotated frames. It is designed to address autonomous driving challenges, supporting tasks such as 3D object detection, tracking, and multi-modal fusion, and aims to leverage the complementary strengths of the different sensor modalities.
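
As a concrete illustration, the sketch below shows how one synchronized multi-modal frame (camera, LiDAR, and both 4D radars) might be loaded for experiments. The directory layout, file extensions, and per-point field counts are assumptions made for illustration only; the actual format and development kit are documented in the project's GitHub repository.

```python
from pathlib import Path
import numpy as np

# Hypothetical KITTI-style layout; the real directory names, file formats, and
# point fields are defined by the dataset's own devkit, not by this sketch.
DATA_ROOT = Path("dual_radar")

def load_point_cloud(sensor_dir: str, frame_id: str, num_fields: int) -> np.ndarray:
    """Read a binary point cloud stored as float32 records of `num_fields` values."""
    path = DATA_ROOT / sensor_dir / f"{frame_id}.bin"
    return np.fromfile(path, dtype=np.float32).reshape(-1, num_fields)

def load_frame(frame_id: str) -> dict:
    """Collect the synchronized sensors for one annotated frame."""
    return {
        # LiDAR points: x, y, z, intensity (assumed 4 fields)
        "lidar": load_point_cloud("lidar", frame_id, 4),
        # Two 4D radar point clouds: x, y, z, Doppler velocity, power (assumed 5 fields)
        "radar_a": load_point_cloud("radar_a", frame_id, 5),
        "radar_b": load_point_cloud("radar_b", frame_id, 5),
        # Camera image recorded at the same timestamp
        "image_path": DATA_ROOT / "image" / f"{frame_id}.png",
    }

if __name__ == "__main__":
    frame = load_frame("000000")
    for name in ("lidar", "radar_a", "radar_b"):
        print(name, frame[name].shape)  # e.g. compare point counts of the two radars
```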

Importantly, the dataset emphasizes adverse scenarios, such as night-time driving and bad weather, where traditional sensors like cameras and LiDARs struggle. The presence of dual 4D radars provides an excellent testbed for investigating and optimizing perception algorithms suited to complex real-world environments.

Experimental Findings

The researchers validate their dataset by applying several baseline models, demonstrating its practicality and usefulness for research on 4D radar perception. The experimental results show that while LiDAR data yield strong performance, there is substantial room for improvement when leveraging radar data, especially in adverse conditions where optical sensors are less reliable. Radar's effective detection in such conditions underscores its value for improving vehicle sensing capabilities.

Implications and Future Work

This dataset's significant contribution lies in catalyzing advanced research on 4D radar systems, potentially influencing future automotive system designs. The dataset acts as a foundational benchmark for developing algorithms capable of handling diverse sensor data, paving the way for more robust autonomous vehicle perception systems. Furthermore, it facilitates a nuanced understanding of the trade-offs and complementarities between different sensor technologies.

For future work, the paper suggests expanding data collection across a broader array of adverse scenarios, focusing on increasing dataset diversity to further enhance generalizability and application breadth. Collecting data under severe weather conditions such as snow or fog could extend the dataset's comprehensiveness, ultimately fostering autonomous systems that remain resilient across a wider range of environments.

In summary, the authors introduce a pioneering dual-radar dataset that holds significant promise for advancing autonomous driving technologies and supporting the development of new, more capable perception algorithms. Its comprehensive nature makes it a valuable resource for the broader research community working on the continued evolution of intelligent vehicle systems.
