DSEC: A Stereo Event Camera Dataset for Driving Scenarios (2103.06011v1)

Published 10 Mar 2021 in cs.CV and cs.RO

Abstract: Once an academic venture, autonomous driving has received unparalleled corporate funding in the last decade. Still, the operating conditions of current autonomous cars are mostly restricted to ideal scenarios. This means that driving in challenging illumination conditions such as night, sunrise, and sunset remains an open problem. In these cases, standard cameras are being pushed to their limits in terms of low light and high dynamic range performance. To address these challenges, we propose, DSEC, a new dataset that contains such demanding illumination conditions and provides a rich set of sensory data. DSEC offers data from a wide-baseline stereo setup of two color frame cameras and two high-resolution monochrome event cameras. In addition, we collect lidar data and RTK GPS measurements, both hardware synchronized with all camera data. One of the distinctive features of this dataset is the inclusion of high-resolution event cameras. Event cameras have received increasing attention for their high temporal resolution and high dynamic range performance. However, due to their novelty, event camera datasets in driving scenarios are rare. This work presents the first high-resolution, large-scale stereo dataset with event cameras. The dataset contains 53 sequences collected by driving in a variety of illumination conditions and provides ground truth disparity for the development and evaluation of event-based stereo algorithms.

Authors (4)
  1. Mathias Gehrig
  2. Willem Aarents
  3. Daniel Gehrig
  4. Davide Scaramuzza
Citations (261)

Summary

An Overview of the DSEC Stereo Event Camera Dataset for Driving Scenarios

The paper "DSEC: A Stereo Event Camera Dataset for Driving Scenarios" presents a dataset designed to support the research and development of computer vision algorithms in the context of autonomous driving. The authors, Mathias Gehrig et al., address the challenges posed by varying and complex illumination conditions which traditional cameras struggle with, especially in low light and high dynamic range scenarios such as night and sunset driving.

The DSEC dataset comprises data captured from a suite of stereo RGB frame cameras and high-resolution monochrome event cameras, extending the tools available for stereo vision tasks in the vehicular domain. The inclusion of event cameras, with their high temporal resolution and high dynamic range, is particularly noteworthy given the scarcity of such datasets for driving applications. The paper argues that event cameras exhibit less motion blur and respond faster than conventional frame cameras, which makes them well suited to the development and evaluation of stereo algorithms that must operate under non-ideal, real-world driving conditions.

Technical Composition of DSEC

The DSEC dataset stands out due to its composition, which includes:

  • Stereo Camera Setup: Two color frame cameras paired with two high-resolution monochrome event cameras form a wide-baseline stereo rig. The event cameras output asynchronous per-pixel brightness-change events, making the rig valuable for frame-to-event sensor fusion research (see the loading sketch after this list).
  • Data Collection: The data was recorded in diverse illumination conditions with the sensor suite mounted atop a vehicle, capturing dynamic driving scenes.
  • Additional Sensors: Lidar data and RTK GPS measurements complement the vision data, enabling the accurate ground-truth depth needed to develop and evaluate robust stereo matching algorithms.
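
To make the event stream concrete: each event is a tuple (x, y, t, p), i.e. a pixel location, a microsecond timestamp, and a polarity indicating the sign of the brightness change. The Python sketch below loads a time window of events and accumulates them into a signed count image. The file path, the HDF5 field names, and the 640×480 sensor resolution are assumptions about how the released data is laid out; verify them against the official DSEC tools before use.

```python
import h5py
import numpy as np

# Hypothetical sequence path; DSEC ships events per camera as HDF5 files.
EVENTS_FILE = "interlaken_00_c/events/left/events.h5"

def event_count_image(path, t_start_us, t_end_us, height=480, width=640):
    """Accumulate events in [t_start_us, t_end_us) into a signed count image."""
    with h5py.File(path, "r") as f:
        t = f["events/t"][:]                      # timestamps in microseconds
        mask = (t >= t_start_us) & (t < t_end_us)
        x = f["events/x"][:][mask]
        y = f["events/y"][:][mask]
        p = f["events/p"][:][mask]                # polarity: 1 = ON, 0 = OFF
    img = np.zeros((height, width), dtype=np.int32)
    # Add +1 for ON events and -1 for OFF events at each pixel location.
    np.add.at(img, (y, x), np.where(p > 0, 1, -1))
    return img
```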

Dataset Features and Evaluation Protocol

DSEC includes 53 sequences covering daytime and nighttime driving in Switzerland, together with ground-truth disparity maps that support the development and evaluation of event-based stereo algorithms. Rigorous calibration and hardware synchronization ensure that events and frames carry reliable, consistent timestamps, which is crucial for effective sensor fusion.
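
Because the ground truth is provided as disparity, metric depth follows from rectified stereo geometry via Z = f * B / d, with f the focal length in pixels and B the baseline in meters. Below is a minimal sketch of this conversion; the parameter values would come from a sequence's calibration files, and the function name is illustrative, not part of any DSEC API.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a disparity map (pixels) to metric depth via Z = f * B / d."""
    depth = np.full(disparity.shape, np.inf, dtype=np.float64)
    valid = disparity > eps                # zero disparity means infinite depth
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```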

The authors use evaluation metrics standard in stereo benchmarks: D1 (the fraction of pixels whose disparity error exceeds a threshold), mean absolute error (MAE), and root-mean-square error (RMSE). Their baseline experiments highlight the dataset's difficulty and underscore the adaptations algorithms need in order to generalize to adverse conditions.
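
These three metrics are straightforward to compute from dense predicted and ground-truth disparity maps. The sketch below uses the KITTI-style D1 outlier rule (error above 3 px and above 5% of the true disparity); the paper's exact thresholds may differ, so treat that definition as an assumption.

```python
import numpy as np

def stereo_metrics(pred, gt):
    """D1, MAE, and RMSE over pixels with finite ground-truth disparity."""
    valid = np.isfinite(gt)
    err = np.abs(pred[valid] - gt[valid])
    # KITTI-style outlier rule (assumed): > 3 px AND > 5% of true disparity.
    d1 = np.mean((err > 3.0) & (err > 0.05 * np.abs(gt[valid])))
    return {"D1": float(d1),
            "MAE": float(err.mean()),
            "RMSE": float(np.sqrt(np.mean(err ** 2)))}
```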

Implications and Future Directions

By providing a comprehensive dataset involving both standard and event camera modalities, the work significantly impacts how challenging scenarios in autonomous driving are approached. Importantly, it provides a foundation for exploring new algorithms that leverage the strengths of event cameras, especially in terms of latency and dynamic range capabilities.

Given the growing interest in event cameras for automotive applications, datasets like DSEC offer rich research potential. Future work may bring augmented datasets with more complex scenarios, hardware advances, and refined disparity ground-truth methods that further promote algorithmic robustness. The open availability of such datasets fosters innovation in safety-critical areas like autonomous navigation.

In conclusion, DSEC fills an evident gap in resources available to researchers targeting autonomous vehicle applications, particularly in developing algorithms that thrive in realistic driving conditions. The paper not only introduces this dataset but also emphasizes the importance of combining state-of-the-art sensory information to heighten the situational awareness and performance of automated systems in diverse and challenging environments.