Deep Sensor Fusion for Real-Time Odometry Estimation (1908.00524v1)

Published 31 Jul 2019 in cs.RO and cs.CV

Abstract: Cameras and 2D laser scanners, in combination, are able to provide low-cost, light-weight and accurate solutions, which make their fusion well-suited for many robot navigation tasks. However, correct data fusion depends on precise calibration of the rigid body transform between the sensors. In this paper we present the first framework that makes use of Convolutional Neural Networks (CNNs) for odometry estimation fusing 2D laser scanners and mono-cameras. The use of CNNs provides the tools to not only extract the features from the two sensors, but also to fuse and match them without needing a calibration between the sensors. We transform the odometry estimation into an ordinal classification problem in order to find accurate rotation and translation values between consecutive frames. Results on a real road dataset show that the fusion network runs in real-time and is able to improve the odometry estimation of a single sensor alone by learning how to fuse two different types of data information.

Citations (10)

Summary

  • The paper introduces a novel CNN-based framework for fusing 2D laser and mono-camera data to accurately estimate odometry in real time.
  • It reformulates odometry estimation as an ordinal classification task to predict rotation and translation, bypassing the need for precise sensor calibration.
  • Empirical results on a road dataset demonstrate that the approach improves odometry accuracy over either sensor alone while running in real time, supporting autonomous navigation.

The paper "Deep Sensor Fusion for Real-Time Odometry Estimation" introduces a novel framework that leverages Convolutional Neural Networks (CNNs) to achieve real-time odometry estimation through the fusion of data from 2D laser scanners and mono-cameras. The primary advantage of using CNNs in this context is the elimination of the need for precise calibration of the rigid body transform between the sensors, which is traditionally a crucial and complex task in data fusion for robot navigation.

Key Contributions and Methodology:

  1. Sensor Fusion via CNNs: The paper is the first to employ CNNs for integrating 2D laser scanner data with monocular camera data for odometry estimation. The networks extract and fuse features from both sensor modalities, establishing correspondences between them without requiring pre-calibrated sensor transformations (a minimal architecture sketch follows this list).
  2. Ordinal Classification Approach: Odometry estimation is reformulated as an ordinal classification task. Discretizing rotation and translation into ordered bins turns a hard regression problem into a classification problem that CNNs handle well, letting the model predict the motion between consecutive frames more effectively (an encoding sketch also follows this list).
  3. Real-time Performance: A significant outcome of the proposed method is that it runs in real time, making it well suited for practical applications in robot navigation and autonomous driving, where timely and accurate odometry is essential.
  4. Empirical Validation: The framework was tested on a real road dataset. The results show that the fusion network not only operates in real time but also improves the accuracy of odometry estimation over either sensor alone, demonstrating that the model learns to optimally combine the information from the 2D laser scanner and the monocular camera.
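
As a concrete illustration of points 1 and 2, the sketch below shows a minimal two-branch fusion network in PyTorch: one CNN branch per sensor, feature-level concatenation, and ordinal-classification heads for rotation and translation. The layer sizes, the `FusionOdometryNet` name, and the bin count are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch (assumed architecture, not the paper's exact network):
# one CNN branch per sensor, feature-level fusion by concatenation, and
# ordinal-classification heads for frame-to-frame rotation and translation.
import torch
import torch.nn as nn

class FusionOdometryNet(nn.Module):
    def __init__(self, num_bins: int = 20):
        super().__init__()
        # Camera branch: two consecutive RGB frames stacked channel-wise (6 channels).
        self.cam_branch = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Laser branch: two consecutive 2D scans as a 2-channel 1D range signal.
        self.laser_branch = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Fusion happens in feature space, so no extrinsic calibration is needed.
        self.fusion = nn.Sequential(nn.Linear(64 + 64, 128), nn.ReLU())
        self.rot_head = nn.Linear(128, num_bins)    # ordinal bins for rotation
        self.trans_head = nn.Linear(128, num_bins)  # ordinal bins for translation

    def forward(self, images: torch.Tensor, scans: torch.Tensor):
        fused = torch.cat([self.cam_branch(images), self.laser_branch(scans)], dim=1)
        h = self.fusion(fused)
        return self.rot_head(h), self.trans_head(h)
```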

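The ordinal reformulation in point 2 can be made concrete as follows: continuous rotation and translation values are binned into ordered classes for training, and class probabilities are decoded back to a continuous estimate at inference. The value ranges, bin count, and expected-value decoding below are assumptions for illustration, not the paper's exact discretization.

```python
# Sketch of ordinal encoding/decoding for a continuous motion value
# (assumed ranges and decoding scheme, not the paper's exact choice).
import torch

def encode_ordinal(value: torch.Tensor, lo: float, hi: float, num_bins: int) -> torch.Tensor:
    """Map a continuous value in [lo, hi] to an ordered bin index."""
    idx = (value - lo) / (hi - lo) * num_bins
    return idx.long().clamp(0, num_bins - 1)

def decode_ordinal(logits: torch.Tensor, lo: float, hi: float) -> torch.Tensor:
    """Decode logits to a continuous value as the probability-weighted
    average of the bin centers (a soft argmax over ordered classes)."""
    num_bins = logits.shape[-1]
    centers = lo + (torch.arange(num_bins) + 0.5) * (hi - lo) / num_bins
    probs = torch.softmax(logits, dim=-1)
    return (probs * centers).sum(dim=-1)

# Example: a yaw change of 0.03 rad in the assumed range [-0.1, 0.1] rad
# falls into ordinal class 13 of 20.
target = encode_ordinal(torch.tensor([0.03]), -0.1, 0.1, 20)  # tensor([13])
```
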
Implications and Future Work:

The framework proposed in this paper represents a significant advancement in the field of sensor fusion for odometry. By leveraging deep learning techniques, especially CNNs, it opens up new avenues for creating more robust and calibration-free navigation systems. Future work could extend this approach to integrate additional sensor types, further enhance the model's robustness and generalizability, and explore its application in more complex and diverse environments.

This innovation in deep sensor fusion exemplifies how modern neural network architectures can be employed to solve longstanding challenges in robot navigation, promoting the development of more autonomous and reliable systems.
