
FisheyeDistanceNet: Self-Supervised Scale-Aware Distance Estimation using Monocular Fisheye Camera for Autonomous Driving

Published 7 Oct 2019 in cs.CV, cs.LG, cs.RO, eess.IV, and stat.ML | arXiv:1910.04076v4

Abstract: Fisheye cameras are commonly used in applications like autonomous driving and surveillance to provide a large field of view ($>180^{\circ}$). However, they come at the cost of strong non-linear distortions which require more complex algorithms. In this paper, we explore Euclidean distance estimation on fisheye cameras for automotive scenes. Obtaining accurate and dense depth supervision is difficult in practice, but self-supervised learning approaches show promising results and could potentially overcome the problem. We present a novel self-supervised scale-aware framework for learning Euclidean distance and ego-motion from raw monocular fisheye videos without applying rectification. While it is possible to perform piece-wise linear approximation of fisheye projection surface and apply standard rectilinear models, it has its own set of issues like re-sampling distortion and discontinuities in transition regions. To encourage further research in this area, we will release our dataset as part of the WoodScape project \cite{yogamani2019woodscape}. We further evaluated the proposed algorithm on the KITTI dataset and obtained state-of-the-art results comparable to other self-supervised monocular methods. Qualitative results on an unseen fisheye video demonstrate impressive performance https://youtu.be/Sgq1WzoOmXg.

Citations (72)

Summary

  • The paper introduces a self-supervised framework that estimates real-world distances from raw fisheye imagery without relying on rectification.
  • It integrates ego-motion velocity and deformable convolutional networks to overcome scale ambiguity and compensate for fisheye distortions.
  • High-resolution distance maps are generated using super-resolution techniques, validated on KITTI and WoodScape benchmarks for autonomous driving.

An Expert Overview of FisheyeDistanceNet for Autonomous Driving

The paper "FisheyeDistanceNet: Self-Supervised Scale-Aware Distance Estimation using Monocular Fisheye Camera for Autonomous Driving" presents an approach for estimating metric Euclidean distances from raw, unrectified fisheye images in the domain of autonomous driving. Fisheye cameras, valued for their broad field of view (greater than 180 degrees), pose unique challenges due to their strong non-linear distortions. The paper addresses these challenges with a scale-aware, self-supervised framework that learns distance and ego-motion directly from monocular video, avoiding the resampling artifacts introduced by conventional image rectification.

Key Contributions

The paper makes several contributions that advance the state-of-the-art in depth estimation from fisheye cameras:

  1. Self-Supervised Training Framework: The authors propose a novel self-supervised framework that learns Euclidean distances and ego-motion directly from raw fisheye video streams, rather than rectifying the images first, a step that introduces re-sampling distortion and discontinuities in transition regions.
  2. Scale-Awareness in Distance Estimation: The proposed method addresses scale factor ambiguity by integrating ego-motion velocity, which provides metric distance output suitable for practical use in autonomous driving scenarios.
  3. Use of Deformable Convolutional Networks: The architecture employs deformable convolutional layers to effectively model and compensate for fisheye distortions, allowing for more accurate feature extraction.
  4. Super-Resolution Techniques: By applying pixel shuffle techniques inspired by super-resolution networks, the model outputs high-resolution distance maps, ensuring clarity and accuracy even for low-resolution inputs.
  5. Comprehensive Loss Function: The framework integrates a combination of photometric losses, edge-aware smoothness, and distance consistency constraints across sequences to enhance model training and output fidelity.
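The scale-awareness idea in contribution 2 can be illustrated with a minimal sketch: the pose network's translation is known only up to scale, so its norm is rescaled to match the metric displacement implied by the vehicle's odometry (speed times frame interval). The function name and interface below are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def scale_translation(t_pred, speed_mps, dt_s, eps=1e-8):
    """Rescale a predicted (up-to-scale) translation vector so its norm
    equals the metric distance travelled between consecutive frames."""
    metric_dist = speed_mps * dt_s            # displacement from odometry
    norm = np.linalg.norm(t_pred)
    return t_pred * (metric_dist / (norm + eps))

# Example: the network predicts a direction; odometry reports 10 m/s at 30 fps.
t = scale_translation(np.array([0.6, 0.0, 0.8]), speed_mps=10.0, dt_s=1 / 30)
```

Applying the same metric scale to the predicted distance map keeps depth and pose consistent, which is what makes the output directly usable for planning.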
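The core mechanism behind the deformable convolutions of contribution 3 is sampling features at positions shifted by learned offsets, with bilinear interpolation. A toy single-channel version (an illustrative sketch, not the paper's implementation) looks like this:

```python
import numpy as np

def deform_sample(feat, offsets):
    """Sample a 2-D feature map at grid positions shifted by per-pixel
    (dy, dx) offsets, using bilinear interpolation -- the sampling step
    at the heart of deformable convolution."""
    h, w = feat.shape
    out = np.zeros_like(feat)
    for y in range(h):
        for x in range(w):
            sy = min(max(y + offsets[y, x, 0], 0.0), h - 1.0)
            sx = min(max(x + offsets[y, x, 1], 0.0), w - 1.0)
            y0, x0 = int(np.floor(sy)), int(np.floor(sx))
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            wy, wx = sy - y0, sx - x0
            out[y, x] = ((1 - wy) * (1 - wx) * feat[y0, x0]
                         + (1 - wy) * wx * feat[y0, x1]
                         + wy * (1 - wx) * feat[y1, x0]
                         + wy * wx * feat[y1, x1])
    return out

feat = np.arange(9, dtype=float).reshape(3, 3)
same = deform_sample(feat, np.zeros((3, 3, 2)))  # zero offsets: identity
```

Because the offsets are learned per location, the effective receptive field can bend along the fisheye distortion instead of staying on a rigid rectilinear grid.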
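The pixel shuffle operation of contribution 4 trades channel depth for spatial resolution (depth-to-space). A NumPy sketch of the standard rearrangement, hedged as a generic illustration rather than the paper's network code:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) tensor into (C, H*r, W*r) by moving
    channel blocks into r-by-r spatial neighbourhoods."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)       # split channels into (c, i, j)
    x = x.transpose(0, 3, 1, 4, 2)     # interleave: (c, h, i, w, j)
    return x.reshape(c, h * r, w * r)

up = pixel_shuffle(np.arange(16, dtype=float).reshape(4, 2, 2), r=2)
```

Compared with naive upsampling layers, this lets the decoder produce sharp high-resolution distance maps from low-resolution feature tensors.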
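One term of the composite loss in contribution 5, edge-aware smoothness, penalizes distance gradients except where the image itself has strong edges. A minimal sketch of the common form of this term (the exact weighting in the paper may differ):

```python
import numpy as np

def edge_aware_smoothness(disp, img):
    """Penalize disparity/distance gradients, down-weighted by image
    gradients so depth edges are allowed at object boundaries."""
    dx_d = np.abs(np.diff(disp, axis=1))
    dy_d = np.abs(np.diff(disp, axis=0))
    # image gradients averaged over colour channels -> weights in (0, 1]
    dx_i = np.mean(np.abs(np.diff(img, axis=1)), axis=-1)
    dy_i = np.mean(np.abs(np.diff(img, axis=0)), axis=-1)
    return (dx_d * np.exp(-dx_i)).mean() + (dy_d * np.exp(-dy_i)).mean()

loss = edge_aware_smoothness(np.zeros((4, 4)), np.ones((4, 4, 3)))
```

A constant distance map incurs zero smoothness penalty, while texture-less regions with varying depth are penalized most, which regularizes the otherwise under-constrained photometric objective.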

Evaluation and Results

The paper's evaluation makes a strong case for the framework's effectiveness. Benchmarked on the KITTI dataset using the Eigen split, FisheyeDistanceNet approaches or matches state-of-the-art results from pinhole camera-based self-supervised methods, notably without requiring additional ground-truth scaling at test time. The release of WoodScape, a dataset designed specifically for fisheye camera perception, positions this paper as a valuable resource, facilitating future research on omnidirectional distance estimation tailored to real-world automotive applications.

Practical and Theoretical Implications

In practice, this research enables more robust obstacle detection and trajectory planning for autonomous vehicles operating with fisheye camera systems, potentially reducing reliance on more expensive lidar systems. This is particularly relevant as the automotive industry seeks cost-efficient, scalable solutions for wide-angle visual perception. Theoretically, the paper advances the understanding of self-supervised learning under complex projection models, underscoring the versatility of convolutional neural networks (CNNs) augmented with deformable convolutions in vision-based tasks.

Future Directions

Future work could extend this framework by integrating multi-sensor fusion approaches, enhancing robustness against dynamic environmental conditions, and exploring optimized network architectures. Additionally, with the authors' release of the WoodScape dataset, an exploration of cross-domain adaptability and generalization of fisheye-based models could prove invaluable.

In conclusion, "FisheyeDistanceNet" presents a refined, methodologically sound approach to distance estimation using fisheye cameras, contributing substantively to the field of autonomous vehicles and opening avenues for further research on wide-angle vision systems.
