
Camera-Only Bird's Eye View Perception: A Neural Approach to LiDAR-Free Environmental Mapping for Autonomous Vehicles

Published 9 May 2025 in cs.CV | (2505.06113v1)

Abstract: Autonomous vehicle perception systems have traditionally relied on costly LiDAR sensors to generate precise environmental representations. In this paper, we propose a camera-only perception framework that produces Bird's Eye View (BEV) maps by extending the Lift-Splat-Shoot architecture. Our method combines YOLOv11-based object detection with DepthAnythingV2 monocular depth estimation across multi-camera inputs to achieve comprehensive 360-degree scene understanding. We evaluate our approach on the OpenLane-V2 and NuScenes datasets, achieving up to 85% road segmentation accuracy and 85-90% vehicle detection rates when compared against LiDAR ground truth, with average positional errors limited to 1.2 meters. These results highlight the potential of deep learning to extract rich spatial information using only camera inputs, enabling cost-efficient autonomous navigation without sacrificing accuracy.

Summary

The paper presents an innovative approach to autonomous vehicle perception by introducing a camera-only system capable of generating Bird's Eye View (BEV) representations using the Lift-Splat-Shoot neural architecture. This work addresses the reliance on expensive LiDAR sensors by leveraging advanced computer vision techniques to facilitate autonomous navigation with reduced hardware costs.

Methodology and Implementation

The core of the proposed system is the Lift-Splat-Shoot architecture, augmented with state-of-the-art object detection via YOLOv11 and monocular depth estimation via DepthAnythingV2. The system processes six or seven camera views, depending on the dataset, to provide full 360-degree environmental coverage. Inputs pass through a pipeline of depth-aware feature extraction, projection into 3D space using quaternion-based transformations, and semantic segmentation, yielding a unified BEV representation.
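To make the "lift" step concrete, the sketch below unprojects a pixel with an estimated depth through an assumed pinhole camera model and rotates it into the ego frame with a quaternion. This is an illustrative sketch, not the paper's implementation; all function names, intrinsics, and calibration values are assumptions.

```python
import math

def quat_rotate(q, v):
    """Rotate 3D vector v by unit quaternion q = (w, x, y, z)."""
    w, qx, qy, qz = q
    # c = qvec x v + w * v
    cx = qy * v[2] - qz * v[1] + w * v[0]
    cy = qz * v[0] - qx * v[2] + w * v[1]
    cz = qx * v[1] - qy * v[0] + w * v[2]
    # v' = v + 2 * (qvec x c)
    return (
        v[0] + 2 * (qy * cz - qz * cy),
        v[1] + 2 * (qz * cx - qx * cz),
        v[2] + 2 * (qx * cy - qy * cx),
    )

def lift_pixel(u, v, depth, fx, fy, cx, cy, q_cam2ego, t_cam2ego):
    """'Lift' pixel (u, v) with estimated depth to a 3D point in the ego frame."""
    # Unproject through an assumed pinhole intrinsic model.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    p_cam = (x, y, depth)
    # Rotate into the ego frame, then translate.
    p = quat_rotate(q_cam2ego, p_cam)
    return tuple(pi + ti for pi, ti in zip(p, t_cam2ego))

# Identity rotation, 1 m offset: the principal-point pixel at 10 m depth
# lands 10 m ahead of the camera, shifted by the extrinsic translation.
pt = lift_pixel(640, 360, depth=10.0, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0,
                q_cam2ego=(1.0, 0.0, 0.0, 0.0), t_cam2ego=(0.0, 0.0, 1.0))
```

Repeating this unprojection for every pixel (or feature-map cell) across all cameras produces the point features that are then "splatted" onto the BEV grid.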

A standout feature of the system is the integration of object detection with the BEV generation process, which is achieved through a joint optimization strategy. This integration ensures that object detection results feed directly into the BEV feature space, enhancing both detection accuracy and efficiency.
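The summary does not spell out how detections are fed into the BEV feature space; one plausible minimal sketch, with the grid resolution and extent entirely assumed, rasterizes each detection's estimated ego-frame position and confidence into a BEV grid so downstream layers see detection evidence directly:

```python
def splat_detections(detections, grid_size=200, cell_m=0.5):
    """Accumulate detection confidences into a square BEV grid centered
    on the ego vehicle.

    detections: list of (x, y, confidence), with x, y in ego-frame meters.
    Returns a grid_size x grid_size list of lists of summed confidences.
    """
    bev = [[0.0] * grid_size for _ in range(grid_size)]
    half = grid_size * cell_m / 2.0  # grid covers [-half, half) meters
    for x, y, conf in detections:
        if -half <= x < half and -half <= y < half:
            row = int((y + half) / cell_m)
            col = int((x + half) / cell_m)
            bev[row][col] += conf
    return bev

# Two hypothetical detections: one at the ego origin, one 10 m right, 5 m back.
bev = splat_detections([(0.0, 0.0, 0.9), (10.0, -5.0, 0.7)])
```

Joint optimization would then backpropagate both segmentation and detection losses through a shared, differentiable version of this rasterization.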

Experimental Evaluation and Results

The system was evaluated extensively on the OpenLane-V2 and NuScenes datasets, achieving a road segmentation accuracy of 85% and vehicle detection rates of 85-90% relative to LiDAR ground truth. Average positional error was limited to 1.2 meters, indicating suitability for real-world environmental mapping. The camera-only system outperformed baseline approaches such as traditional Inverse Perspective Mapping (IPM), and in some settings even contemporary neural systems such as BEVFormer.
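The summary does not define the positional-error protocol; a simple hedged reading of the 1.2 m figure is the mean Euclidean distance from each predicted object position to its nearest LiDAR ground-truth position, as sketched below (the paper's exact matching scheme may differ):

```python
import math

def mean_positional_error(pred, gt):
    """Mean Euclidean distance (meters) from each predicted BEV position
    to its nearest ground-truth position. Nearest-neighbor matching is an
    assumption; a bipartite (e.g. Hungarian) matching is also plausible.
    """
    errors = [
        min(math.hypot(px - gx, py - gy) for gx, gy in gt)
        for px, py in pred
    ]
    return sum(errors) / len(errors)

# Illustrative: two predictions, each 1.0 m and 0.5 m from its nearest truth.
err = mean_positional_error(
    pred=[(1.0, 0.0), (5.0, 5.5)],
    gt=[(0.0, 0.0), (5.0, 5.0)],
)
```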

Ablation studies attributed the key performance gains to the integration of DepthAnythingV2 and YOLOv11, together with the novel BEVLoss function, which jointly improved segmentation and detection metrics.
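The summary names BEVLoss without defining it. A minimal hedged sketch, with both terms and their weighting assumed, combines a per-cell binary cross-entropy on the BEV segmentation with a squared-error term on matched object positions:

```python
import math

def bev_loss(seg_pred, seg_gt, pos_pred, pos_gt, w_seg=1.0, w_det=1.0):
    """Hypothetical combined BEV loss (not the paper's exact formulation):
    mean binary cross-entropy over segmentation cells plus mean squared
    error over already-matched object positions."""
    eps = 1e-7
    bce = 0.0
    for p, t in zip(seg_pred, seg_gt):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        bce += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    bce /= len(seg_pred)
    mse = sum(
        (px - gx) ** 2 + (py - gy) ** 2
        for (px, py), (gx, gy) in zip(pos_pred, pos_gt)
    ) / len(pos_pred)
    return w_seg * bce + w_det * mse

# Tiny illustrative call: two segmentation cells, one matched detection.
loss = bev_loss(
    seg_pred=[0.9, 0.1], seg_gt=[1.0, 0.0],
    pos_pred=[(1.0, 1.0)], pos_gt=[(1.0, 0.5)],
)
```

Weighting the two terms lets training trade off map fidelity against localization accuracy, which is one plausible reason a joint loss helps both metrics.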

Implications and Future Directions

The practical implications of this research are significant, particularly in reducing the cost barriers for autonomous vehicle technology adoption. By demonstrating that reliable BEV maps can be generated without the hefty price tag of LiDAR sensors, the research paves the way for more economically feasible autonomous systems. This advancement could prove transformative for industries seeking scalable implementations without sacrificing perceptual accuracy.

Despite these achievements, the camera-only system still faces challenges in night-time settings, adverse weather conditions, and scenarios involving transparent or reflective surfaces—issues that LiDAR typically handles more robustly. Addressing these limitations would be the logical next step, potentially through hybrid approaches incorporating additional sensor types like event cameras or further refinement of existing algorithms to improve robustness under varied conditions.

Conclusion

This paper contributes to ongoing work on cost-effective autonomous vehicle technology by providing a viable camera-only BEV perception system. While robustness in challenging environmental conditions remains a necessary focus for future research, the framework presents a promising alternative to LiDAR, offering comparable performance at substantially lower cost, and is likely to inform further developments in autonomous driving perception.
