
DA4AD: End-to-End Deep Attention-based Visual Localization for Autonomous Driving (2003.03026v2)

Published 6 Mar 2020 in cs.CV and cs.RO

Abstract: We present a visual localization framework based on novel deep attention aware features for autonomous driving that achieves centimeter level localization accuracy. Conventional approaches to the visual localization problem rely on handcrafted features or human-made objects on the road. They are known to be either prone to unstable matching caused by severe appearance or lighting changes, or too scarce to deliver constant and robust localization results in challenging scenarios. In this work, we seek to exploit the deep attention mechanism to search for salient, distinctive and stable features that are good for long-term matching in the scene through a novel end-to-end deep neural network. Furthermore, our learned feature descriptors are demonstrated to be competent to establish robust matches and therefore successfully estimate the optimal camera poses with high precision. We comprehensively validate the effectiveness of our method using a freshly collected dataset with high-quality ground truth trajectories and hardware synchronization between sensors. Results demonstrate that our method achieves a competitive localization accuracy when compared to the LiDAR-based localization solutions under various challenging circumstances, leading to a potential low-cost localization solution for autonomous driving.
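The abstract describes the pipeline at a high level: an attention network picks out salient, stable keypoints, learned descriptors match them against a prior map, and the camera pose is estimated from those matches. The sketch below is a hypothetical illustration of that match-then-PnP flow on synthetic data using OpenCV and NumPy; the attention scores, descriptors, and map here are random stand-ins, not the paper's network, dataset, or released code.

```python
# Hypothetical sketch (not the paper's implementation): select the most salient
# keypoints by attention score, match them to 3D map points via descriptor
# similarity, and recover the 6-DoF camera pose with PnP + RANSAC.
import numpy as np
import cv2

rng = np.random.default_rng(0)

# --- Synthetic map and ground-truth camera ---------------------------------
N = 200
map_xyz = rng.uniform(-5, 5, size=(N, 3)) + np.array([0.0, 0.0, 12.0])
map_desc = rng.normal(size=(N, 32))
map_desc /= np.linalg.norm(map_desc, axis=1, keepdims=True)

W, H, f = 640, 480, 500.0
camera_matrix = np.array([[f, 0, W / 2], [0, f, H / 2], [0, 0, 1.0]])
rvec_gt = np.array([0.02, -0.01, 0.005])      # small known rotation
tvec_gt = np.array([0.3, -0.1, 0.5])          # known translation

# --- Stand-ins for network outputs: keypoints, descriptors, attention ------
proj, _ = cv2.projectPoints(map_xyz, rvec_gt, tvec_gt, camera_matrix, None)
kp_px = proj.reshape(-1, 2) + rng.normal(scale=0.5, size=(N, 2))   # noisy detections
kp_desc = map_desc + 0.05 * rng.normal(size=map_desc.shape)        # noisy descriptors
attention = rng.random(N)                                          # per-keypoint saliency

# --- 1. Keep only the top-K most salient keypoints -------------------------
K = 100
keep = np.argsort(attention)[-K:]

# --- 2. Match kept keypoints to map points by nearest descriptor -----------
sim = kp_desc[keep] @ map_desc.T              # cosine similarity matrix
matches = sim.argmax(axis=1)                  # best map point per keypoint

# --- 3. Estimate the camera pose with PnP + RANSAC -------------------------
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    map_xyz[matches].astype(np.float64),      # matched 3D map points
    kp_px[keep].astype(np.float64),           # corresponding 2D observations
    camera_matrix, None)
print("recovered t:", tvec.ravel().round(3), "ground truth t:", tvec_gt)
```

In the paper the saliency, descriptors, and matching are all learned end to end and the map is built from survey drives; this toy version only shows how attention-filtered matches feed a standard pose solver.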

Authors (7)
  1. Yao Zhou (72 papers)
  2. Guowei Wan (7 papers)
  3. Shenhua Hou (1 paper)
  4. Li Yu (193 papers)
  5. Gang Wang (407 papers)
  6. Xiaofei Rui (2 papers)
  7. Shiyu Song (11 papers)
Citations (2)
