
Poses as Queries: Image-to-LiDAR Map Localization with Transformers (2305.04298v1)

Published 7 May 2023 in cs.RO and cs.CV

Abstract: High-precision vehicle localization with commercial sensor setups is a crucial technique for high-level autonomous driving tasks. Localization with a monocular camera in a LiDAR map is a newly emerged approach that achieves a promising balance between cost and accuracy, but estimating pose by finding correspondences between such cross-modal sensor data is challenging, which degrades localization accuracy. In this paper, we address the problem by proposing a novel Transformer-based neural network that registers 2D images into a 3D LiDAR map in an end-to-end manner. Poses are implicitly represented as high-dimensional feature vectors called pose queries, which are iteratively updated by attending to relevant information retrieved from cross-modal features in the proposed POse Estimator Transformer (POET) module. Moreover, we apply a multiple-hypothesis aggregation method that estimates the final pose by performing parallel optimization on multiple randomly initialized pose queries, reducing network uncertainty. Comprehensive analysis and experimental results on a public benchmark show that the proposed image-to-LiDAR map localization network achieves state-of-the-art performance on challenging cross-modal localization tasks.
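The abstract's core mechanism (pose queries refined by cross-attention over cross-modal features, with multiple randomly initialized hypotheses aggregated into one estimate) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the dimensions, the residual update rule, the linear pose head, and aggregation by averaging are all assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 32        # pose-query feature dimension (assumed)
N_FEAT = 64   # number of cross-modal feature tokens (assumed)
N_HYP = 8     # number of parallel pose-query hypotheses
N_ITER = 4    # refinement iterations

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, features):
    """Single-head dot-product cross-attention: pose queries retrieve
    relevant information from the cross-modal feature tokens."""
    scores = queries @ features.T / np.sqrt(queries.shape[-1])  # (N_HYP, N_FEAT)
    weights = softmax(scores, axis=-1)
    return weights @ features                                    # (N_HYP, D)

# Cross-modal features standing in for fused image / LiDAR-map embeddings.
features = rng.standard_normal((N_FEAT, D))

# Randomly initialized pose queries: one hypothesis per row.
queries = rng.standard_normal((N_HYP, D))

# Iteratively refine each pose query with retrieved context (residual update).
for _ in range(N_ITER):
    queries = queries + cross_attention(queries, features)

# Decode each hypothesis to a 6-DoF pose with a placeholder linear head,
# then aggregate the parallel hypotheses (here: by averaging).
pose_head = rng.standard_normal((D, 6)) * 0.01
poses = queries @ pose_head        # (N_HYP, 6) per-hypothesis pose estimates
final_pose = poses.mean(axis=0)    # aggregated 6-DoF pose, shape (6,)
```

Running several hypotheses in parallel and aggregating them is what the abstract credits with reducing network uncertainty: a single randomly initialized query can converge to a poor local solution, while the ensemble is more stable.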

Authors (9)
  1. Jinyu Miao (13 papers)
  2. Kun Jiang (128 papers)
  3. Yunlong Wang (91 papers)
  4. Tuopu Wen (15 papers)
  5. Zhongyang Xiao (6 papers)
  6. Zheng Fu (8 papers)
  7. Mengmeng Yang (35 papers)
  8. Maolin Liu (3 papers)
  9. Diange Yang (37 papers)
Citations (3)
