A 2.5D Vehicle Odometry Estimation for Vision Applications (2105.02679v1)

Published 6 May 2021 in cs.RO and cs.CV

Abstract: This paper proposes a method to estimate the pose of a sensor mounted on a vehicle as the vehicle moves through the world, an important topic for autonomous driving systems. Based on a set of commonly deployed vehicular odometric sensors, with outputs available on automotive communication buses (e.g. CAN or FlexRay), we describe a set of steps to combine a planar odometry based on wheel sensors with a suspension model based on linear suspension sensors. The aim is to determine a more accurate estimate of the camera pose. We outline its usage for applications in both visualisation and computer vision.

Citations (2)

Summary

  • The paper introduces a novel technique that integrates planar odometry with suspension sensor data to improve camera pose estimation.
  • It compensates for vertical displacements and tilt variations, addressing inaccuracies found in traditional wheel sensor models.
  • Experimental results demonstrate enhanced precision in complex terrains, benefiting autonomous driving and computer vision systems.

The paper "A 2.5D Vehicle Odometry Estimation for Vision Applications" explores innovative methodologies for estimating the pose of sensors mounted on a vehicle, a crucial aspect for autonomous driving systems. Traditional vehicle odometry often relies on planar models that use wheel sensors to estimate the vehicle’s movement. However, this approach can lead to inaccuracies in the estimated camera pose, particularly when the vehicle moves over irregular surfaces.

To address these limitations, the authors propose integrating planar odometry with a suspension model based on linear suspension sensors. This combined approach aims to generate a more accurate estimation of the camera pose by accounting for vertical displacements and tilt variations of the vehicle, which are not captured by conventional planar odometry alone. The integration makes use of commonly available vehicular odometric sensors with their outputs accessible via automotive communication buses such as CAN (Controller Area Network) or FlexRay.
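
A minimal sketch of how four linear suspension readings might be turned into a body attitude, assuming a rigid-plane, small-angle model. The name suspension_attitude, the corner labelling, and the sign conventions are assumptions for illustration, not the paper's formulation.

```python
def suspension_attitude(dz_fl, dz_fr, dz_rl, dz_rr, wheelbase, track_width):
    """Estimate body heave, roll, and pitch from four linear suspension
    displacements (m), measured as upward body motion at each corner
    relative to the static ride height (fl = front-left, rr = rear-right).

    Small-angle approximation: the body is treated as a rigid plane fitted
    to the four corner displacements.
    """
    heave = 0.25 * (dz_fl + dz_fr + dz_rl + dz_rr)
    # Roll: left side vs right side, positive when the left side sits higher
    roll = ((dz_fl + dz_rl) - (dz_fr + dz_rr)) / (2.0 * track_width)
    # Pitch: rear vs front, positive when the rear sits higher (nose-down)
    pitch = ((dz_rl + dz_rr) - (dz_fl + dz_fr)) / (2.0 * wheelbase)
    return heave, roll, pitch
```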

The authors detail a series of steps to merge the data from wheel and suspension sensors, leveraging the strengths of both to improve the precision of the estimated pose. The refined pose estimate provides more reliable input for visualisation and improves the performance of computer vision applications, which depend critically on accurate spatial information.
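
One plausible way to compose the two estimates into a camera pose is shown below: the planar pose supplies x, y, and yaw, while the suspension model supplies heave, roll, and pitch (hence "2.5D"). The composition order, the parameter t_cam_body, and the assumption that the camera axes are aligned with the body axes are illustrative choices, not the paper's exact method.

```python
import numpy as np

def camera_pose_2p5d(x, y, yaw, heave, roll, pitch, t_cam_body):
    """Compose the planar pose with the suspension attitude into a 3D
    camera pose (a sketch; assumes ZYX Euler composition).

    t_cam_body: 3-vector, camera position in the vehicle body frame.
    Returns (R, t): camera rotation and translation in the world frame.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw (planar odometry)
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch (suspension)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll (suspension)
    R_body = Rz @ Ry @ Rx                  # body orientation in the world frame
    t_body = np.array([x, y, heave])       # body origin: planar position + heave
    R = R_body                             # camera axes assumed aligned with body
    t = t_body + R_body @ np.asarray(t_cam_body)
    return R, t
```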

They validate their approach through experiments that demonstrate the superiority of the proposed 2.5D odometry model over traditional planar models, particularly in scenarios involving complex terrain. This research contributes to advancements in autonomous vehicle technology by refining sensor data interpretation, leading to more dependable operation in diverse driving conditions.