A General Optimization-based Framework for Local Odometry Estimation with Multiple Sensors (1901.03638v1)

Published 11 Jan 2019 in cs.CV

Abstract: Nowadays, more and more sensors are equipped on robots to increase robustness and autonomous ability. We have seen various sensor suites equipped on different platforms, such as stereo cameras on ground vehicles, a monocular camera with an IMU (Inertial Measurement Unit) on mobile phones, and stereo cameras with an IMU on aerial robots. Although many algorithms for state estimation have been proposed in the past, they are usually applied to a single sensor or a specific sensor suite. Few of them can be employed with multiple sensor choices. In this paper, we propose a general optimization-based framework for odometry estimation, which supports multiple sensor sets. Every sensor is treated as a general factor in our framework. Factors which share common state variables are summed together to build the optimization problem. We further demonstrate the generality with visual and inertial sensors, which form three sensor suites (stereo cameras, a monocular camera with an IMU, and stereo cameras with an IMU). We validate the performance of our system on public datasets and through real-world experiments with multiple sensors. Results are compared against other state-of-the-art algorithms. We highlight that our system is a general framework, which can easily fuse various sensors in a pose graph optimization. Our implementations are open source (https://github.com/HKUST-Aerial-Robotics/VINS-Fusion).

Authors (4)
  1. Tong Qin (32 papers)
  2. Jie Pan (36 papers)
  3. Shaozu Cao (4 papers)
  4. Shaojie Shen (121 papers)
Citations (314)

Summary

A General Optimization-based Framework for Local Odometry Estimation with Multiple Sensors

This paper proposes a novel optimization-based framework for odometry estimation that accommodates a variety of sensor inputs. Unlike many traditional approaches that are tailored to specific sensor configurations, this framework distinguishes itself by its flexibility and applicability across multiple sensor suites. The authors, Tong Qin, Jie Pan, Shaozu Cao, and Shaojie Shen, illustrate the framework's capabilities by focusing on visual and inertial sensors, providing a comprehensive analysis of three combinations: stereo cameras, a monocular camera with an IMU, and stereo cameras with an IMU.

Framework Overview

The framework treats each sensor input as a generic factor, incorporating these into a pose graph optimization. This design not only simplifies the integration of various sensors but also enhances the robustness and adaptability of the system in diverse environments. Key aspects of the framework include the ability to handle sensor failure by seamlessly adding or removing sensor inputs, thus maintaining system resilience.
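The "every sensor is a generic factor" idea can be illustrated with a toy sketch. This is not the paper's implementation; the `Factor` classes, the 1-D pose chain, and the weights below are all hypothetical, chosen only to show how factors that share state variables are summed into a single cost and solved jointly.

```python
import numpy as np
from scipy.optimize import least_squares

class Factor:
    """Generic sensor factor: contributes residuals over shared states."""
    def residual(self, states):
        raise NotImplementedError

class OdomFactor(Factor):
    """Relative-motion measurement between two poses (e.g. visual odometry)."""
    def __init__(self, i, j, meas, weight=1.0):
        self.i, self.j, self.meas, self.w = i, j, meas, weight
    def residual(self, states):
        return [self.w * (states[self.j] - states[self.i] - self.meas)]

class PriorFactor(Factor):
    """Absolute measurement anchoring a single pose."""
    def __init__(self, i, meas, weight=1.0):
        self.i, self.meas, self.w = i, meas, weight
    def residual(self, states):
        return [self.w * (states[self.i] - self.meas)]

def solve(factors, n_states):
    # All factors, regardless of sensor type, are stacked into one
    # nonlinear least-squares problem over the shared state vector.
    def cost(x):
        r = []
        for f in factors:
            r.extend(f.residual(x))
        return r
    return least_squares(cost, np.zeros(n_states)).x

factors = [
    PriorFactor(0, 0.0),               # anchor the first pose
    OdomFactor(0, 1, 1.0),             # sensor A: relative motion
    OdomFactor(1, 2, 1.0),
    PriorFactor(2, 2.1, weight=0.5),   # sensor B: weaker absolute fix
]
x = solve(factors, 3)
print(np.round(x, 2))
```

Adding or dropping a sensor is just adding or removing factors from the list, which is what makes handling sensor failure straightforward in this design.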

Methodology

The framework employs an optimization-based approach over a sliding window of data, utilizing a combination of camera factors and IMU preintegration for factor graph construction. The innovation lies in its ability to merge these factors into a cohesive state estimation process, using nonlinear least squares to minimize errors. The marginalization technique is applied to reduce computational complexity while preserving essential information for accurate state estimation.
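The marginalization step can be sketched with the standard Schur-complement construction: when the oldest state leaves the sliding window, it is eliminated from the linearized system so that its information is retained as a prior on the remaining states. The matrices and index partition below are a hypothetical toy example, not taken from the paper.

```python
import numpy as np

def marginalize(H, b, m_idx, r_idx):
    """Eliminate state block m_idx from the information-form system
    H x = b via the Schur complement, keeping block r_idx."""
    Hmm = H[np.ix_(m_idx, m_idx)]
    Hmr = H[np.ix_(m_idx, r_idx)]
    Hrr = H[np.ix_(r_idx, r_idx)]
    bm, br = b[m_idx], b[r_idx]
    Hmm_inv = np.linalg.inv(Hmm)
    # Reduced system over the remaining states only.
    H_new = Hrr - Hmr.T @ Hmm_inv @ Hmr
    b_new = br - Hmr.T @ Hmm_inv @ bm
    return H_new, b_new

# Toy 3-state linearized system; marginalize state 0, keep states 1 and 2.
H = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
b = np.array([1., 2., 3.])

H_new, b_new = marginalize(H, b, [0], [1, 2])

# The reduced solve matches the corresponding entries of the full solve,
# i.e. no information about the remaining states was lost.
x_full = np.linalg.solve(H, b)
x_red = np.linalg.solve(H_new, b_new)
print(np.allclose(x_full[1:], x_red))  # True
```

This is why the sliding window can stay a fixed size without discarding the constraints that old states imposed on the current ones.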

Experimental Validation

The system is rigorously validated on the EuRoC MAV datasets, demonstrating its efficacy against state-of-the-art algorithms like OKVIS. The quantitative results showcased in Table 1 highlight the framework’s ability to achieve competitive or superior accuracy, particularly in scenarios where sensor fusion is beneficial, such as monocular camera with IMU setups. Real-world experiments further corroborate these findings, showing impressive performance in large-scale outdoor environments.

Numerical Results

Strong numerical results are demonstrated, showing reduced RMSE values, especially in configurations involving the IMU. For instance, experiments on the MH_05_difficult dataset indicate significant improvements in both translation and rotation errors compared to purely stereo methods. Such enhancements underline the framework’s strength in leveraging sensor fusion to enhance odometry estimation accuracy.

Implications and Future Work

This research has notable implications in the field of robotics, particularly for applications needing robust and adaptive odometry solutions across varying sensor configurations. The ability to support multiple sensor combinations opens pathways for deployment in autonomous vehicles, drones, and mobile robotics. Future extensions could incorporate global sensors like GPS, aiming for solutions that achieve both locally accurate and globally consistent state estimations.

The authors provide an open-source implementation, encouraging broader community engagement and further development of the system. As the field progresses, this framework could play a pivotal role in pushing the boundaries of sensor fusion in robotics, paving the way for more sophisticated and reliable autonomous systems.
