A General Optimization-based Framework for Local Odometry Estimation with Multiple Sensors
This paper proposes an optimization-based framework for odometry estimation that accommodates a variety of sensor inputs. Unlike many traditional approaches that are tailored to a specific sensor configuration, the framework is designed to work across multiple sensor suites. The authors, Tong Qin, Jie Pan, Shaozu Cao, and Shaojie Shen, demonstrate its capabilities on visual and inertial sensors, analyzing three configurations: stereo cameras, a monocular camera with an IMU, and stereo cameras with an IMU.
Framework Overview
The framework treats each sensor input as a generic factor and incorporates these factors into a pose graph optimization. This design simplifies the integration of new sensors and improves robustness in diverse environments: because a sensor enters the graph only through the factors it contributes, inputs can be added or removed on the fly, so the system degrades gracefully when a sensor fails rather than breaking outright.
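To make the "every sensor is a factor" idea concrete, here is a minimal sketch in Python. The names (`Factor`, `PoseGraph`, `total_residual`) are illustrative, not the paper's actual API; the point is only that the pose graph is agnostic to which sensors produced its factors.

```python
# Hypothetical sketch: each sensor contributes factors, and the pose graph
# only ever sees factors, never sensors. Names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List

import numpy as np

State = Dict[int, np.ndarray]  # frame index -> pose parameters


@dataclass
class Factor:
    frames: List[int]                         # frames this factor constrains
    residual: Callable[[State], np.ndarray]   # error given current estimate


class PoseGraph:
    def __init__(self) -> None:
        self.factors: List[Factor] = []

    def add(self, factor: Factor) -> None:
        # Adding a sensor means adding its factors; if a sensor fails,
        # its factors simply stop arriving and the graph still solves.
        self.factors.append(factor)

    def total_residual(self, state: State) -> np.ndarray:
        # The optimizer minimizes the stacked residual of all factors.
        return np.concatenate([f.residual(state) for f in self.factors])
```

Under this design, swapping a stereo camera for a monocular camera plus IMU changes which factors populate the graph, but not the estimator itself.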
Methodology
The framework employs an optimization-based approach over a sliding window of recent states. Camera factors and IMU preintegration factors are assembled into a factor graph, and the window states are estimated jointly by minimizing the stacked residuals with nonlinear least squares. To bound computational complexity as the window slides, old states are marginalized out rather than discarded: marginalization converts their information into a prior factor on the remaining states, preserving what is needed for accurate estimation.
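The following toy sketch illustrates this joint solve, with 1-D stand-ins rather than the paper's actual visual and IMU residuals: odometry factors mimic IMU preintegration between consecutive frames, range factors mimic camera observations of a landmark, and a weighted prior on the first pose stands in for the information retained by marginalization.

```python
# Toy sliding-window nonlinear least squares over heterogeneous factors.
# All models are 1-D stand-ins, not the paper's residual definitions.
import numpy as np
from scipy.optimize import least_squares

N = 5                                          # sliding-window size
true_poses = np.arange(N, dtype=float)         # ground-truth 1-D positions
landmark = 10.0                                # known landmark position

rng = np.random.default_rng(0)
odom = np.diff(true_poses) + 0.05 * rng.standard_normal(N - 1)   # "IMU"
ranges = landmark - true_poses + 0.10 * rng.standard_normal(N)   # "camera"

prior_mean, prior_weight = 0.0, 100.0          # marginalization prior on x[0]


def residuals(x):
    r_prior = [prior_weight * (x[0] - prior_mean)]               # prior factor
    r_imu = [(x[k + 1] - x[k] - odom[k]) / 0.05 for k in range(N - 1)]
    r_cam = [(landmark - x[k] - ranges[k]) / 0.10 for k in range(N)]
    return np.array(r_prior + r_imu + r_cam)   # stack all factor residuals


sol = least_squares(residuals, x0=np.zeros(N))
print(sol.x)                                   # estimated window poses
```

Each residual is divided by its noise level, so the solver naturally weighs accurate measurements more heavily; this is the scalar analogue of the Mahalanobis-weighted residuals used in factor-graph estimators.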
Experimental Validation
The system is validated on the EuRoC MAV datasets against state-of-the-art algorithms such as OKVIS. The quantitative results in Table 1 show competitive or superior accuracy, particularly where sensor fusion helps, as in the monocular-camera-with-IMU setup. Real-world experiments in large-scale outdoor environments corroborate these findings.
Numerical Results
The reported RMSE values are consistently lower for configurations that include an IMU. On the MH_05_difficult sequence, for instance, the visual-inertial configurations reduce both translation and rotation error relative to the purely stereo method, underlining how sensor fusion strengthens odometry accuracy.
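For readers reproducing such numbers: translation RMSE on EuRoC is commonly computed as absolute trajectory error (ATE) after rigid alignment of the estimated and ground-truth trajectories. Below is a minimal sketch using the standard closed-form (Horn/Kabsch) alignment; it assumes the two trajectories are already time-associated and mirrors common evaluation practice rather than the paper's exact tooling.

```python
# Minimal translation-RMSE (ATE) sketch; assumes time-associated poses.
import numpy as np


def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """est, gt: (N, 3) arrays of associated positions."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    # Closed-form rotation aligning est to gt (Kabsch algorithm via SVD).
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    err = gt - (est @ R.T + t)               # per-pose translation error
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))
```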
Implications and Future Work
This research has notable implications for robotics applications that require robust, adaptive odometry across varying sensor configurations. Support for multiple sensor combinations opens pathways to deployment on autonomous vehicles, drones, and mobile robots. The authors identify global sensors such as GPS as a future extension, targeting state estimates that are both locally accurate and globally consistent.
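The generic-factor design makes this extension natural: a GPS-like measurement constrains absolute position, so it would enter the same residual stack as the local factors. The function below is purely illustrative, not part of the paper's system.

```python
# Illustrative only: a hypothetical global-position factor. Local factors
# constrain relative motion; a factor like this anchors absolute position,
# which is what yields globally consistent estimates.
import numpy as np


def gps_factor(x_k: np.ndarray, fix: np.ndarray, sigma: float) -> np.ndarray:
    # Residual comparing one window pose against a noisy global fix.
    return (x_k - fix) / sigma
```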
The authors provide an open-source implementation, encouraging broader community engagement and further development of the system. As the field progresses, this framework could play a pivotal role in pushing the boundaries of sensor fusion in robotics, paving the way for more sophisticated and reliable autonomous systems.