Visual-Inertial Mapping with Non-Linear Factor Recovery (1904.06504v3)

Published 13 Apr 2019 in cs.CV and cs.RO

Abstract: Cameras and inertial measurement units are complementary sensors for ego-motion estimation and environment mapping. Their combination makes visual-inertial odometry (VIO) systems more accurate and robust. For globally consistent mapping, however, combining visual and inertial information is not straightforward. To estimate the motion and geometry with a set of images, large baselines are required. Because of that, most systems operate on keyframes that have large time intervals between each other. Inertial data, on the other hand, quickly degrades with the duration of the intervals, and after several seconds of integration it typically contains little useful information. In this paper, we propose to extract relevant information for visual-inertial mapping from visual-inertial odometry using non-linear factor recovery. We reconstruct a set of non-linear factors that make an optimal approximation of the information on the trajectory accumulated by VIO. To obtain a globally consistent map, we combine these factors with loop-closing constraints using bundle adjustment. The VIO factors make the roll and pitch angles of the global map observable, and improve the robustness and the accuracy of the mapping. In experiments on a public benchmark, we demonstrate superior performance of our method over state-of-the-art approaches.

Citations (166)

Summary

  • The paper introduces a two-layered mapping approach that uses non-linear factor recovery to optimally approximate the trajectory information accumulated by VIO.
  • Combining the recovered factors with loop-closing constraints makes the roll and pitch angles of the global map observable and improves the robustness and accuracy of the mapping.
  • Demonstrated on the EuRoC dataset, the method outperforms state-of-the-art VIO and mapping systems in trajectory accuracy while reducing computational complexity.

Visual-Inertial Mapping with Non-Linear Factor Recovery

The paper "Visual-Inertial Mapping with Non-Linear Factor Recovery" outlines a method for advancing the accuracy and robustness of visual-inertial odometry (VIO) systems by employing non-linear factor recovery for visual-inertial mapping. This work is anchored in the combination of camera and inertial measurement unit (IMU) data for precise ego-motion estimation and environment mapping, specifically addressing the challenges associated with globally consistent mapping.

The work starts from a limitation of keyframe-based mapping: keyframes must be separated by substantial time intervals to provide large baselines, and over such intervals integrated IMU measurements degrade quickly and retain little useful information. The proposed method therefore uses non-linear factor recovery to extract the pertinent information from the VIO layer, yielding a two-layered approach to visual-inertial mapping. The core idea is to reconstruct a small set of non-linear factors that optimally approximate the trajectory information accumulated by the VIO process.
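As a rough sketch of the general idea (our notation, summarizing the generic non-linear factor recovery technique rather than quoting equations from the paper): the dense Gaussian marginal N(μ, Σ) that VIO accumulates over the keyframe states is replaced by a small set of factors with residual functions h_i and information matrices Ω_i. The recovered "measurements" are set to z_i = h_i(μ), and the Ω_i are chosen so that the Gaussian induced by the linearized factors matches the marginal as closely as possible in the Kullback-Leibler sense:

```latex
\min_{\Omega \succeq 0}\;
D_{\mathrm{KL}}\!\Big(\mathcal{N}(\mu,\Sigma)\;\Big\|\;\mathcal{N}\big(\mu,(J^{\top}\Omega J)^{-1}\big)\Big)
\;=\;\tfrac{1}{2}\Big(\operatorname{tr}\!\big(J^{\top}\Omega J\,\Sigma\big)
\;-\;\ln\det\!\big(J^{\top}\Omega J\,\Sigma\big)\;-\;d\Big)
```

Here J stacks the Jacobians of the h_i evaluated at μ, Ω = diag(Ω_1, …, Ω_k), and d is the dimension of the keyframe state. In the paper's setting, the recovered factors take the form of relative-pose and roll-pitch constraints between keyframes, which is what makes roll and pitch of the global map observable.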

A key element of the method is that the recovered factors are combined with loop-closing constraints in a global bundle adjustment. Including the VIO factors makes the roll and pitch angles of the global map observable, which improves both the accuracy and the robustness of the environmental mapping. The approach is validated experimentally on a public benchmark, where it demonstrates superior performance compared to state-of-the-art techniques.
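To make this combination concrete, the following is a minimal, self-contained sketch (not the paper's implementation; the SE(2) parameterization, toy measurement values, and the SciPy solver are our choices) of jointly optimizing keyframe poses under relative-pose factors recovered from odometry and a single loop-closure constraint, in the same weighted non-linear least-squares spirit as the global optimization described above. Reprojection (bundle-adjustment) terms, roll/pitch factors, and full SE(3) geometry are omitted for brevity.

```python
# Toy sketch: keyframe poses as (x, y, theta), optimized jointly under
# recovered odometry factors and a loop-closure factor.
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def relative_pose(pi, pj):
    """Pose of keyframe j expressed in the frame of keyframe i."""
    xi, yi, ti = pi
    dx, dy = pj[0] - xi, pj[1] - yi
    c, s = np.cos(ti), np.sin(ti)
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(pj[2] - ti)])

def residuals(x, vio_factors, loop_factors):
    poses = x.reshape(-1, 3)
    res = [poses[0]]                      # gauge prior: fix the first pose
    for i, j, meas, sqrt_info in vio_factors + loop_factors:
        err = relative_pose(poses[i], poses[j]) - meas
        err[2] = wrap(err[2])
        res.append(sqrt_info @ err)       # whiten by the recovered information
    return np.concatenate(res)

# Hypothetical inputs: 4 keyframes, slightly drifting odometry, one loop closure.
vio_factors = [(k, k + 1, np.array([1.0, 0.0, 0.02]), np.eye(3) * 10.0)
               for k in range(3)]
loop_factors = [(0, 3, np.array([3.0, 0.0, 0.0]), np.eye(3) * 50.0)]

x0 = np.zeros(12)                         # initial guess: all poses at origin
sol = least_squares(residuals, x0, args=(vio_factors, loop_factors))
print(sol.x.reshape(-1, 3))
```

Running this prints the four refined poses: the loop-closure term pulls the drifted odometry chain toward global consistency, while the (more weakly weighted) odometry factors resist local distortion, mirroring the role the recovered VIO factors play in the paper's global optimization.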

Technical Contributions

  1. Two-layered Mapping Approach: A local VIO layer fuses high-frame-rate inertial data with short-term visual tracking, while a global layer performs keypoint-based bundle adjustment; non-linear factor recovery summarizes the local layer so that the global optimization remains efficient despite the high-frame-rate input.
  2. Advanced VIO System: The paper introduces a VIO system that surpasses existing methods in trajectory accuracy across numerous evaluated sequences, achieved through its combination of components such as patch tracking and its landmark representation.
  3. Refined IMU Measurement Integration: Instead of passing preintegrated IMU measurements to the map optimization, the method encapsulates the visual-inertial information in recovered non-linear factors, reducing optimization complexity and yielding improved pose estimates in a gravity-aligned map (a toy sketch of the recovery step follows this list).
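To illustrate the recovery step named in item 3, here is a small numeric sketch (the example values and helper function are ours, not code from the paper) of the simplest case of the KL objective written earlier: when the stacked Jacobian of the chosen factor functions is square and invertible, the recovered information matrix has a closed form and reproduces the dense marginal information exactly. The practically relevant case, where the information matrix is constrained to one block per factor, must instead be solved iteratively.

```python
# Toy sketch of the recovery step: given the dense marginal covariance Sigma
# over a window of states produced by VIO marginalization, recover an
# information matrix for a chosen set of factor functions h_i.
import numpy as np

def recover_information(Sigma, J):
    """Closed-form recovered information for an invertible stacked Jacobian:
    Omega = (J Sigma J^T)^{-1}."""
    return np.linalg.inv(J @ Sigma @ J.T)

# Hypothetical 1-D example: two correlated scalar states, summarized by an
# absolute factor h1(x) = x0 and a relative factor h2(x) = x1 - x0.
Sigma = np.array([[0.04, 0.03],
                  [0.03, 0.05]])
J = np.array([[1.0, 0.0],    # Jacobian of h1
              [-1.0, 1.0]])  # Jacobian of h2
Omega = recover_information(Sigma, J)

# With an invertible J the recovered factors reproduce the original
# information matrix exactly: J^T Omega J == Sigma^{-1}.
assert np.allclose(J.T @ Omega @ J, np.linalg.inv(Sigma))
print(Omega)
```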

Results and Implications

The method shows considerable improvements in trajectory estimation accuracy across multiple sequences of the EuRoC dataset, outperforming existing VIO and mapping systems. Notably, the approach improves computational efficiency by reducing the dimensionality of the global optimization state, which is essential for large-scale visual-inertial mapping. The implications extend to domains that require precise ego-motion estimation, such as robotics and augmented reality, where robust, large-scale mapping is crucial.

By offering a principled way to integrate sensor modalities through factor recovery, the work provides a basis for future extensions such as multi-camera setups or additional sensor inputs. The demonstrated ability to build a globally consistent map while maintaining an efficient computational footprint is a meaningful contribution to visual-inertial SLAM.

More broadly, the paper underscores the potential of hierarchical frameworks for sensor fusion and motivates further work on efficient data-summarization schemes and their deployment in real-world applications.
