
Visual-Inertial Monocular SLAM with Map Reuse (1610.05949v2)

Published 19 Oct 2016 in cs.RO and cs.CV

Abstract: In recent years there have been excellent results in Visual-Inertial Odometry techniques, which aim to compute the incremental motion of the sensor with high accuracy and robustness. However these approaches lack the capability to close loops, and trajectory estimation accumulates drift even if the sensor is continually revisiting the same place. In this work we present a novel tightly-coupled Visual-Inertial Simultaneous Localization and Mapping system that is able to close loops and reuse its map to achieve zero-drift localization in already mapped areas. While our approach can be applied to any camera configuration, we address here the most general problem of a monocular camera, with its well-known scale ambiguity. We also propose a novel IMU initialization method, which computes the scale, the gravity direction, the velocity, and gyroscope and accelerometer biases, in a few seconds with high accuracy. We test our system in the 11 sequences of a recent micro-aerial vehicle public dataset achieving a typical scale factor error of 1% and centimeter precision. We compare to the state-of-the-art in visual-inertial odometry in sequences with revisiting, proving the better accuracy of our method due to map reuse and no drift accumulation.

Authors (2)
  1. Raul Mur-Artal (5 papers)
  2. Juan D. Tardos (7 papers)
Citations (652)

Summary

  • The paper introduces a tightly-coupled visual-inertial SLAM system that reuses maps to achieve zero-drift localization.
  • It presents a novel IMU initialization method and demonstrates centimeter-level accuracy with a 1% scale factor error on the EuRoC dataset.
  • The system operates in real-time with enhanced loop closures and local mapping, making it ideal for autonomous vehicles and drones.

Visual-Inertial Monocular SLAM with Map Reuse: An Overview

Introduction

This paper presents a novel approach to Simultaneous Localization and Mapping (SLAM) using a tightly-coupled visual-inertial system. The methodology leverages the fusion of monocular vision and Inertial Measurement Unit (IMU) data to achieve zero-drift localization by reusing maps in already mapped environments. This is particularly crucial in situations where the sensor revisits the same locations, addressing the longstanding issue of drift accumulation inherent in odometry.

Key Contributions

The authors introduce several significant contributions:

  1. Tightly-Coupled SLAM System: The proposed system effectively combines visual information with IMU data to perform loop closures and reuse maps, significantly enhancing localization accuracy.
  2. Novel IMU Initialization: A new method for initializing the IMU parameters, including scale, gravity direction, velocity, and biases, is presented. This initialization occurs within a few seconds and boasts high accuracy.
  3. Real-Time Performance: The system operates in real-time, capable of processing monocular data efficiently, which is crucial for applications in autonomous systems, such as drones and autonomous vehicles.
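To make the IMU initialization contribution concrete, the scale-and-gravity step can be illustrated as a small linear least-squares problem. This is only a sketch under strong simplifications: keyframe velocities are treated as known (the paper's method eliminates them jointly with the other unknowns), rotations are given, and every name here (`estimate_scale_gravity`, the synthetic data) is illustrative rather than the authors' implementation.

```python
import numpy as np

def estimate_scale_gravity(p_vis, v, R, dp_imu, dt):
    """Least-squares estimate of metric scale s and gravity g from N keyframes.

    p_vis  : (N, 3)   up-to-scale camera positions from visual SLAM
    v      : (N-1, 3) body velocities at the start of each interval (assumed known)
    R      : (N-1, 3, 3) body-to-world rotations at the start of each interval
    dp_imu : (N-1, 3) IMU-preintegrated position increments, body frame
    dt     : interval duration in seconds

    Position kinematics per interval i, with preintegrated IMU term:
        s * (p_vis[i+1] - p_vis[i]) = v[i]*dt + 0.5*g*dt^2 + R[i] @ dp_imu[i]
    Unknowns stacked as x = [s, gx, gy, gz].
    """
    n = len(p_vis) - 1
    A = np.zeros((3 * n, 4))
    b = np.zeros(3 * n)
    for i in range(n):
        rows = slice(3 * i, 3 * i + 3)
        A[rows, 0] = p_vis[i + 1] - p_vis[i]       # coefficient of the scale s
        A[rows, 1:] = -0.5 * dt ** 2 * np.eye(3)   # coefficient of gravity g
        b[rows] = v[i] * dt + R[i] @ dp_imu[i]
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[0], x[1:]                             # scale, gravity vector

# Synthetic check: build data consistent with a known scale and gravity.
rng = np.random.default_rng(0)
s_true, g_true, dt = 2.5, np.array([0.0, 0.0, -9.81]), 0.1
N = 8
p_vis = rng.standard_normal((N, 3))
v = rng.standard_normal((N - 1, 3))
R = np.stack([np.eye(3)] * (N - 1))                # identity rotations for simplicity
dp_imu = np.stack([
    R[i].T @ (s_true * (p_vis[i + 1] - p_vis[i]) - v[i] * dt - 0.5 * g_true * dt ** 2)
    for i in range(N - 1)
])
s_est, g_est = estimate_scale_gravity(p_vis, v, R, dp_imu, dt)
```

With noise-free synthetic data the overdetermined system is solved exactly, recovering the true scale and gravity; the paper's full pipeline additionally estimates gyroscope and accelerometer biases before this step.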

Experimental Results

The system is evaluated on the 11 sequences of the EuRoC micro-aerial-vehicle dataset, achieving a typical scale factor error of 1% and centimeter-level localization accuracy. In sequences that revisit previously mapped areas, it outperforms state-of-the-art visual-inertial odometry systems, since map reuse prevents drift from accumulating.
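A scale factor error like the 1% reported above is typically measured by aligning the estimated trajectory to ground truth with a similarity transform. Below is a minimal sketch of the scale term of the closed-form Umeyama (1991) alignment; the function name and synthetic demo are illustrative, not the paper's evaluation code.

```python
import numpy as np

def umeyama_scale(est, gt):
    """Scale of the similarity transform (Umeyama, 1991) that best maps
    trajectory `est` onto `gt`; for a metric estimate, |scale - 1| is the
    scale error."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    X, Y = est - mu_e, gt - mu_g                   # centered source / target
    U, D, Vt = np.linalg.svd(Y.T @ X / len(est))   # cross-covariance SVD
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # guard against reflections
        S[2, 2] = -1.0
    var_est = (X ** 2).sum() / len(est)            # variance of the source
    return np.trace(np.diag(D) @ S) / var_est

# Demo: an estimate that is 3% too small, then rotated and translated.
rng = np.random.default_rng(1)
gt = rng.standard_normal((50, 3))
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1.0                                # ensure a proper rotation
est = 0.97 * gt @ Q.T + np.array([1.0, -2.0, 0.5])
scale = umeyama_scale(est, gt)                     # ~ 1 / 0.97
```

Rotation and translation drop out of the scale term, so the recovered factor reflects only the metric error of the estimate.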

System Architecture

The system's architecture includes several critical components:

  • Tracking: Employs IMU data for predicting camera poses, thereby replacing traditional ad-hoc motion models. This predictive capability allows for improved feature matching and error minimization.
  • Local Mapping: Utilizes local Bundle Adjustment (BA) to optimize recent keyframes, ensuring robustness and accuracy in map building.
  • Loop Closing: Handles revisitations by detecting loops and performing pose-graph optimization over 6 Degrees of Freedom (DoF) rather than 7, since the IMU renders the map scale observable.
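The IMU-based pose prediction used in tracking can be sketched as one Euler step of standard inertial state propagation. This is a generic textbook integrator, not the paper's preintegration machinery, and the function names and the noise-free stationary demo are assumptions for illustration.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: rotation vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def imu_predict(R, p, v, acc, gyro, ba, bg, g, dt):
    """One Euler step of IMU propagation: rotation R (body-to-world),
    position p, velocity v; acc/gyro are raw measurements, ba/bg biases."""
    a_world = R @ (acc - ba) + g           # specific force -> world acceleration
    p_new = p + v * dt + 0.5 * a_world * dt ** 2
    v_new = v + a_world * dt
    R_new = R @ so3_exp((gyro - bg) * dt)  # integrate angular rate on SO(3)
    return R_new, p_new, v_new

g = np.array([0.0, 0.0, -9.81])
acc_rest = np.array([0.0, 0.0, 9.81])      # at rest, accelerometer reads -g
# Stationary sensor: the predicted state should not move.
R1, p1, v1 = imu_predict(np.eye(3), np.zeros(3), np.zeros(3),
                         acc_rest, np.zeros(3), np.zeros(3), np.zeros(3), g, 0.005)
# A quarter-turn about z in one second (gyro in rad/s, zero biases).
Ryaw, _, _ = imu_predict(np.eye(3), np.zeros(3), np.zeros(3),
                         acc_rest, np.array([0.0, 0.0, np.pi / 2]),
                         np.zeros(3), np.zeros(3), g, 1.0)
```

In the tracking thread, a prediction of this kind seeds feature matching against the map, replacing the constant-velocity motion models used in vision-only SLAM.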

Theoretical and Practical Implications

The presented approach has broad implications:

  • Theoretical: By demonstrating a successful integration of visual and inertial data, the paper contributes to advancing our understanding of SLAM and sensor fusion. It highlights the importance of properly initializing and continuously updating sensor biases and environmental scales for reliable SLAM systems.
  • Practical: Given the system's zero-drift localization capabilities, it is well-suited for applications in virtual and augmented reality, where accurate and stable pose estimation is imperative. Autonomous vehicles and drones may benefit from this improved localization accuracy, particularly in complex navigation tasks.

Future Directions

Several avenues for future work are suggested:

  • Stereo and RGB-D Integration: Expanding the system to incorporate stereo or RGB-D cameras could improve accuracy, simplify IMU initialization, and enhance robustness in diverse environments.
  • Automatic Initialization Criteria: Developing automatic criteria for successful IMU initialization could enhance usability and reliability, particularly in scenarios with varying motion characteristics.

Conclusion

The paper effectively tackles the limitations of traditional odometry by providing a robust system for visual-inertial SLAM with map reuse. Its ability to maintain zero-drift localization in revisited areas marks a substantial advance in SLAM, with significant potential impact on fields that rely on precise navigation and mapping.
