Open-World Distributed Robot Self-Localization with Transferable Visual Vocabulary and Both Absolute and Relative Features (2109.04569v3)

Published 9 Sep 2021 in cs.CV

Abstract: Visual robot self-localization is a fundamental problem in visual navigation and has been studied across various problem settings, including monocular and sequential localization. However, many existing studies focus primarily on single-robot scenarios, with limited exploration of general settings involving diverse robots connected through wireless networks with constrained communication capacities, such as open-world distributed robot systems. In particular, issues related to the transfer and sharing of key knowledge, such as visual descriptions and visual vocabulary, between robots have been largely neglected. This work introduces a new self-localization framework designed for open-world distributed robot systems that maintains state-of-the-art performance while offering two key advantages: (1) it employs an unsupervised visual vocabulary model that maps to multimodal, lightweight, and transferable visual features, and (2) the visual vocabulary itself is a lightweight and communication-friendly model. Although the primary focus is on encoding monocular view images, the framework can be easily extended to sequential localization applications. By utilizing complementary similarity-preserving features -- both absolute and relative -- the framework meets the requirements for being unsupervised, multimodal, lightweight, and transferable. All features are learned and recognized using a lightweight graph neural network and scene graph. The effectiveness of the proposed method is validated in both passive and active self-localization scenarios.
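To make the abstract's central idea concrete, here is a minimal sketch (not the authors' implementation) of a lightweight graph neural network over a scene graph that outputs both absolute (per-node) and relative (pairwise) similarity-preserving features. The layer sizes, the mean-aggregation message-passing scheme, and the use of cosine similarity for the relative features are all illustrative assumptions; the paper does not specify these details in the abstract.

```python
# Hypothetical sketch: scene-graph GNN producing absolute and relative
# features, in the spirit of the framework described above. Architecture
# choices (dimensions, aggregation, similarity measure) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SceneGraphEncoder(nn.Module):
    def __init__(self, in_dim=128, hid_dim=64, out_dim=32):
        super().__init__()
        self.msg = nn.Linear(in_dim, hid_dim)             # neighbor message transform
        self.upd = nn.Linear(in_dim + hid_dim, out_dim)   # node update

    def forward(self, x, adj):
        # x:   (N, in_dim) node descriptors (e.g., local visual features)
        # adj: (N, N) scene-graph adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        m = (adj @ self.msg(x)) / deg                      # mean-aggregated messages
        h = torch.relu(self.upd(torch.cat([x, m], dim=1)))
        absolute = F.normalize(h, dim=1)                   # per-node "absolute" features
        relative = absolute @ absolute.t()                 # pairwise "relative" features
        return absolute, relative

# Toy usage on a 5-node scene graph with random descriptors
enc = SceneGraphEncoder()
x = torch.randn(5, 128)
adj = (torch.rand(5, 5) > 0.5).float()
abs_f, rel_f = enc(x, adj)
print(abs_f.shape, rel_f.shape)  # torch.Size([5, 32]) torch.Size([5, 5])
```

Because the absolute features are compact, L2-normalized vectors and the relative features are derived from them on the fly, only the small node embeddings (and the encoder weights, if sharing the vocabulary model) would need to cross the network, which is consistent with the communication-friendly design the abstract emphasizes.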

Citations (1)
