CAMAL: Context-Aware Multi-layer Attention framework for Lightweight Environment Invariant Visual Place Recognition

Published 18 Sep 2019 in cs.CV (arXiv:1909.08153v2)

Abstract: In recent years, Deep Convolutional Neural Networks (D-CNNs) have shown state-of-the-art (SOTA) performance for Visual Place Recognition (VPR), a pivotal component of long-term intelligent robotic vision (vision-based localization and navigation). The strong generalization power of D-CNNs, gained by training on large-scale place datasets, allows them to learn persistent image regions that remain robust for place recognition under changing conditions and camera viewpoints. However, for resource-constrained mobile robots that must estimate their approximate location, lightweight VPR techniques are preferred over computation- and power-intensive D-CNN-based VPR algorithms. This paper presents CAMAL, a computation- and energy-efficient framework that captures place-specific multi-layer convolutional attentions for environment-invariant VPR. Evaluating the proposed framework on challenging benchmark place recognition datasets reveals comparable or better Area under the Precision-Recall curve (AUC-PR) at roughly 4x lower power consumption, with approximately 4x faster image retrieval than contemporary VPR methodologies.
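The abstract describes CAMAL only at a high level. The sketch below illustrates the general idea of pooling CNN features with per-layer spatial attention into a compact place descriptor; the backbone (ResNet-18), the choice of layers, the 1x1-conv attention form, and the descriptor size are all assumptions for illustration, not the paper's actual CAMAL design.

```python
# Hypothetical sketch of a multi-layer attention place descriptor for VPR.
# Backbone, layers, attention form, and descriptor size are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class MultiLayerAttentionDescriptor(nn.Module):
    def __init__(self, descriptor_dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # lightweight backbone (assumed)
        # Split the backbone so two intermediate stages are exposed for attention.
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                                  backbone.maxpool, backbone.layer1, backbone.layer2)
        self.stage3 = backbone.layer3   # 256-channel feature map
        self.stage4 = backbone.layer4   # 512-channel feature map
        # One 1x1 conv per stage produces a spatial attention map over that layer.
        self.att3 = nn.Conv2d(256, 1, kernel_size=1)
        self.att4 = nn.Conv2d(512, 1, kernel_size=1)
        self.proj = nn.Linear(256 + 512, descriptor_dim)

    @staticmethod
    def attend(feat, att_layer):
        # Softmax over spatial positions, then attention-weighted sum of features.
        b, c, h, w = feat.shape
        att = torch.softmax(att_layer(feat).view(b, 1, h * w), dim=-1)
        return (feat.view(b, c, h * w) * att).sum(dim=-1)  # (b, c)

    def forward(self, x):
        f2 = self.stem(x)
        f3 = self.stage3(f2)
        f4 = self.stage4(f3)
        # Concatenate attention-pooled features from both layers, project, L2-normalize.
        d = torch.cat([self.attend(f3, self.att3), self.attend(f4, self.att4)], dim=1)
        return nn.functional.normalize(self.proj(d), dim=1)


# Usage: cosine similarity between query and reference descriptors scores place matches.
model = MultiLayerAttentionDescriptor().eval()
with torch.no_grad():
    query = model(torch.randn(1, 3, 224, 224))      # query image
    refs = model(torch.randn(5, 3, 224, 224))       # reference map images
    scores = query @ refs.t()                        # higher score = more likely same place
    best_match = scores.argmax(dim=1)
```

Because the descriptor is a single compact vector per image, retrieval reduces to a matrix-vector product over the reference database, which is what keeps this style of pipeline inexpensive on embedded hardware.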
