
Object Memory Transformer for Object Goal Navigation (2203.14708v1)

Published 24 Mar 2022 in cs.CV, cs.AI, cs.LG, and cs.RO

Abstract: This paper presents a reinforcement learning method for object goal navigation (ObjNav), where an agent navigates in 3D indoor environments to reach a target object based on long-term observations of objects and scenes. To this end, we propose the Object Memory Transformer (OMT), which consists of two key ideas: 1) an Object-Scene Memory (OSM) that stores long-term scene and object semantics, and 2) a Transformer that attends to salient objects in the sequence of previously observed scenes and objects stored in the OSM. This mechanism allows the agent to navigate efficiently in indoor environments without prior knowledge about them, such as topological maps or 3D meshes. To the best of our knowledge, this is the first work that uses a long-term memory of object semantics in a goal-oriented navigation task. Experimental results on the AI2-THOR dataset show that OMT outperforms previous approaches when navigating in unknown environments. In particular, we show that utilizing long-term object semantics information improves the efficiency of navigation.
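The two ideas in the abstract, a fixed-capacity memory of scene and object features and attention over its contents, can be illustrated with a minimal sketch. This is not the authors' implementation: the class names, the feature concatenation, and the single-head attention are all illustrative assumptions standing in for the paper's OSM and Transformer.

```python
import math
from collections import deque


class ObjectSceneMemory:
    """Hypothetical sketch of an Object-Scene Memory (OSM):
    a fixed-capacity FIFO buffer of per-step feature vectors.
    Each entry concatenates a scene feature and an object feature,
    an illustrative choice, not the paper's exact encoding."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first

    def store(self, scene_feat, object_feat):
        self.buffer.append(scene_feat + object_feat)  # list concatenation

    def entries(self):
        return list(self.buffer)


def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def attend(query, memory):
    """Single-head scaled dot-product attention of the current
    observation (query) over all stored memory entries, standing in
    for the Transformer that attends to salient past objects."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, entry)) / math.sqrt(d)
              for entry in memory]
    weights = softmax(scores)
    # Weighted sum of memory entries: salient (high-score) steps dominate.
    return [sum(w * entry[i] for w, entry in zip(weights, memory))
            for i in range(d)]


osm = ObjectSceneMemory(capacity=2)
osm.store([1.0], [0.0])   # step 1: entry [1.0, 0.0]
osm.store([0.0], [1.0])   # step 2: entry [0.0, 1.0]
osm.store([1.0], [1.0])   # step 3 evicts step 1 (capacity 2)
context = attend([1.0, 0.0], osm.entries())
```

The navigation policy would then consume `context` alongside the current observation; in the paper this attended readout is what lets the agent exploit long-term object semantics without a map or mesh.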

Authors (5)
  1. Rui Fukushima (2 papers)
  2. Kei Ota (17 papers)
  3. Asako Kanezaki (25 papers)
  4. Yoko Sasaki (10 papers)
  5. Yusuke Yoshiyasu (13 papers)
Citations (30)
