
Object Goal Navigation using Data Regularized Q-Learning (2208.13009v1)

Published 27 Aug 2022 in cs.RO and cs.AI

Abstract: Object Goal Navigation requires a robot to find and navigate to an instance of a target object class in a previously unseen environment. Our framework incrementally builds a semantic map of the environment over time, and then repeatedly selects a long-term goal ('where to go') based on the semantic map to locate the target object instance. Long-term goal selection is formulated as a vision-based deep reinforcement learning problem. Specifically, an Encoder Network is trained to extract high-level features from a semantic map and select a long-term goal. In addition, we incorporate data augmentation and Q-function regularization to make the long-term goal selection more effective. We report experimental results using the photo-realistic Gibson benchmark dataset in the AI Habitat 3D simulation environment to demonstrate substantial performance improvement on standard measures in comparison with a state-of-the-art data-driven baseline.
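The "Data Regularized Q-Learning" in the title refers to a DrQ-style scheme: Q-learning in which both the TD targets and the online Q-values are averaged over randomly augmented copies of the observation. Below is a minimal PyTorch sketch of that idea applied to semantic-map inputs. The network architecture, the random-shift augmentation, the discrete goal set, and all names (`EncoderQNet`, `drq_loss`, `random_shift`) and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of DrQ-style regularized Q-learning on semantic-map
# observations. Shapes, augmentation, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_shift(maps: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """Data augmentation: pad the semantic map, then take a random crop,
    yielding a small random translation (as in DrQ)."""
    n, c, h, w = maps.shape
    padded = F.pad(maps, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(maps)
    for i in range(n):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out

class EncoderQNet(nn.Module):
    """Hypothetical encoder: extracts features from a C-channel semantic
    map and scores a discrete set of candidate long-term goals."""
    def __init__(self, in_channels: int, num_goals: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_goals)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(x))

def drq_loss(q_net, target_net, batch, gamma=0.99, K=2, M=2):
    """Q-function regularization: average TD targets over K augmented
    next-maps and the TD loss over M augmented current maps."""
    maps, actions, rewards, next_maps, dones = batch
    with torch.no_grad():
        target = torch.zeros_like(rewards)
        for _ in range(K):
            q_next = target_net(random_shift(next_maps)).max(dim=1).values
            target = target + rewards + gamma * (1.0 - dones) * q_next
        target = target / K
    loss = 0.0
    for _ in range(M):
        q = q_net(random_shift(maps)).gather(1, actions.unsqueeze(1)).squeeze(1)
        loss = loss + F.mse_loss(q, target)
    return loss / M
```

Averaging the target over K augmentations reduces target variance, while averaging the loss over M augmented inputs pushes the Q-function toward invariance under the augmentation; both effects regularize goal selection when training data is limited.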

Authors (6)
  1. Nandiraju Gireesh (6 papers)
  2. D. A. Sasi Kiran (3 papers)
  3. Snehasis Banerjee (14 papers)
  4. Mohan Sridharan (30 papers)
  5. Brojeshwar Bhowmick (37 papers)
  6. Madhava Krishna (24 papers)
Citations (6)
